Merge branch 'develop' into feature/szhou/schemaless

shenglian zhou 2021-08-23 15:15:39 +08:00
commit 513f284d50
54 changed files with 2966 additions and 2725 deletions

.gitignore

@ -1,4 +1,5 @@
build/
.ycm_extra_conf.py
.vscode/
.idea/
cmake-build-debug/


@ -2,18 +2,18 @@
## <a class="anchor" id="intro"></a>TDengine Introduction
TDengine is an innovative big-data processing product launched by TAOS Data to address the fast-growing IoT big-data market and its technical challenges. It does not depend on any third-party software, nor is it an optimized or repackaged open-source database or stream-computing product; it was developed from the ground up after absorbing the strengths of traditional relational databases, NoSQL databases, stream-computing engines, message queues, and other software, and it has its own distinct advantages in time-series big-data processing.
One module of TDengine is a time-series database. Beyond that, to reduce the complexity of development and the difficulty of system maintenance, TDengine also provides caching, message queuing, subscription, stream computing, and other functions, offering a full-stack technical solution for processing IoT and Industrial Internet big data. It is an efficient and easy-to-use IoT big-data platform. Compared with typical big-data platforms such as Hadoop, it has the following distinct features:
* __Performance improvement of 10x or more__: An innovative data storage structure is defined; a single core can handle at least 20,000 requests per second, insert millions of data points, and read more than ten million data points, over ten times faster than existing general-purpose databases.
* __Hardware or cloud-service cost reduced to 1/5__: Thanks to its superior performance, required computing resources are less than 1/5 of general big-data solutions; through columnar storage and advanced compression algorithms, storage space is less than 1/10 of general databases.
* __Full-stack time-series data processing engine__: Database, message queue, cache, and stream computing are integrated into one; applications no longer need to integrate software such as Kafka/Redis/HBase/Spark/HDFS, greatly reducing the complexity and cost of application development and maintenance.
* __Powerful analysis functions__: Data from ten years ago or one second ago can be queried simply by specifying a time range. Data can be aggregated over time or across multiple devices. Ad-hoc queries can be made at any time via Shell, Python, R, or MATLAB.
* __Seamless integration with third-party tools__: Integrates with Telegraf, Grafana, EMQ, HiveMQ, Prometheus, MATLAB, R, and more without a single line of code. OPC, Hadoop, Spark, and others will be supported later, and BI tools will also connect seamlessly.
* __Zero operations cost, zero learning cost__: Installing a cluster is simple and fast, there is no need to split databases or shard tables, and backup is real-time. It offers SQL-like syntax, supports RESTful, supports Python/Java/C/C++/C#/Go/Node.js, and is similar to MySQL, so the learning cost is near zero.
Adopting TDengine can greatly reduce the total cost of ownership of a typical IoT, connected-vehicle, or Industrial Internet big-data platform. It should be noted, however, that because it takes full advantage of the characteristics of IoT time-series data, it is not suitable for general-purpose data from web crawlers, Weibo, WeChat, e-commerce, ERP, CRM, and similar applications.
![TDengine ecosystem](page://images/eco_system.png)
<center>Figure 1. TDengine ecosystem</center>
@ -21,42 +21,47 @@ One module of TDengine is a time-series database. Beyond that, to reduce the
## <a class="anchor" id="scenes"></a>Overall Use Cases of TDengine
As an IoT big-data platform, TDengine's typical use cases are in the IoT category, with a certain volume of data. The rest of this document mainly addresses systems in that category. Systems outside it, such as CRM or ERP, are beyond the scope of this document.
### Characteristics and Requirements of Data Sources
From the data-source perspective, designers can assess the applicability of TDengine to a target application system from the following angles.
|Characteristics and requirements of data sources|Not applicable|Possibly applicable|Very applicable|Notes|
|---|---|---|---|---|
|Huge total data volume| | | √ | TDengine offers excellent horizontal scalability in capacity, and its storage structure is matched with high compression, achieving the industry's best storage efficiency.|
|Occasionally or continuously huge data ingestion rate| | | √ | TDengine's performance far exceeds comparable products; it can continuously handle large volumes of input data on the same hardware, and ships a performance-evaluation tool that is easy to run in the user's own environment.|
|Huge number of data sources| | | √ | TDengine's design includes optimizations specifically for large numbers of data sources, covering both writes and queries, and is especially suited to efficiently handling massive numbers (tens of millions or more) of data sources.|
### System Architecture Requirements
|System architecture requirements|Not applicable|Possibly applicable|Very applicable|Notes|
|---|---|---|---|---|
|A simple, reliable system architecture is required| | | √ | TDengine's architecture is very simple and reliable, with a built-in message queue, cache, stream computing, monitoring, and more; no additional third-party products need to be integrated.|
|Fault tolerance and high reliability are required| | | √ | TDengine's cluster feature automatically provides fault tolerance, disaster recovery, and other high-reliability capabilities.|
|Standards compliance is required| | | √ | TDengine provides its main functionality through the standard SQL language and conforms to standards.|
### System Function Requirements
|System function requirements|Not applicable|Possibly applicable|Very applicable|Notes|
|---|---|---|---|---|
|A complete set of built-in data processing algorithms is required| | √ | | TDengine implements general data processing algorithms, but does not yet cover every requirement of every industry, so special types of processing still need to be handled at the application layer.|
|Heavy cross-table query processing is needed| | √ | | This kind of processing is better handled by a relational database system, or by combining TDengine with a relational database system.|
### System Performance Requirements
|System performance requirements|Not applicable|Possibly applicable|Very applicable|Notes|
|---|---|---|---|---|
|High overall processing capacity is required| | | √ | TDengine's cluster feature easily lets multiple servers cooperate to raise processing capacity.|
|High-speed data processing is required| | | √ | TDengine's storage and processing design, optimized specifically for IoT, generally gives the system a processing speed several times that of comparable products.|
|Fast processing of fine-grained data is required| | | √ | Here TDengine's performance fully matches relational and NoSQL data processing systems.|
### System Maintenance Requirements
|System maintenance requirements|Not applicable|Possibly applicable|Very applicable|Notes|
|---|---|---|---|---|
|Reliable operation is required| | | √ | TDengine's architecture is very stable and reliable, daily maintenance is simple and convenient, and demands on maintainers are clear and minimal, eliminating human error and accidents to the greatest extent possible.|
|A controllable operations learning cost is required| | | √ |Same as above.|
|A large talent pool in the market is required| √ | | | As a new-generation product, experienced TDengine engineers are still limited in the market. But the learning cost is low, and as the vendor we also provide operations training and support services.|


@ -191,7 +191,7 @@ cdf548465318
(1) Port mapping (-p) maps a network port opened inside the container to a specified port on the host. Mounting a local directory (-v) synchronizes data between the host and the container, preventing data loss when the container is deleted.
```bash
$ docker run -d -v /etc/taos:/etc/taos -p 6041:6041 tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql


@ -13,7 +13,7 @@ TDengine uses a relational data model and requires creating databases and tables. Therefore, for a
```mysql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
```
The above statement creates a database named power; its data will be kept for 365 days (data older than 365 days is automatically deleted), with one data file every 10 days, 4 memory blocks, and updates to data allowed. For detailed syntax and parameters, see the [TAOS SQL data management](https://www.taosdata.com/cn/documentation/taos-sql#management) section.
The above statement creates a database named power; its data will be kept for 365 days (data older than 365 days is automatically deleted), with one data file every 10 days, 6 memory blocks, and updates to data allowed. For detailed syntax and parameters, see the [TAOS SQL data management](https://www.taosdata.com/cn/documentation/taos-sql#management) section.
After creating a database, you need to use the SQL command USE to switch to it as the current database, for example:
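```mysql
USE power;
```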


@ -178,15 +178,9 @@ Connection conn = DriverManager.getConnection(jdbcUrl);
The above example uses the JDBC-JNI driver to establish a connection to hostname taosdemo.com, port 6030 (the TDengine default port), and database test. The URL specifies the user name (user) as root and the password (password) as taosdata.
**Note**: when using the JDBC-JNI driver (the taos-jdbcdriver package), the corresponding native library of the system is required.
**Note**: when using the JDBC-JNI driver (the taos-jdbcdriver package), the corresponding native library of the system is required (libtaos.so on Linux, taos.dll on Windows).
* libtaos.so
After TDengine is successfully installed on a Linux system, the dependent native library libtaos.so is automatically copied to /usr/lib/libtaos.so, which is on the Linux automatic scan path and does not need to be specified separately.
* taos.dll
After the client is installed on a Windows system, the taos.dll file that the driver package depends on is automatically copied to the default search path C:/Windows/System32, and likewise does not need to be specified separately.
> When developing on Windows, install the corresponding TDengine [Windows client][14]; after TDengine is installed on a Linux server, the client is installed by default, or you can install the [Linux client][15] separately to connect to a remote TDengine Server.
> When developing on Windows, install the corresponding TDengine [Windows client](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client); after TDengine is installed on a Linux server, the client is installed by default, or you can install the [Linux client](https://www.taosdata.com/cn/getting-started/#%E5%AE%A2%E6%88%B7%E7%AB%AF) separately to connect to a remote TDengine Server.
For the usage of JDBC-JNI, see the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).
@ -194,9 +188,9 @@ The JDBC URL specification format of TDengine is:
`jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`
The configuration parameters in the URL are as follows:
* user: user name for logging in to TDengine; default value root.
* password: user login password; default value taosdata.
* cfgdir: directory of the client configuration file; default value /etc/taos on Linux OS, C:/TDengine/cfg on Windows OS.
* user: user name for logging in to TDengine; default value 'root'.
* password: user login password; default value 'taosdata'.
* cfgdir: directory of the client configuration file; default value `/etc/taos` on Linux OS, `C:/TDengine/cfg` on Windows OS.
* charset: character set used by the client; the default is the system character set.
* locale: client locale; the default is the current system locale.
* timezone: time zone used by the client; the default is the system's current time zone.
@ -222,9 +216,9 @@ public Connection getConn() throws Exception{
The above example establishes a connection to hostname taosdemo.com, port 6030, and database test. The commented line is the method when using JDBC-RESTful. The connection specifies the user name (user) root and password (password) taosdata in the URL, and specifies the character set, locale, time zone, and other information in connProps.
The configuration parameters in properties are as follows:
* TSDBDriver.PROPERTY_KEY_USER: user name for logging in to TDengine; default value root.
* TSDBDriver.PROPERTY_KEY_PASSWORD: user login password; default value taosdata.
* TSDBDriver.PROPERTY_KEY_CONFIG_DIR: directory of the client configuration file; default value /etc/taos on Linux OS, C:/TDengine/cfg on Windows OS.
* TSDBDriver.PROPERTY_KEY_USER: user name for logging in to TDengine; default value 'root'.
* TSDBDriver.PROPERTY_KEY_PASSWORD: user login password; default value 'taosdata'.
* TSDBDriver.PROPERTY_KEY_CONFIG_DIR: directory of the client configuration file; default value `/etc/taos` on Linux OS, `C:/TDengine/cfg` on Windows OS.
* TSDBDriver.PROPERTY_KEY_CHARSET: character set used by the client; the default is the system character set.
* TSDBDriver.PROPERTY_KEY_LOCALE: client locale; the default is the current system locale.
* TSDBDriver.PROPERTY_KEY_TIME_ZONE: time zone used by the client; the default is the system's current time zone.


@ -315,6 +315,10 @@ TDengine's asynchronous APIs all use a non-blocking calling mode. Applications can use multiple threads
1. Call `taos_stmt_init` to create a parameter binding object;
2. Call `taos_stmt_prepare` to parse the INSERT statement;
3. If the INSERT statement reserves a placeholder for the table name but not for TAGS, call `taos_stmt_set_tbname` to set the table name;
* Since version 2.1.6.0, for the scenario of writing into many sub-tables of one super table at the same time (with little data per sub-table, possibly just one row), a dedicated optimized interface `taos_stmt_set_sub_tbname` is provided. It saves overall processing time by loading meta data in advance and avoiding repeated parsing of the SQL syntax (note that this optimization does not support the auto-create-table syntax). Use it as follows (see the C sketch after this list):
  1. First call `taos_load_table_info` to load the table meta of all required super tables and sub-tables;
  2. Then call `taos_stmt_set_tbname` for the first sub-table of a super table to set its table name;
  3. Use `taos_stmt_set_sub_tbname` to set the table names of subsequent sub-tables.
4. If the INSERT statement reserves placeholders for both the table name and TAGS (for example, when it uses auto table creation), call `taos_stmt_set_tbname_tags` to set the table name and TAG values;
5. Call `taos_stmt_bind_param_batch` to set VALUES column by column, or `taos_stmt_bind_param` to set VALUES row by row;
6. Call `taos_stmt_add_batch` to add the currently bound parameters to the batch;
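A minimal C sketch of this call sequence with the multi-sub-table optimization. The super table `weather`, sub-tables `d1` and `d2`, and the column layout are illustrative assumptions, and error handling is omitted:
```c
#include <taos.h>
#include <stdint.h>
#include <string.h>

// Sketch: write one row into each of two sub-tables (d1, d2) of an assumed
// super table created as:
//   CREATE TABLE weather (ts TIMESTAMP, temperature FLOAT) TAGS (city BINARY(20));
// Error checking is omitted for brevity.
void insert_into_subtables(TAOS *taos) {
  // Step 1: pre-load the table meta of the super table and all target sub-tables.
  taos_load_table_info(taos, "weather,d1,d2");

  TAOS_STMT *stmt = taos_stmt_init(taos);
  const char *sql = "INSERT INTO ? VALUES (?, ?)";
  taos_stmt_prepare(stmt, sql, (unsigned long)strlen(sql));

  int64_t ts = 1629700000000;   // illustrative timestamp in milliseconds
  float temperature = 23.5f;

  TAOS_BIND params[2];
  memset(params, 0, sizeof(params));
  params[0].buffer_type   = TSDB_DATA_TYPE_TIMESTAMP;
  params[0].buffer        = &ts;
  params[0].buffer_length = sizeof(ts);
  params[0].length        = &params[0].buffer_length;
  params[1].buffer_type   = TSDB_DATA_TYPE_FLOAT;
  params[1].buffer        = &temperature;
  params[1].buffer_length = sizeof(temperature);
  params[1].length        = &params[1].buffer_length;

  // Step 2: the first sub-table is named with taos_stmt_set_tbname.
  taos_stmt_set_tbname(stmt, "d1");
  taos_stmt_bind_param(stmt, params);   // bind one row of VALUES
  taos_stmt_add_batch(stmt);

  // Step 3: subsequent sub-tables are named with taos_stmt_set_sub_tbname.
  ts += 1000;
  temperature = 24.0f;
  taos_stmt_set_sub_tbname(stmt, "d2");
  taos_stmt_bind_param(stmt, params);
  taos_stmt_add_batch(stmt);

  taos_stmt_execute(stmt);   // send the whole batch in one request
  taos_stmt_close(stmt);
}
```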
@ -358,6 +362,12 @@ typedef struct TAOS_BIND {
(New in version 2.1.1.0; only supported for replacing parameter values in INSERT statements)
When the table name in the SQL statement uses a `?` placeholder, this function can be used to bind a concrete table name.
- `int taos_stmt_set_sub_tbname(TAOS_STMT* stmt, const char* name)`
(New in version 2.1.6.0; only supported for replacing, in an INSERT statement, the table names of the 2nd through n-th write-target sub-tables belonging to the same super table)
When the table name in the SQL statement uses a `?` placeholder, and the batch of tables to write to are sub-tables of the same super table, this function can be used to bind the table names of all sub-tables except the first.
*Note:* before use, the client must first call `taos_load_table_info` to load the table meta of all required super tables and sub-tables, then call `taos_stmt_set_tbname` for the first sub-table of a super table, and `taos_stmt_set_sub_tbname` for subsequent sub-tables.
- `int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags)`
(New in version 2.1.2.0; only supported for replacing parameter values in INSERT statements)


@ -217,7 +217,7 @@ taosd -C
| 99 | queryBufferSize | | **S** | MB | Amount of memory reserved for all concurrent queries. | | | Can be calculated as the expected maximum number of concurrent queries multiplied by the number of tables, times 170. (In versions before 2.0.15 the unit of this parameter was bytes.) |
| 100 | ratioOfQueryCores | | **S** | | Sets the maximum number of query threads. | | | The minimum value 0 means only one query thread; the maximum value 2 means creating at most twice as many query threads as CPU cores. The default 1 means at most as many query threads as CPU cores. The value may be fractional, i.e. 0.5 means creating at most half as many query threads as CPU cores. |
| 101 | update | | **S** | | Allow updating existing data rows | 0 \| 1 | 0 | Since version 2.0.8.0 |
| 102 | cacheLast | | **S** | | Whether to cache the most recent data of sub-tables in memory | 0: off; 1: cache each sub-table's most recent row; 2: cache the most recent non-NULL value of each column of each sub-table; 3: cache both the most recent row and columns. | 0 | Before version 2.1.2.0 and before 2.0.20.7, this parameter was not supported in the taos.cfg file. |
| 102 | cacheLast | | **S** | | Whether to cache the most recent data of sub-tables in memory | 0: off; 1: cache each sub-table's most recent row; 2: cache the most recent non-NULL value of each column of each sub-table; 3: cache both the most recent row and columns. Since version 2.1.2.0 this parameter supports the value range 0-3; before that, only [0, 1] was allowed. | 0 | Before version 2.1.2.0 and before 2.0.20.7, this parameter was not supported in the taos.cfg file. |
| 103 | numOfCommitThreads | YES | **S** | | Sets the maximum number of write threads | | | |
| 104 | maxWildCardsLength | | **C** | bytes | Sets the maximum allowed length of the wildcard string for the LIKE operator | 0-16384 | 100 | New in version 2.1.6.1. |


@ -1285,6 +1285,19 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
Note: (This function is new since version 2.1.3.0.) The number of output rows is the total number of rows in the range minus one, and the first row has no output. DERIVATIVE can be used on a super table when GROUP BY carves out individual time lines (that is, GROUP BY tbname).
Example:
```mysql
taos> select derivative(current, 10m, 0) from t1;
ts | derivative(current, 10m, 0) |
========================================================
2021-08-20 10:11:22.790 | 0.500000000 |
2021-08-20 11:11:22.791 | 0.166666620 |
2021-08-20 12:11:22.791 | 0.000000000 |
2021-08-20 13:11:22.792 | 0.166666620 |
2021-08-20 14:11:22.792 | -0.666666667 |
Query OK, 5 row(s) in set (0.004883s)
```
- **SPREAD**
```mysql
SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];


@ -71,7 +71,7 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
## [Connector](/connector)
- [C/C++ Connector](/connector#c-cpp): primary method to connect to TDengine server through libtaos client library
- [Java Connector(JDBC)]: driver for connecting to the server from Java applications using the JDBC API
- [Java Connector(JDBC)](/connector/java): driver for connecting to the server from Java applications using the JDBC API
- [Python Connector](/connector#python): driver for connecting to TDengine server from Python applications
- [RESTful Connector](/connector#restful): a simple way to interact with TDengine via HTTP
- [Go Connector](/connector#go): driver for connecting to TDengine server from Go applications


@ -0,0 +1,517 @@
# Java connector
## Introduction
The taos-jdbcdriver is implemented in two forms: JDBC-JNI and JDBC-RESTful (supported from taos-jdbcdriver-2.0.18). JDBC-JNI is implemented by calling the local methods of libtaos.so (or taos.dll) on the client, while JDBC-RESTful encapsulates the RESTful interface implementation internally.
![tdengine-connector](page://images/tdengine-jdbc-connector.png)
The figure above shows the three ways Java applications can access TDengine:
* JDBC-JNI: The Java application uses the JDBC-JNI API on physical node 1 (pnode1) and directly calls the client API (libtaos.so or taos.dll) to send write or query requests to the taosd instance on physical node 2 (pnode2).
* RESTful: The Java application sends SQL to the RESTful connector on physical node 2 (pnode2), which then calls the client API (libtaos.so).
* JDBC-RESTful: The Java application uses the JDBC-RESTful API to encapsulate SQL into a RESTful request and send it to the RESTful connector of physical node 2.
In terms of implementation, the JDBC driver of TDengine is as consistent as possible with the behavior of relational database drivers. However, because TDengine and relational databases differ in the objects they serve and in technical characteristics, there are some differences between taos-jdbcdriver and traditional relational JDBC drivers. Note the following:
* Deleting a record is not supported in TDengine.
* Transactions are not supported in TDengine.
### Difference between JDBC-JNI and JDBC-RESTful
<table>
<tr align="center"><th>Difference</th><th>JDBC-JNI</th><th>JDBC-RESTful</th></tr>
<tr align="center">
<td>Supported OS</td>
<td>Linux, Windows</td>
<td>all platforms</td>
</tr>
<tr align="center">
<td>Whether to install the Client</td>
<td>required</td>
<td>not required</td>
</tr>
<tr align="center">
<td>Whether to upgrade the client after the server is upgraded</td>
<td>required</td>
<td>not required</td>
</tr>
<tr align="center">
<td>Write performance</td>
<td colspan="2">JDBC-RESTful is 50% to 90% of JDBC-JNI</td>
</tr>
<tr align="center">
<td>Read performance</td>
<td colspan="2">JDBC-RESTful is no different from JDBC-JNI</td>
</tr>
</table>
**Note**: RESTful interfaces are stateless. Therefore, when using JDBC-RESTful, you should prefix all table names and super-table names in SQL with the database name, for example:
```sql
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(now, 24.6);
```
## JDBC driver version and supported TDengine and JDK versions
| taos-jdbcdriver | TDengine | JDK |
| -------------------- | ----------------- | -------- |
| 2.0.33 - 2.0.34 | 2.0.3.0 and above | 1.8.x |
| 2.0.31 - 2.0.32 | 2.1.3.0 and above | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.x | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.x | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x and above | 1.8.x |
| 1.0.2 | 1.6.1.x and above | 1.8.x |
| 1.0.1 | 1.6.1.x and above | 1.8.x |
## DataType in TDengine and Java connector
TDengine supports the following data types, which map to Java data types as follows:
| TDengine DataType | Java DataType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte[] |
| NCHAR | java.lang.String |
## Install Java connector
### Runtime Requirements
To run TDengine's Java connector, the following requirements shall be met:
1. A Linux or Windows System
2. Java Runtime Environment 1.8 or later
3. TDengine client (required for JDBC-JNI, not required for JDBC-restful)
**Note**:
* After the TDengine client is successfully installed on Linux, the libtaos.so file is automatically copied to /usr/lib/libtaos.so, which is included in the Linux automatic scan path and does not need to be specified separately.
* After the TDengine client is installed on Windows, the taos.dll file that the driver package depends on is automatically copied to the default search path C:/Windows/System32. You do not need to specify it separately.
### Obtain JDBC driver by maven
For Java developers, TDengine provides `taos-jdbcdriver`, implemented against the JDBC (3.0) API. Users can find and download it from the [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver). Add the following dependency to the pom.xml of your Maven project:
```xml
<dependencies>
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.34</version>
</dependency>
</dependencies>
```
### Obtain JDBC driver by compiling source code
You can download the TDengine source code and compile the latest version of the JDBC Connector.
```shell
git clone https://github.com/taosdata/TDengine.git
cd TDengine/src/connector/jdbc
mvn clean package -Dmaven.test.skip=true
```
A taos-jdbcdriver-2.0.xx-dist.jar will be generated in the target directory.
## Usage of the Java connector
### Establishing a Connection
#### Establishing a connection with URL
Establish the connection by specifying the URL, as shown below:
```java
String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
Connection conn = DriverManager.getConnection(jdbcUrl);
```
In the example above, the JDBC-RESTful driver is used to establish a connection to hostname 'taosdemo.com', port 6041, and database 'test'. The URL specifies the user name as 'root' and the password as 'taosdata'.
JDBC-RESTful does not depend on the native library. Compared with JDBC-JNI, you only need to:
* set the driver class to "com.taosdata.jdbc.rs.RestfulDriver"
* start the JDBC URL with "jdbc:TAOS-RS://"
* use port 6041 as the connection port
For better write and query performance, Java applications can use the JDBC-JNI driver, as shown below:
```java
String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
Connection conn = DriverManager.getConnection(jdbcUrl);
```
In the example above, the JDBC-JNI driver is used to establish a connection to hostname 'taosdemo.com', port 6030 (TDengine's default port), and database 'test'. The URL specifies the user name as 'root' and the password as 'taosdata'.
<!-- You can also see the JDBC-JNI video tutorial: [JDBC connector of TDengine](https://www.taosdata.com/blog/2020/11/11/1955.html) -->
The format of JDBC URL is:
```url
jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]
```
The configuration parameters in the URL are as follows:
* user: user name for logging in to the TDengine. The default value is 'root'.
* password: the user login password. The default value is 'taosdata'.
* cfgdir: directory of the client configuration file. It is valid only for JDBC-JNI. The default value is `/etc/taos` on Linux and `C:/TDengine/cfg` on Windows.
* charset: character set used by the client. The default value is the system character set.
* locale: client locale. The default value is the current system locale.
* timezone: timezone used by the client. The default value is the current timezone of the system.
* batchfetch: only valid for JDBC-JNI. True if batch ResultSet fetching is enabled; false if row-by-row ResultSet fetching is enabled. Default value is false.
* timestampFormat: only valid for JDBC-RESTful. 'TIMESTAMP' if you want to get a long value in a ResultSet; 'UTC' if you want to get a UTC date-time string in a ResultSet; 'STRING' if you want to get a local date-time string in a ResultSet. Default value is 'STRING'.
* batchErrorIgnore: true if you want to continue executing the rest of the SQL when an error occurs during the executeBatch method of Statement; false if the remaining SQL statements should not be executed. Default value is false.
#### Establishing a connection with URL and Properties
In addition to establishing the connection with the specified URL, you can also use Properties to specify parameters when setting up the connection, as shown below:
```java
public Connection getConn() throws Exception{
String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
// String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
Properties connProps = new Properties();
connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
return conn;
}
```
In the example above, JDBC-JNI is used to establish a connection to hostname 'taosdemo.com', port 6030, and database 'test'. The commented line is the method when using JDBC-RESTful. The connection specifies the user name as 'root' and the password as 'taosdata' in the URL, and specifies the character set, locale, time zone, and so on in connProps.
The configuration parameters in properties are as follows:
* TSDBDriver.PROPERTY_KEY_USER: user name for logging in to the TDengine. The default value is 'root'.
* TSDBDriver.PROPERTY_KEY_PASSWORD: the user login password. The default value is 'taosdata'.
* TSDBDriver.PROPERTY_KEY_CONFIG_DIR: directory of the client configuration file. It is valid only for JDBC-JNI. The default value is `/etc/taos` on Linux and `C:/TDengine/cfg` on Windows.
* TSDBDriver.PROPERTY_KEY_CHARSET: character set used by the client. The default value is the system character set.
* TSDBDriver.PROPERTY_KEY_LOCALE: client locale. The default value is the current system locale.
* TSDBDriver.PROPERTY_KEY_TIME_ZONE: timezone used by the client. The default value is the current timezone of the system.
* TSDBDriver.PROPERTY_KEY_BATCH_LOAD: only valid for JDBC-JNI. True if batch ResultSet fetching is enabled; false if row-by-row ResultSet fetching is enabled. Default value is false.
* TSDBDriver.PROPERTY_KEY_TIMESTAMP_FORMAT: only valid for JDBC-RESTful. 'TIMESTAMP' if you want to get a long value in a ResultSet; 'UTC' if you want to get a UTC date-time string in a ResultSet; 'STRING' if you want to get a local date-time string in a ResultSet. Default value is 'STRING'.
* TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true if you want to continue executing the rest of the SQL when an error occurs during the executeBatch method of Statement; false if the remaining SQL statements should not be executed. Default value is false.
#### Establishing a connection with configuration file
When JDBC-JNI is used to connect to a TDengine cluster, you can specify the firstEp and secondEp parameters of the cluster in the client configuration file, as follows:
1. The hostname and port are not specified in Java applications
```java
public Connection getConn() throws Exception{
String jdbcUrl = "jdbc:TAOS://:/test?user=root&password=taosdata";
Properties connProps = new Properties();
connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
return conn;
}
```
2. Specify firstEp and secondEp in the configuration file
```txt
# first fully qualified domain name (FQDN) for TDengine system
firstEp cluster_node1:6030
# second fully qualified domain name (FQDN) for TDengine system, for cluster only
secondEp cluster_node2:6030
```
In the above example, the JDBC driver uses the client configuration file to establish a connection to hostname 'cluster_node1', port 6030, and database 'test'. When the firstEp node in the cluster fails, JDBC tries to connect to the cluster using secondEp. As long as either the firstEp or the secondEp node is valid, the connection to the cluster can be established.
**Note**: In this case, the configuration file belongs to the TDengine client running inside the Java application. The default file path is '/etc/taos/taos.cfg' on Linux and 'C:/TDengine/cfg/taos.cfg' on Windows.
#### Priority of the parameters
If the same parameter is set in the URL, in Properties, and in the client configuration file, the priorities of the values, in descending order, are as follows:
1. URL parameters
2. Properties
3. Client configuration file in taos.cfg
For example, if you specify password as 'taosdata' in the URL and password as 'taosdemo' in the Properties, JDBC will establish a connection using the password in the URL.
For details, see [client configuration](https://www.taosdata.com/en/documentation/administrator#client).
### Create database and table
```java
Statement stmt = conn.createStatement();
// create database
stmt.executeUpdate("create database if not exists db");
// use database
stmt.executeUpdate("use db");
// create table
stmt.executeUpdate("create table if not exists tb (ts timestamp, temperature int, humidity float)");
```
### Insert
```java
// insert data
int affectedRows = stmt.executeUpdate("insert into tb values(now, 23, 10.3) (now + 1s, 20, 9.3)");
System.out.println("insert " + affectedRows + " rows.");
```
**Note**: 'now' is an internal system function; it defaults to the current time of the machine where the client resides. 'now + 1s' means the client's current time plus one second. The available time units are: a (millisecond), s (second), m (minute), h (hour), d (day), w (week), n (month), and y (year).
### Query
```java
// query data
ResultSet resultSet = stmt.executeQuery("select * from tb");
Timestamp ts = null;
int temperature = 0;
float humidity = 0;
while(resultSet.next()){
ts = resultSet.getTimestamp(1);
temperature = resultSet.getInt(2);
humidity = resultSet.getFloat("humidity");
System.out.printf("%s, %d, %s\n", ts, temperature, humidity);
}
```
**Note**: The query is consistent with the operation of the relational database, and the index in ResultSet starts from 1.
### Handle exceptions
```java
try (Statement statement = connection.createStatement()) {
// executeQuery
ResultSet resultSet = statement.executeQuery(sql);
// print result
printResult(resultSet);
} catch (SQLException e) {
System.out.println("ERROR Message: " + e.getMessage());
System.out.println("ERROR Code: " + e.getErrorCode());
e.printStackTrace();
}
```
The Java connector may report three types of error codes: JDBC Driver (error codes ranging from 0x2301 to 0x2350), JNI method (error codes ranging from 0x2351 to 0x2400), and TDengine Error. For details about the error code, see:
- https://github.com/taosdata/TDengine/blob/develop/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java
- https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h
### Write data through parameter binding
Since version 2.1.2.0, TDengine's JDBC-JNI implementation has significantly improved its support for parameter binding in data write (INSERT) scenarios. Data written this way avoids SQL parsing and significantly improves write performance. (**Note**: parameter binding is not supported in JDBC-RESTful.)
```java
Statement stmt = conn.createStatement();
Random r = new Random();
TSDBPreparedStatement s = (TSDBPreparedStatement) conn.prepareStatement("insert into ? using weather_test tags (?, ?) (ts, c1, c2) values(?, ?, ?)");
s.setTableName("w1");
s.setTagInt(0, r.nextInt(10));
s.setTagString(1, "Beijing");
int numOfRows = 10;
ArrayList<Long> ts = new ArrayList<>();
for (int i = 0; i < numOfRows; i++){
ts.add(System.currentTimeMillis() + i);
}
s.setTimestamp(0, ts);
ArrayList<Integer> s1 = new ArrayList<>();
for (int i = 0; i < numOfRows; i++){
s1.add(r.nextInt(100));
}
s.setInt(1, s1);
ArrayList<String> s2 = new ArrayList<>();
for (int i = 0; i < numOfRows; i++){
s2.add("test" + r.nextInt(100));
}
s.setString(2, s2, 10);
s.columnDataAddBatch();
s.columnDataExecuteBatch();
s.columnDataClearBatch();
s.columnDataCloseBatch();
```
The methods used to set tags are:
```java
public void setTagNull(int index, int type)
public void setTagBoolean(int index, boolean value)
public void setTagInt(int index, int value)
public void setTagByte(int index, byte value)
public void setTagShort(int index, short value)
public void setTagLong(int index, long value)
public void setTagTimestamp(int index, long value)
public void setTagFloat(int index, float value)
public void setTagDouble(int index, double value)
public void setTagString(int index, String value)
public void setTagNString(int index, String value)
```
The methods used to set columns are:
```java
public void setInt(int columnIndex, ArrayList<Integer> list) throws SQLException
public void setFloat(int columnIndex, ArrayList<Float> list) throws SQLException
public void setTimestamp(int columnIndex, ArrayList<Long> list) throws SQLException
public void setLong(int columnIndex, ArrayList<Long> list) throws SQLException
public void setDouble(int columnIndex, ArrayList<Double> list) throws SQLException
public void setBoolean(int columnIndex, ArrayList<Boolean> list) throws SQLException
public void setByte(int columnIndex, ArrayList<Byte> list) throws SQLException
public void setShort(int columnIndex, ArrayList<Short> list) throws SQLException
public void setString(int columnIndex, ArrayList<String> list, int size) throws SQLException
public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
```
**Note**: Both setString and setNString require the user to declare the column width of the corresponding column in the table definition in the size parameter.
### Data Subscription
#### Subscribe
```java
TSDBSubscribe sub = ((TSDBConnection)conn).subscribe("topic", "select * from meters", false);
```
parameters:
* topic: the unique topic name of the subscription.
* sql: a select statement.
* restart: true to restart the subscription if it already exists; false to continue the previous subscription.
In the example above, a subscription named 'topic' is created using the SQL statement 'select * from meters'. If the subscription already exists, it continues from the previous query progress rather than consuming all the data from scratch.
#### Consume
```java
int total = 0;
while(true) {
TSDBResultSet rs = sub.consume();
int count = 0;
while(rs.next()) {
count++;
}
total += count;
System.out.printf("%d rows consumed, total %d\n", count, total);
Thread.sleep(1000);
}
```
The consume method returns a result set containing all new data received since the last consume. Be sure to call consume at a reasonable interval (like the Thread.sleep(1000) in the example); otherwise you will put unnecessary stress on the server.
#### Close
```java
sub.close(true);
// release resources
resultSet.close();
stmt.close();
conn.close();
```
The close method closes a subscription. If the parameter is true, the subscription progress information is reserved, and a subscription with the same name can be created later to continue consuming data. If false, the subscription progress is not retained.
**Note**: the connection must be closed; otherwise, a connection leak may occur.
## Connection Pool
### HikariCP example
```java
public static void main(String[] args) throws SQLException {
HikariConfig config = new HikariConfig();
// jdbc properties
config.setJdbcUrl("jdbc:TAOS://127.0.0.1:6030/log");
config.setUsername("root");
config.setPassword("taosdata");
// connection pool configurations
config.setMinimumIdle(10); //minimum number of idle connections
config.setMaximumPoolSize(10); //maximum number of connections in the pool
config.setConnectionTimeout(30000); //maximum wait time in milliseconds for getting a connection from the pool
config.setMaxLifetime(0); // maximum lifetime of each connection
config.setIdleTimeout(0); // maximum idle time before an idle connection is recycled
config.setConnectionTestQuery("select server_status()"); //validation query
HikariDataSource ds = new HikariDataSource(config); //create datasource
Connection connection = ds.getConnection(); // get connection
Statement statement = connection.createStatement(); // get statement
//query or insert
// ...
connection.close(); // put back to connection pool
}
```
### Druid example
```java
public static void main(String[] args) throws Exception {
DruidDataSource dataSource = new DruidDataSource();
// jdbc properties
dataSource.setDriverClassName("com.taosdata.jdbc.TSDBDriver");
dataSource.setUrl(url);
dataSource.setUsername("root");
dataSource.setPassword("taosdata");
// pool configurations
dataSource.setInitialSize(10);
dataSource.setMinIdle(10);
dataSource.setMaxActive(10);
dataSource.setMaxWait(30000);
dataSource.setValidationQuery("select server_status()");
Connection connection = dataSource.getConnection(); // get connection
Statement statement = connection.createStatement(); // get statement
//query or insert
// ...
connection.close(); // put back to connection pool
}
```
**Note**
As of TDengine V1.6.4.1, the function select server_status() is supported specifically for heartbeat detection, so it is recommended to use select server_status() as the validation query when using connection pools.
select server_status() returns 1 on success, as shown below.
```sql
taos> select server_status();
server_status()|
================
1 |
Query OK, 1 row(s) in set (0.000141s)
```
## Integrated with framework
- Please refer to [SpringJdbcTemplate](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/SpringJdbcTemplate) if using taos-jdbcdriver in Spring JdbcTemplate.
- Please refer to [springbootdemo](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/springbootdemo) if using taos-jdbcdriver in Spring Boot.
## FAQ
- java.lang.UnsatisfiedLinkError: no taos in java.library.path
**Cause**: The application cannot find the native library *taos*.
**Answer**: On Windows, copy `C:\TDengine\driver\taos.dll` to `C:\Windows\System32\`; on Linux, create a soft link via `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so`.
- java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
**Cause**: Currently TDengine only supports 64-bit JDK.
**Answer**: Reinstall a 64-bit JDK.
- For other questions, please refer to [Issues](https://github.com/taosdata/TDengine/issues)


@ -284,3 +284,5 @@ keepColumnName 1
# 0 no query allowed, queries are disabled
# queryBufferSize -1
# percent of redundant data in tsdb meta that triggers meta compaction, 0 means do not compact
# tsdbMetaCompactRatio 0


@ -214,6 +214,7 @@ SExprInfo* tscExprUpdate(SQueryInfo* pQueryInfo, int32_t index, int16_t function
int16_t size);
size_t tscNumOfExprs(SQueryInfo* pQueryInfo);
int32_t tscExprTopBottomIndex(SQueryInfo* pQueryInfo);
SExprInfo *tscExprGet(SQueryInfo* pQueryInfo, int32_t index);
int32_t tscExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deepcopy);
int32_t tscExprCopyAll(SArray* dst, const SArray* src, bool deepcopy);


@ -643,7 +643,7 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
for(int32_t j = 0; j < numOfExpr; ++j) {
pCtx[j].pOutput += (pCtx[j].outputBytes * numOfRows);
if (pCtx[j].functionId == TSDB_FUNC_TOP || pCtx[j].functionId == TSDB_FUNC_BOTTOM) {
pCtx[j].ptsOutputBuf = pCtx[0].pOutput;
if(j > 0)pCtx[j].ptsOutputBuf = pCtx[j - 1].pOutput;
}
}


@ -2590,13 +2590,12 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
// set the first column ts for diff query
if (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
colIndex += 1;
SColumnIndex indexTS = {.tableIndex = index.tableIndex, .columnIndex = 0};
SExprInfo* pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &indexTS, TSDB_DATA_TYPE_TIMESTAMP,
TSDB_KEYSIZE, getNewResColId(pCmd), TSDB_KEYSIZE, false);
SColumnList ids = createColumnList(1, 0, 0);
insertResultField(pQueryInfo, 0, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].name, pExpr);
insertResultField(pQueryInfo, colIndex, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].name, pExpr);
}
SExprInfo* pExpr = tscExprAppend(pQueryInfo, functionId, &index, resultType, resultSize, getNewResColId(pCmd), intermediateResSize, false);
@ -2869,7 +2868,7 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
const int32_t TS_COLUMN_INDEX = PRIMARYKEY_TIMESTAMP_COL_INDEX;
SColumnList ids = createColumnList(1, index.tableIndex, TS_COLUMN_INDEX);
insertResultField(pQueryInfo, TS_COLUMN_INDEX, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP,
insertResultField(pQueryInfo, colIndex, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP,
aAggs[TSDB_FUNC_TS].name, pExpr);
colIndex += 1; // the first column is ts
@ -5864,10 +5863,12 @@ int32_t validateOrderbyNode(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SSqlNode* pSq
pQueryInfo->order.orderColId = pSchema[index.columnIndex].colId;
} else if (isTopBottomQuery(pQueryInfo)) {
/* order of top/bottom query in interval is not valid */
SExprInfo* pExpr = tscExprGet(pQueryInfo, 0);
int32_t pos = tscExprTopBottomIndex(pQueryInfo);
assert(pos > 0);
SExprInfo* pExpr = tscExprGet(pQueryInfo, pos - 1);
assert(pExpr->base.functionId == TSDB_FUNC_TS);
pExpr = tscExprGet(pQueryInfo, 1);
pExpr = tscExprGet(pQueryInfo, pos);
if (pExpr->base.colInfo.colIndex != index.columnIndex && index.columnIndex != PRIMARYKEY_TIMESTAMP_COL_INDEX) {
return invalidOperationMsg(pMsgBuf, msg5);
}
@ -5959,10 +5960,12 @@ int32_t validateOrderbyNode(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SSqlNode* pSq
}
} else {
/* order of top/bottom query in interval is not valid */
SExprInfo* pExpr = tscExprGet(pQueryInfo, 0);
int32_t pos = tscExprTopBottomIndex(pQueryInfo);
assert(pos > 0);
SExprInfo* pExpr = tscExprGet(pQueryInfo, pos - 1);
assert(pExpr->base.functionId == TSDB_FUNC_TS);
pExpr = tscExprGet(pQueryInfo, 1);
pExpr = tscExprGet(pQueryInfo, pos);
if (pExpr->base.colInfo.colIndex != index.columnIndex && index.columnIndex != PRIMARYKEY_TIMESTAMP_COL_INDEX) {
return invalidOperationMsg(pMsgBuf, msg5);
}


@ -2678,7 +2678,7 @@ int tscProcessQueryRsp(SSqlObj *pSql) {
return 0;
}
static void decompressQueryColData(SSqlRes *pRes, SQueryInfo* pQueryInfo, char **data, int8_t compressed, int compLen) {
static void decompressQueryColData(SSqlObj *pSql, SSqlRes *pRes, SQueryInfo* pQueryInfo, char **data, int8_t compressed, int32_t compLen) {
int32_t decompLen = 0;
int32_t numOfCols = pQueryInfo->fieldsInfo.numOfOutput;
int32_t *compSizes;
@ -2715,6 +2715,9 @@ static void decompressQueryColData(SSqlRes *pRes, SQueryInfo* pQueryInfo, char *
pData = *data + compLen + numOfCols * sizeof(int32_t);
}
tscDebug("0x%"PRIx64" decompress col data, compressed size:%d, decompressed size:%d",
pSql->self, (int32_t)(compLen + numOfCols * sizeof(int32_t)), decompLen);
int32_t tailLen = pRes->rspLen - sizeof(SRetrieveTableRsp) - decompLen;
memmove(*data + decompLen, pData, tailLen);
memmove(*data, outputBuf, decompLen);
@ -2749,7 +2752,7 @@ int tscProcessRetrieveRspFromNode(SSqlObj *pSql) {
//Decompress col data if compressed from server
if (pRetrieve->compressed) {
int32_t compLen = htonl(pRetrieve->compLen);
decompressQueryColData(pRes, pQueryInfo, &pRes->data, pRetrieve->compressed, compLen);
decompressQueryColData(pSql, pRes, pQueryInfo, &pRes->data, pRetrieve->compressed, compLen);
}
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);


@ -2419,6 +2419,19 @@ size_t tscNumOfExprs(SQueryInfo* pQueryInfo) {
return taosArrayGetSize(pQueryInfo->exprList);
}
int32_t tscExprTopBottomIndex(SQueryInfo* pQueryInfo){
size_t numOfExprs = tscNumOfExprs(pQueryInfo);
for(int32_t i = 0; i < numOfExprs; ++i) {
SExprInfo* pExpr = tscExprGet(pQueryInfo, i);
if (pExpr == NULL)
continue;
if (pExpr->base.functionId == TSDB_FUNC_TOP || pExpr->base.functionId == TSDB_FUNC_BOTTOM) {
return i;
}
}
return -1;
}
// todo REFACTOR
void tscExprAddParams(SSqlExpr* pExpr, char* argument, int32_t type, int32_t bytes) {
assert (pExpr != NULL || argument != NULL || bytes != 0);

File diff suppressed because it is too large


@ -517,6 +517,7 @@ void tdAppendMemRowToDataCol(SMemRow row, STSchema *pSchema, SDataCols *pCols, b
}
}
//TODO: refactor this function to eliminate additional memory copy
int tdMergeDataCols(SDataCols *target, SDataCols *source, int rowsToMerge, int *pOffset, bool forceSetNull) {
ASSERT(rowsToMerge > 0 && rowsToMerge <= source->numOfRows);
ASSERT(target->numOfCols == source->numOfCols);


@ -76,12 +76,11 @@ int32_t tsMaxBinaryDisplayWidth = 30;
int32_t tsCompressMsgSize = -1;
/* denote if server needs to compress the retrieved column data before adding to the rpc response message body.
* 0: disable column data compression
* 1: enable column data compression
* This option is default to disabled. Once enabled, compression will be conducted if any column has size more
* than QUERY_COMP_THRESHOLD. Otherwise, no further compression is needed.
* 0: all data are compressed
* -1: all data are not compressed
* other values: if any retrieved column size is greater than the tsCompressColData, all data will be compressed.
*/
int32_t tsCompressColData = 0;
int32_t tsCompressColData = -1;
// client
int32_t tsMaxSQLStringLen = TSDB_MAX_ALLOWED_SQL_LEN;
@ -95,7 +94,7 @@ int32_t tsMaxNumOfOrderedResults = 100000;
// 10 ms for sliding time, the value will changed in case of time precision changed
int32_t tsMinSlidingTime = 10;
// the maximum number of distinct query results
int32_t tsMaxNumOfDistinctResults = 1000 * 10000;
// 1 us for interval time range, changed accordingly
@ -150,6 +149,7 @@ int32_t tsMaxVgroupsPerDb = 0;
int32_t tsMinTablePerVnode = TSDB_TABLES_STEP;
int32_t tsMaxTablePerVnode = TSDB_DEFAULT_TABLES;
int32_t tsTableIncStepPerVnode = TSDB_TABLES_STEP;
int32_t tsTsdbMetaCompactRatio = TSDB_META_COMPACT_RATIO;
// tsdb config
@ -1006,10 +1006,10 @@ static void doInitGlobalConfig(void) {
cfg.option = "compressColData";
cfg.ptr = &tsCompressColData;
cfg.valType = TAOS_CFG_VTYPE_INT8;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW;
cfg.minValue = 0;
cfg.maxValue = 1;
cfg.valType = TAOS_CFG_VTYPE_INT32;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_CLIENT | TSDB_CFG_CTYPE_B_SHOW;
cfg.minValue = -1;
cfg.maxValue = 100000000.0f;
cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
@ -1581,6 +1581,16 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
cfg.option = "tsdbMetaCompactRatio";
cfg.ptr = &tsTsdbMetaCompactRatio;
cfg.valType = TAOS_CFG_VTYPE_INT32;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG;
cfg.minValue = 0;
cfg.maxValue = 100;
cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
assert(tsGlobalConfigNum <= TSDB_CFG_MAX_NUM);
#ifdef TD_TSZ
// lossy compress


@ -38,12 +38,12 @@ void tVariantCreate(tVariant *pVar, SStrToken *token) {
switch (token->type) {
case TSDB_DATA_TYPE_BOOL: {
int32_t k = strncasecmp(token->z, "true", 4);
if (k == 0) {
if (strncasecmp(token->z, "true", 4) == 0) {
pVar->i64 = TSDB_TRUE;
} else {
assert(strncasecmp(token->z, "false", 5) == 0);
} else if (strncasecmp(token->z, "false", 5) == 0) {
pVar->i64 = TSDB_FALSE;
} else {
return;
}
break;


@ -10,7 +10,8 @@ import sys
_datetime_epoch = datetime.utcfromtimestamp(0)
def _is_not_none(obj):
obj != None
return obj != None
class TaosBind(ctypes.Structure):
_fields_ = [
("buffer_type", c_int),
@ -299,27 +300,14 @@ class TaosMultiBind(ctypes.Structure):
self.buffer = cast(buffer, c_void_p)
self.num = len(values)
def binary(self, values):
def _str_to_buffer(self, values):
self.num = len(values)
self.buffer = cast(c_char_p("".join(filter(_is_not_none, values)).encode("utf-8")), c_void_p)
self.length = (c_int * len(values))(*[len(value) if value is not None else 0 for value in values])
self.buffer_type = FieldType.C_BINARY
self.is_null = cast((c_byte * self.num)(*[1 if v == None else 0 for v in values]), c_char_p)
def timestamp(self, values, precision=PrecisionEnum.Milliseconds):
try:
buffer = cast(values, c_void_p)
except:
buffer_type = c_int64 * len(values)
buffer = buffer_type(*[_datetime_to_timestamp(value, precision) for value in values])
self.buffer_type = FieldType.C_TIMESTAMP
self.buffer = cast(buffer, c_void_p)
self.buffer_length = sizeof(c_int64)
self.num = len(values)
def nchar(self, values):
# type: (list[str]) -> None
is_null = [1 if v == None else 0 for v in values]
self.is_null = cast((c_byte * self.num)(*is_null), c_char_p)
if sum(is_null) == self.num:
self.length = (c_int32 * len(values))(0 * self.num)
return
if sys.version_info < (3, 0):
_bytes = [bytes(value) if value is not None else None for value in values]
buffer_length = max(len(b) + 1 for b in _bytes if b is not None)
@ -347,9 +335,26 @@ class TaosMultiBind(ctypes.Structure):
)
self.length = (c_int32 * len(values))(*[len(b) if b is not None else 0 for b in _bytes])
self.buffer_length = buffer_length
def binary(self, values):
self.buffer_type = FieldType.C_BINARY
self._str_to_buffer(values)
def timestamp(self, values, precision=PrecisionEnum.Milliseconds):
try:
buffer = cast(values, c_void_p)
except:
buffer_type = c_int64 * len(values)
buffer = buffer_type(*[_datetime_to_timestamp(value, precision) for value in values])
self.buffer_type = FieldType.C_TIMESTAMP
self.buffer = cast(buffer, c_void_p)
self.buffer_length = sizeof(c_int64)
self.num = len(values)
self.is_null = cast((c_byte * self.num)(*[1 if v == None else 0 for v in values]), c_char_p)
def nchar(self, values):
# type: (list[str]) -> None
self.buffer_type = FieldType.C_NCHAR
self._str_to_buffer(values)
def tinyint_unsigned(self, values):
self.buffer_type = FieldType.C_TINYINT_UNSIGNED


@ -3,6 +3,9 @@
"""Constants in TDengine python
"""
import ctypes, struct
class FieldType(object):
"""TDengine Field Types"""
@ -33,8 +36,8 @@ class FieldType(object):
C_INT_UNSIGNED_NULL = 4294967295
C_BIGINT_NULL = -9223372036854775808
C_BIGINT_UNSIGNED_NULL = 18446744073709551615
C_FLOAT_NULL = float("nan")
C_DOUBLE_NULL = float("nan")
C_FLOAT_NULL = ctypes.c_float(struct.unpack("<f", b"\x00\x00\xf0\x7f")[0])
C_DOUBLE_NULL = ctypes.c_double(struct.unpack("<d", b"\x00\x00\x00\x00\x00\xff\xff\x7f")[0])
C_BINARY_NULL = bytearray([int("0xff", 16)])
# Timestamp precision definition
C_TIMESTAMP_MILLI = 0


@ -0,0 +1,50 @@
from taos import *
conn = connect()
dbname = "pytest_taos_stmt_multi"
conn.execute("drop database if exists %s" % dbname)
conn.execute("create database if not exists %s" % dbname)
conn.select_db(dbname)
conn.execute(
"create table if not exists log(ts timestamp, bo bool, nil tinyint, \
ti tinyint, si smallint, ii int, bi bigint, tu tinyint unsigned, \
su smallint unsigned, iu int unsigned, bu bigint unsigned, \
ff float, dd double, bb binary(100), nn nchar(100), tt timestamp)",
)
stmt = conn.statement("insert into log values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
params = new_multi_binds(16)
params[0].timestamp((1626861392589, 1626861392590, 1626861392591))
params[1].bool((True, None, False))
params[2].tinyint([-128, -128, None]) # -128 is tinyint null
params[3].tinyint([0, 127, None])
params[4].smallint([3, None, 2])
params[5].int([3, 4, None])
params[6].bigint([3, 4, None])
params[7].tinyint_unsigned([3, 4, None])
params[8].smallint_unsigned([3, 4, None])
params[9].int_unsigned([3, 4, None])
params[10].bigint_unsigned([3, 4, None])
params[11].float([3, None, 1])
params[12].double([3, None, 1.2])
params[13].binary(["abc", "dddafadfadfadfadfa", None])
# params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
params[14].nchar([None, None, None])
params[15].timestamp([None, None, 1626861392591])
stmt.bind_param_batch(params)
stmt.execute()
result = stmt.use_result()
assert result.affected_rows == 3
result.close()
result = conn.query("select * from log")
for row in result:
print(row)
result.close()
stmt.close()
conn.close()


@ -277,6 +277,7 @@ do { \
#define TSDB_MAX_TABLES 10000000
#define TSDB_DEFAULT_TABLES 1000000
#define TSDB_TABLES_STEP 1000
#define TSDB_META_COMPACT_RATIO 0 // disable tsdb meta compact by default
#define TSDB_MIN_DAYS_PER_FILE 1
#define TSDB_MAX_DAYS_PER_FILE 3650


@ -190,7 +190,9 @@ static void parse_args(
fprintf(stderr, "password reading error\n");
}
taosSetConsoleEcho(true);
getchar();
if (EOF == getchar()) {
fprintf(stderr, "getchar() return EOF\n");
}
} else {
tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
strcpy(argv[i], "-p");


@ -659,7 +659,21 @@ static FILE * g_fpOfInsertResult = NULL;
fprintf(stderr, "PERF: "fmt, __VA_ARGS__); } while(0)
#define errorPrint(fmt, ...) \
do { fprintf(stderr, " \033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, " \033[0m"); } while(0)
do {\
struct tm Tm, *ptm;\
struct timeval timeSecs; \
time_t curTime;\
gettimeofday(&timeSecs, NULL); \
curTime = timeSecs.tv_sec;\
ptm = localtime_r(&curTime, &Tm);\
fprintf(stderr, " \033[31m");\
fprintf(stderr, "%02d/%02d %02d:%02d:%02d.%06d %08" PRId64 " ",\
ptm->tm_mon + 1, ptm->tm_mday, ptm->tm_hour,\
ptm->tm_min, ptm->tm_sec, (int32_t)timeSecs.tv_usec,\
taosGetSelfPthreadId());\
fprintf(stderr, "ERROR: "fmt, __VA_ARGS__);\
fprintf(stderr, " \033[0m");\
} while(0)
// for strncpy buffer overflow
#define min(a, b) (((a) < (b)) ? (a) : (b))
@ -5202,7 +5216,8 @@ static int64_t generateStbRowData(
tmpLen = strlen(tmp);
tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, INT_BUFF_LEN));
} else {
errorPrint( "Not support data type: %s\n", stbInfo->columns[i].dataType);
errorPrint( "Not support data type: %s\n",
stbInfo->columns[i].dataType);
return -1;
}
@ -5215,8 +5230,7 @@ static int64_t generateStbRowData(
return 0;
}
dataLen -= 1;
dataLen += snprintf(pstr + dataLen, maxLen - dataLen, ")");
tstrncpy(pstr + dataLen - 1, ")", 2);
verbosePrint("%s() LN%d, dataLen:%"PRId64"\n", __func__, __LINE__, dataLen);
verbosePrint("%s() LN%d, recBuf:\n\t%s\n", __func__, __LINE__, recBuf);


@ -62,6 +62,20 @@ typedef struct {
#define errorPrint(fmt, ...) \
do { fprintf(stderr, "\033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, "\033[0m"); } while(0)
static bool isStringNumber(char *input)
{
int len = strlen(input);
if (0 == len) {
return false;
}
for (int i = 0; i < len; i++) {
if (!isdigit(input[i]))
return false;
}
return true;
}
// -------------------------- SHOW DATABASE INTERFACE-----------------------
enum _show_db_index {
@ -472,6 +486,10 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
g_args.table_batch = atoi(arg);
break;
case 'T':
if (!isStringNumber(arg)) {
errorPrint("%s", "\n\t-T need a number following!\n");
exit(EXIT_FAILURE);
}
g_args.thread_num = atoi(arg);
break;
case OPT_ABORT:


@ -63,12 +63,12 @@ int taosSetConsoleEcho(bool on)
}
if (on)
term.c_lflag|=ECHOFLAGS;
term.c_lflag |= ECHOFLAGS;
else
term.c_lflag &=~ECHOFLAGS;
term.c_lflag &= ~ECHOFLAGS;
err = tcsetattr(STDIN_FILENO,TCSAFLUSH,&term);
if (err == -1 && err == EINTR) {
err = tcsetattr(STDIN_FILENO, TCSAFLUSH, &term);
if (err == -1 || err == EINTR) {
perror("Cannot set the attribution of the terminal");
return -1;
}


@ -43,9 +43,7 @@ typedef int32_t (*__block_search_fn_t)(char* data, int32_t num, int64_t key, int
#define GET_NUM_OF_RESULTS(_r) (((_r)->outputBuf) == NULL? 0:((_r)->outputBuf)->info.rows)
//TODO: may need to fine tune this threshold
#define QUERY_COMP_THRESHOLD (1024 * 512)
#define NEEDTO_COMPRESS_QUERY(size) ((size) > QUERY_COMP_THRESHOLD ? 1 : 0)
#define NEEDTO_COMPRESS_QUERY(size) ((size) > tsCompressColData? 1 : 0)
enum {
// when query starts to execute, this status will set
@ -614,6 +612,7 @@ int32_t getNumOfResult(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx* pCtx, int3
void finalizeQueryResult(SOperatorInfo* pOperator, SQLFunctionCtx* pCtx, SResultRowInfo* pResultRowInfo, int32_t* rowCellInfoOffset);
void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOfInputRows);
void clearOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity);
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput);
void freeParam(SQueryParam *param);
int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param);


@ -3591,7 +3591,7 @@ void setDefaultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SOptrBasicInfo *pInfo, i
// set the timestamp output buffer for top/bottom/diff query
int32_t fid = pCtx[i].functionId;
if (fid == TSDB_FUNC_TOP || fid == TSDB_FUNC_BOTTOM || fid == TSDB_FUNC_DIFF || fid == TSDB_FUNC_DERIVATIVE) {
pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
if (i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
}
}
@ -3619,14 +3619,15 @@ void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOf
}
}
for (int32_t i = 0; i < pDataBlock->info.numOfCols; ++i) {
SColumnInfoData *pColInfo = taosArrayGet(pDataBlock->pDataBlock, i);
pBInfo->pCtx[i].pOutput = pColInfo->pData + pColInfo->info.bytes * pDataBlock->info.rows;
// re-estabilish output buffer pointer.
int32_t functionId = pBInfo->pCtx[i].functionId;
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
pBInfo->pCtx[i].ptsOutputBuf = pBInfo->pCtx[i-1].pOutput;
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE){
if (i > 0) pBInfo->pCtx[i].ptsOutputBuf = pBInfo->pCtx[i-1].pOutput;
}
}
}
@ -3644,7 +3645,35 @@ void clearOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity) {
}
}
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput) {
bool needCopyTs = false;
int32_t tsNum = 0;
char *src = NULL;
for (int32_t i = 0; i < numOfOutput; i++) {
int32_t functionId = pCtx[i].functionId;
if (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
needCopyTs = true;
if (i > 0 && pCtx[i-1].functionId == TSDB_FUNC_TS_DUMMY){
SColumnInfoData* pColRes = taosArrayGet(pRes->pDataBlock, i - 1); // find ts data
src = pColRes->pData;
}
}else if(functionId == TSDB_FUNC_TS_DUMMY) {
tsNum++;
}
}
if (!needCopyTs) return;
if (tsNum < 2) return;
if (src == NULL) return;
for (int32_t i = 0; i < numOfOutput; i++) {
int32_t functionId = pCtx[i].functionId;
if(functionId == TSDB_FUNC_TS_DUMMY) {
SColumnInfoData* pColRes = taosArrayGet(pRes->pDataBlock, i);
memcpy(pColRes->pData, src, pColRes->info.bytes * pRes->info.rows);
}
}
}
void initCtxOutputBuffer(SQLFunctionCtx* pCtx, int32_t size) {
for (int32_t j = 0; j < size; ++j) {
@ -3826,7 +3855,7 @@ void setResultRowOutputBufInitCtx(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pRe
}
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF) {
pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
if(i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
}
if (!pResInfo->initialized) {
@ -3887,7 +3916,7 @@ void setResultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pResult, SQLF
int32_t functionId = pCtx[i].functionId;
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
if(i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
}
/*
@ -5708,6 +5737,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
pRes->info.rows = getNumOfResult(pRuntimeEnv, pInfo->pCtx, pOperator->numOfOutput);
if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) {
copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
return pRes;
}
@ -5733,8 +5763,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
if (*newgroup) {
if (pRes->info.rows > 0) {
pProjectInfo->existDataBlock = pBlock;
clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
return pInfo->pRes;
break;
} else { // init output buffer for a new group data
for (int32_t j = 0; j < pOperator->numOfOutput; ++j) {
aAggs[pInfo->pCtx[j].functionId].xFinalize(&pInfo->pCtx[j]);
@ -5764,7 +5793,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
break;
}
}
copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
return (pInfo->pRes->info.rows > 0)? pInfo->pRes:NULL;
}


@ -357,7 +357,7 @@ int32_t qDumpRetrieveResult(qinfo_t qinfo, SRetrieveTableRsp **pRsp, int32_t *co
}
(*pRsp)->precision = htons(pQueryAttr->precision);
(*pRsp)->compressed = (int8_t)(tsCompressColData && checkNeedToCompressQueryCol(pQInfo));
(*pRsp)->compressed = (int8_t)((tsCompressColData != -1) && checkNeedToCompressQueryCol(pQInfo));
if (GET_NUM_OF_RESULTS(&(pQInfo->runtimeEnv)) > 0 && pQInfo->code == TSDB_CODE_SUCCESS) {
doDumpQueryResult(pQInfo, (*pRsp)->data, (*pRsp)->compressed, &compLen);
@ -367,8 +367,12 @@ int32_t qDumpRetrieveResult(qinfo_t qinfo, SRetrieveTableRsp **pRsp, int32_t *co
if ((*pRsp)->compressed && compLen != 0) {
int32_t numOfCols = pQueryAttr->pExpr2 ? pQueryAttr->numOfExpr2 : pQueryAttr->numOfOutput;
*contLen = *contLen - pQueryAttr->resultRowSize * s + compLen + numOfCols * sizeof(int32_t);
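// adjust the response length: swap the uncompressed payload size for the
// compressed size plus one int32 length header per column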
int32_t origSize = pQueryAttr->resultRowSize * s;
int32_t compSize = compLen + numOfCols * sizeof(int32_t);
*contLen = *contLen - origSize + compSize;
*pRsp = (SRetrieveTableRsp *)rpcReallocCont(*pRsp, *contLen);
qDebug("QInfo:0x%"PRIx64" compress col data, uncompressed size:%d, compressed size:%d, ratio:%.2f",
pQInfo->qId, origSize, compSize, (float)origSize / (float)compSize);
}
(*pRsp)->compLen = htonl(compLen);

View File

@ -42,8 +42,9 @@ typedef struct {
typedef struct {
pthread_rwlock_t lock;
SFSStatus* cstatus; // current status
SHashObj* metaCache; // meta cache
SFSStatus* cstatus; // current status
SHashObj* metaCache; // meta cache
SHashObj* metaCacheComp; // meta cache for compact
bool intxn;
SFSStatus* nstatus; // new status
} STsdbFS;

View File

@ -14,6 +14,8 @@
*/
#include "tsdbint.h"
extern int32_t tsTsdbMetaCompactRatio;
#define TSDB_MAX_SUBBLOCKS 8
static FORCE_INLINE int TSDB_KEY_FID(TSKEY key, int32_t days, int8_t precision) {
if (key < 0) {
@ -55,8 +57,9 @@ typedef struct {
#define TSDB_COMMIT_TXN_VERSION(ch) FS_TXN_VERSION(REPO_FS(TSDB_COMMIT_REPO(ch)))
static int tsdbCommitMeta(STsdbRepo *pRepo);
static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen);
static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen, bool compact);
static int tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid);
static int tsdbCompactMetaFile(STsdbRepo *pRepo, STsdbFS *pfs, SMFile *pMFile);
static int tsdbCommitTSData(STsdbRepo *pRepo);
static void tsdbStartCommit(STsdbRepo *pRepo);
static void tsdbEndCommit(STsdbRepo *pRepo, int eno);
@ -261,6 +264,35 @@ int tsdbWriteBlockIdx(SDFile *pHeadf, SArray *pIdxA, void **ppBuf) {
// =================== Commit Meta Data
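// Prepare the meta file used by this commit: create a brand-new meta file when
// none exists yet, otherwise clone the current one; `open` controls whether the
// underlying file descriptor is actually created/opened here.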
static int tsdbInitCommitMetaFile(STsdbRepo *pRepo, SMFile* pMf, bool open) {
STsdbFS * pfs = REPO_FS(pRepo);
SMFile * pOMFile = pfs->cstatus->pmf;
SDiskID did;
// Create a new meta file or open the existing one
if (pOMFile == NULL) {
// Create a new meta file
did.level = TFS_PRIMARY_LEVEL;
did.id = TFS_PRIMARY_ID;
tsdbInitMFile(pMf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)));
if (open && tsdbCreateMFile(pMf, true) < 0) {
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
tsdbInfo("vgId:%d meta file %s is created to commit", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMf));
} else {
tsdbInitMFileEx(pMf, pOMFile);
if (open && tsdbOpenMFile(pMf, O_WRONLY) < 0) {
tsdbError("vgId:%d failed to open META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
}
return 0;
}
static int tsdbCommitMeta(STsdbRepo *pRepo) {
STsdbFS * pfs = REPO_FS(pRepo);
SMemTable *pMem = pRepo->imem;
@ -269,34 +301,25 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
SActObj * pAct = NULL;
SActCont * pCont = NULL;
SListNode *pNode = NULL;
SDiskID did;
ASSERT(pOMFile != NULL || listNEles(pMem->actList) > 0);
if (listNEles(pMem->actList) <= 0) {
// no meta data to commit, just keep the old meta file
tsdbUpdateMFile(pfs, pOMFile);
if (tsTsdbMetaCompactRatio > 0) {
if (tsdbInitCommitMetaFile(pRepo, &mf, false) < 0) {
return -1;
}
int ret = tsdbCompactMetaFile(pRepo, pfs, &mf);
if (ret < 0) tsdbError("compact meta file error");
return ret;
}
return 0;
} else {
// Create/Open a meta file or open the existing file
if (pOMFile == NULL) {
// Create a new meta file
did.level = TFS_PRIMARY_LEVEL;
did.id = TFS_PRIMARY_ID;
tsdbInitMFile(&mf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)));
if (tsdbCreateMFile(&mf, true) < 0) {
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
tsdbInfo("vgId:%d meta file %s is created to commit", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf));
} else {
tsdbInitMFileEx(&mf, pOMFile);
if (tsdbOpenMFile(&mf, O_WRONLY) < 0) {
tsdbError("vgId:%d failed to open META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
if (tsdbInitCommitMetaFile(pRepo, &mf, true) < 0) {
return -1;
}
}
@ -305,7 +328,7 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
pAct = (SActObj *)pNode->data;
if (pAct->act == TSDB_UPDATE_META) {
pCont = (SActCont *)POINTER_SHIFT(pAct, sizeof(SActObj));
if (tsdbUpdateMetaRecord(pfs, &mf, pAct->uid, (void *)(pCont->cont), pCont->len) < 0) {
if (tsdbUpdateMetaRecord(pfs, &mf, pAct->uid, (void *)(pCont->cont), pCont->len, false) < 0) {
tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pAct->uid,
tstrerror(terrno));
tsdbCloseMFile(&mf);
@ -338,6 +361,10 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
tsdbCloseMFile(&mf);
tsdbUpdateMFile(pfs, &mf);
if (tsTsdbMetaCompactRatio > 0 && tsdbCompactMetaFile(pRepo, pfs, &mf) < 0) {
tsdbError("compact meta file error");
}
return 0;
}
@ -375,7 +402,7 @@ void tsdbGetRtnSnap(STsdbRepo *pRepo, SRtn *pRtn) {
pRtn->minFid, pRtn->midFid, pRtn->maxFid);
}
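// tsdbUpdateMetaRecord now takes a `compact` flag: during meta-file compaction
// the record bookkeeping is routed to the temporary metaCacheComp instead of
// the live metaCache.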
static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen) {
static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen, bool compact) {
char buf[64] = "\0";
void * pBuf = buf;
SKVRecord rInfo;
@ -401,13 +428,18 @@ static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void
}
tsdbUpdateMFileMagic(pMFile, POINTER_SHIFT(cont, contLen - sizeof(TSCKSUM)));
SKVRecord *pRecord = taosHashGet(pfs->metaCache, (void *)&uid, sizeof(uid));
SHashObj* cache = compact ? pfs->metaCacheComp : pfs->metaCache;
pMFile->info.nRecords++;
SKVRecord *pRecord = taosHashGet(cache, (void *)&uid, sizeof(uid));
if (pRecord != NULL) {
pMFile->info.tombSize += (pRecord->size + sizeof(SKVRecord));
} else {
pMFile->info.nRecords++;
}
taosHashPut(pfs->metaCache, (void *)(&uid), sizeof(uid), (void *)(&rInfo), sizeof(rInfo));
taosHashPut(cache, (void *)(&uid), sizeof(uid), (void *)(&rInfo), sizeof(rInfo));
return 0;
}
@ -442,6 +474,129 @@ static int tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid) {
return 0;
}
static int tsdbCompactMetaFile(STsdbRepo *pRepo, STsdbFS *pfs, SMFile *pMFile) {
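// compaction runs only when the share of deleted records or the share of
// dead (tombstoned) bytes reaches tsTsdbMetaCompactRatio percent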
float delPercent = (float)(pMFile->info.nDels) / (float)(pMFile->info.nRecords);
float tombPercent = (float)(pMFile->info.tombSize) / (float)(pMFile->info.size);
float compactRatio = (float)(tsTsdbMetaCompactRatio)/100;
if (delPercent < compactRatio && tombPercent < compactRatio) {
return 0;
}
if (tsdbOpenMFile(pMFile, O_RDONLY) < 0) {
tsdbError("open meta file %s compact fail", pMFile->f.rname);
return -1;
}
tsdbInfo("begin compact tsdb meta file, ratio:%d, nDels:%" PRId64 ",nRecords:%" PRId64 ",tombSize:%" PRId64 ",size:%" PRId64,
tsTsdbMetaCompactRatio, pMFile->info.nDels,pMFile->info.nRecords,pMFile->info.tombSize,pMFile->info.size);
SMFile mf;
SDiskID did;
// first, create a tmp meta file
did.level = TFS_PRIMARY_LEVEL;
did.id = TFS_PRIMARY_ID;
tsdbInitMFile(&mf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)) + 1);
if (tsdbCreateMFile(&mf, true) < 0) {
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
tsdbInfo("vgId:%d meta file %s is created to compact meta data", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf));
// second, iterate over metaCache
int code = -1;
int64_t maxBufSize = 1024;
SKVRecord *pRecord;
void *pBuf = NULL;
pBuf = malloc((size_t)maxBufSize);
if (pBuf == NULL) {
goto _err;
}
// init the compact meta cache
assert(pfs->metaCacheComp == NULL);
pfs->metaCacheComp = taosHashInit(4096, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
if (pfs->metaCacheComp == NULL) {
goto _err;
}
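// copy every live record from the old meta file into the new one; the record
// payloads are read through a buffer that grows to the largest record seen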
pRecord = taosHashIterate(pfs->metaCache, NULL);
while (pRecord) {
if (tsdbSeekMFile(pMFile, pRecord->offset + sizeof(SKVRecord), SEEK_SET) < 0) {
tsdbError("vgId:%d failed to seek file %s since %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile),
tstrerror(terrno));
goto _err;
}
if (pRecord->size > maxBufSize) {
maxBufSize = pRecord->size;
void* tmp = realloc(pBuf, (size_t)maxBufSize);
if (tmp == NULL) {
goto _err;
}
pBuf = tmp;
}
int nread = (int)tsdbReadMFile(pMFile, pBuf, pRecord->size);
if (nread < 0) {
tsdbError("vgId:%d failed to read file %s since %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile),
tstrerror(terrno));
goto _err;
}
if (nread < pRecord->size) {
tsdbError("vgId:%d failed to read file %s since file corrupted, expected read:%" PRId64 " actual read:%d",
REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile), pRecord->size, nread);
goto _err;
}
if (tsdbUpdateMetaRecord(pfs, &mf, pRecord->uid, pBuf, (int)pRecord->size, true) < 0) {
tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pRecord->uid,
tstrerror(terrno));
goto _err;
}
pRecord = taosHashIterate(pfs->metaCache, pRecord);
}
code = 0;
_err:
if (code == 0) TSDB_FILE_FSYNC(&mf);
tsdbCloseMFile(&mf);
tsdbCloseMFile(pMFile);
if (code == 0) {
// rename meta.tmp -> meta
tsdbInfo("vgId:%d meta file rename %s -> %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf), TSDB_FILE_FULL_NAME(pMFile));
taosRename(mf.f.aname, pMFile->f.aname);
tstrncpy(mf.f.aname, pMFile->f.aname, TSDB_FILENAME_LEN);
tstrncpy(mf.f.rname, pMFile->f.rname, TSDB_FILENAME_LEN);
// update current meta file info
pfs->nstatus->pmf = NULL;
tsdbUpdateMFile(pfs, &mf);
taosHashCleanup(pfs->metaCache);
pfs->metaCache = pfs->metaCacheComp;
pfs->metaCacheComp = NULL;
} else {
// remove meta.tmp file
remove(mf.f.aname);
taosHashCleanup(pfs->metaCacheComp);
pfs->metaCacheComp = NULL;
}
tfree(pBuf);
ASSERT(mf.info.nDels == 0);
ASSERT(mf.info.tombSize == 0);
tsdbInfo("end compact tsdb meta file,code:%d,nRecords:%" PRId64 ",size:%" PRId64,
code,mf.info.nRecords,mf.info.size);
return code;
}
// =================== Commit Time-Series Data
static int tsdbCommitTSData(STsdbRepo *pRepo) {
SMemTable *pMem = pRepo->imem;

View File

@ -215,6 +215,7 @@ STsdbFS *tsdbNewFS(STsdbCfg *pCfg) {
}
pfs->intxn = false;
pfs->metaCacheComp = NULL;
pfs->nstatus = tsdbNewFSStatus(maxFSet);
if (pfs->nstatus == NULL) {

View File

@ -16,11 +16,11 @@
#include "tsdbint.h"
static const char *TSDB_FNAME_SUFFIX[] = {
"head", // TSDB_FILE_HEAD
"data", // TSDB_FILE_DATA
"last", // TSDB_FILE_LAST
"", // TSDB_FILE_MAX
"meta" // TSDB_FILE_META
"head", // TSDB_FILE_HEAD
"data", // TSDB_FILE_DATA
"last", // TSDB_FILE_LAST
"", // TSDB_FILE_MAX
"meta", // TSDB_FILE_META
};
static void tsdbGetFilename(int vid, int fid, uint32_t ver, TSDB_FILE_T ftype, char *fname);

View File

@ -43,6 +43,7 @@ static int tsdbRemoveTableFromStore(STsdbRepo *pRepo, STable *pTable);
static int tsdbRmTableFromMeta(STsdbRepo *pRepo, STable *pTable);
static int tsdbAdjustMetaTables(STsdbRepo *pRepo, int tid);
static int tsdbCheckTableTagVal(SKVRow *pKVRow, STSchema *pSchema);
static int tsdbInsertNewTableAction(STsdbRepo *pRepo, STable* pTable);
static int tsdbAddSchema(STable *pTable, STSchema *pSchema);
static void tsdbFreeTableSchema(STable *pTable);
@ -128,21 +129,16 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
tsdbUnlockRepoMeta(pRepo);
// Write to memtable action
// TODO: refactor duplicate codes
int tlen = 0;
void *pBuf = NULL;
if (newSuper || superChanged) {
tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, super);
pBuf = tsdbAllocBytes(pRepo, tlen);
if (pBuf == NULL) goto _err;
void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, super);
ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
// add insert new super table action
if (tsdbInsertNewTableAction(pRepo, super) != 0) {
goto _err;
}
}
// add insert new table action
if (tsdbInsertNewTableAction(pRepo, table) != 0) {
goto _err;
}
tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, table);
pBuf = tsdbAllocBytes(pRepo, tlen);
if (pBuf == NULL) goto _err;
void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, table);
ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
if (tsdbCheckCommit(pRepo) < 0) return -1;
@ -383,7 +379,7 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) {
tdDestroyTSchemaBuilder(&schemaBuilder);
}
// Chage in memory
// Change in memory
if (pNewSchema != NULL) { // change super table tag schema
TSDB_WLOCK_TABLE(pTable->pSuper);
STSchema *pOldSchema = pTable->pSuper->tagSchema;
@ -426,6 +422,21 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) {
}
// ------------------ INTERNAL FUNCTIONS ------------------
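// tsdbInsertNewTableAction: encode a TSDB_UPDATE_META action for the given
// table and append it to the repo's memtable action list; it replaces the
// duplicated encode/alloc/insert sequences that used to live at each call site.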
static int tsdbInsertNewTableAction(STsdbRepo *pRepo, STable* pTable) {
int tlen = 0;
void *pBuf = NULL;
tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, pTable);
pBuf = tsdbAllocBytes(pRepo, tlen);
if (pBuf == NULL) {
return -1;
}
void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, pTable);
ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
return 0;
}
STsdbMeta *tsdbNewMeta(STsdbCfg *pCfg) {
STsdbMeta *pMeta = (STsdbMeta *)calloc(1, sizeof(*pMeta));
if (pMeta == NULL) {
@ -617,6 +628,7 @@ int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId) {
if (pTable->lastCols == NULL) {
return -1;
}
// TODO: use binary search instead
for (int16_t i = 0; i < pTable->maxColNum; ++i) {
if (pTable->lastCols[i].colId == colId) {
return i;
@ -734,10 +746,10 @@ void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema,
TSDB_WUNLOCK_TABLE(pCTable);
if (insertAct) {
int tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, pCTable);
void *buf = tsdbAllocBytes(pRepo, tlen);
ASSERT(buf != NULL);
tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, buf, pCTable);
if (tsdbInsertNewTableAction(pRepo, pCTable) != 0) {
tsdbError("vgId:%d table %s tid %d uid %" PRIu64 " tsdbInsertNewTableAction fail", REPO_ID(pRepo), TABLE_CHAR_NAME(pTable),
TABLE_TID(pTable), TABLE_UID(pTable));
}
}
}
@ -1250,8 +1262,14 @@ static int tsdbEncodeTable(void **buf, STable *pTable) {
tlen += taosEncodeFixedU64(buf, TABLE_SUID(pTable));
tlen += tdEncodeKVRow(buf, pTable->tagVal);
} else {
tlen += taosEncodeFixedU8(buf, (uint8_t)taosArrayGetSize(pTable->schema));
for (int i = 0; i < taosArrayGetSize(pTable->schema); i++) {
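// schema counts above UINT8_MAX are encoded as a 0 sentinel byte followed by
// the real count as u32; small counts keep the old single-byte layout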
uint32_t arraySize = (uint32_t)taosArrayGetSize(pTable->schema);
if(arraySize > UINT8_MAX) {
tlen += taosEncodeFixedU8(buf, 0);
tlen += taosEncodeFixedU32(buf, arraySize);
} else {
tlen += taosEncodeFixedU8(buf, (uint8_t)arraySize);
}
for (uint32_t i = 0; i < arraySize; i++) {
STSchema *pSchema = taosArrayGetP(pTable->schema, i);
tlen += tdEncodeSchema(buf, pSchema);
}
@ -1284,8 +1302,11 @@ static void *tsdbDecodeTable(void *buf, STable **pRTable) {
buf = taosDecodeFixedU64(buf, &TABLE_SUID(pTable));
buf = tdDecodeKVRow(buf, &(pTable->tagVal));
} else {
uint8_t nSchemas;
buf = taosDecodeFixedU8(buf, &nSchemas);
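// a decoded count of 0 is the encoder's sentinel: the actual count follows as u32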
uint32_t nSchemas = 0;
buf = taosDecodeFixedU8(buf, (uint8_t *)&nSchemas);
if(nSchemas == 0) {
buf = taosDecodeFixedU32(buf, &nSchemas);
}
for (int i = 0; i < nSchemas; i++) {
STSchema *pSchema;
buf = tdDecodeSchema(buf, &pSchema);
@ -1485,4 +1506,4 @@ static void tsdbFreeTableSchema(STable *pTable) {
taosArrayDestroy(pTable->schema);
}
}
}

View File

@ -14,23 +14,24 @@
*/
#include "tfunctional.h"
#include "tarray.h"
tGenericSavedFunc* genericSavedFuncInit(GenericVaFunc func, int numOfArgs) {
tGenericSavedFunc* pSavedFunc = malloc(sizeof(tGenericSavedFunc) + numOfArgs * (sizeof(void*)));
if(pSavedFunc == NULL) return NULL;
pSavedFunc->func = func;
return pSavedFunc;
}
tI32SavedFunc* i32SavedFuncInit(I32VaFunc func, int numOfArgs) {
tI32SavedFunc* pSavedFunc = malloc(sizeof(tI32SavedFunc) + numOfArgs * sizeof(void *));
if(pSavedFunc == NULL) return NULL;
pSavedFunc->func = func;
return pSavedFunc;
}
tVoidSavedFunc* voidSavedFuncInit(VoidVaFunc func, int numOfArgs) {
tVoidSavedFunc* pSavedFunc = malloc(sizeof(tVoidSavedFunc) + numOfArgs * sizeof(void*));
if(pSavedFunc == NULL) return NULL;
pSavedFunc->func = func;
return pSavedFunc;
}

View File

@ -104,6 +104,21 @@ class TDTestCase:
tdSql.checkRows(2)
tdSql.checkData(0, 1, 1)
tdSql.checkData(1, 1, 2)
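# the ts columns selected on both sides of bottom() should carry the
# timestamps of the rows it picks (verified below)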
tdSql.query("select ts,bottom(col1, 2),ts from test1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")
tdSql.query("select ts,bottom(col1, 2),ts from test group by tbname")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")
#TD-2457 bottom + interval + order by
tdSql.error('select top(col2,1) from test interval(1y) order by col2;')

View File

@ -54,6 +54,29 @@ class TDTestCase:
tdSql.query("select derivative(col, 10s, 0) from stb group by tbname")
tdSql.checkRows(10)
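# the ts columns around derivative() should return the row timestamps, both
# for a super table grouped by tbname and for a plain child table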
tdSql.query("select ts,derivative(col, 10s, 1),ts from stb group by tbname")
tdSql.checkRows(4)
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
tdSql.checkData(3, 0, "2018-09-17 09:01:20.000")
tdSql.checkData(3, 1, "2018-09-17 09:01:20.000")
tdSql.checkData(3, 3, "2018-09-17 09:01:20.000")
tdSql.query("select ts,derivative(col, 10s, 1),ts from tb1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:20.009")
tdSql.checkData(1, 1, "2018-09-17 09:00:20.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:20.009")
tdSql.query("select ts from(select ts,derivative(col, 10s, 0) from stb group by tbname)")
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.error("select derivative(col, 10s, 0) from tb1 group by tbname")
tdSql.query("select derivative(col, 10s, 1) from tb1")

View File

@ -95,6 +95,24 @@ class TDTestCase:
tdSql.error("select diff(col14) from test")
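# the ts columns around diff() should return the row timestamps (verified
# below for a child table and for a group-by query)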
tdSql.query("select ts,diff(col1),ts from test1")
tdSql.checkRows(10)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")
tdSql.query("select ts,diff(col1),ts from test group by tbname")
tdSql.checkRows(10)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")
tdSql.query("select diff(col1) from test1")
tdSql.checkRows(10)

View File

@ -117,7 +117,22 @@ class TDTestCase:
tdSql.checkRows(2)
tdSql.checkData(0, 1, 8.1)
tdSql.checkData(1, 1, 9.1)
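# the ts columns selected on both sides of top() should carry the timestamps
# of the rows it picks (verified below)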
tdSql.query("select ts,top(col1, 2),ts from test1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.008")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.008")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.009")
tdSql.query("select ts,top(col1, 2),ts from test group by tbname")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.008")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.008")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.009")
#TD-2563 top + super_table + interval
tdSql.execute("create table meters(ts timestamp, c int) tags (d int)")
tdSql.execute("create table t1 using meters tags (1)")

View File

@ -0,0 +1,97 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
import subprocess
from random import choice
class TwoClients:
def initConnection(self):
self.host = "chr03"
self.user = "root"
self.password = "taosdata"
self.config = "/etc/taos/"
self.port = 6030
self.rowNum = 10
self.ts = 1537146000000
def run(self):
# new taos client
conn1 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config )
print(conn1)
cur1 = conn1.cursor()
tdSql.init(cur1, True)
tdSql.execute("drop database if exists db3")
# insert data with taosc
for i in range(10):
os.system("taosdemo -f manualTest/TD-5114/insertDataDb3Replica2.json -y ")
# check data correctness
tdSql.execute("show databases")
tdSql.execute("use db3")
tdSql.query("select count (tbname) from stb0")
tdSql.checkData(0, 0, 20000)
tdSql.query("select count (*) from stb0")
tdSql.checkData(0, 0, 4000000)
# insert data with the python connector; to run this part, uncomment the block below.
# for x in range(10):
# dataType= [ "tinyint", "smallint", "int", "bigint", "float", "double", "bool", " binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
# tdSql.execute("drop database if exists db3")
# tdSql.execute("create database db3 keep 3650 replica 2 ")
# tdSql.execute("use db3")
# tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
# col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(3000), tag1 int)''')
# rowNum2= 988
# for i in range(rowNum2):
# tdSql.execute("alter table test add column col%d %s ;" %( i+14, choice(dataType)) )
# rowNum3= 988
# for i in range(rowNum3):
# tdSql.execute("alter table test drop column col%d ;" %( i+14) )
# self.rowNum = 50
# self.rowNum2 = 2000
# self.ts = 1537146000000
# for j in range(self.rowNum2):
# tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j,j) )
# for i in range(self.rowNum):
# tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
# % (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
# # check data correct
# tdSql.execute("show databases")
# tdSql.execute("use db3")
# tdSql.query("select count (tbname) from test")
# tdSql.checkData(0, 0, 200)
# tdSql.query("select count (*) from test")
# tdSql.checkData(0, 0, 200000)
# delete useless file
testcaseFilename = os.path.split(__file__)[-1]
os.system("rm -rf ./insert_res.txt")
os.system("rm -rf manualTest/TD-5114/%s.sql" % testcaseFilename )
clients = TwoClients()
clients.initConnection()
# clients.getBuildPath()
clients.run()

View File

@ -0,0 +1,61 @@
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 0,
"num_of_records_per_req": 3000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db3",
"drop": "yes",
"replica": 2,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 20000,
"childtable_prefix": "stb0_",
"auto_create_table": "no",
"batch_create_tbl_num": 1000,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 2000,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}

View File

@ -0,0 +1,275 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
from sys import version
from fabric import Connection
import random
import time
import datetime
import logging
import subprocess
import os
import sys
class Node:
def __init__(self, index, username, hostIP, password, version):
self.index = index
self.username = username
self.hostIP = hostIP
# self.hostName = hostName
# self.homeDir = homeDir
self.version = version
self.verName = "TDengine-enterprise-server-%s-Linux-x64.tar.gz" % self.version
self.installPath = "TDengine-enterprise-server-%s" % self.version
# self.corePath = '/coredump'
self.conn = Connection("{}@{}".format(username, hostIP), connect_kwargs={"password": "{}".format(password)})
def buildTaosd(self):
try:
print(self.conn)
# self.conn.run('echo "1234" > /home/chr/installtest/test.log')
self.conn.run("cd /home/chr/installtest/ && tar -xvf %s " %self.verName)
self.conn.run("cd /home/chr/installtest/%s && ./install.sh " % self.installPath)
except Exception as e:
print("Build Taosd error for node %d " % self.index)
logging.exception(e)
pass
def rebuildTaosd(self):
try:
print(self.conn)
# self.conn.run('echo "1234" > /home/chr/installtest/test.log')
self.conn.run("cd /home/chr/installtest/%s && ./install.sh " % self.installPath)
except Exception as e:
print("Build Taosd error for node %d " % self.index)
logging.exception(e)
pass
def startTaosd(self):
try:
self.conn.run("sudo systemctl start taosd")
except Exception as e:
print("Start Taosd error for node %d " % self.index)
logging.exception(e)
def restartTarbi(self):
try:
self.conn.run("sudo systemctl restart tarbitratord ")
except Exception as e:
print("Start Taosd error for node %d " % self.index)
logging.exception(e)
def clearData(self):
timeNow = datetime.datetime.now()
# timeYes = datetime.datetime.now() + datetime.timedelta(days=-1)
timStr = timeNow.strftime('%Y%m%d%H%M%S')
# timStr = timeNow.strftime('%Y%m%d%H%M%S')
try:
# self.conn.run("mv /var/lib/taos/ /var/lib/taos%s " % timStr)
self.conn.run("rm -rf /home/chr/data/taos*")
except Exception as e:
print("rm -rf /var/lib/taos error %d " % self.index)
logging.exception(e)
def stopTaosd(self):
try:
self.conn.run("sudo systemctl stop taosd")
except Exception as e:
print("Stop Taosd error for node %d " % self.index)
logging.exception(e)
def restartTaosd(self):
try:
self.conn.run("sudo systemctl restart taosd")
except Exception as e:
print("Stop Taosd error for node %d " % self.index)
logging.exception(e)
class oneNode:
def FirestStartNode(self, id, username, IP, passwd, version):
# get installPackage
verName = "TDengine-enterprise-server-%s-Linux-x64.tar.gz" % version
# installPath = "TDengine-enterprise-server-%s" % self.version
node131 = Node(131, 'ubuntu', '192.168.1.131', 'tbase125!', '2.0.20.0')
node131.conn.run('sshpass -p tbase125! scp /nas/TDengine/v%s/enterprise/%s root@192.168.1.%d:/home/chr/installtest/' % (version,verName,id))
node131.conn.close()
# install TDengine at 192.168.103/104/141
try:
node = Node(id, username, IP, passwd, version)
node.conn.run('echo "start taosd"')
node.buildTaosd()
# clear DataPath , if need clear data
node.clearData()
node.startTaosd()
if id == 103 :
node.restartTarbi()
print("start taosd ver:%s node:%d successfully " % (version,id))
node.conn.close()
# query_pid = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
# assert query_pid == 1 , "node %d: start taosd failed " % id
except Exception as e:
print("Stop Taosd error for node %d " % id)
logging.exception(e)
def startNode(self, id, username, IP, passwd, version):
# start TDengine
try:
node = Node(id, username, IP, passwd, version)
node.conn.run('echo "restart taosd"')
# clear DataPath , if need clear data
node.clearData()
node.restartTaosd()
time.sleep(5)
if id == 103 :
node.restartTarbi()
print("start taosd ver:%s node:%d successfully " % (version,id))
node.conn.close()
# query_pid = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
# assert query_pid == 1 , "node %d: start taosd failed " % id
except Exception as e:
print("Stop Taosd error for node %d " % id)
logging.exception(e)
def firstUpgradeNode(self, id, username, IP, passwd, version):
# get installPackage
verName = "TDengine-enterprise-server-%s-Linux-x64.tar.gz" % version
# installPath = "TDengine-enterprise-server-%s" % self.version
node131 = Node(131, 'ubuntu', '192.168.1.131', 'tbase125!', '2.0.20.0')
node131.conn.run('echo "upgrade cluster"')
node131.conn.run('sshpass -p tbase125! scp /nas/TDengine/v%s/enterprise/%s root@192.168.1.%d:/home/chr/installtest/' % (version,verName,id))
node131.conn.close()
# upgrade TDengine at 192.168.103/104/141
try:
node = Node(id, username, IP, passwd, version)
node.conn.run('echo "start taosd"')
node.conn.run('echo "1234" > /home/chr/test.log')
node.buildTaosd()
time.sleep(5)
node.startTaosd()
if id == 103 :
node.restartTarbi()
print("start taosd ver:%s node:%d successfully " % (version,id))
node.conn.close()
# query_pid = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
# assert query_pid == 1 , "node %d: start taosd failed " % id
except Exception as e:
print("Stop Taosd error for node %d " % id)
logging.exception(e)
def upgradeNode(self, id, username, IP, passwd, version):
# backCluster TDengine at 192.168.103/104/141
try:
node = Node(id, username, IP, passwd, version)
node.conn.run('echo "rollback taos"')
node.rebuildTaosd()
time.sleep(5)
node.startTaosd()
if id == 103 :
node.restartTarbi()
print("start taosd ver:%s node:%d successfully " % (version,id))
node.conn.close()
except Exception as e:
print("Stop Taosd error for node %d " % id)
logging.exception(e)
# how to use: cd TDinternal/community/test/pytest && python3 manualTest/rollingUpgrade.py; while data is being inserted, we can run "python3 manualTest/rollingUpgrade.py". Add a call such as "oneNode().FirestStartNode(103,'root','192.168.1.103','tbase125!','2.0.20.0')"
# node103=oneNode().FirestStartNode(103,'root','192.168.1.103','tbase125!','2.0.20.0')
# node104=oneNode().FirestStartNode(104,'root','192.168.1.104','tbase125!','2.0.20.0')
# node141=oneNode().FirestStartNode(141,'root','192.168.1.141','tbase125!','2.0.20.0')
# node103=oneNode().startNode(103,'root','192.168.1.103','tbase125!','2.0.20.0')
# time.sleep(30)
# node141=oneNode().startNode(141,'root','192.168.1.141','tbase125!','2.0.20.0')
# time.sleep(30)
# node104=oneNode().startNode(104,'root','192.168.1.104','tbase125!','2.0.20.0')
# time.sleep(30)
# node103=oneNode().firstUpgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.5')
# time.sleep(30)
# node104=oneNode().firstUpgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.5')
# time.sleep(30)
# node141=oneNode().firstUpgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.5')
# time.sleep(30)
# node141=oneNode().firstUpgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.10')
# time.sleep(30)
# node103=oneNode().firstUpgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.10')
# time.sleep(30)
# node104=oneNode().firstUpgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.10')
# time.sleep(30)
# node141=oneNode().firstUpgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.12')
# time.sleep(30)
# node103=oneNode().firstUpgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.12')
# time.sleep(30)
# node104=oneNode().firstUpgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.12')
# time.sleep(30)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.0')
# time.sleep(120)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.0')
# time.sleep(180)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.0')
# time.sleep(240)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.5')
# time.sleep(120)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.5')
# time.sleep(120)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.5')
# time.sleep(180)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.10')
# time.sleep(120)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.10')
# time.sleep(120)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.10')
# time.sleep(180)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.12')
# time.sleep(180)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.12')
# time.sleep(180)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.12')
# node141=oneNode().firstUpgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.9')
# time.sleep(5)
# node103=oneNode().firstUpgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.9')
# time.sleep(5)
# node104=oneNode().firstUpgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.9')
# time.sleep(30)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.10')
# time.sleep(12)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.10')
# time.sleep(12)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.10')
# time.sleep(180)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.12')
# time.sleep(120)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.12')
# time.sleep(120)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.12')

View File

@ -0,0 +1,83 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
self.ts = 1537146000000
def run(self):
tdSql.prepare()
print("======= Verify filter for bool, nchar and binary type =========")
tdLog.debug(
"create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
tdSql.execute(
"create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
tdSql.execute("create table st1 using st tags(true, 'table1', '水表')")
for i in range(1, 6):
tdSql.execute(
"insert into st1 values(%d, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d, %f, %f)" %
(self.ts + i, i %
2, i, i,
i, i, i, i, 1.0, 1.0))
# =============Data type keywords cannot be used in filter====================
# timestamp
tdSql.error("select * from st where timestamp = 1629417600")
# bool
tdSql.error("select * from st where bool = false")
#binary
tdSql.error("select * from st where binary = 'taosdata'")
# nchar
tdSql.error("select * from st where nchar = '涛思数据'")
# tinyint
tdSql.error("select * from st where tinyint = 127")
# smallint
tdSql.error("select * from st where smallint = 32767")
# int
tdSql.error("select * from st where INTEGER = 2147483647")
tdSql.error("select * from st where int = 2147483647")
# bigint
tdSql.error("select * from st where bigint = 2147483647")
# float
tdSql.error("select * from st where float = 3.4E38")
# double
tdSql.error("select * from st where double = 1.7E308")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())

View File

@ -0,0 +1,87 @@
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 10,
"num_of_records_per_req": 1000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db1",
"drop": "yes",
"replica": 1,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 1000,
"childtable_prefix": "stb00_",
"auto_create_table": "no",
"batch_create_tbl_num": 1000,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 1000,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
},
{
"name": "stb1",
"child_table_exists":"no",
"childtable_count": 10000,
"childtable_prefix": "stb01_",
"auto_create_table": "no",
"batch_create_tbl_num": 1000,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 200,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}

View File

@ -0,0 +1,87 @@
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 10,
"num_of_records_per_req": 1000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db1",
"drop": "yes",
"replica": 2,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 1000,
"childtable_prefix": "stb00_",
"auto_create_table": "no",
"batch_create_tbl_num": 100,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 100,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
},
{
"name": "stb1",
"child_table_exists":"no",
"childtable_count": 1000,
"childtable_prefix": "stb01_",
"auto_create_table": "no",
"batch_create_tbl_num": 10,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 200,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}

View File

@ -0,0 +1,86 @@
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 0,
"num_of_records_per_req": 3000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db2",
"drop": "yes",
"replica": 1,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 200000,
"childtable_prefix": "stb0_",
"auto_create_table": "no",
"batch_create_tbl_num": 1000,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 0,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
},
{
"name": "stb1",
"child_table_exists":"no",
"childtable_count": 2,
"childtable_prefix": "stb1_",
"auto_create_table": "no",
"batch_create_tbl_num": 1000,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 5,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}

View File

@ -0,0 +1,86 @@
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 0,
"num_of_records_per_req": 3000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db2",
"drop": "no",
"replica": 1,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 1,
"childtable_prefix": "stb0_",
"auto_create_table": "no",
"batch_create_tbl_num": 100,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 0,
"childtable_limit": -1,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
},
{
"name": "stb1",
"child_table_exists":"yes",
"childtable_count": 1,
"childtable_prefix": "stb01_",
"auto_create_table": "no",
"batch_create_tbl_num": 10,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 10,
"childtable_limit": -1,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-11-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}

View File

@ -0,0 +1,86 @@
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 0,
"num_of_records_per_req": 3000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db2",
"drop": "no",
"replica": 2,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 1,
"childtable_prefix": "stb0_",
"auto_create_table": "no",
"batch_create_tbl_num": 100,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 0,
"childtable_limit": -1,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
},
{
"name": "stb1",
"child_table_exists":"yes",
"childtable_count": 1,
"childtable_prefix": "stb01_",
"auto_create_table": "no",
"batch_create_tbl_num": 10,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 10,
"childtable_limit": -1,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-11-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}

View File

@ -0,0 +1,86 @@
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 0,
"num_of_records_per_req": 3000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db2",
"drop": "yes",
"replica": 2,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 2000,
"childtable_prefix": "stb0_",
"auto_create_table": "no",
"batch_create_tbl_num": 100,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 2000,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
},
{
"name": "stb1",
"child_table_exists":"no",
"childtable_count": 2,
"childtable_prefix": "stb1_",
"auto_create_table": "no",
"batch_create_tbl_num": 10,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 5,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}

View File

@ -0,0 +1,161 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
from distutils.log import debug
import sys
import os
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import subprocess
from random import choice
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def getBuildPath(self):
global selfPath
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
buildPath = ""
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]
break
return buildPath
def run(self):
# set path para
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
binPath = buildPath+ "/build/bin/"
testPath = selfPath+ "/../../../"
walFilePath = testPath + "/sim/dnode1/data/mnode_bak/wal/"
#new db and insert data
tdSql.execute("drop database if exists db2")
os.system("%staosdemo -f tsdb/insertDataDb1.json -y " % binPath)
tdSql.execute("drop database if exists db1")
os.system("%staosdemo -f tsdb/insertDataDb2.json -y " % binPath)
tdSql.execute("drop table if exists db2.stb0")
os.system("%staosdemo -f tsdb/insertDataDb2Newstab.json -y " % binPath)
tdSql.execute("use db2")
tdSql.execute("drop table if exists stb1_0")
tdSql.execute("drop table if exists stb1_1")
tdSql.execute("insert into stb0_0 values(1614218412000,8637,78.861045,'R','bf3')(1614218422000,8637,98.861045,'R','bf3')")
tdSql.execute("alter table db2.stb0 add column c4 int")
tdSql.execute("alter table db2.stb0 drop column c2")
tdSql.execute("alter table db2.stb0 add tag t3 int;")
tdSql.execute("alter table db2.stb0 drop tag t1")
tdSql.execute("create table if not exists stb2_0 (ts timestamp, c0 int, c1 float) ")
tdSql.execute("insert into stb2_0 values(1614218412000,8637,78.861045)")
tdSql.execute("alter table stb2_0 add column c2 binary(4)")
tdSql.execute("alter table stb2_0 drop column c1")
tdSql.execute("insert into stb2_0 values(1614218422000,8638,'R')")
# create db utest
dataType= [ "tinyint", "smallint", "int", "bigint", "float", "double", "bool", " binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
tdSql.execute("drop database if exists utest")
tdSql.execute("create database utest keep 3650")
tdSql.execute("use utest")
tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(200), tag1 int)''')
# rowNum1 = 13
# for i in range(rowNum1):
# columnName= "col" + str(i+1)
# tdSql.execute("alter table test drop column %s ;" % columnName )
rowNum2= 988
for i in range(rowNum2):
tdSql.execute("alter table test add column col%d %s ;" %( i+14, choice(dataType)) )
rowNum3= 988
for i in range(rowNum3):
tdSql.execute("alter table test drop column col%d ;" %( i+14) )
self.rowNum = 1
self.rowNum2 = 100
self.rowNum3 = 20
self.ts = 1537146000000
for j in range(self.rowNum2):
tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j,j) )
for i in range(self.rowNum):
tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
% (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
for j in range(self.rowNum2):
tdSql.execute("drop table if exists test%d" % (j+1))
# stop taosd and restart taosd
tdDnodes.stop(1)
sleep(10)
tdDnodes.start(1)
sleep(5)
tdSql.execute("reset query cache")
query_pid2 = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
print(query_pid2)
# verify that the data is correct
tdSql.execute("use db2")
tdSql.query("select count (tbname) from stb0")
tdSql.checkData(0, 0, 1)
tdSql.query("select count (tbname) from stb1")
tdSql.checkRows(0)
tdSql.query("select count(*) from stb0_0")
tdSql.checkData(0, 0, 2)
tdSql.query("select count(*) from stb0")
tdSql.checkData(0, 0, 2)
tdSql.query("select count(*) from stb2_0")
tdSql.checkData(0, 0, 2)
tdSql.execute("use utest")
tdSql.query("select count (tbname) from test")
tdSql.checkData(0, 0, 1)
# delete useless file
testcaseFilename = os.path.split(__file__)[-1]
os.system("rm -rf ./insert_res.txt")
os.system("rm -rf tsdb/%s.sql" % testcaseFilename )
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())

View File

@ -0,0 +1,160 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
import subprocess
from random import choice
class TwoClients:
def initConnection(self):
self.host = "chenhaoran02"
self.user = "root"
self.password = "taosdata"
self.config = "/etc/taos/"
self.port = 6030
self.rowNum = 10
self.ts = 1537146000000
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
buildPath = ""
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]
break
return buildPath
def run(self):
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
binPath = buildPath + "/build/bin/"
walFilePath = "/var/lib/taos/mnode_bak/wal/"
# new taos client
conn1 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config )
print(conn1)
cur1 = conn1.cursor()
tdSql.init(cur1, True)
# create new databases, super tables and child tables, and insert data via taosdemo
tdSql.execute("drop database if exists db2")
os.system("%staosdemo -f tsdb/insertDataDb1.json -y " % binPath)
tdSql.execute("drop database if exists db1")
os.system("%staosdemo -f tsdb/insertDataDb2.json -y " % binPath)
tdSql.execute("drop table if exists db2.stb0")
os.system("%staosdemo -f tsdb/insertDataDb2Newstab.json -y " % binPath)
# create regular tables and modify their schemas;
tdSql.execute("use db2")
tdSql.execute("drop table if exists stb1_0")
tdSql.execute("drop table if exists stb1_1")
tdSql.execute("insert into stb0_0 values(1614218412000,8637,78.861045,'R','bf3')(1614218422000,8637,98.861045,'R','bf3')")
tdSql.execute("alter table db2.stb0 add column c4 int")
tdSql.execute("alter table db2.stb0 drop column c2")
tdSql.execute("alter table db2.stb0 add tag t3 int")
tdSql.execute("alter table db2.stb0 drop tag t1")
tdSql.execute("create table if not exists stb2_0 (ts timestamp, c0 int, c1 float) ")
tdSql.execute("insert into stb2_0 values(1614218412000,8637,78.861045)")
tdSql.execute("alter table stb2_0 add column c2 binary(4)")
tdSql.execute("alter table stb2_0 drop column c1")
tdSql.execute("insert into stb2_0 values(1614218422000,8638,'R')")
# create db utest and modify super tables;
dataType= [ "tinyint", "smallint", "int", "bigint", "float", "double", "bool", " binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
tdSql.execute("drop database if exists utest")
tdSql.execute("create database utest keep 3650")
tdSql.execute("use utest")
tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(200), tag1 int)''')
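# schema-change stress: add 988 columns of random types to the super table, then drop them all again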
rowNum2 = 988
for i in range(rowNum2):
tdSql.execute("alter table test add column col%d %s ;" % (i + 14, choice(dataType)))
rowNum3 = 988
for i in range(rowNum3):
tdSql.execute("alter table test drop column col%d ;" % (i + 14))
self.rowNum = 1
self.rowNum2 = 100
self.ts = 1537146000000
for j in range(self.rowNum2):
tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j,j) )
for i in range(self.rowNum):
tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
% (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
# delete all child tables except test0;
for j in range(self.rowNum2):
tdSql.execute("drop table if exists test%d" % (j+1))
# restart taosd (SIGINT lets the server shut down gracefully)
os.system("ps -ef |grep taosd |grep -v 'grep' |awk '{print $2}'|xargs kill -2")
sleep(20)
os.system("nohup /usr/bin/taosd > /dev/null 2>&1 &")
sleep(4)
tdSql.execute("reset query cache")
query_pid2 = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
print(query_pid2)
# new taos connection to the server
conn2 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config )
print(conn2)
cur2 = conn2.cursor()
tdSql.init(cur2, True)
# check that the data is correct
tdSql.query("show databases")
tdSql.execute("use db2")
tdSql.query("select count (tbname) from stb0")
tdSql.checkData(0, 0, 1)
tdSql.query("select count (tbname) from stb1")
tdSql.checkRows(0)
tdSql.query("select count(*) from stb0_0")
tdSql.checkData(0, 0, 2)
tdSql.query("select count(*) from stb0")
tdSql.checkData(0, 0, 2)
tdSql.query("select count(*) from stb2_0")
tdSql.checkData(0, 0, 2)
tdSql.query("select * from stb2_0")
tdSql.checkData(1, 2, 'R')
tdSql.execute("use utest")
tdSql.query("select count (tbname) from test")
tdSql.checkData(0, 0, 1)
# delete useless files
testcaseFilename = os.path.split(__file__)[-1]
os.system("rm -rf ./insert_res.txt")
os.system("rm -rf tsdb/%s.sql" % testcaseFilename )
clients = TwoClients()
clients.initConnection()
# clients.getBuildPath()
clients.run()

View File

@ -0,0 +1,170 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
import subprocess
from random import choice
class TwoClients:
def initConnection(self):
self.host = "chenhaoran02"
self.user = "root"
self.password = "taosdata"
self.config = "/etc/taos/"
self.port = 6030
self.rowNum = 10
self.ts = 1537146000000
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
# buildPath stays "" if taosd is not found; run() treats that as an error
buildPath = ""
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]
break
return buildPath
def run(self):
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
binPath = buildPath + "/build/bin/"
walFilePath = "/var/lib/taos/mnode_bak/wal/"
# new taos client
conn1 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config )
print(conn1)
cur1 = conn1.cursor()
tdSql.init(cur1, True)
# create new databases, super tables and child tables, and insert data via taosdemo
tdSql.execute("drop database if exists db2")
os.system("%staosdemo -f tsdb/insertDataDb1Replica2.json -y " % binPath)
tdSql.execute("drop database if exists db1")
os.system("%staosdemo -f tsdb/insertDataDb2Replica2.json -y " % binPath)
tdSql.execute("drop table if exists db2.stb0")
os.system("%staosdemo -f tsdb/insertDataDb2NewstabReplica2.json -y " % binPath)
# create regular tables and modify their schemas;
tdSql.execute("use db2")
tdSql.execute("drop table if exists stb1_0")
tdSql.execute("drop table if exists stb1_1")
tdSql.execute("insert into stb0_0 values(1614218412000,8637,78.861045,'R','bf3')(1614218422000,8637,98.861045,'R','bf3')")
tdSql.execute("alter table db2.stb0 add column c4 int")
tdSql.execute("alter table db2.stb0 drop column c2")
tdSql.execute("alter table db2.stb0 add tag t3 int")
tdSql.execute("alter table db2.stb0 drop tag t1")
tdSql.execute("create table if not exists stb2_0 (ts timestamp, c0 int, c1 float) ")
tdSql.execute("insert into stb2_0 values(1614218412000,8637,78.861045)")
tdSql.execute("alter table stb2_0 add column c2 binary(4)")
tdSql.execute("alter table stb2_0 drop column c1")
tdSql.execute("insert into stb2_0 values(1614218422000,8638,'R')")
# create db utest with 2 replicas and modify super tables;
dataType= [ "tinyint", "smallint", "int", "bigint", "float", "double", "bool", " binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
tdSql.execute("drop database if exists utest")
tdSql.execute("create database utest keep 3650 replica 2 ")
tdSql.execute("use utest")
tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(200), tag1 int)''')
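# schema-change stress: add 988 columns of random types to the super table, then drop them all again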
rowNum2 = 988
for i in range(rowNum2):
tdSql.execute("alter table test add column col%d %s ;" % (i + 14, choice(dataType)))
rowNum3 = 988
for i in range(rowNum3):
tdSql.execute("alter table test drop column col%d ;" % (i + 14))
self.rowNum = 1
self.rowNum2 = 100
self.ts = 1537146000000
for j in range(self.rowNum2):
tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j,j) )
for i in range(self.rowNum):
tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
% (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
# delete all child tables except test0;
for j in range(self.rowNum2):
tdSql.execute("drop table if exists test%d" % (j+1))
# drop the dnode, wipe its data, and restart taosd
sleep(3)
tdSql.execute("drop dnode 'chenhaoran02:6030';")
sleep(20)
os.system("rm -rf /var/lib/taos/*")
print("cleared dnode chenhaoran02's data files")
os.system("nohup /usr/bin/taosd > /dev/null 2>&1 &")
print("start taosd")
sleep(10)
tdSql.execute("reset query cache ;")
tdSql.execute("create dnode chenhaoran02 ;")
# #
# os.system("ps -ef |grep taosd |grep -v 'grep' |awk '{print $2}'|xargs kill -2")
# sleep(20)
# os.system("nohup /usr/bin/taosd > /dev/null 2>&1 &")
# sleep(4)
# tdSql.execute("reset query cache")
# query_pid2 = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
# print(query_pid2)
# new taos connection to the server
conn2 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config )
print(conn2)
cur2 = conn2.cursor()
tdSql.init(cur2, True)
# check that the data is correct
tdSql.query("show databases")
tdSql.execute("use db2")
tdSql.query("select count (tbname) from stb0")
tdSql.checkData(0, 0, 1)
tdSql.query("select count (tbname) from stb1")
tdSql.checkRows(0)
tdSql.query("select count(*) from stb0_0")
tdSql.checkData(0, 0, 2)
tdSql.query("select count(*) from stb0")
tdSql.checkData(0, 0, 2)
tdSql.query("select count(*) from stb2_0")
tdSql.checkData(0, 0, 2)
tdSql.query("select * from stb2_0")
tdSql.checkData(1, 2, 'R')
tdSql.execute("use utest")
tdSql.query("select count (tbname) from test")
tdSql.checkData(0, 0, 1)
# delete useless files
testcaseFilename = os.path.split(__file__)[-1]
os.system("rm -rf ./insert_res.txt")
os.system("rm -rf tsdb/%s.sql" % testcaseFilename )
clients = TwoClients()
clients.initConnection()
# clients.getBuildPath()
clients.run()