merge from develop
commit 56885b7645
|
@ -160,7 +160,6 @@ pipeline {
|
|||
skipbuild='2'
|
||||
skipbuild=sh(script: "git log -2 --pretty=%B | fgrep -ie '[skip ci]' -e '[ci skip]' && echo 1 || echo 2", returnStdout:true)
|
||||
println skipbuild
|
||||
|
||||
}
|
||||
sh'''
|
||||
rm -rf ${WORKSPACE}.tes
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# TDengine的运营与维护
|
||||
# TDengine的运营与运维
|
||||
|
||||
## <a class="anchor" id="planning"></a>容量规划
|
||||
|
||||
|
@ -28,12 +28,28 @@ taosd 内存总量 = vnode 内存 + mnode 内存 + 查询内存
|
|||
|
||||
最后,如果内存充裕,可以考虑加大 Blocks 的配置,这样更多数据将保存在内存里,提高查询速度。
|
||||
|
||||
#### 客户端内存需求
|
||||
|
||||
客户端应用采用 taosc 客户端驱动连接服务端,会产生相应的内存开销。
|
||||
|
||||
客户端的内存开销主要由写入过程中的 SQL 语句、表的元数据信息缓存、以及结构性开销构成。系统最大容纳的表数量为 N(每个通过超级表创建的表的 meta data 开销约 256 字节),最大并行写入线程数量 T,最大 SQL 语句长度 S(通常是 1 Mbytes)。由此可以进行客户端内存开销的估算(单位 MBytes):
|
||||
```
|
||||
M = (T * S * 3 + (N / 4096) + 100)
|
||||
```
|
||||
|
||||
举例如下:用户最大并发写入线程数 100,子表总数 10,000,000,那么客户端的内存最低要求是:
|
||||
```
|
||||
100 * 3 + (10000000 / 4096) + 100 ≈ 2841 (MBytes)
|
||||
```
|
||||
|
||||
即配置 3 GBytes 内存是最低要求。
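
下面是一段示意性的 C 代码,按照上述公式估算客户端内存需求;其中线程数 T、SQL 长度 S、子表总数 N 均为示例取值,并非固定参数:

```c
#include <stdio.h>

int main(void) {
    // 示例取值:T 为最大并行写入线程数,S 为最大 SQL 语句长度(MBytes),N 为子表总数
    double T = 100;        // 并发写入线程数(示例)
    double S = 1;          // 单条 SQL 最大长度,约 1 MBytes(示例)
    double N = 10000000;   // 子表总数(示例)

    // 按文档公式估算:M = T * S * 3 + N / 4096 + 100(单位 MBytes)
    double M = T * S * 3 + N / 4096 + 100;
    printf("estimated client memory: %.0f MBytes\n", M);  // 约 2841 MBytes
    return 0;
}
```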
|
||||
|
||||
### CPU 需求
|
||||
|
||||
CPU 的需求取决于如下两方面:
|
||||
|
||||
* __数据插入__ TDengine 单核每秒能至少处理一万个插入请求。每个插入请求可以带多条记录,一次插入一条记录与插入 10 条记录,消耗的计算资源差别很小。因此每次插入,条数越大,插入效率越高。如果一个插入请求带 200 条以上记录,单核就能达到每秒插入 100 万条记录的速度。但对前端数据采集的要求越高,因为需要缓存记录,然后一批插入。
|
||||
* __查询需求__ TDengine 提供高效的查询,但是每个场景的查询差异很大,查询频次变化也很大,难以给出客观数字。需要用户针对自己的场景,写一些查询语句,才能确定。
|
||||
* **数据插入** TDengine 单核每秒能至少处理一万个插入请求。每个插入请求可以带多条记录,一次插入一条记录与插入 10 条记录,消耗的计算资源差别很小。因此每次插入,条数越多,插入效率越高。如果一个插入请求带 200 条以上记录,单核就能达到每秒插入 100 万条记录的速度。但这对前端数据采集的要求更高,因为需要先缓存记录,然后批量插入。
|
||||
* **查询需求** TDengine 提供高效的查询,但是每个场景的查询差异很大,查询频次变化也很大,难以给出客观数字。需要用户针对自己的场景,写一些查询语句,才能确定。
|
||||
|
||||
因此仅对数据插入而言,CPU 是可以估算出来的,但查询所耗的计算资源无法估算。在实际运营过程中,不建议 CPU 使用率超过 50%,超过后,需要增加新的节点,以获得更多计算资源。
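
结合上文“每次插入条数越多、插入效率越高”的说明,下面给出一段示意性的 C 代码(基于 TDengine 的 C 连接器,连接地址、库名 demo、表 d1001 及其列结构均为假设),演示把多条记录拼成一条 INSERT 语句后一次写入:

```c
#include <stdio.h>
#include <string.h>
#include <taos.h>   // TDengine C 连接器头文件

int main(void) {
    // 连接参数仅为示例
    TAOS *conn = taos_connect("localhost", "root", "taosdata", "demo", 6030);
    if (conn == NULL) {
        printf("connect failed\n");
        return 1;
    }

    // 将 200 条记录拼接进同一条 INSERT 语句,减少请求次数(表 d1001 及列值仅为假设)
    char sql[16384] = "INSERT INTO d1001 VALUES";
    for (int i = 0; i < 200; ++i) {
        char row[128];
        snprintf(row, sizeof(row), " (now + %da, %.2f, %d, %.2f)", i, 10.5 + i * 0.01, 219, 0.31);
        strncat(sql, row, sizeof(sql) - strlen(sql) - 1);
    }

    TAOS_RES *res = taos_query(conn, sql);
    if (res == NULL || taos_errno(res) != 0) {
        printf("insert failed: %s\n", res ? taos_errstr(res) : "unknown error");
    }
    taos_free_result(res);
    taos_close(conn);
    return 0;
}
```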
|
||||
|
||||
|
@ -96,51 +112,170 @@ TDengine系统后台服务由taosd提供,可以在配置文件taos.cfg里修
|
|||
taosd -C
|
||||
```
|
||||
|
||||
下面仅仅列出一些重要的配置参数,更多的参数请看配置文件里的说明。各个参数的详细介绍及作用请看前述章节,而且这些参数的缺省配置都是工作的,一般无需设置。**注意:配置修改后,需要重启*taosd*服务才能生效。**
|
||||
下面仅仅列出一些重要的配置参数,更多的参数请看配置文件里的说明。各个参数的详细介绍及作用请看前述章节,而且这些参数的缺省配置都是可以工作的,一般无需设置。**注意:配置文件参数修改后,需要重启*taosd*服务,或客户端应用才能生效。**
|
||||
|
||||
- firstEp: taosd启动时,主动连接的集群中首个dnode的end point, 默认值为localhost:6030。
|
||||
- fqdn:数据节点的FQDN,缺省为操作系统配置的第一个hostname。如果习惯IP地址访问,可设置为该节点的IP地址。这个参数值的长度需要控制在 96 个字符以内。
|
||||
- serverPort:taosd启动后,对外服务的端口号,默认值为6030。(RESTful服务使用的端口号是在此基础上+11,即默认值为6041。)
|
||||
- dataDir: 数据文件目录,所有的数据文件都将写入该目录。默认值:/var/lib/taos。
|
||||
- logDir:日志文件目录,客户端和服务器的运行日志文件将写入该目录。默认值:/var/log/taos。
|
||||
- arbitrator:系统中裁决器的end point, 缺省值为空。
|
||||
- role:dnode的可选角色。0-any; 既可作为mnode,也可分配vnode;1-mgmt;只能作为mnode,不能分配vnode;2-dnode;不能作为mnode,只能分配vnode
|
||||
- debugFlag:运行日志开关。131(输出错误和警告日志),135( 输出错误、警告和调试日志),143( 输出错误、警告、调试和跟踪日志)。默认值:131或135(不同模块有不同的默认值)。
|
||||
- numOfLogLines:单个日志文件允许的最大行数。默认值:10,000,000行。
|
||||
- logKeepDays:日志文件的最长保存时间。大于0时,日志文件会被重命名为taosdlog.xxx,其中xxx为日志文件最后修改的时间戳,单位为秒。默认值:0天。
|
||||
- maxSQLLength:单条SQL语句允许最长限制。默认值:65380字节。
|
||||
- telemetryReporting: 是否允许 TDengine 采集和上报基本使用信息,0表示不允许,1表示允许。 默认值:1。
|
||||
- stream: 是否启用连续查询(流计算功能),0表示不允许,1表示允许。 默认值:1。
|
||||
- queryBufferSize: 为所有并发查询占用保留的内存大小。计算规则可以根据实际应用可能的最大并发数和表的数目相乘,再乘 170。单位为 MB(2.0.15 以前的版本中,此参数的单位是字节)。估算方法可参考本列表之后的示意代码。
|
||||
- ratioOfQueryCores: 设置查询线程的最大数量。最小值0 表示只有1个查询线程;最大值2表示最大建立2倍CPU核数的查询线程。默认为1,表示最大和CPU核数相等的查询线程。该值可以为小数,即0.5表示最大建立CPU核数一半的查询线程。
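
下面用一小段 C 代码演示 queryBufferSize 的估算方法,直接套用上面给出的计算规则;其中最大并发查询数与单次查询涉及的表数仅为示例假设:

```c
#include <stdio.h>

int main(void) {
    // 示例假设:最大并发查询数为 2,单次查询涉及的表数为 3
    long long maxConcurrentQueries = 2;
    long long numOfTables = 3;

    // 按文档说明:queryBufferSize ≈ 最大并发数 × 表数 × 170(单位 MB,2.0.15 及以后版本)
    long long queryBufferSizeMB = maxConcurrentQueries * numOfTables * 170;
    printf("suggested queryBufferSize: %lld MB\n", queryBufferSizeMB);  // 2 * 3 * 170 = 1020 MB
    return 0;
}
```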
|
||||
| **#** | **配置参数名称** | **内部** | **S\|C** | **单位** | **含义** | **取值范围** | **缺省值** | **备注** |
|
||||
| ----- | ----------------------- | -------- | -------- | -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
|
||||
| 1 | firstEP | | **SC** | | taosd启动时,主动连接的集群中首个dnode的end point | | localhost:6030 | |
|
||||
| 2 | secondEP | YES | **SC** | | taosd启动时,如果firstEp连接不上,尝试连接集群中第二个dnode的end point | | 无 | |
|
||||
| 3 | fqdn | | **SC** | | 数据节点的FQDN。如果习惯IP地址访问,可设置为该节点的IP地址。 | | 缺省为操作系统配置的第一个hostname。 | 这个参数值的长度需要控制在 96 个字符以内。 |
|
||||
| 4 | serverPort | | **SC** | | taosd启动后,对外服务的端口号 | | 6030 | RESTful服务使用的端口号是在此基础上+11,即默认值为6041。 |
|
||||
| 5 | logDir | | **SC** | | 日志文件目录,客户端和服务器的运行日志将写入该目录 | | /var/log/taos | |
|
||||
| 6 | scriptDir | YES | **S** | | | | | |
|
||||
| 7 | dataDir | | **S** | | 数据文件目录,所有的数据文件都将写入该目录 | | /var/lib/taos | |
|
||||
| 8 | arbitrator | | **S** | | 系统中裁决器的end point | | 空 | |
|
||||
| 9 | numOfThreadsPerCore | | **SC** | | 每个CPU核生成的队列消费者线程数量 | | 1.0 | |
|
||||
| 10 | ratioOfQueryThreads | | **S** | | 设置查询线程的最大数量 | 0:表示只有1个查询线程;1:表示最大和CPU核数相等的查询线程;2:表示最大建立2倍CPU核数的查询线程。 | 1 | 该值可以为小数,即0.5表示最大建立CPU核数一半的查询线程。 |
|
||||
| 11 | numOfMnodes | | **S** | | 系统中管理节点个数 | | 3 | |
|
||||
| 12 | vnodeBak | | **S** | | 删除vnode时是否备份vnode目录 | 0:否,1:是 | 1 | |
|
||||
| 13 | telemetryReporting | | **S** | | 是否允许 TDengine 采集和上报基本使用信息 | 0:不允许;1:允许 | 1 | |
|
||||
| 14 | balance | | **S** | | 是否启动负载均衡 | 0,1 | 1 | |
|
||||
| 15 | balanceInterval | YES | **S** | 秒 | 管理节点在正常运行状态下,检查负载均衡的时间间隔 | 1-30000 | 300 | |
|
||||
| 16 | role | | **S** | | dnode的可选角色 | 0:any(既可作为mnode,也可分配vnode);1:mgmt(只能作为mnode,不能分配vnode);2:dnode(不能作为mnode,只能分配vnode) | 0 | |
|
||||
| 17 | maxTmrCtrl | | **SC** | 个 | 定时器个数 | 8-2048 | 512 | |
|
||||
| 18 | monitorInterval | | **S** | 秒 | 监控数据库记录系统参数(CPU/内存)的时间间隔 | 1-600 | 30 | |
|
||||
| 19 | offlineThreshold | | **S** | 秒 | dnode离线阈值,超过该时间将导致dnode离线 | 5-7200000 | 86400*10(10天) | |
|
||||
| 20 | rpcTimer | | **SC** | 毫秒 | rpc重试时长 | 100-3000 | 300 | |
|
||||
| 21 | rpcMaxTime | | **SC** | 秒 | rpc等待应答最大时长 | 100-7200 | 600 | |
|
||||
| 22 | statusInterval | | **S** | 秒 | dnode向mnode报告状态间隔 | 1-10 | 1 | |
|
||||
| 23 | shellActivityTimer | | **SC** | 秒 | shell客户端向mnode发送心跳间隔 | 1-120 | 3 | |
|
||||
| 24 | tableMetaKeepTimer | | **S** | 秒 | 表的元数据cache时长 | 1-8640000 | 7200 | |
|
||||
| 25 | minSlidingTime | | **S** | 毫秒 | 最小滑动窗口时长 | 10-1000000 | 10 | 支持us补值后,这个值就是1us了。 |
|
||||
| 26 | minIntervalTime | | **S** | 毫秒 | 时间窗口最小值 | 1-1000000 | 10 | |
|
||||
| 27 | stream | | **S** | | 是否启用连续查询(流计算功能) | 0:不允许;1:允许 | 1 | |
|
||||
| 28 | maxStreamCompDelay | | **S** | 毫秒 | 连续查询启动最大延迟 | 10-1000000000 | 20000 | 为避免多个stream同时执行占用太多系统资源,程序中对stream的执行时间人为增加了一些随机的延时。maxFirstStreamCompDelay 是stream第一次执行前最少要等待的时间。streamCompDelayRatio 是延迟时间的计算系数,它乘以查询的 interval 后为延迟时间基准。maxStreamCompDelay是延迟时间基准的上限。实际延迟时间为一个不超过延迟时间基准的随机值。stream某次计算失败后需要重试,retryStreamCompDelay是重试的等待时间基准。实际重试等待时间为不超过等待时间基准的随机值。 |
|
||||
| 29 | maxFirstStreamCompDelay | | **S** | 毫秒 | 第一次连续查询启动最大延迟 | 10-1000000000 | 10000 | |
|
||||
| 30 | retryStreamCompDelay | | **S** | 毫秒 | 连续查询重试等待间隔 | 10-1000000000 | 10 | |
|
||||
| 31 | streamCompDelayRatio | | **S** | | 连续查询的延迟时间计算系数 | 0.1-0.9 | 0.1 | |
|
||||
| 32 | maxVgroupsPerDb | | **S** | | 每个DB中 能够使用的最大vnode个数 | 0-8192 | | |
|
||||
| 33 | maxTablesPerVnode | | **S** | | 每个vnode中能够创建的最大表个数 | | 1000000 | |
|
||||
| 34 | minTablesPerVnode | YES | **S** | | 每个vnode中必须创建的最小表个数 | | 100 | |
|
||||
| 35 | tableIncStepPerVnode | YES | **S** | | 每个vnode中超过最小表数后递增步长 | | 1000 | |
|
||||
| 36 | cache | | **S** | MB | 内存块的大小 | | 16 | |
|
||||
| 37 | blocks | | **S** | | 每个vnode(tsdb)中有多少cache大小的内存块。因此一个vnode的用的内存大小粗略为(cache * blocks) | | 6 | |
|
||||
| 38 | days | | **S** | 天 | 数据文件存储数据的时间跨度 | | 10 | |
|
||||
| 39 | keep | | **S** | 天 | 数据保留的天数 | | 3650 | |
|
||||
| 40 | minRows | | **S** | | 文件块中记录的最小条数 | | 100 | |
|
||||
| 41 | maxRows | | **S** | | 文件块中记录的最大条数 | | 4096 | |
|
||||
| 42 | quorum | | **S** | | 异步写入成功所需应答之法定数 | 1-3 | 1 | |
|
||||
| 43 | comp | | **S** | | 文件压缩标志位 | 0:关闭,1:一阶段压缩,2:两阶段压缩 | 2 | |
|
||||
| 44 | walLevel | | **S** | | WAL级别 | 1:写wal, 但不执行fsync; 2:写wal, 而且执行fsync | 1 | |
|
||||
| 45 | fsync | | **S** | 毫秒 | 当wal设置为2时,执行fsync的周期 | 最小为0,表示每次写入,立即执行fsync;最大为180000(三分钟) | 3000 | |
|
||||
| 46 | replica | | **S** | | 副本个数 | 1-3 | 1 | |
|
||||
| 47 | mqttHostName | YES | **S** | | mqtt uri | | | [mqtt://username:password@hostname:1883/taos/](mqtt://username:password@hostname:1883/taos/) |
|
||||
| 48 | mqttPort | YES | **S** | | mqtt 端口号 | | | 1883 |
|
||||
| 49 | mqttTopic | YES | **S** | | | | | /test |
|
||||
| 50 | compressMsgSize | | **S** | bytes | 客户端与服务器之间进行消息通讯过程中,对通讯的消息进行压缩的阈值。如果要压缩消息,建议设置为64330字节,即大于64330字节的消息体才进行压缩。 | `0 `表示对所有的消息均进行压缩 >0: 超过该值的消息才进行压缩 -1: 不压缩 | -1 | |
|
||||
| 51 | maxSQLLength | | **C** | bytes | 单条SQL语句允许的最长限制 | 65480-1048576 | 65380 | |
|
||||
| 52 | maxNumOfOrderedRes | | **SC** | | 支持超级表时间排序允许的最多记录数限制 | | 10万 | |
|
||||
| 53 | timezone | | **SC** | | 时区 | | 从系统中动态获取当前的时区设置 | |
|
||||
| 54 | locale | | **SC** | | 系统区位信息及编码格式 | | 系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过API设置 | |
|
||||
| 55 | charset | | **SC** | | 字符集编码 | | 系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过API设置 | |
|
||||
| 56 | maxShellConns | | **S** | | 一个dnode容许的连接数 | 10-50000000 | 5000 | |
|
||||
| 57 | maxConnections | | **S** | | 一个数据库连接所容许的dnode连接数 | 1-100000 | 5000 | 实际测试下来,如果默认没有配,选 50 个 worker thread 会产生 Network unavailable |
|
||||
| 58 | minimalLogDirGB | | **SC** | GB | 当日志文件目录所在磁盘可用空间小于该值时,停止写日志 | | 0.1 | |
|
||||
| 59 | minimalTmpDirGB | | **SC** | GB | 当临时文件目录所在磁盘可用空间小于该值时,停止写临时文件 | | 0.1 | |
|
||||
| 60 | minimalDataDirGB | | **S** | GB | 当数据文件目录所在磁盘可用空间小于该值时,停止写时序数据 | | 0.1 | |
|
||||
| 61 | mnodeEqualVnodeNum | | **S** | | 一个mnode等同于vnode消耗的个数 | | 4 | |
|
||||
| 62 | http | | **S** | | 服务器内部的http服务开关。 | 0:关闭http服务, 1:激活http服务。 | 1 | |
|
||||
| 63 | mqtt | YES | **S** | | 服务器内部的mqtt服务开关。 | 0:关闭mqtt服务, 1:激活mqtt服务。 | 0 | |
|
||||
| 64 | monitor | | **S** | | 服务器内部的系统监控开关。监控主要负责收集物理节点的负载状况,包括CPU、内存、硬盘、网络带宽、HTTP请求量的监控记录,记录信息存储在`LOG`库中。 | 0:关闭监控服务, 1:激活监控服务。 | 0 | |
|
||||
| 65 | httpEnableRecordSql | | **S** | | 内部使用,记录通过RESTFul接口,产生的SQL调用 | | 0 | 生成的文件(httpnote.0/httpnote.1),与服务端日志所在目录相同。 |
|
||||
| 66 | httpMaxThreads | | **S** | | RESTFul接口的线程数 | | 2 | |
|
||||
| 67 | telegrafUseFieldNum | YES | | | | | | |
|
||||
| 68 | restfulRowLimit | | **S** | | RESTFul接口单次返回的记录条数 | | 10240 | 最大10,000,000 |
|
||||
| 69 | numOfLogLines | | **SC** | | 单个日志文件允许的最大行数。 | | 10,000,000 | |
|
||||
| 70 | asyncLog | | **SC** | | 日志写入模式 | 0:同步、1:异步 | 1 | |
|
||||
| 71 | logKeepDays | | **SC** | 天 | 日志文件的最长保存时间 | | 0 | 大于0时,日志文件会被重命名为taosdlog.xxx,其中xxx为日志文件最后修改的时间戳。 |
|
||||
| 72 | debugFlag | | **SC** | | 运行日志开关 | 131(输出错误和警告日志),135(输出错误、警告和调试日志),143(输出错误、警告、调试和跟踪日志) | 131或135(不同模块有不同的默认值) | |
|
||||
| 73 | mDebugFlag | | **S** | | 管理模块的日志开关 | 同上 | 135 | |
|
||||
| 74 | dDebugFlag | | **SC** | | dnode模块的日志开关 | 同上 | 135 | |
|
||||
| 75 | sDebugFlag | | **SC** | | sync模块的日志开关 | 同上 | 135 | |
|
||||
| 76 | wDebugFlag | | **SC** | | wal模块的日志开关 | 同上 | 135 | |
|
||||
| 77 | sdbDebugFlag | | **SC** | | sdb模块的日志开关 | 同上 | 135 | |
|
||||
| 78 | rpcDebugFlag | | **SC** | | rpc模块的日志开关 | 同上 | | |
|
||||
| 79 | tmrDebugFlag | | **SC** | | 定时器模块的日志开关 | 同上 | | |
|
||||
| 80 | cDebugFlag | | **C** | | client模块的日志开关 | 同上 | | |
|
||||
| 81 | jniDebugFlag | | **C** | | jni模块的日志开关 | 同上 | | |
|
||||
| 82 | odbcDebugFlag | | **C** | | odbc模块的日志开关 | 同上 | | |
|
||||
| 83 | uDebugFlag | | **SC** | | 共用功能模块的日志开关 | 同上 | | |
|
||||
| 84 | httpDebugFlag | | **S** | | http模块的日志开关 | 同上 | | |
|
||||
| 85 | mqttDebugFlag | | **S** | | mqtt模块的日志开关 | 同上 | | |
|
||||
| 86 | monitorDebugFlag | | **S** | | 监控模块的日志开关 | 同上 | | |
|
||||
| 87 | qDebugFlag | | **SC** | | 查询模块的日志开关 | 同上 | | |
|
||||
| 88 | vDebugFlag | | **SC** | | vnode模块的日志开关 | 同上 | | |
|
||||
| 89 | tsdbDebugFlag | | **S** | | TSDB模块的日志开关 | 同上 | | |
|
||||
| 90 | cqDebugFlag | | **SC** | | 连续查询模块的日志开关 | 同上 | | |
|
||||
| 91 | tscEnableRecordSql | | **C** | | 是否记录客户端sql语句到文件 | 0:否,1:是 | 0 | 生成的文件(tscnote-xxxx.0/tscnote-xxx.1,xxxx是pid),与客户端日志所在目录相同。 |
|
||||
| 92 | enableCoreFile | | **SC** | | 是否开启服务crash时生成core文件 | 0:否,1:是 | 1 | 不同的启动方式,生成core文件的目录如下:1、systemctl start taosd启动:生成的core在根目录下;2、手动启动,就在taosd执行目录下。 |
|
||||
| 93 | gitinfo | YES | **SC** | | | 1 | | |
|
||||
| 94 | gitinfoofInternal | YES | **SC** | | | 2 | | |
|
||||
| 95 | Buildinfo | YES | **SC** | | | 3 | | |
|
||||
| 96 | version | YES | **SC** | | | 4 | | |
|
||||
| 97 | | | | | | | | |
|
||||
| 98 | maxBinaryDisplayWidth | | **C** | | Taos shell中binary 和 nchar字段的显示宽度上限,超过此限制的部分将被隐藏 | 5 - | 30 | 实际上限按以下规则计算:如果字段值的长度大于 maxBinaryDisplayWidth,则显示上限为 **字段名长度** 和 **maxBinaryDisplayWidth** 的较大者。否则,上限为 **字段名长度** 和 **字段值长度** 的较大者。可在 shell 中通过命令 set max_binary_display_width nn动态修改此选项 |
|
||||
| 99 | queryBufferSize | | **S** | MB | 为所有并发查询占用保留的内存大小。 | | | 计算规则可以根据实际应用可能的最大并发数和表的数字相乘,再乘 170 。(2.0.15 以前的版本中,此参数的单位是字节) |
|
||||
| 100 | ratioOfQueryCores | | **S** | | 设置查询线程的最大数量。 | | | 最小值0 表示只有1个查询线程;最大值2表示最大建立2倍CPU核数的查询线程。默认为1,表示最大和CPU核数相等的查询线程。该值可以为小数,即0.5表示最大建立CPU核数一半的查询线程。 |
|
||||
| 101 | update | | **S** | | 允许更新已存在的数据行 | 0 \| 1 | 0 | 从 2.0.8.0 版本开始 |
|
||||
| 102 | cacheLast | | **S** | | 是否在内存中缓存子表的最近数据 | 0:关闭;1:缓存子表最近一行数据;2:缓存子表每一列的最近的非NULL值;3:同时打开缓存最近行和列功能。 | 0 | 2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。 |
|
||||
| 103 | numOfCommitThreads | YES | **S** | | 设置写入线程的最大数量 | | | |
|
||||
| 104 | maxWildCardsLength | | **C** | bytes | 设定 LIKE 算子的通配符字符串允许的最大长度 | 0-16384 | 100 | 2.1.6.1 版本新增。 |
|
||||
|
||||
**注意:**对于端口,TDengine会使用从serverPort起13个连续的TCP和UDP端口号,请务必在防火墙打开。因此如果是缺省配置,需要打开从6030到6042共13个端口,而且必须TCP和UDP都打开。(详细的端口情况请参见 [TDengine 2.0 端口说明](https://www.taosdata.com/cn/documentation/faq#port))
|
||||
|
||||
不同应用场景的数据往往具有不同的数据特征,比如保留天数、副本数、采集频次、记录大小、采集点的数量、压缩等都可完全不同。为获得在存储上的最高效率,TDengine提供如下存储相关的系统配置参数(既可以作为 create database 指令的参数,也可以写在 taos.cfg 配置文件中用来设定创建新数据库时所采用的默认值):
|
||||
|
||||
- days:一个数据文件存储数据的时间跨度。单位为天,默认值:10。
|
||||
- keep:数据库中数据保留的天数。单位为天,默认值:3650。(可通过 alter database 修改)
|
||||
- minRows:文件块中记录的最小条数。单位为条,默认值:100。
|
||||
- maxRows:文件块中记录的最大条数。单位为条,默认值:4096。
|
||||
- comp:文件压缩标志位。0:关闭;1:一阶段压缩;2:两阶段压缩。默认值:2。(可通过 alter database 修改)
|
||||
- wal:WAL级别。1:写wal,但不执行fsync;2:写wal, 而且执行fsync。默认值:1。(在 taos.cfg 中参数名需要写作 walLevel)
|
||||
- fsync:当wal设置为2时,执行fsync的周期。设置为0,表示每次写入,立即执行fsync。单位为毫秒,默认值:3000。
|
||||
- cache:内存块的大小。单位为兆字节(MB),默认值:16。
|
||||
- blocks:每个VNODE(TSDB)中有多少cache大小的内存块。因此一个VNODE的用的内存大小粗略为(cache * blocks)。单位为块,默认值:4。(可通过 alter database 修改)
|
||||
- replica:副本个数。取值范围:1-3,单位为个,默认值:1。(可通过 alter database 修改)
|
||||
- quorum:多副本环境下指令执行的确认数要求。取值范围:1、2,单位为个,默认值:1。(可通过 alter database 修改)
|
||||
- precision:时间戳精度标识。ms表示毫秒,us表示微秒,默认值:ms。(2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。)
|
||||
- cacheLast:是否在内存中缓存子表的最近数据。0:关闭;1:缓存子表最近一行数据;2:缓存子表每一列的最近的非NULL值;3:同时打开缓存最近行和列功能。默认值:0。(可通过 alter database 修改)(从 2.1.2.0 版本开始此参数支持 0~3 的取值范围,在此之前取值只能是 [0, 1];而 2.0.11.0 之前的版本在 SQL 指令中不支持此参数。)(2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。)
|
||||
- update:是否允许更新。0:不允许;1:允许。默认值:0。
|
||||
| **#** | **配置参数名称** | **单位** | **含义** | **取值范围** | **缺省值** |
|
||||
| ----- | ---------------- | -------- | ------------------------------------------------------------ | ------------------------------------------------ | ---------- |
|
||||
| 1 | days | 天 | 一个数据文件存储数据的时间跨度 | | 10 |
|
||||
| 2 | keep | 天 | (可通过 alter database 修改<!-- REPLACE_OPEN_TO_ENTERPRISE__KEEP_PARAM_DESCRIPTION_IN_PARAM_LIST -->)数据库中数据保留的天数。 | | 3650 |
|
||||
| 3 | cache | MB | 内存块的大小 | | 16 |
|
||||
| 4 | blocks | | (可通过 alter database 修改)每个 VNODE(TSDB)中有多少个 cache 大小的内存块。因此一个 VNODE 使用的内存大小粗略为(cache * blocks)。 | | 4 |
|
||||
| 5 | quorum | | (可通过 alter database 修改)多副本环境下指令执行的确认数要求 | 1-2 | 1 |
|
||||
| 6 | minRows | | 文件块中记录的最小条数 | | 100 |
|
||||
| 7 | maxRows | | 文件块中记录的最大条数 | | 4096 |
|
||||
| 8 | comp | | (可通过 alter database 修改)文件压缩标志位 | 0:关闭,1:一阶段压缩,2:两阶段压缩 | 2 |
|
||||
| 9 | walLevel | | (作为 database 的参数时名为 wal;在 taos.cfg 中作为参数时需要写作 walLevel)WAL级别 | 1:写wal,但不执行fsync;2:写wal, 而且执行fsync | 1 |
|
||||
| 10 | fsync | 毫秒 | 当wal设置为2时,执行fsync的周期。设置为0,表示每次写入,立即执行fsync。 | | 3000 |
|
||||
| 11 | replica | | (可通过 alter database 修改)副本个数 | 1-3 | 1 |
|
||||
| 12 | precision | | 时间戳精度标识(2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。) | ms 表示毫秒,us 表示微秒 | ms |
|
||||
| 13 | update | | 是否允许更新 | 0:不允许;1:允许 | 0 |
|
||||
| 14 | cacheLast | | (可通过 alter database 修改)是否在内存中缓存子表的最近数据(从 2.1.2.0 版本开始此参数支持 0~3 的取值范围,在此之前取值只能是 [0, 1];而 2.0.11.0 之前的版本在 SQL 指令中不支持此参数。)(2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。) | 0:关闭;1:缓存子表最近一行数据;2:缓存子表每一列的最近的非NULL值;3:同时打开缓存最近行和列功能 | 0 |
|
||||
|
||||
对于一个应用场景,可能有多种数据特征的数据并存,最佳的设计是将具有相同数据特征的表放在一个库里,这样一个应用有多个库,而每个库可以配置不同的存储参数,从而保证系统有最优的性能。TDengine允许应用在创建库时指定上述存储参数,如果指定,该参数就将覆盖对应的系统配置参数。举例,有下述SQL:
|
||||
|
||||
```
|
||||
create database demo days 10 cache 32 blocks 8 replica 3 update 1;
|
||||
```mysql
|
||||
CREATE DATABASE demo DAYS 10 CACHE 32 BLOCKS 8 REPLICA 3 UPDATE 1;
|
||||
```
|
||||
|
||||
该SQL创建了一个库demo, 每个数据文件存储10天数据,内存块为32兆字节,每个VNODE占用8个内存块,副本数为3,允许更新,而其他参数与系统配置完全一致。
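
如果是在应用程序中建库,也可以通过 C 连接器执行同样的 SQL。下面是一段示意性代码,连接参数仅为示例假设:

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
    TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 6030);  // 连接参数仅为示例
    if (conn == NULL) {
        printf("connect failed\n");
        return 1;
    }

    // 与上文示例相同的建库语句:指定 days、cache、blocks、replica、update 等存储参数
    TAOS_RES *res = taos_query(conn,
        "CREATE DATABASE IF NOT EXISTS demo DAYS 10 CACHE 32 BLOCKS 8 REPLICA 3 UPDATE 1");
    if (res == NULL || taos_errno(res) != 0) {
        printf("create database failed: %s\n", res ? taos_errstr(res) : "unknown error");
    }
    taos_free_result(res);
    taos_close(conn);
    return 0;
}
```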
|
||||
|
||||
一个数据库创建成功后,仅部分参数可以修改并实时生效,其余参数不能修改:
|
||||
|
||||
| **参数名** | **能否修改** | **范围** | **修改语法示例** |
|
||||
| ----------- | ------------ | ------------------------------------------------------------ | ------------------------------------- |
|
||||
| name | | | |
|
||||
| create time | | | |
|
||||
| ntables | | | |
|
||||
| vgroups | | | |
|
||||
| replica | **YES** | 在线dnode数目为1:1-1;2:1-2;>=3:1-3 | ALTER DATABASE <dbname> REPLICA *n* |
|
||||
| quorum | **YES** | 1-2 | ALTER DATABASE <dbname> QUORUM *n* |
|
||||
| days | | | |
|
||||
| keep | **YES** | days-365000 | ALTER DATABASE <dbname> KEEP *n* |
|
||||
| cache | | | |
|
||||
| blocks | **YES** | 3-1000 | ALTER DATABASE <dbname> BLOCKS *n* |
|
||||
| minrows | | | |
|
||||
| maxrows | | | |
|
||||
| wal | | | |
|
||||
| fsync | | | |
|
||||
| comp | **YES** | 0-2 | ALTER DATABASE <dbname> COMP *n* |
|
||||
| precision | | | |
|
||||
| status | | | |
|
||||
| update | | | |
|
||||
| cachelast | **YES** | 0 \| 1 \| 2 \| 3 | ALTER DATABASE <dbname> CACHELAST *n* |
|
||||
|
||||
**说明:**在 2.1.3.0 版本之前,通过 ALTER DATABASE 语句修改这些参数后,需要重启服务器才能生效。
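
下面的示意性 C 代码(基于 C 连接器,库名 demo 与各参数取值均为示例)演示如何在运行期修改上表中标注为可修改的参数:

```c
#include <stdio.h>
#include <taos.h>

// 执行一条 SQL 并在失败时打印错误信息的小工具函数(示例)
static void exec_sql(TAOS *conn, const char *sql) {
    TAOS_RES *res = taos_query(conn, sql);
    if (res == NULL || taos_errno(res) != 0) {
        printf("'%s' failed: %s\n", sql, res ? taos_errstr(res) : "unknown error");
    }
    taos_free_result(res);
}

int main(void) {
    TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 6030);  // 连接参数仅为示例
    if (conn == NULL) return 1;

    // 仅演示表中标注为可修改的参数:keep、blocks、quorum、comp、cachelast
    exec_sql(conn, "ALTER DATABASE demo KEEP 365");
    exec_sql(conn, "ALTER DATABASE demo BLOCKS 8");
    exec_sql(conn, "ALTER DATABASE demo QUORUM 1");
    exec_sql(conn, "ALTER DATABASE demo COMP 2");
    exec_sql(conn, "ALTER DATABASE demo CACHELAST 1");

    taos_close(conn);
    return 0;
}
```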
|
||||
|
||||
TDengine集群中加入一个新的dnode时,涉及集群相关的一些参数必须与已有集群的配置相同,否则不能成功加入到集群中。会进行校验的参数如下:
|
||||
|
||||
- numOfMnodes:系统中管理节点个数。默认值:3。(2.0 版本从 2.0.20.11 开始、2.1 及以上版本从 2.1.6.0 开始,numOfMnodes 默认值改为 1。)
|
||||
|
@ -172,7 +307,7 @@ ALTER DNODE <dnode_id> <config>
|
|||
alter dnode 1 debugFlag 135;
|
||||
```
|
||||
|
||||
## <a class="anchor" id="client"></a>客户端配置
|
||||
## <a class="anchor" id="client"></a>客户端及应用驱动配置
|
||||
|
||||
TDengine系统的前台交互客户端应用程序为taos,以及应用驱动,它与taosd共享同一个配置文件taos.cfg。运行taos时,使用参数-c指定配置文件目录,如taos -c /home/cfg,表示使用/home/cfg/目录下的taos.cfg配置文件中的参数,缺省目录是/etc/taos。更多taos的使用方法请见帮助信息 `taos --help`。本节主要说明 taos 客户端应用在配置文件 taos.cfg 文件中使用到的参数。
|
||||
|
||||
|
@ -182,15 +317,15 @@ TDengine系统的前台交互客户端应用程序为taos,以及应用驱动
|
|||
taos -C 或 taos --dump-config
|
||||
```
|
||||
|
||||
客户端配置参数
|
||||
客户端及应用驱动配置参数列表及解释
|
||||
|
||||
- firstEp: taos启动时,主动连接的集群中第一个taosd实例的end point, 缺省值为 localhost:6030。
|
||||
|
||||
- secondEp: taos 启动时,如果 firstEp 连不上,将尝试连接 secondEp。
|
||||
|
||||
- locale
|
||||
- locale:系统区位信息及编码格式。
|
||||
|
||||
默认值:系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过API设置
|
||||
默认值:系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过API设置。
|
||||
|
||||
TDengine为存储中文、日文、韩文等非ASCII编码的宽字符,提供一种专门的字段类型nchar。写入nchar字段的数据将统一采用UCS4-LE格式进行编码并发送到服务器。需要注意的是,编码正确性是客户端来保证。因此,如果用户想要正常使用nchar字段来存储诸如中文、日文、韩文等非ASCII字符,需要正确设置客户端的编码格式。
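
上文提到 locale、charset 可以“通过 API 设置”,下面是一段示意性 C 代码,利用 taos_options 在建立连接之前设置这两个选项(具体取值需与客户端实际环境一致,这里仅为示例):

```c
#include <taos.h>

int main(void) {
    // 必须在 taos_connect 之前调用;取值仅为示例
    taos_options(TSDB_OPTION_LOCALE, "zh_CN.UTF-8");   // 设置系统区位信息及编码格式
    taos_options(TSDB_OPTION_CHARSET, "UTF-8");        // 设置字符集编码

    TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 6030);  // 连接参数仅为示例
    if (conn != NULL) {
        taos_close(conn);
    }
    return 0;
}
```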
|
||||
|
||||
|
@ -198,9 +333,9 @@ taos -C 或 taos --dump-config
|
|||
|
||||
在 Linux 中 locale 的命名规则为: <语言>\_<地区>.<字符集编码> 如:zh_CN.UTF-8,zh代表中文,CN代表大陆地区,UTF-8表示字符集。字符集编码为客户端正确解析本地字符串提供编码转换的说明。Linux系统与 Mac OSX 系统可以通过设置locale来确定系统的字符编码,由于Windows使用的locale中不是POSIX标准的locale格式,因此在Windows下需要采用另一个配置参数charset来指定字符编码。在Linux 系统中也可以使用charset来指定字符编码。
|
||||
|
||||
- charset
|
||||
- charset:字符集编码。
|
||||
|
||||
默认值:系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过API设置
|
||||
默认值:系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过API设置。
|
||||
|
||||
如果配置文件中不设置charset,在Linux系统中,taos在启动时候,自动读取系统当前的locale信息,并从locale信息中解析提取charset编码格式。如果自动读取locale信息失败,则尝试读取charset配置,如果读取charset配置也失败,则中断启动过程。
|
||||
|
||||
|
@ -260,7 +395,7 @@ taos -C 或 taos --dump-config
|
|||
|
||||
- maxBinaryDisplayWidth
|
||||
|
||||
Shell中binary 和 nchar字段的显示宽度上限,超过此限制的部分将被隐藏。默认值:30。可在 shell 中通过命令 set max_binary_display_width nn 动态修改此选项。
|
||||
Shell中 binary 和 nchar 字段的显示宽度上限,超过此限制的部分将被隐藏。默认值:30。可在 taos shell 中通过命令 set max_binary_display_width nn 动态修改此选项。
|
||||
|
||||
## <a class="anchor" id="user"></a>用户管理
|
||||
|
||||
|
@ -315,7 +450,7 @@ TDengine也支持在shell对已存在的表从CSV文件中进行数据导入。C
|
|||
```mysql
|
||||
insert into tb1 file 'path/data.csv';
|
||||
```
|
||||
注意:如果CSV文件首行存在描述信息,请手动删除后再导入
|
||||
**注意:如果CSV文件首行存在描述信息,请手动删除后再导入。如某列为空,填NULL,无引号。**
|
||||
|
||||
例如,现在存在一个子表d1001, 其表结构如下:
|
||||
|
||||
|
@ -343,7 +478,7 @@ taos> DESCRIBE d1001
|
|||
'2018-10-11 06:38:05.000',17.30000,219,0.32000
|
||||
'2018-10-12 06:38:05.000',18.30000,219,0.31000
|
||||
```
|
||||
那么可以用如下命令导入数据
|
||||
那么可以用如下命令导入数据:
|
||||
|
||||
```mysql
|
||||
taos> insert into d1001 file '~/data.csv';
|
||||
|
@ -360,7 +495,7 @@ TDengine提供了方便的数据库导入导出工具taosdump。用户可以将t
|
|||
|
||||
**按表导出CSV文件**
|
||||
|
||||
如果用户需要导出一个表或一个STable中的数据,可在shell中运行
|
||||
如果用户需要导出一个表或一个STable中的数据,可在taos shell中运行:
|
||||
|
||||
```mysql
|
||||
select * from <tb_name> >> data.csv;
|
||||
|
@ -370,7 +505,9 @@ select * from <tb_name> >> data.csv;
|
|||
|
||||
**用taosdump导出数据**
|
||||
|
||||
TDengine提供了方便的数据库导出工具taosdump。用户可以根据需要选择导出所有数据库、一个数据库或者数据库中的一张表,所有数据或一时间段的数据,甚至仅仅表的定义。具体使用方法,请参见博客:[TDengine DUMP工具使用指南](https://www.taosdata.com/blog/2020/03/09/1334.html)
|
||||
利用taosdump,用户可以根据需要选择导出所有数据库、一个数据库或者数据库中的一张表,所有数据或一时间段的数据,甚至仅仅表的定义。
|
||||
|
||||
具体使用方法,请参见博客:[TDengine DUMP工具使用指南](https://www.taosdata.com/blog/2020/03/09/1334.html)。
|
||||
|
||||
## <a class="anchor" id="status"></a>系统连接、任务查询管理
|
||||
|
||||
|
@ -435,46 +572,100 @@ COMPACT 命令对指定的一个或多个 VGroup 启动碎片重整,系统会
|
|||
|
||||
安装TDengine后,默认会在操作系统中生成下列目录或文件:
|
||||
|
||||
| 目录/文件 | 说明 |
|
||||
| ------------------------- | :----------------------------------------------------------- |
|
||||
| **目录/文件** | **说明** |
|
||||
| ------------------------- | ------------------------------------------------------------ |
|
||||
| /usr/local/taos/bin | TDengine可执行文件目录。其中的执行文件都会软链接到/usr/bin目录下。 |
|
||||
| /usr/local/taos/connector | TDengine各种连接器目录。 |
|
||||
| /usr/local/taos/driver | TDengine动态链接库目录。会软链接到/usr/lib目录下。 |
|
||||
| /usr/local/taos/examples | TDengine各种语言应用示例目录。 |
|
||||
| /usr/local/taos/include | TDengine对外提供的C语言接口的头文件。 |
|
||||
| /etc/taos/taos.cfg | TDengine默认[配置文件] |
|
||||
| /var/lib/taos | TDengine默认数据文件目录,可通过[配置文件]修改位置. |
|
||||
| /var/log/taos | TDengine默认日志文件目录,可通过[配置文件]修改位置 |
|
||||
| /var/lib/taos | TDengine默认数据文件目录。可通过[配置文件]修改位置。 |
|
||||
| /var/log/taos | TDengine默认日志文件目录。可通过[配置文件]修改位置。 |
|
||||
|
||||
**可执行文件**
|
||||
|
||||
TDengine的所有可执行文件默认存放在 _/usr/local/taos/bin_ 目录下。其中包括:
|
||||
|
||||
- _taosd_:TDengine服务端可执行文件
|
||||
- _taos_: TDengine Shell可执行文件
|
||||
- _taosdump_:数据导入导出工具
|
||||
- remove.sh:卸载TDengine的脚本, 请谨慎执行,链接到/usr/bin目录下的rmtaos命令。会删除TDengine的安装目录/usr/local/taos,但会保留/etc/taos、/var/lib/taos、/var/log/taos。
|
||||
- *taosd*:TDengine服务端可执行文件
|
||||
- *taos*:TDengine Shell可执行文件
|
||||
- *taosdump*:数据导入导出工具
|
||||
- *taosdemo*:TDengine测试工具
|
||||
- remove.sh:卸载TDengine的脚本,请谨慎执行,链接到/usr/bin目录下的**rmtaos**命令。会删除TDengine的安装目录/usr/local/taos,但会保留/etc/taos、/var/lib/taos、/var/log/taos。
|
||||
|
||||
您可以通过修改系统配置文件taos.cfg来配置不同的数据目录和日志目录。
|
||||
|
||||
## TDengine 的启动、停止、卸载
|
||||
|
||||
TDengine 使用 Linux 系统的 systemd/systemctl/service 来管理系统的启动、停止、重启操作。TDengine 的服务进程是 taosd,默认情况下 TDengine 在系统启动后将自动启动。DBA 可以通过 systemd/systemctl/service 手动操作停止、启动、重新启动服务。
|
||||
|
||||
以 systemctl 为例,命令如下:
|
||||
|
||||
- 启动服务进程:`systemctl start taosd`
|
||||
|
||||
- 停止服务进程:`systemctl stop taosd`
|
||||
|
||||
- 重启服务进程:`systemctl restart taosd`
|
||||
|
||||
- 查看服务状态:`systemctl status taosd`
|
||||
|
||||
如果服务进程处于活动状态,则 status 指令会显示如下的相关信息:
|
||||
```
|
||||
......
|
||||
|
||||
Active: active (running)
|
||||
|
||||
......
|
||||
```
|
||||
|
||||
如果后台服务进程处于停止状态,则 status 指令会显示如下的相关信息:
|
||||
```
|
||||
......
|
||||
|
||||
Active: inactive (dead)
|
||||
|
||||
......
|
||||
```
|
||||
|
||||
卸载 TDengine,只需要执行如下命令:
|
||||
```
|
||||
rmtaos
|
||||
```
|
||||
|
||||
**警告:执行该命令后,TDengine 程序将被完全删除,务必谨慎使用。**
|
||||
|
||||
## <a class="anchor" id="keywords"></a>TDengine参数限制与保留关键字
|
||||
|
||||
**名称命名规则**
|
||||
|
||||
1. 合法字符:英文字符、数字和下划线
|
||||
2. 允许英文字符或下划线开头,不允许以数字开头
|
||||
3. 不区分大小写
|
||||
|
||||
**密码合法字符集**
|
||||
|
||||
`[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`
|
||||
|
||||
去掉了 `'`、`"`、`` ` ``、`\` 和空格(即单双引号、撇号、反斜杠、空格不能用于密码)。
|
||||
|
||||
- 数据库名:不能包含“.”以及特殊字符,不能超过 32 个字符
|
||||
- 表名:不能包含“.”以及特殊字符,与所属数据库名一起,不能超过 192 个字符
|
||||
- 表名:不能包含“.”以及特殊字符,与所属数据库名一起,不能超过 192 个字符,每行数据最大长度 16k 个字符
|
||||
- 表的列名:不能包含特殊字符,不能超过 64 个字符
|
||||
- 数据库名、表名、列名,都不能以数字开头,合法的可用字符集是“英文字符、数字和下划线”
|
||||
- 表的列数:不能超过 1024 列
|
||||
- 表的列数:不能超过 1024 列,最少需要 2 列,第一列必须是时间戳
|
||||
- 记录的最大长度:包括时间戳 8 byte,不能超过 16KB(每个 BINARY/NCHAR 类型的列还会额外占用 2 个 byte 的存储位置)
|
||||
- 单条 SQL 语句默认最大字符串长度:65480 byte
|
||||
- 单条 SQL 语句默认最大字符串长度:65480 byte,但可通过系统配置参数 maxSQLLength 修改,最长可配置为 1048576 byte
|
||||
- 数据库副本数:不能超过 3
|
||||
- 用户名:不能超过 23 个 byte
|
||||
- 用户密码:不能超过 15 个 byte
|
||||
- 标签(Tags)数量:不能超过 128 个
|
||||
- 标签(Tags)数量:不能超过 128 个,可以为 0 个
|
||||
- 标签的总长度:不能超过 16K byte
|
||||
- 记录条数:仅受存储空间限制
|
||||
- 表的个数:仅受节点个数限制
|
||||
- 库的个数:仅受节点个数限制
|
||||
- 单个库上虚拟节点个数:不能超过 64 个
|
||||
- 库的数目,超级表的数目、表的数目,系统不做限制,仅受系统资源限制
|
||||
- SELECT 语句的查询结果,最多允许返回 1024 列(语句中的函数调用可能也会占用一些列空间),超限时需要显式指定较少的返回数据列,以避免语句执行报错。
|
||||
|
||||
目前 TDengine 有将近 200 个内部保留关键字,这些关键字无论大小写均不可以用作库名、表名、STable 名、数据列名及标签列名等。这些关键字列表如下:
|
||||
|
||||
|
@ -519,3 +710,102 @@ TDengine的所有可执行文件默认存放在 _/usr/local/taos/bin_ 目录下
|
|||
| CONNS | ID | NOTNULL | STABLE | WAL |
|
||||
| COPY | IF | NOW | STABLES | WHERE |
|
||||
|
||||
## 诊断及其他
|
||||
|
||||
#### 网络连接诊断
|
||||
|
||||
当出现客户端应用无法访问服务端时,需要确认客户端与服务端之间网络的各端口连通情况,以便有针对性地排除故障。
|
||||
|
||||
目前网络连接诊断支持在 Linux 与 Linux、Linux 与 Windows 之间进行诊断测试。
|
||||
|
||||
诊断步骤:
|
||||
|
||||
1. 如拟诊断的端口范围与服务器 taosd 实例的端口范围相同,须先停掉 taosd 实例
|
||||
2. 服务端命令行输入:`taos -n server -P <port>` 以服务端身份启动对端口 port 为基准端口的监听
|
||||
3. 客户端命令行输入:`taos -n client -h <fqdn of server> -P <port>` 以客户端身份启动对指定的服务器、指定的端口发送测试包
|
||||
|
||||
服务端运行正常的话会输出以下信息
|
||||
|
||||
```bash
|
||||
# taos -n server -P 6000
|
||||
12/21 14:50:13.522509 0x7f536f455200 UTL work as server, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000
|
||||
|
||||
12/21 14:50:13.522659 0x7f5352242700 UTL TCP server at port:6000 is listening
|
||||
12/21 14:50:13.522727 0x7f5351240700 UTL TCP server at port:6001 is listening
|
||||
...
|
||||
...
|
||||
...
|
||||
12/21 14:50:13.523954 0x7f5342fed700 UTL TCP server at port:6011 is listening
|
||||
12/21 14:50:13.523989 0x7f53437ee700 UTL UDP server at port:6010 is listening
|
||||
12/21 14:50:13.524019 0x7f53427ec700 UTL UDP server at port:6011 is listening
|
||||
12/21 14:50:22.192849 0x7f5352242700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6000
|
||||
12/21 14:50:22.192993 0x7f5352242700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6000
|
||||
12/21 14:50:22.237082 0x7f5351a41700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6000
|
||||
12/21 14:50:22.237203 0x7f5351a41700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6000
|
||||
12/21 14:50:22.237450 0x7f5351240700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6001
|
||||
12/21 14:50:22.237576 0x7f5351240700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6001
|
||||
12/21 14:50:22.281038 0x7f5350a3f700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6001
|
||||
12/21 14:50:22.281141 0x7f5350a3f700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6001
|
||||
...
|
||||
...
|
||||
...
|
||||
12/21 14:50:22.677443 0x7f5342fed700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6011
|
||||
12/21 14:50:22.677576 0x7f5342fed700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6011
|
||||
12/21 14:50:22.721144 0x7f53427ec700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6011
|
||||
12/21 14:50:22.721261 0x7f53427ec700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6011
|
||||
```
|
||||
|
||||
客户端运行正常会输出以下信息:
|
||||
|
||||
```bash
|
||||
# taos -n client -h 172.27.0.7 -P 6000
|
||||
12/21 14:50:22.192434 0x7fc95d859200 UTL work as client, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000
|
||||
|
||||
12/21 14:50:22.192472 0x7fc95d859200 UTL server ip:172.27.0.7 is resolved from host:172.27.0.7
|
||||
12/21 14:50:22.236869 0x7fc95d859200 UTL successed to test TCP port:6000
|
||||
12/21 14:50:22.237215 0x7fc95d859200 UTL successed to test UDP port:6000
|
||||
...
|
||||
...
|
||||
...
|
||||
12/21 14:50:22.676891 0x7fc95d859200 UTL successed to test TCP port:6010
|
||||
12/21 14:50:22.677240 0x7fc95d859200 UTL successed to test UDP port:6010
|
||||
12/21 14:50:22.720893 0x7fc95d859200 UTL successed to test TCP port:6011
|
||||
12/21 14:50:22.721274 0x7fc95d859200 UTL successed to test UDP port:6011
|
||||
```
|
||||
|
||||
仔细阅读打印出来的错误信息,可以帮助管理员找到原因,以解决问题。
|
||||
|
||||
#### 启动状态及RPC诊断
|
||||
|
||||
`taos -n startup -h <fqdn of server>`
|
||||
|
||||
判断 taosd 服务端是否成功启动,是数据库管理员经常遇到的一种情形。特别当若干台服务器组成集群时,判断每个服务端实例是否成功启动就会是一个重要问题。除检索 taosd 服务端日志文件进行问题定位、分析外,还可以通过 `taos -n startup -h <fqdn of server>` 来诊断一个 taosd 进程的启动状态。
|
||||
|
||||
针对多台服务器组成的集群,当服务启动过程耗时较长时,可通过该命令行来诊断每台服务器的 taosd 实例的启动状态,以准确定位问题。
|
||||
|
||||
`taos -n rpc -h <fqdn of server>`
|
||||
|
||||
该命令用来诊断已经启动的 taosd 实例的端口是否可正常访问。如果 taosd 程序异常或者失去响应,可以通过 `taos -n rpc -h <fqdn of server>` 来发起一个与指定 fqdn 的 rpc 通信,看看 taosd 是否能收到,以此来判定是网络问题还是 taosd 程序异常问题。
|
||||
|
||||
#### sync 及 arbitrator 诊断
|
||||
|
||||
```
|
||||
taos -n sync -P 6040 -h <fqdn of server>
|
||||
taos -n sync -P 6042 -h <fqdn of server>
|
||||
```
|
||||
|
||||
用来诊断 sync 端口是否工作正常,判断服务端 sync 模块是否成功工作。另外,-P 6042 用来诊断 arbitrator 是否配置正常,判断指定服务器的 arbitrator 是否能正常工作。
|
||||
|
||||
#### 服务端日志
|
||||
|
||||
taosd 服务端日志文件标志位 debugFlag 默认为 131,在 debug 时往往需要将其提升到 135 或 143。
|
||||
|
||||
一旦设定为 135 或 143,日志文件增长很快,特别是写入、查询请求量较大时,增长速度惊人。如合并保存日志,很容易把日志内的关键信息(如配置信息、错误信息等)冲掉。为此,服务端将重要信息日志与其他日志分开存放:
|
||||
|
||||
- taosinfo 存放重要信息日志
|
||||
- taosdlog 存放其他日志
|
||||
|
||||
其中,taosinfo 日志文件最大长度由 numOfLogLines 来进行配置,一个 taosd 实例最多保留两个文件。
|
||||
|
||||
taosd 服务端日志采用异步落盘写入机制,优点是可以避免硬盘写入压力太大,对性能造成很大影响。缺点是,在极端情况下,存在少量日志行数丢失的可能。
|
||||
|
||||
|
|
|
@ -206,7 +206,7 @@ TDengine 缺省的时间戳是毫秒精度,但通过在 CREATE DATABASE 时传
|
|||
|
||||
显示当前数据库下的所有数据表信息。
|
||||
|
||||
说明:可在like中使用通配符进行名称的匹配,这一通配符字符串最长不能超过24字节。
|
||||
说明:可在 like 中使用通配符进行名称的匹配,这一通配符字符串最长不能超过 20 字节。( 从 2.1.6.1 版本开始,通配符字符串的长度放宽到了 100 字节,并可以通过 taos.cfg 中的 maxWildCardsLength 参数来配置这一长度限制。但不建议使用太长的通配符字符串,将有可能严重影响 LIKE 操作的执行性能。)
|
||||
|
||||
通配符匹配:1)'%'(百分号)匹配0到任意个字符;2)'\_'下划线匹配单个任意字符。
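
下面是一段示意性的 C 代码(基于 C 连接器,库名 demo 与通配符串仅为示例),演示通过 SHOW TABLES LIKE 做名称匹配并遍历返回结果:

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
    TAOS *conn = taos_connect("localhost", "root", "taosdata", "demo", 6030);  // 连接参数仅为示例
    if (conn == NULL) return 1;

    // '%' 匹配 0 到任意个字符,'_' 匹配单个任意字符;通配符串长度受 maxWildCardsLength 限制
    TAOS_RES *res = taos_query(conn, "SHOW TABLES LIKE 'd100_'");
    if (res != NULL && taos_errno(res) == 0) {
        int         num_fields = taos_num_fields(res);
        TAOS_FIELD *fields     = taos_fetch_fields(res);
        TAOS_ROW    row;
        char        line[1024];
        while ((row = taos_fetch_row(res)) != NULL) {
            taos_print_row(line, row, fields, num_fields);  // 将一行结果格式化为字符串
            printf("%s\n", line);
        }
    }
    taos_free_result(res);
    taos_close(conn);
    return 0;
}
```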
|
||||
|
||||
|
@ -435,6 +435,17 @@ INSERT INTO
|
|||
INSERT INTO d1001 FILE '/tmp/csvfile.csv';
|
||||
```
|
||||
|
||||
- **插入来自文件的数据记录,并自动建表**
|
||||
从 2.1.5.0 版本开始,支持在插入来自 CSV 文件的数据时,以超级表为模板来自动创建不存在的数据表。例如:
|
||||
```mysql
|
||||
INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile.csv';
|
||||
```
|
||||
也可以在一条语句中向多个表以自动建表的方式插入记录。例如:
|
||||
```mysql
|
||||
INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile_21001.csv'
|
||||
d21002 USING meters (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
|
||||
```
|
||||
|
||||
**历史记录写入**:可使用IMPORT或者INSERT命令,IMPORT的语法,功能与INSERT完全一样。
|
||||
|
||||
**说明:**针对 insert 类型的 SQL 语句,我们采用的流式解析策略,在发现后面的错误之前,前面正确的部分 SQL 仍会执行。下面的 SQL 中,INSERT 语句是无效的,但是 d1001 仍会被创建。
|
||||
|
@ -1215,6 +1226,37 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数
|
|||
Query OK, 1 row(s) in set (0.001042s)
|
||||
```
|
||||
|
||||
- **INTERP**
|
||||
```mysql
|
||||
SELECT INTERP(field_name) FROM { tb_name | stb_name } WHERE ts='timestamp' [FILL ({ VALUE | PREV | NULL | LINEAR})];
|
||||
```
|
||||
功能说明:返回表/超级表的指定时间截面、指定字段的记录。
|
||||
|
||||
返回结果数据类型:同应用的字段。
|
||||
|
||||
应用字段:所有字段。
|
||||
|
||||
适用于:**表、超级表**。
|
||||
|
||||
说明:(从 2.0.15.0 版本开始新增此函数)INTERP 必须指定时间截面,如果该时间截面不存在直接对应的数据,那么会根据 FILL 参数的设定进行插值。其中,条件语句里面可以附带更多的筛选条件,例如标签、tbname。
|
||||
|
||||
限制:INTERP 目前不支持 FILL(NEXT)。
|
||||
|
||||
示例:
|
||||
```mysql
|
||||
taos> select interp(*) from meters where ts='2017-7-14 10:42:00.005' fill(prev);
|
||||
interp(ts) | interp(f1) | interp(f2) | interp(f3) |
|
||||
====================================================================
|
||||
2017-07-14 10:42:00.005 | 5 | 9 | 6 |
|
||||
Query OK, 1 row(s) in set (0.002912s)
|
||||
|
||||
taos> select interp(*) from meters where tbname in ('t1') and ts='2017-7-14 10:42:00.005' fill(prev);
|
||||
interp(ts) | interp(f1) | interp(f2) | interp(f3) |
|
||||
====================================================================
|
||||
2017-07-14 10:42:00.005 | 5 | 6 | 7 |
|
||||
Query OK, 1 row(s) in set (0.002005s)
|
||||
```
|
||||
|
||||
### 计算函数
|
||||
|
||||
- **DIFF**
|
||||
|
|
|
@ -144,6 +144,9 @@ keepColumnName 1
|
|||
# max length of an SQL
|
||||
# maxSQLLength 65480
|
||||
|
||||
# max length of WildCards
|
||||
# maxWildCardsLength 100
|
||||
|
||||
# the maximum number of records allowed for super table time sorting
|
||||
# maxNumOfOrderedRes 100000
|
||||
|
||||
|
|
|
@ -61,6 +61,7 @@ typedef struct SJoinSupporter {
|
|||
uint64_t uid; // query table uid
|
||||
SArray* colList; // previous query information, no need to use this attribute, and the corresponding attribution
|
||||
SArray* exprList;
|
||||
SArray* colCond;
|
||||
SFieldInfo fieldsInfo;
|
||||
STagCond tagCond;
|
||||
SGroupbyExpr groupInfo; // group by info
|
||||
|
@ -244,8 +245,9 @@ SCond* tsGetSTableQueryCond(STagCond* pCond, uint64_t uid);
|
|||
void tsSetSTableQueryCond(STagCond* pTagCond, uint64_t uid, SBufferWriter* bw);
|
||||
|
||||
int32_t tscTagCondCopy(STagCond* dest, const STagCond* src);
|
||||
int32_t tscColCondCopy(SArray** dest, const SArray* src, uint64_t uid, int16_t tidx);
|
||||
void tscTagCondRelease(STagCond* pCond);
|
||||
|
||||
void tscColCondRelease(SArray** pCond);
|
||||
void tscGetSrcColumnInfo(SSrcColumnInfo* pColInfo, SQueryInfo* pQueryInfo);
|
||||
|
||||
bool tscShouldBeFreed(SSqlObj* pSql);
|
||||
|
@ -340,7 +342,7 @@ STableMeta* createSuperTableMeta(STableMetaMsg* pChild);
|
|||
uint32_t tscGetTableMetaSize(STableMeta* pTableMeta);
|
||||
CChildTableMeta* tscCreateChildMeta(STableMeta* pTableMeta);
|
||||
uint32_t tscGetTableMetaMaxSize();
|
||||
int32_t tscCreateTableMetaFromSTableMeta(STableMeta** pChild, const char* name, size_t *tableMetaCapacity);
|
||||
int32_t tscCreateTableMetaFromSTableMeta(STableMeta** ppChild, const char* name, size_t *tableMetaCapacity, STableMeta **ppStable);
|
||||
STableMeta* tscTableMetaDup(STableMeta* pTableMeta);
|
||||
SVgroupsInfo* tscVgroupsInfoDup(SVgroupsInfo* pVgroupsInfo);
|
||||
|
||||
|
@ -355,6 +357,7 @@ char* strdup_throw(const char* str);
|
|||
|
||||
bool vgroupInfoIdentical(SNewVgroupInfo *pExisted, SVgroupMsg* src);
|
||||
SNewVgroupInfo createNewVgroupInfo(SVgroupMsg *pVgroupMsg);
|
||||
STblCond* tsGetTableFilter(SArray* filters, uint64_t uid, int16_t idx);
|
||||
|
||||
void tscRemoveTableMetaBuf(STableMetaInfo* pTableMetaInfo, uint64_t id);
|
||||
|
||||
|
|
|
@ -339,6 +339,11 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
|
|||
const char* msg = (sub->cmd.command == TSDB_SQL_STABLEVGROUP)? "vgroup-list":"multi-tableMeta";
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
tscError("0x%"PRIx64" get %s failed, code:%s", pSql->self, msg, tstrerror(code));
|
||||
if (code == TSDB_CODE_RPC_FQDN_ERROR) {
|
||||
size_t sz = strlen(tscGetErrorMsgPayload(&sub->cmd));
|
||||
tscAllocPayload(&pSql->cmd, (int)sz + 1);
|
||||
memcpy(tscGetErrorMsgPayload(&pSql->cmd), tscGetErrorMsgPayload(&sub->cmd), sz);
|
||||
}
|
||||
goto _error;
|
||||
}
|
||||
|
||||
|
|
File diff suppressed because it is too large
|
@ -501,6 +501,15 @@ static void doProcessMsgFromServer(SSchedMsg* pSchedMsg) {
|
|||
pRes->code = rpcMsg->code;
|
||||
}
|
||||
rpcMsg->code = (pRes->code == TSDB_CODE_SUCCESS) ? (int32_t)pRes->numOfRows : pRes->code;
|
||||
if (pRes->code == TSDB_CODE_RPC_FQDN_ERROR) {
|
||||
if (pEpSet) {
|
||||
char buf[TSDB_FQDN_LEN + 64] = {0};
|
||||
tscAllocPayload(pCmd, sizeof(buf));
|
||||
sprintf(tscGetErrorMsgPayload(pCmd), "%s\"%s\"", tstrerror(pRes->code),pEpSet->fqdn[(pEpSet->inUse)%(pEpSet->numOfEps)]);
|
||||
} else {
|
||||
sprintf(tscGetErrorMsgPayload(pCmd), "%s", tstrerror(pRes->code));
|
||||
}
|
||||
}
|
||||
(*pSql->fp)(pSql->param, pSql, rpcMsg->code);
|
||||
}
|
||||
|
||||
|
@ -675,7 +684,7 @@ static int32_t tscEstimateQueryMsgSize(SSqlObj *pSql) {
|
|||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
|
||||
int32_t srcColListSize = (int32_t)(taosArrayGetSize(pQueryInfo->colList) * sizeof(SColumnInfo));
|
||||
int32_t srcColFilterSize = tscGetColFilterSerializeLen(pQueryInfo);
|
||||
int32_t srcColFilterSize = 0;
|
||||
int32_t srcTagFilterSize = tscGetTagFilterSerializeLen(pQueryInfo);
|
||||
|
||||
size_t numOfExprs = tscNumOfExprs(pQueryInfo);
|
||||
|
@ -686,6 +695,7 @@ static int32_t tscEstimateQueryMsgSize(SSqlObj *pSql) {
|
|||
|
||||
int32_t tableSerialize = 0;
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
STableMeta * pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
if (pTableMetaInfo->pVgroupTables != NULL) {
|
||||
size_t numOfGroups = taosArrayGetSize(pTableMetaInfo->pVgroupTables);
|
||||
|
||||
|
@ -698,8 +708,15 @@ static int32_t tscEstimateQueryMsgSize(SSqlObj *pSql) {
|
|||
tableSerialize = totalTables * sizeof(STableIdInfo);
|
||||
}
|
||||
|
||||
return MIN_QUERY_MSG_PKT_SIZE + minMsgSize() + sizeof(SQueryTableMsg) + srcColListSize + srcColFilterSize + srcTagFilterSize +
|
||||
exprSize + tsBufSize + tableSerialize + sqlLen + 4096 + pQueryInfo->bufLen;
|
||||
if (pQueryInfo->colCond && taosArrayGetSize(pQueryInfo->colCond) > 0) {
|
||||
STblCond *pCond = tsGetTableFilter(pQueryInfo->colCond, pTableMeta->id.uid, 0);
|
||||
if (pCond != NULL && pCond->cond != NULL) {
|
||||
srcColFilterSize = pCond->len;
|
||||
}
|
||||
}
|
||||
|
||||
return MIN_QUERY_MSG_PKT_SIZE + minMsgSize() + sizeof(SQueryTableMsg) + srcColListSize + srcColFilterSize + srcTagFilterSize + exprSize + tsBufSize +
|
||||
tableSerialize + sqlLen + 4096 + pQueryInfo->bufLen;
|
||||
}
|
||||
|
||||
static char *doSerializeTableInfo(SQueryTableMsg *pQueryMsg, SSqlObj *pSql, STableMetaInfo *pTableMetaInfo, char *pMsg,
|
||||
|
@ -957,10 +974,21 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
|
|||
pQueryMsg->tableCols[i].colId = htons(pCol->colId);
|
||||
pQueryMsg->tableCols[i].bytes = htons(pCol->bytes);
|
||||
pQueryMsg->tableCols[i].type = htons(pCol->type);
|
||||
pQueryMsg->tableCols[i].flist.numOfFilters = htons(pCol->flist.numOfFilters);
|
||||
//pQueryMsg->tableCols[i].flist.numOfFilters = htons(pCol->flist.numOfFilters);
|
||||
pQueryMsg->tableCols[i].flist.numOfFilters = 0;
|
||||
|
||||
// append the filter information after the basic column information
|
||||
serializeColFilterInfo(pCol->flist.filterInfo, pCol->flist.numOfFilters, &pMsg);
|
||||
//serializeColFilterInfo(pCol->flist.filterInfo, pCol->flist.numOfFilters, &pMsg);
|
||||
}
|
||||
|
||||
if (pQueryInfo->colCond && taosArrayGetSize(pQueryInfo->colCond) > 0 && !onlyQueryTags(&query) ) {
|
||||
STblCond *pCond = tsGetTableFilter(pQueryInfo->colCond, pTableMeta->id.uid, 0);
|
||||
if (pCond != NULL && pCond->cond != NULL) {
|
||||
pQueryMsg->colCondLen = htons(pCond->len);
|
||||
memcpy(pMsg, pCond->cond, pCond->len);
|
||||
|
||||
pMsg += pCond->len;
|
||||
}
|
||||
}
|
||||
|
||||
for (int32_t i = 0; i < query.numOfOutput; ++i) {
|
||||
|
@ -1035,7 +1063,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
|
|||
|
||||
SCond *pCond = tsGetSTableQueryCond(pTagCond, pTableMeta->id.uid);
|
||||
if (pCond != NULL && pCond->cond != NULL) {
|
||||
pQueryMsg->tagCondLen = htonl(pCond->len);
|
||||
pQueryMsg->tagCondLen = htons(pCond->len);
|
||||
memcpy(pMsg, pCond->cond, pCond->len);
|
||||
|
||||
pMsg += pCond->len;
|
||||
|
@ -2844,18 +2872,19 @@ int32_t tscGetTableMetaImpl(SSqlObj* pSql, STableMetaInfo *pTableMetaInfo, bool
|
|||
tNameExtractFullName(&pTableMetaInfo->name, name);
|
||||
|
||||
size_t len = strlen(name);
|
||||
if (pTableMetaInfo->tableMetaCapacity != 0) {
|
||||
if (pTableMetaInfo->pTableMeta != NULL) {
|
||||
memset(pTableMetaInfo->pTableMeta, 0, pTableMetaInfo->tableMetaCapacity);
|
||||
}
|
||||
// just make runtime happy
|
||||
if (pTableMetaInfo->tableMetaCapacity != 0 && pTableMetaInfo->pTableMeta != NULL) {
|
||||
memset(pTableMetaInfo->pTableMeta, 0, pTableMetaInfo->tableMetaCapacity);
|
||||
}
|
||||
taosHashGetCloneExt(tscTableMetaMap, name, len, NULL, (void **)&(pTableMetaInfo->pTableMeta), &pTableMetaInfo->tableMetaCapacity);
|
||||
|
||||
STableMeta* pMeta = pTableMetaInfo->pTableMeta;
|
||||
|
||||
STableMeta* pMeta = pTableMetaInfo->pTableMeta;
|
||||
STableMeta* pSTMeta = (STableMeta *)(pSql->pBuf);
|
||||
if (pMeta && pMeta->id.uid > 0) {
|
||||
// in case of child table, here only get the
|
||||
if (pMeta->tableType == TSDB_CHILD_TABLE) {
|
||||
int32_t code = tscCreateTableMetaFromSTableMeta(&pTableMetaInfo->pTableMeta, name, &pTableMetaInfo->tableMetaCapacity);
|
||||
int32_t code = tscCreateTableMetaFromSTableMeta(&pTableMetaInfo->pTableMeta, name, &pTableMetaInfo->tableMetaCapacity, (STableMeta **)(&pSTMeta));
|
||||
pSql->pBuf = (void *)(pSTMeta);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
return getTableMetaFromMnode(pSql, pTableMetaInfo, autocreate);
|
||||
}
|
||||
|
|
|
@ -196,6 +196,11 @@ TAOS *taos_connect_internal(const char *ip, const char *user, const char *pass,
|
|||
|
||||
if (pSql->res.code != TSDB_CODE_SUCCESS) {
|
||||
terrno = pSql->res.code;
|
||||
if (terrno ==TSDB_CODE_RPC_FQDN_ERROR) {
|
||||
printf("taos connect failed, reason: %s\n\n", taos_errstr(pSql));
|
||||
} else {
|
||||
printf("taos connect failed, reason: %s.\n\n", tstrerror(terrno));
|
||||
}
|
||||
taos_free_result(pSql);
|
||||
taos_close(pObj);
|
||||
return NULL;
|
||||
|
@ -643,7 +648,7 @@ char *taos_errstr(TAOS_RES *tres) {
|
|||
return (char*) tstrerror(terrno);
|
||||
}
|
||||
|
||||
if (hasAdditionalErrorInfo(pSql->res.code, &pSql->cmd)) {
|
||||
if (hasAdditionalErrorInfo(pSql->res.code, &pSql->cmd) || pSql->res.code == TSDB_CODE_RPC_FQDN_ERROR) {
|
||||
return pSql->cmd.payload;
|
||||
} else {
|
||||
return (char*)tstrerror(pSql->res.code);
|
||||
|
|
|
@ -796,6 +796,7 @@ static void issueTsCompQuery(SSqlObj* pSql, SJoinSupporter* pSupporter, SSqlObj*
|
|||
STimeWindow window = pQueryInfo->window;
|
||||
tscInitQueryInfo(pQueryInfo);
|
||||
|
||||
pQueryInfo->colCond = pSupporter->colCond;
|
||||
pQueryInfo->window = window;
|
||||
TSDB_QUERY_CLEAR_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_TAG_FILTER_QUERY);
|
||||
TSDB_QUERY_SET_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_MULTITABLE_QUERY);
|
||||
|
@ -1883,6 +1884,9 @@ int32_t tscCreateJoinSubquery(SSqlObj *pSql, int16_t tableIndex, SJoinSupporter
|
|||
if (UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) { // return the tableId & tag
|
||||
SColumnIndex colIndex = {0};
|
||||
|
||||
pSupporter->colCond = pNewQueryInfo->colCond;
|
||||
pNewQueryInfo->colCond = NULL;
|
||||
|
||||
STagCond* pTagCond = &pSupporter->tagCond;
|
||||
assert(pTagCond->joinInfo.hasJoin);
|
||||
|
||||
|
@ -2319,6 +2323,11 @@ int32_t tscHandleFirstRoundStableQuery(SSqlObj *pSql) {
|
|||
goto _error;
|
||||
}
|
||||
|
||||
if (tscColCondCopy(&pNewQueryInfo->colCond, pQueryInfo->colCond, pTableMetaInfo->pTableMeta->id.uid, 0) != 0) {
|
||||
terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
goto _error;
|
||||
}
|
||||
|
||||
pNewQueryInfo->window = pQueryInfo->window;
|
||||
pNewQueryInfo->interval = pQueryInfo->interval;
|
||||
pNewQueryInfo->sessionWindow = pQueryInfo->sessionWindow;
|
||||
|
|
|
@ -62,11 +62,11 @@ int32_t converToStr(char *str, int type, void *buf, int32_t bufSize, int32_t *le
|
|||
break;
|
||||
|
||||
case TSDB_DATA_TYPE_FLOAT:
|
||||
n = sprintf(str, "%f", GET_FLOAT_VAL(buf));
|
||||
n = sprintf(str, "%e", GET_FLOAT_VAL(buf));
|
||||
break;
|
||||
|
||||
case TSDB_DATA_TYPE_DOUBLE:
|
||||
n = sprintf(str, "%f", GET_DOUBLE_VAL(buf));
|
||||
n = sprintf(str, "%e", GET_DOUBLE_VAL(buf));
|
||||
break;
|
||||
|
||||
case TSDB_DATA_TYPE_BINARY:
|
||||
|
@ -82,6 +82,22 @@ int32_t converToStr(char *str, int type, void *buf, int32_t bufSize, int32_t *le
|
|||
n = bufSize + 2;
|
||||
break;
|
||||
|
||||
case TSDB_DATA_TYPE_UTINYINT:
|
||||
n = sprintf(str, "%d", *(uint8_t*)buf);
|
||||
break;
|
||||
|
||||
case TSDB_DATA_TYPE_USMALLINT:
|
||||
n = sprintf(str, "%d", *(uint16_t*)buf);
|
||||
break;
|
||||
|
||||
case TSDB_DATA_TYPE_UINT:
|
||||
n = sprintf(str, "%u", *(uint32_t*)buf);
|
||||
break;
|
||||
|
||||
case TSDB_DATA_TYPE_UBIGINT:
|
||||
n = sprintf(str, "%" PRIu64, *(uint64_t*)buf);
|
||||
break;
|
||||
|
||||
default:
|
||||
tscError("unsupported type:%d", type);
|
||||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
|
@ -118,6 +134,24 @@ SCond* tsGetSTableQueryCond(STagCond* pTagCond, uint64_t uid) {
|
|||
return NULL;
|
||||
}
|
||||
|
||||
STblCond* tsGetTableFilter(SArray* filters, uint64_t uid, int16_t idx) {
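// 按表 uid 与下标 idx 在过滤条件数组 filters 中查找对应的列过滤条件,未找到时返回 NULL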
|
||||
if (filters == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
size_t size = taosArrayGetSize(filters);
|
||||
for (int32_t i = 0; i < size; ++i) {
|
||||
STblCond* cond = taosArrayGet(filters, i);
|
||||
|
||||
if (uid == cond->uid && (idx >= 0 && cond->idx == idx)) {
|
||||
return cond;
|
||||
}
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
||||
void tsSetSTableQueryCond(STagCond* pTagCond, uint64_t uid, SBufferWriter* bw) {
|
||||
if (tbufTell(bw) == 0) {
|
||||
return;
|
||||
|
@ -753,8 +787,7 @@ typedef struct SDummyInputInfo {
|
|||
SSDataBlock *block;
|
||||
STableQueryInfo *pTableQueryInfo;
|
||||
SSqlObj *pSql; // refactor: remove it
|
||||
int32_t numOfFilterCols;
|
||||
SSingleColumnFilterInfo *pFilterInfo;
|
||||
SFilterInfo *pFilterInfo;
|
||||
} SDummyInputInfo;
|
||||
|
||||
typedef struct SJoinStatus {
|
||||
|
@ -770,38 +803,7 @@ typedef struct SJoinOperatorInfo {
|
|||
SRspResultInfo resultInfo; // todo refactor, add this info for each operator
|
||||
} SJoinOperatorInfo;
|
||||
|
||||
static void converNcharFilterColumn(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, int32_t rows, bool *gotNchar) {
|
||||
for (int32_t i = 0; i < numOfFilterCols; ++i) {
|
||||
if (pFilterInfo[i].info.type == TSDB_DATA_TYPE_NCHAR) {
|
||||
pFilterInfo[i].pData2 = pFilterInfo[i].pData;
|
||||
pFilterInfo[i].pData = malloc(rows * pFilterInfo[i].info.bytes);
|
||||
int32_t bufSize = pFilterInfo[i].info.bytes - VARSTR_HEADER_SIZE;
|
||||
for (int32_t j = 0; j < rows; ++j) {
|
||||
char* dst = (char *)pFilterInfo[i].pData + j * pFilterInfo[i].info.bytes;
|
||||
char* src = (char *)pFilterInfo[i].pData2 + j * pFilterInfo[i].info.bytes;
|
||||
int32_t len = 0;
|
||||
taosMbsToUcs4(varDataVal(src), varDataLen(src), varDataVal(dst), bufSize, &len);
|
||||
varDataLen(dst) = len;
|
||||
}
|
||||
*gotNchar = true;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void freeNcharFilterColumn(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols) {
|
||||
for (int32_t i = 0; i < numOfFilterCols; ++i) {
|
||||
if (pFilterInfo[i].info.type == TSDB_DATA_TYPE_NCHAR) {
|
||||
if (pFilterInfo[i].pData2) {
|
||||
tfree(pFilterInfo[i].pData);
|
||||
pFilterInfo[i].pData = pFilterInfo[i].pData2;
|
||||
pFilterInfo[i].pData2 = NULL;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static void doSetupSDataBlock(SSqlRes* pRes, SSDataBlock* pBlock, SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols) {
|
||||
static void doSetupSDataBlock(SSqlRes* pRes, SSDataBlock* pBlock, SFilterInfo* pFilterInfo) {
|
||||
int32_t offset = 0;
|
||||
char* pData = pRes->data;
|
||||
|
||||
|
@ -817,14 +819,16 @@ static void doSetupSDataBlock(SSqlRes* pRes, SSDataBlock* pBlock, SSingleColumnF
|
|||
}
|
||||
|
||||
// filter data if needed
|
||||
if (numOfFilterCols > 0) {
|
||||
doSetFilterColumnInfo(pFilterInfo, numOfFilterCols, pBlock);
|
||||
if (pFilterInfo) {
|
||||
//doSetFilterColumnInfo(pFilterInfo, numOfFilterCols, pBlock);
|
||||
doSetFilterColInfo(pFilterInfo, pBlock);
|
||||
bool gotNchar = false;
|
||||
converNcharFilterColumn(pFilterInfo, numOfFilterCols, pBlock->info.rows, &gotNchar);
|
||||
filterConverNcharColumns(pFilterInfo, pBlock->info.rows, &gotNchar);
|
||||
int8_t* p = calloc(pBlock->info.rows, sizeof(int8_t));
|
||||
bool all = doFilterDataBlock(pFilterInfo, numOfFilterCols, pBlock->info.rows, p);
|
||||
//bool all = doFilterDataBlock(pFilterInfo, numOfFilterCols, pBlock->info.rows, p);
|
||||
bool all = filterExecute(pFilterInfo, pBlock->info.rows, p);
|
||||
if (gotNchar) {
|
||||
freeNcharFilterColumn(pFilterInfo, numOfFilterCols);
|
||||
filterFreeNcharColumns(pFilterInfo);
|
||||
}
|
||||
if (!all) {
|
||||
doCompactSDataBlock(pBlock, pBlock->info.rows, p);
|
||||
|
@ -862,7 +866,7 @@ SSDataBlock* doGetDataBlock(void* param, bool* newgroup) {
|
|||
|
||||
pBlock->info.rows = pRes->numOfRows;
|
||||
if (pRes->numOfRows != 0) {
|
||||
doSetupSDataBlock(pRes, pBlock, pInput->pFilterInfo, pInput->numOfFilterCols);
|
||||
doSetupSDataBlock(pRes, pBlock, pInput->pFilterInfo);
|
||||
*newgroup = false;
|
||||
return pBlock;
|
||||
}
|
||||
|
@ -877,7 +881,7 @@ SSDataBlock* doGetDataBlock(void* param, bool* newgroup) {
|
|||
}
|
||||
|
||||
pBlock->info.rows = pRes->numOfRows;
|
||||
doSetupSDataBlock(pRes, pBlock, pInput->pFilterInfo, pInput->numOfFilterCols);
|
||||
doSetupSDataBlock(pRes, pBlock, pInput->pFilterInfo);
|
||||
*newgroup = false;
|
||||
return pBlock;
|
||||
}
|
||||
|
@ -920,25 +924,40 @@ SSDataBlock* doDataBlockJoin(void* param, bool* newgroup) {
|
|||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
return pJoinInfo->pRes;
|
||||
}
|
||||
|
||||
|
||||
SJoinStatus* st0 = &pJoinInfo->status[0];
|
||||
SColumnInfoData* p0 = taosArrayGet(st0->pBlock->pDataBlock, 0);
|
||||
int64_t* ts0 = (int64_t*) p0->pData;
|
||||
|
||||
if (st0->index >= st0->pBlock->info.rows) {
|
||||
continue;
|
||||
}
|
||||
|
||||
bool prefixEqual = true;
|
||||
|
||||
while(1) {
|
||||
prefixEqual = true;
|
||||
for (int32_t i = 1; i < pJoinInfo->numOfUpstream; ++i) {
|
||||
SJoinStatus* st = &pJoinInfo->status[i];
|
||||
ts0 = (int64_t*) p0->pData;
|
||||
|
||||
SColumnInfoData* p = taosArrayGet(st->pBlock->pDataBlock, 0);
|
||||
int64_t* ts = (int64_t*)p->pData;
|
||||
|
||||
if (st->index >= st->pBlock->info.rows || st0->index >= st0->pBlock->info.rows) {
|
||||
fetchNextBlockIfCompleted(pOperator, newgroup);
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
return pJoinInfo->pRes;
|
||||
}
|
||||
|
||||
prefixEqual = false;
|
||||
break;
|
||||
}
|
||||
|
||||
if (ts[st->index] < ts0[st0->index]) { // less than the first
|
||||
prefixEqual = false;
|
||||
|
||||
if ((++(st->index)) >= st->pBlock->info.rows) {
|
||||
if ((++(st->index)) >= st->pBlock->info.rows) {
|
||||
fetchNextBlockIfCompleted(pOperator, newgroup);
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
return pJoinInfo->pRes;
|
||||
|
@ -1053,22 +1072,21 @@ static void destroyDummyInputOperator(void* param, int32_t numOfOutput) {
|
|||
pInfo->block = destroyOutputBuf(pInfo->block);
|
||||
pInfo->pSql = NULL;
|
||||
|
||||
doDestroyFilterInfo(pInfo->pFilterInfo, pInfo->numOfFilterCols);
|
||||
filterFreeInfo(pInfo->pFilterInfo);
|
||||
|
||||
cleanupResultRowInfo(&pInfo->pTableQueryInfo->resInfo);
|
||||
tfree(pInfo->pTableQueryInfo);
|
||||
}
|
||||
|
||||
// todo this operator servers as the adapter for Operator tree and SqlRes result, remove it later
|
||||
SOperatorInfo* createDummyInputOperator(SSqlObj* pSql, SSchema* pSchema, int32_t numOfCols, SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols) {
|
||||
SOperatorInfo* createDummyInputOperator(SSqlObj* pSql, SSchema* pSchema, int32_t numOfCols, SFilterInfo* pFilters) {
|
||||
assert(numOfCols > 0);
|
||||
STimeWindow win = {.skey = INT64_MIN, .ekey = INT64_MAX};
|
||||
|
||||
SDummyInputInfo* pInfo = calloc(1, sizeof(SDummyInputInfo));
|
||||
|
||||
pInfo->pSql = pSql;
|
||||
pInfo->pFilterInfo = pFilterInfo;
|
||||
pInfo->numOfFilterCols = numOfFilterCols;
|
||||
pInfo->pFilterInfo = pFilters;
|
||||
pInfo->pTableQueryInfo = createTmpTableQueryInfo(win);
|
||||
|
||||
pInfo->block = calloc(numOfCols, sizeof(SSDataBlock));
|
||||
|
@ -1156,6 +1174,7 @@ void convertQueryResult(SSqlRes* pRes, SQueryInfo* pQueryInfo, uint64_t objId, b
|
|||
pRes->completed = (pRes->numOfRows == 0);
|
||||
}
|
||||
|
||||
/*
|
||||
static void createInputDataFilterInfo(SQueryInfo* px, int32_t numOfCol1, int32_t* numOfFilterCols, SSingleColumnFilterInfo** pFilterInfo) {
|
||||
SColumnInfo* tableCols = calloc(numOfCol1, sizeof(SColumnInfo));
|
||||
for(int32_t i = 0; i < numOfCol1; ++i) {
|
||||
|
@ -1173,6 +1192,7 @@ static void createInputDataFilterInfo(SQueryInfo* px, int32_t numOfCol1, int32_t
|
|||
|
||||
tfree(tableCols);
|
||||
}
|
||||
*/
|
||||
|
||||
void handleDownstreamOperator(SSqlObj** pSqlObjList, int32_t numOfUpstream, SQueryInfo* px, SSqlObj* pSql) {
|
||||
SSqlRes* pOutput = &pSql->res;
|
||||
|
@ -1201,11 +1221,17 @@ void handleDownstreamOperator(SSqlObj** pSqlObjList, int32_t numOfUpstream, SQue
|
|||
// if it is a join query, create join operator here
|
||||
int32_t numOfCol1 = pTableMeta->tableInfo.numOfColumns;
|
||||
|
||||
int32_t numOfFilterCols = 0;
|
||||
SSingleColumnFilterInfo* pFilterInfo = NULL;
|
||||
createInputDataFilterInfo(px, numOfCol1, &numOfFilterCols, &pFilterInfo);
|
||||
SFilterInfo *pFilters = NULL;
|
||||
STblCond *pCond = NULL;
|
||||
if (px->colCond) {
|
||||
pCond = tsGetTableFilter(px->colCond, pTableMeta->id.uid, 0);
|
||||
if (pCond && pCond->cond) {
|
||||
createQueryFilter(pCond->cond, pCond->len, &pFilters);
|
||||
}
|
||||
//createInputDataFlterInfo(px, numOfCol1, &numOfFilterCols, &pFilterInfo);
|
||||
}
|
||||
|
||||
SOperatorInfo* pSourceOperator = createDummyInputOperator(pSqlObjList[0], pSchema, numOfCol1, pFilterInfo, numOfFilterCols);
|
||||
SOperatorInfo* pSourceOperator = createDummyInputOperator(pSqlObjList[0], pSchema, numOfCol1, pFilters);
|
||||
|
||||
pOutput->precision = pSqlObjList[0]->res.precision;
|
||||
|
||||
|
@ -1222,15 +1248,21 @@ void handleDownstreamOperator(SSqlObj** pSqlObjList, int32_t numOfUpstream, SQue
|
|||
|
||||
for(int32_t i = 1; i < px->numOfTables; ++i) {
|
||||
STableMeta* pTableMeta1 = tscGetMetaInfo(px, i)->pTableMeta;
|
||||
numOfCol1 = pTableMeta1->tableInfo.numOfColumns;
|
||||
SFilterInfo *pFilters1 = NULL;
|
||||
|
||||
SSchema* pSchema1 = tscGetTableSchema(pTableMeta1);
|
||||
int32_t n = pTableMeta1->tableInfo.numOfColumns;
|
||||
|
||||
int32_t numOfFilterCols1 = 0;
|
||||
SSingleColumnFilterInfo* pFilterInfo1 = NULL;
|
||||
createInputDataFilterInfo(px, numOfCol1, &numOfFilterCols1, &pFilterInfo1);
|
||||
if (px->colCond) {
|
||||
pCond = tsGetTableFilter(px->colCond, pTableMeta1->id.uid, i);
|
||||
if (pCond && pCond->cond) {
|
||||
createQueryFilter(pCond->cond, pCond->len, &pFilters1);
|
||||
}
|
||||
//createInputDataFilterInfo(px, numOfCol1, &numOfFilterCols1, &pFilterInfo1);
|
||||
}
|
||||
|
||||
p[i] = createDummyInputOperator(pSqlObjList[i], pSchema1, n, pFilterInfo1, numOfFilterCols1);
|
||||
p[i] = createDummyInputOperator(pSqlObjList[i], pSchema1, n, pFilters1);
|
||||
memcpy(&schema[offset], pSchema1, n * sizeof(SSchema));
|
||||
offset += n;
|
||||
}
|
||||
|
@ -2178,6 +2210,11 @@ int32_t tscGetResRowLength(SArray* pExprList) {
|
|||
}
|
||||
|
||||
static void destroyFilterInfo(SColumnFilterList* pFilterList) {
|
||||
if (pFilterList->filterInfo == NULL) {
|
||||
pFilterList->numOfFilters = 0;
|
||||
return;
|
||||
}
|
||||
|
||||
for(int32_t i = 0; i < pFilterList->numOfFilters; ++i) {
|
||||
if (pFilterList->filterInfo[i].filterstr) {
|
||||
tfree(pFilterList->filterInfo[i].pz);
|
||||
|
@ -2880,6 +2917,64 @@ int32_t tscTagCondCopy(STagCond* dest, const STagCond* src) {
|
|||
return 0;
|
||||
}
|
||||
|
||||
int32_t tscColCondCopy(SArray** dest, const SArray* src, uint64_t uid, int16_t tidx) {
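// 把 src 中的列过滤条件复制一份到 *dest;当 tidx > 0 时只复制 uid 与 idx 都匹配的条件,并将其 idx 重置为 0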
|
||||
if (src == NULL) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
size_t s = taosArrayGetSize(src);
|
||||
*dest = taosArrayInit(s, sizeof(SCond));
|
||||
|
||||
for (int32_t i = 0; i < s; ++i) {
|
||||
STblCond* pCond = taosArrayGet(src, i);
|
||||
STblCond c = {0};
|
||||
|
||||
if (tidx > 0) {
|
||||
if (!(pCond->uid == uid && pCond->idx == tidx)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
c.idx = 0;
|
||||
} else {
|
||||
c.idx = pCond->idx;
|
||||
}
|
||||
|
||||
c.len = pCond->len;
|
||||
c.uid = pCond->uid;
|
||||
|
||||
if (pCond->len > 0) {
|
||||
assert(pCond->cond != NULL);
|
||||
c.cond = malloc(c.len);
|
||||
if (c.cond == NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
memcpy(c.cond, pCond->cond, c.len);
|
||||
}
|
||||
|
||||
taosArrayPush(*dest, &c);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void tscColCondRelease(SArray** pCond) {
|
||||
if (*pCond == NULL) {
|
||||
return;
|
||||
}
|
||||
|
||||
size_t s = taosArrayGetSize(*pCond);
|
||||
for (int32_t i = 0; i < s; ++i) {
|
||||
STblCond* p = taosArrayGet(*pCond, i);
|
||||
tfree(p->cond);
|
||||
}
|
||||
|
||||
taosArrayDestroy(*pCond);
|
||||
|
||||
*pCond = NULL;
|
||||
}
|
||||
|
||||
|
||||
void tscTagCondRelease(STagCond* pTagCond) {
|
||||
free(pTagCond->tbnameCond.cond);
|
||||
|
||||
|
@ -3072,6 +3167,7 @@ int32_t tscAddQueryInfo(SSqlCmd* pCmd) {
|
|||
|
||||
static void freeQueryInfoImpl(SQueryInfo* pQueryInfo) {
|
||||
tscTagCondRelease(&pQueryInfo->tagCond);
|
||||
tscColCondRelease(&pQueryInfo->colCond);
|
||||
tscFieldInfoClear(&pQueryInfo->fieldsInfo);
|
||||
|
||||
tscExprDestroy(pQueryInfo->exprList);
|
||||
|
@ -3162,6 +3258,11 @@ int32_t tscQueryInfoCopy(SQueryInfo* pQueryInfo, const SQueryInfo* pSrc) {
|
|||
goto _error;
|
||||
}
|
||||
|
||||
if (tscColCondCopy(&pQueryInfo->colCond, pSrc->colCond, 0, -1) != 0) {
|
||||
code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
goto _error;
|
||||
}
|
||||
|
||||
if (pSrc->fillType != TSDB_FILL_NONE) {
|
||||
pQueryInfo->fillVal = calloc(1, pSrc->fieldsInfo.numOfOutput * sizeof(int64_t));
|
||||
if (pQueryInfo->fillVal == NULL) {
|
||||
|
@ -3557,6 +3658,11 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t
|
|||
goto _error;
|
||||
}
|
||||
|
||||
if (tscColCondCopy(&pNewQueryInfo->colCond, pQueryInfo->colCond, pTableMetaInfo->pTableMeta->id.uid, tableIndex) != 0) {
|
||||
terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
goto _error;
|
||||
}
|
||||
|
||||
if (pQueryInfo->fillType != TSDB_FILL_NONE) {
|
||||
//just make memory memory sanitizer happy
|
||||
//refactor later
|
||||
|
@ -4358,14 +4464,16 @@ CChildTableMeta* tscCreateChildMeta(STableMeta* pTableMeta) {
|
|||
return cMeta;
|
||||
}
|
||||
|
||||
int32_t tscCreateTableMetaFromSTableMeta(STableMeta** ppChild, const char* name, size_t *tableMetaCapacity) {
|
||||
int32_t tscCreateTableMetaFromSTableMeta(STableMeta** ppChild, const char* name, size_t *tableMetaCapacity, STableMeta**ppSTable) {
|
||||
assert(*ppChild != NULL);
|
||||
|
||||
STableMeta* p = NULL;
|
||||
size_t sz = 0;
|
||||
STableMeta* p = *ppSTable;
|
||||
STableMeta* pChild = *ppChild;
|
||||
|
||||
size_t sz = (p != NULL) ? tscGetTableMetaSize(p) : 0; //ppSTableBuf actually capacity may larger than sz, dont care
|
||||
if (p != NULL && sz != 0) {
|
||||
memset((char *)p, 0, sz);
|
||||
}
|
||||
taosHashGetCloneExt(tscTableMetaMap, pChild->sTableName, strnlen(pChild->sTableName, TSDB_TABLE_FNAME_LEN), NULL, (void **)&p, &sz);
|
||||
*ppSTable = p;
|
||||
|
||||
// tableMeta exists, build child table meta according to the super table meta
|
||||
// the uid need to be checked in addition to the general name of the super table.
|
||||
|
@ -4384,10 +4492,8 @@ int32_t tscCreateTableMetaFromSTableMeta(STableMeta** ppChild, const char* name,
|
|||
memcpy(pChild->schema, p->schema, totalBytes);
|
||||
|
||||
*ppChild = pChild;
|
||||
tfree(p);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
} else { // super table has been removed, current tableMeta is also expired. remove it here
|
||||
tfree(p);
|
||||
taosHashRemove(tscTableMetaMap, name, strnlen(name, TSDB_TABLE_FNAME_LEN));
|
||||
return -1;
|
||||
}
|
||||
|
|
|
@ -70,6 +70,7 @@ extern int8_t tsKeepOriginalColumnName;
// client
extern int32_t tsMaxSQLStringLen;
extern int32_t tsMaxWildCardsLen;
extern int8_t tsTscEnableRecordSql;
extern int32_t tsMaxNumOfOrderedResults;
extern int32_t tsMinSlidingTime;

@ -53,6 +53,8 @@ int32_t tVariantToString(tVariant *pVar, char *dst);
int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool includeLengthPrefix);

int32_t tVariantDumpEx(tVariant *pVariant, char *payload, int16_t type, bool includeLengthPrefix, bool *converted, char *extInfo);

int32_t tVariantTypeSetType(tVariant *pVariant, char type);

#ifdef __cplusplus

@ -118,7 +118,7 @@ void tExprTreeDestroy(tExprNode *pNode, void (*fp)(void *)) {
} else if (pNode->nodeType == TSQL_NODE_VALUE) {
tVariantDestroy(pNode->pVal);
} else if (pNode->nodeType == TSQL_NODE_COL) {
free(pNode->pSchema);
tfree(pNode->pSchema);
}

free(pNode);

@ -435,7 +435,7 @@ tExprNode* exprTreeFromTableName(const char* tbnameCond) {
expr->_node.optr = TSDB_RELATION_IN;
tVariant* pVal = exception_calloc(1, sizeof(tVariant));
right->pVal = pVal;
pVal->nType = TSDB_DATA_TYPE_ARRAY;
pVal->nType = TSDB_DATA_TYPE_POINTER_ARRAY;
pVal->arr = taosArrayInit(2, POINTER_BYTES);

const char* cond = tbnameCond + QUERY_COND_REL_PREFIX_IN_LEN;

@ -502,6 +502,183 @@ void buildFilterSetFromBinary(void **q, const char *buf, int32_t len) {
*q = (void *)pObj;
}

void convertFilterSetFromBinary(void **q, const char *buf, int32_t len, uint32_t tType) {
SBufferReader br = tbufInitReader(buf, len, false);
uint32_t sType = tbufReadUint32(&br);
SHashObj *pObj = taosHashInit(256, taosGetDefaultHashFunction(tType), true, false);

taosHashSetEqualFp(pObj, taosGetDefaultEqualFunction(tType));

int dummy = -1;
tVariant tmpVar = {0};
size_t t = 0;
int32_t sz = tbufReadInt32(&br);
void *pvar = NULL;
int64_t val = 0;
int32_t bufLen = 0;
if (IS_NUMERIC_TYPE(sType)) {
bufLen = 60; // The maximum length of string that a number is converted to.
} else {
bufLen = 128;
}

char *tmp = calloc(1, bufLen * TSDB_NCHAR_SIZE);

for (int32_t i = 0; i < sz; i++) {
switch (sType) {
case TSDB_DATA_TYPE_BOOL:
case TSDB_DATA_TYPE_UTINYINT:
case TSDB_DATA_TYPE_TINYINT: {
*(uint8_t *)&val = (uint8_t)tbufReadInt64(&br);
t = sizeof(val);
pvar = &val;
break;
}
case TSDB_DATA_TYPE_USMALLINT:
case TSDB_DATA_TYPE_SMALLINT: {
*(uint16_t *)&val = (uint16_t)tbufReadInt64(&br);
t = sizeof(val);
pvar = &val;
break;
}
case TSDB_DATA_TYPE_UINT:
case TSDB_DATA_TYPE_INT: {
*(uint32_t *)&val = (uint32_t)tbufReadInt64(&br);
t = sizeof(val);
pvar = &val;
break;
}
case TSDB_DATA_TYPE_TIMESTAMP:
case TSDB_DATA_TYPE_UBIGINT:
case TSDB_DATA_TYPE_BIGINT: {
*(uint64_t *)&val = (uint64_t)tbufReadInt64(&br);
t = sizeof(val);
pvar = &val;
break;
}
case TSDB_DATA_TYPE_DOUBLE: {
*(double *)&val = tbufReadDouble(&br);
t = sizeof(val);
pvar = &val;
break;
}
case TSDB_DATA_TYPE_FLOAT: {
*(float *)&val = (float)tbufReadDouble(&br);
t = sizeof(val);
pvar = &val;
break;
}
case TSDB_DATA_TYPE_BINARY: {
pvar = (char *)tbufReadBinary(&br, &t);
break;
}
case TSDB_DATA_TYPE_NCHAR: {
pvar = (char *)tbufReadBinary(&br, &t);
break;
}
default:
taosHashCleanup(pObj);
*q = NULL;
return;
}

tVariantCreateFromBinary(&tmpVar, (char *)pvar, t, sType);

if (bufLen < t) {
tmp = realloc(tmp, t * TSDB_NCHAR_SIZE);
bufLen = (int32_t)t;
}

switch (tType) {
case TSDB_DATA_TYPE_BOOL:
case TSDB_DATA_TYPE_UTINYINT:
case TSDB_DATA_TYPE_TINYINT: {
if (tVariantDump(&tmpVar, (char *)&val, tType, false)) {
goto err_ret;
}
pvar = &val;
t = sizeof(val);
break;
}
case TSDB_DATA_TYPE_USMALLINT:
case TSDB_DATA_TYPE_SMALLINT: {
if (tVariantDump(&tmpVar, (char *)&val, tType, false)) {
goto err_ret;
}
pvar = &val;
t = sizeof(val);
break;
}
case TSDB_DATA_TYPE_UINT:
case TSDB_DATA_TYPE_INT: {
if (tVariantDump(&tmpVar, (char *)&val, tType, false)) {
goto err_ret;
}
pvar = &val;
t = sizeof(val);
break;
}
case TSDB_DATA_TYPE_TIMESTAMP:
case TSDB_DATA_TYPE_UBIGINT:
case TSDB_DATA_TYPE_BIGINT: {
if (tVariantDump(&tmpVar, (char *)&val, tType, false)) {
goto err_ret;
}
pvar = &val;
t = sizeof(val);
break;
}
case TSDB_DATA_TYPE_DOUBLE: {
if (tVariantDump(&tmpVar, (char *)&val, tType, false)) {
goto err_ret;
}
pvar = &val;
t = sizeof(val);
break;
}
case TSDB_DATA_TYPE_FLOAT: {
if (tVariantDump(&tmpVar, (char *)&val, tType, false)) {
goto err_ret;
}
pvar = &val;
t = sizeof(val);
break;
}
case TSDB_DATA_TYPE_BINARY: {
if (tVariantDump(&tmpVar, tmp, tType, true)) {
goto err_ret;
}
t = varDataLen(tmp);
pvar = varDataVal(tmp);
break;
}
case TSDB_DATA_TYPE_NCHAR: {
if (tVariantDump(&tmpVar, tmp, tType, true)) {
goto err_ret;
}
t = varDataLen(tmp);
pvar = varDataVal(tmp);
break;
}
default:
goto err_ret;
}

taosHashPut(pObj, (char *)pvar, t, &dummy, sizeof(dummy));
tVariantDestroy(&tmpVar);
memset(&tmpVar, 0, sizeof(tmpVar));
}

*q = (void *)pObj;
pObj = NULL;

err_ret:
tVariantDestroy(&tmpVar);
taosHashCleanup(pObj);
tfree(tmp);
}

tExprNode* exprdup(tExprNode* pNode) {
if (pNode == NULL) {
return NULL;

@ -25,6 +25,7 @@
#include "tutil.h"
#include "tlocale.h"
#include "ttimezone.h"
#include "tcompare.h"

// cluster
char tsFirst[TSDB_EP_LEN] = {0};

@ -75,6 +76,7 @@ int32_t tsCompressMsgSize = -1;
// client
int32_t tsMaxSQLStringLen = TSDB_MAX_ALLOWED_SQL_LEN;
int32_t tsMaxWildCardsLen = TSDB_PATTERN_STRING_MAX_LEN;
int8_t tsTscEnableRecordSql = 0;

// the maximum number of results for projection query on super table that are returned from

@ -984,6 +986,16 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_BYTE;
taosInitConfigOption(cfg);

cfg.option = "maxWildCardsLength";
cfg.ptr = &tsMaxWildCardsLen;
cfg.valType = TAOS_CFG_VTYPE_INT32;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_CLIENT | TSDB_CFG_CTYPE_B_SHOW;
cfg.minValue = 0;
cfg.maxValue = TSDB_MAX_FIELD_LEN;
cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_BYTE;
taosInitConfigOption(cfg);

cfg.option = "maxNumOfOrderedRes";
cfg.ptr = &tsMaxNumOfOrderedResults;
cfg.valType = TAOS_CFG_VTYPE_INT32;

@ -1531,6 +1543,7 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);

assert(tsGlobalConfigNum <= TSDB_CFG_MAX_NUM);
#ifdef TD_TSZ
// lossy compress
cfg.option = "lossyColumns";

@ -61,7 +61,7 @@ bool tscValidateTableNameLength(size_t len) {
// TODO refactor
SColumnFilterInfo* tFilterInfoDup(const SColumnFilterInfo* src, int32_t numOfFilters) {
if (numOfFilters == 0) {
if (numOfFilters == 0 || src == NULL) {
assert(src == NULL);
return NULL;
}

@ -372,21 +372,21 @@ static void getStatics_nchr(const void *pData, int32_t numOfRow, int64_t *min, i
}

tDataTypeDescriptor tDataTypes[15] = {
{TSDB_DATA_TYPE_NULL, 6,1, "NOTYPE", NULL, NULL, NULL},
{TSDB_DATA_TYPE_BOOL, 4, CHAR_BYTES, "BOOL", tsCompressBool, tsDecompressBool, getStatics_bool},
{TSDB_DATA_TYPE_TINYINT, 7, CHAR_BYTES, "TINYINT", tsCompressTinyint, tsDecompressTinyint, getStatics_i8},
{TSDB_DATA_TYPE_SMALLINT, 8, SHORT_BYTES, "SMALLINT", tsCompressSmallint, tsDecompressSmallint, getStatics_i16},
{TSDB_DATA_TYPE_INT, 3, INT_BYTES, "INT", tsCompressInt, tsDecompressInt, getStatics_i32},
{TSDB_DATA_TYPE_BIGINT, 6, LONG_BYTES, "BIGINT", tsCompressBigint, tsDecompressBigint, getStatics_i64},
{TSDB_DATA_TYPE_FLOAT, 5, FLOAT_BYTES, "FLOAT", tsCompressFloat, tsDecompressFloat, getStatics_f},
{TSDB_DATA_TYPE_DOUBLE, 6, DOUBLE_BYTES, "DOUBLE", tsCompressDouble, tsDecompressDouble, getStatics_d},
{TSDB_DATA_TYPE_BINARY, 6, 0, "BINARY", tsCompressString, tsDecompressString, getStatics_bin},
{TSDB_DATA_TYPE_TIMESTAMP, 9, LONG_BYTES, "TIMESTAMP", tsCompressTimestamp, tsDecompressTimestamp, getStatics_i64},
{TSDB_DATA_TYPE_NCHAR, 5, 8, "NCHAR", tsCompressString, tsDecompressString, getStatics_nchr},
{TSDB_DATA_TYPE_UTINYINT, 16, CHAR_BYTES, "TINYINT UNSIGNED", tsCompressTinyint, tsDecompressTinyint, getStatics_u8},
{TSDB_DATA_TYPE_USMALLINT, 17, SHORT_BYTES, "SMALLINT UNSIGNED", tsCompressSmallint, tsDecompressSmallint, getStatics_u16},
{TSDB_DATA_TYPE_UINT, 12, INT_BYTES, "INT UNSIGNED", tsCompressInt, tsDecompressInt, getStatics_u32},
{TSDB_DATA_TYPE_UBIGINT, 15, LONG_BYTES, "BIGINT UNSIGNED", tsCompressBigint, tsDecompressBigint, getStatics_u64},
{TSDB_DATA_TYPE_NULL, 6, 1, "NOTYPE", 0, 0, NULL, NULL, NULL},
{TSDB_DATA_TYPE_BOOL, 4, CHAR_BYTES, "BOOL", false, true, tsCompressBool, tsDecompressBool, getStatics_bool},
{TSDB_DATA_TYPE_TINYINT, 7, CHAR_BYTES, "TINYINT", INT8_MIN, INT8_MAX, tsCompressTinyint, tsDecompressTinyint, getStatics_i8},
{TSDB_DATA_TYPE_SMALLINT, 8, SHORT_BYTES, "SMALLINT", INT16_MIN, INT16_MAX, tsCompressSmallint, tsDecompressSmallint, getStatics_i16},
{TSDB_DATA_TYPE_INT, 3, INT_BYTES, "INT", INT32_MIN, INT32_MAX, tsCompressInt, tsDecompressInt, getStatics_i32},
{TSDB_DATA_TYPE_BIGINT, 6, LONG_BYTES, "BIGINT", INT64_MIN, INT64_MAX, tsCompressBigint, tsDecompressBigint, getStatics_i64},
{TSDB_DATA_TYPE_FLOAT, 5, FLOAT_BYTES, "FLOAT", 0, 0, tsCompressFloat, tsDecompressFloat, getStatics_f},
{TSDB_DATA_TYPE_DOUBLE, 6, DOUBLE_BYTES, "DOUBLE", 0, 0, tsCompressDouble, tsDecompressDouble, getStatics_d},
{TSDB_DATA_TYPE_BINARY, 6, 0, "BINARY", 0, 0, tsCompressString, tsDecompressString, getStatics_bin},
{TSDB_DATA_TYPE_TIMESTAMP, 9, LONG_BYTES, "TIMESTAMP", INT64_MIN, INT64_MAX, tsCompressTimestamp, tsDecompressTimestamp, getStatics_i64},
{TSDB_DATA_TYPE_NCHAR, 5, 8, "NCHAR", 0, 0, tsCompressString, tsDecompressString, getStatics_nchr},
{TSDB_DATA_TYPE_UTINYINT, 16, CHAR_BYTES, "TINYINT UNSIGNED", 0, UINT8_MAX, tsCompressTinyint, tsDecompressTinyint, getStatics_u8},
{TSDB_DATA_TYPE_USMALLINT, 17, SHORT_BYTES, "SMALLINT UNSIGNED", 0, UINT16_MAX, tsCompressSmallint, tsDecompressSmallint, getStatics_u16},
{TSDB_DATA_TYPE_UINT, 12, INT_BYTES, "INT UNSIGNED", 0, UINT32_MAX, tsCompressInt, tsDecompressInt, getStatics_u32},
{TSDB_DATA_TYPE_UBIGINT, 15, LONG_BYTES, "BIGINT UNSIGNED", 0, UINT64_MAX, tsCompressBigint, tsDecompressBigint, getStatics_u64},
};

char tTokenTypeSwitcher[13] = {

@ -405,6 +405,32 @@ char tTokenTypeSwitcher[13] = {
TSDB_DATA_TYPE_NCHAR, // TK_NCHAR
};

float floatMin = -FLT_MAX, floatMax = FLT_MAX;
double doubleMin = -DBL_MAX, doubleMax = DBL_MAX;

FORCE_INLINE void* getDataMin(int32_t type) {
switch (type) {
case TSDB_DATA_TYPE_FLOAT:
return &floatMin;
case TSDB_DATA_TYPE_DOUBLE:
return &doubleMin;
default:
return &tDataTypes[type].minValue;
}
}

FORCE_INLINE void* getDataMax(int32_t type) {
switch (type) {
case TSDB_DATA_TYPE_FLOAT:
return &floatMax;
case TSDB_DATA_TYPE_DOUBLE:
return &doubleMax;
default:
return &tDataTypes[type].maxValue;
}
}

bool isValidDataType(int32_t type) {
return type >= TSDB_DATA_TYPE_NULL && type <= TSDB_DATA_TYPE_UBIGINT;
}

@ -566,6 +592,53 @@ void assignVal(char *val, const char *src, int32_t len, int32_t type) {
}
}

void operateVal(void *dst, void *s1, void *s2, int32_t optr, int32_t type) {
if (optr == TSDB_BINARY_OP_ADD) {
switch (type) {
case TSDB_DATA_TYPE_TINYINT:
*((int8_t *)dst) = GET_INT8_VAL(s1) + GET_INT8_VAL(s2);
break;
case TSDB_DATA_TYPE_UTINYINT:
*((uint8_t *)dst) = GET_UINT8_VAL(s1) + GET_UINT8_VAL(s2);
break;
case TSDB_DATA_TYPE_SMALLINT:
*((int16_t *)dst) = GET_INT16_VAL(s1) + GET_INT16_VAL(s2);
break;
case TSDB_DATA_TYPE_USMALLINT:
*((uint16_t *)dst) = GET_UINT16_VAL(s1) + GET_UINT16_VAL(s2);
break;
case TSDB_DATA_TYPE_INT:
*((int32_t *)dst) = GET_INT32_VAL(s1) + GET_INT32_VAL(s2);
break;
case TSDB_DATA_TYPE_UINT:
*((uint32_t *)dst) = GET_UINT32_VAL(s1) + GET_UINT32_VAL(s2);
break;
case TSDB_DATA_TYPE_BIGINT:
*((int64_t *)dst) = GET_INT64_VAL(s1) + GET_INT64_VAL(s2);
break;
case TSDB_DATA_TYPE_UBIGINT:
*((uint64_t *)dst) = GET_UINT64_VAL(s1) + GET_UINT64_VAL(s2);
break;
case TSDB_DATA_TYPE_TIMESTAMP:
*((int64_t *)dst) = GET_INT64_VAL(s1) + GET_INT64_VAL(s2);
break;
case TSDB_DATA_TYPE_FLOAT:
SET_FLOAT_VAL(dst, GET_FLOAT_VAL(s1) + GET_FLOAT_VAL(s2));
break;
case TSDB_DATA_TYPE_DOUBLE:
SET_DOUBLE_VAL(dst, GET_DOUBLE_VAL(s1) + GET_DOUBLE_VAL(s2));
break;
default: {
assert(0);
break;
}
}
} else {
assert(0);
}
}

void tsDataSwap(void *pLeft, void *pRight, int32_t type, int32_t size, void* buf) {
switch (type) {
case TSDB_DATA_TYPE_INT:

@ -23,6 +23,13 @@
#include "tutil.h"
#include "tvariant.h"

#define SET_EXT_INFO(converted, res, minv, maxv, exti) do { \
if (converted == NULL || exti == NULL || *converted == false) { break; } \
if ((res) < (minv)) { *exti = -1; break; } \
if ((res) > (maxv)) { *exti = 1; break; } \
assert(0); \
} while (0)

void tVariantCreate(tVariant *pVar, SStrToken *token) {
int32_t ret = 0;
int32_t type = token->type;

@ -184,7 +191,7 @@ void tVariantDestroy(tVariant *pVar) {
}

// NOTE: this is only for string array
if (pVar->nType == TSDB_DATA_TYPE_ARRAY) {
if (pVar->nType == TSDB_DATA_TYPE_POINTER_ARRAY) {
size_t num = taosArrayGetSize(pVar->arr);
for(size_t i = 0; i < num; i++) {
void* p = taosArrayGetP(pVar->arr, i);

@ -192,6 +199,9 @@ void tVariantDestroy(tVariant *pVar) {
}
taosArrayDestroy(pVar->arr);
pVar->arr = NULL;
} else if (pVar->nType == TSDB_DATA_TYPE_VALUE_ARRAY) {
taosArrayDestroy(pVar->arr);
pVar->arr = NULL;
}
}

@ -220,7 +230,7 @@ void tVariantAssign(tVariant *pDst, const tVariant *pSrc) {
if (IS_NUMERIC_TYPE(pSrc->nType) || (pSrc->nType == TSDB_DATA_TYPE_BOOL)) {
pDst->i64 = pSrc->i64;
} else if (pSrc->nType == TSDB_DATA_TYPE_ARRAY) { // this is only for string array
} else if (pSrc->nType == TSDB_DATA_TYPE_POINTER_ARRAY) { // this is only for string array
size_t num = taosArrayGetSize(pSrc->arr);
pDst->arr = taosArrayInit(num, sizeof(char*));
for(size_t i = 0; i < num; i++) {

@ -228,9 +238,18 @@ void tVariantAssign(tVariant *pDst, const tVariant *pSrc) {
char* n = strdup(p);
taosArrayPush(pDst->arr, &n);
}
} else if (pSrc->nType == TSDB_DATA_TYPE_VALUE_ARRAY) {
size_t num = taosArrayGetSize(pSrc->arr);
pDst->arr = taosArrayInit(num, sizeof(int64_t));
pDst->nLen = pSrc->nLen;
assert(pSrc->nLen == num);
for(size_t i = 0; i < num; i++) {
int64_t *p = taosArrayGet(pSrc->arr, i);
taosArrayPush(pDst->arr, p);
}
}

if (pDst->nType != TSDB_DATA_TYPE_ARRAY) {
if (pDst->nType != TSDB_DATA_TYPE_POINTER_ARRAY && pDst->nType != TSDB_DATA_TYPE_VALUE_ARRAY) {
pDst->nLen = tDataTypes[pDst->nType].bytes;
}
}

@ -450,7 +469,7 @@ static FORCE_INLINE int32_t convertToDouble(char *pStr, int32_t len, double *val
return 0;
}

static FORCE_INLINE int32_t convertToInteger(tVariant *pVariant, int64_t *result, int32_t type, bool issigned, bool releaseVariantPtr) {
static FORCE_INLINE int32_t convertToInteger(tVariant *pVariant, int64_t *result, int32_t type, bool issigned, bool releaseVariantPtr, bool *converted) {
if (pVariant->nType == TSDB_DATA_TYPE_NULL) {
setNull((char *)result, type, tDataTypes[type].bytes);
return 0;

@ -540,6 +559,10 @@ static FORCE_INLINE int32_t convertToInteger(tVariant *pVariant, int64_t *result
}
}

if (converted) {
*converted = true;
}

bool code = false;

uint64_t ui = 0;

@ -602,6 +625,18 @@ static int32_t convertToBool(tVariant *pVariant, int64_t *pDest) {
* to column type defined in schema
*/
int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool includeLengthPrefix) {
return tVariantDumpEx(pVariant, payload, type, includeLengthPrefix, NULL, NULL);
}

/*
* transfer data from variant serve as the implicit data conversion: from input sql string pVariant->nType
* to column type defined in schema
*/
int32_t tVariantDumpEx(tVariant *pVariant, char *payload, int16_t type, bool includeLengthPrefix, bool *converted, char *extInfo) {
if (converted) {
*converted = false;
}

if (pVariant == NULL || (pVariant->nType != 0 && !isValidDataType(pVariant->nType))) {
return -1;
}

@ -620,7 +655,8 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
}

case TSDB_DATA_TYPE_TINYINT: {
if (convertToInteger(pVariant, &result, type, true, false) < 0) {
if (convertToInteger(pVariant, &result, type, true, false, converted) < 0) {
SET_EXT_INFO(converted, result, INT8_MIN + 1, INT8_MAX, extInfo);
return -1;
}
*((int8_t *)payload) = (int8_t) result;

@ -628,7 +664,8 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
}

case TSDB_DATA_TYPE_UTINYINT: {
if (convertToInteger(pVariant, &result, type, false, false) < 0) {
if (convertToInteger(pVariant, &result, type, false, false, converted) < 0) {
SET_EXT_INFO(converted, result, 0, UINT8_MAX - 1, extInfo);
return -1;
}
*((uint8_t *)payload) = (uint8_t) result;

@ -636,7 +673,8 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
}

case TSDB_DATA_TYPE_SMALLINT: {
if (convertToInteger(pVariant, &result, type, true, false) < 0) {
if (convertToInteger(pVariant, &result, type, true, false, converted) < 0) {
SET_EXT_INFO(converted, result, INT16_MIN + 1, INT16_MAX, extInfo);
return -1;
}
*((int16_t *)payload) = (int16_t)result;

@ -644,7 +682,8 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
}

case TSDB_DATA_TYPE_USMALLINT: {
if (convertToInteger(pVariant, &result, type, false, false) < 0) {
if (convertToInteger(pVariant, &result, type, false, false, converted) < 0) {
SET_EXT_INFO(converted, result, 0, UINT16_MAX - 1, extInfo);
return -1;
}
*((uint16_t *)payload) = (uint16_t)result;

@ -652,7 +691,8 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
}

case TSDB_DATA_TYPE_INT: {
if (convertToInteger(pVariant, &result, type, true, false) < 0) {
if (convertToInteger(pVariant, &result, type, true, false, converted) < 0) {
SET_EXT_INFO(converted, result, INT32_MIN + 1, INT32_MAX, extInfo);
return -1;
}
*((int32_t *)payload) = (int32_t)result;

@ -660,7 +700,8 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
}

case TSDB_DATA_TYPE_UINT: {
if (convertToInteger(pVariant, &result, type, false, false) < 0) {
if (convertToInteger(pVariant, &result, type, false, false, converted) < 0) {
SET_EXT_INFO(converted, result, 0, UINT32_MAX - 1, extInfo);
return -1;
}
*((uint32_t *)payload) = (uint32_t)result;

@ -668,7 +709,8 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
}

case TSDB_DATA_TYPE_BIGINT: {
if (convertToInteger(pVariant, &result, type, true, false) < 0) {
if (convertToInteger(pVariant, &result, type, true, false, converted) < 0) {
SET_EXT_INFO(converted, (int64_t)result, INT64_MIN + 1, INT64_MAX, extInfo);
return -1;
}
*((int64_t *)payload) = (int64_t)result;

@ -676,7 +718,8 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
}

case TSDB_DATA_TYPE_UBIGINT: {
if (convertToInteger(pVariant, &result, type, false, false) < 0) {
if (convertToInteger(pVariant, &result, type, false, false, converted) < 0) {
SET_EXT_INFO(converted, (uint64_t)result, 0, UINT64_MAX - 1, extInfo);
return -1;
}
*((uint64_t *)payload) = (uint64_t)result;

@ -696,11 +739,37 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
return -1;
}

if (converted) {
*converted = true;
}

if (value > FLT_MAX || value < -FLT_MAX) {
SET_EXT_INFO(converted, value, -FLT_MAX, FLT_MAX, extInfo);
return -1;
}
SET_FLOAT_VAL(payload, value);
}
} else if (pVariant->nType == TSDB_DATA_TYPE_BOOL || IS_SIGNED_NUMERIC_TYPE(pVariant->nType) || IS_UNSIGNED_NUMERIC_TYPE(pVariant->nType)) {
if (converted) {
*converted = true;
}

if (pVariant->i64 > FLT_MAX || pVariant->i64 < -FLT_MAX) {
SET_EXT_INFO(converted, pVariant->i64, -FLT_MAX, FLT_MAX, extInfo);
return -1;
}

SET_FLOAT_VAL(payload, pVariant->i64);
} else if (IS_FLOAT_TYPE(pVariant->nType)) {
if (converted) {
*converted = true;
}

if (pVariant->dKey > FLT_MAX || pVariant->dKey < -FLT_MAX) {
SET_EXT_INFO(converted, pVariant->dKey, -FLT_MAX, FLT_MAX, extInfo);
return -1;
}

SET_FLOAT_VAL(payload, pVariant->dKey);
} else if (pVariant->nType == TSDB_DATA_TYPE_NULL) {
*((uint32_t *)payload) = TSDB_DATA_FLOAT_NULL;

@ -824,6 +893,7 @@ int32_t tVariantDump(tVariant *pVariant, char *payload, int16_t type, bool inclu
return 0;
}

/*
* In variant, bool/smallint/tinyint/int/bigint share the same attribution of
* structure, also ignore the convert the type required

@ -848,7 +918,7 @@ int32_t tVariantTypeSetType(tVariant *pVariant, char type) {
case TSDB_DATA_TYPE_BIGINT:
case TSDB_DATA_TYPE_TINYINT:
case TSDB_DATA_TYPE_SMALLINT: {
convertToInteger(pVariant, &(pVariant->i64), type, true, true);
convertToInteger(pVariant, &(pVariant->i64), type, true, true, NULL);
pVariant->nType = TSDB_DATA_TYPE_BIGINT;
break;
}

@ -34,6 +34,7 @@ extern "C" {
#define TSWINDOW_INITIALIZER ((STimeWindow) {INT64_MIN, INT64_MAX})
#define TSWINDOW_DESC_INITIALIZER ((STimeWindow) {INT64_MAX, INT64_MIN})
#define IS_TSWINDOW_SPECIFIED(win) (((win).skey != INT64_MIN) || ((win).ekey != INT64_MAX))

#define TSKEY_INITIAL_VAL INT64_MIN

@ -272,7 +272,8 @@ int32_t* taosGetErrno();
#define TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW TAOS_DEF_ERROR_CODE(0, 0x070A) //"Too many time window in query")
#define TSDB_CODE_QRY_NOT_ENOUGH_BUFFER TAOS_DEF_ERROR_CODE(0, 0x070B) //"Query buffer limit has reached")
#define TSDB_CODE_QRY_INCONSISTAN TAOS_DEF_ERROR_CODE(0, 0x070C) //"File inconsistency in replica")
#define TSDB_CODE_QRY_SYS_ERROR TAOS_DEF_ERROR_CODE(0, 0x070D) //"System error")
#define TSDB_CODE_QRY_INVALID_TIME_CONDITION TAOS_DEF_ERROR_CODE(0, 0x070D) //"invalid time condition")
#define TSDB_CODE_QRY_SYS_ERROR TAOS_DEF_ERROR_CODE(0, 0x070E) //"System error")

// grant

@ -490,7 +490,8 @@ typedef struct {
int16_t numOfCols; // the number of columns will be load from vnode
SInterval interval;
SSessionWindow sw; // session window
uint32_t tagCondLen; // tag length in current query
uint16_t tagCondLen; // tag length in current query
uint16_t colCondLen; // column length in current query
uint32_t tbnameCondLen; // table name filter condition string length
int16_t numOfGroupCols; // num of group by columns
int16_t orderByIdx;

@ -75,131 +75,131 @@
#define TK_SCORES 57
#define TK_GRANTS 58
#define TK_VNODES 59
#define TK_IPTOKEN 60
#define TK_DOT 61
#define TK_CREATE 62
#define TK_TABLE 63
#define TK_STABLE 64
#define TK_DATABASE 65
#define TK_TABLES 66
#define TK_STABLES 67
#define TK_VGROUPS 68
#define TK_DROP 69
#define TK_TOPIC 70
#define TK_FUNCTION 71
#define TK_DNODE 72
#define TK_USER 73
#define TK_ACCOUNT 74
#define TK_USE 75
#define TK_DESCRIBE 76
#define TK_ALTER 77
#define TK_PASS 78
#define TK_PRIVILEGE 79
#define TK_LOCAL 80
#define TK_COMPACT 81
#define TK_LP 82
#define TK_RP 83
#define TK_IF 84
#define TK_EXISTS 85
#define TK_AS 86
#define TK_OUTPUTTYPE 87
#define TK_AGGREGATE 88
#define TK_BUFSIZE 89
#define TK_PPS 90
#define TK_TSERIES 91
#define TK_DBS 92
#define TK_STORAGE 93
#define TK_QTIME 94
#define TK_CONNS 95
#define TK_STATE 96
#define TK_COMMA 97
#define TK_KEEP 98
#define TK_CACHE 99
#define TK_REPLICA 100
#define TK_QUORUM 101
#define TK_DAYS 102
#define TK_MINROWS 103
#define TK_MAXROWS 104
#define TK_BLOCKS 105
#define TK_CTIME 106
#define TK_WAL 107
#define TK_FSYNC 108
#define TK_COMP 109
#define TK_PRECISION 110
#define TK_UPDATE 111
#define TK_CACHELAST 112
#define TK_PARTITIONS 113
#define TK_UNSIGNED 114
#define TK_TAGS 115
#define TK_USING 116
#define TK_NULL 117
#define TK_NOW 118
#define TK_SELECT 119
#define TK_UNION 120
#define TK_ALL 121
#define TK_DISTINCT 122
#define TK_FROM 123
#define TK_VARIABLE 124
#define TK_INTERVAL 125
#define TK_SESSION 126
#define TK_STATE_WINDOW 127
#define TK_FILL 128
#define TK_SLIDING 129
#define TK_ORDER 130
#define TK_BY 131
#define TK_ASC 132
#define TK_DESC 133
#define TK_GROUP 134
#define TK_HAVING 135
#define TK_LIMIT 136
#define TK_OFFSET 137
#define TK_SLIMIT 138
#define TK_SOFFSET 139
#define TK_WHERE 140
#define TK_RESET 141
#define TK_QUERY 142
#define TK_SYNCDB 143
#define TK_ADD 144
#define TK_COLUMN 145
#define TK_MODIFY 146
#define TK_TAG 147
#define TK_CHANGE 148
#define TK_SET 149
#define TK_KILL 150
#define TK_CONNECTION 151
#define TK_STREAM 152
#define TK_COLON 153
#define TK_ABORT 154
#define TK_AFTER 155
#define TK_ATTACH 156
#define TK_BEFORE 157
#define TK_BEGIN 158
#define TK_CASCADE 159
#define TK_CLUSTER 160
#define TK_CONFLICT 161
#define TK_COPY 162
#define TK_DEFERRED 163
#define TK_DELIMITERS 164
#define TK_DETACH 165
#define TK_EACH 166
#define TK_END 167
#define TK_EXPLAIN 168
#define TK_FAIL 169
#define TK_FOR 170
#define TK_IGNORE 171
#define TK_IMMEDIATE 172
#define TK_INITIALLY 173
#define TK_INSTEAD 174
#define TK_MATCH 175
#define TK_KEY 176
#define TK_OF 177
#define TK_RAISE 178
#define TK_REPLACE 179
#define TK_RESTRICT 180
#define TK_ROW 181
#define TK_STATEMENT 182
#define TK_TRIGGER 183
#define TK_VIEW 184
#define TK_DOT 60
#define TK_CREATE 61
#define TK_TABLE 62
#define TK_STABLE 63
#define TK_DATABASE 64
#define TK_TABLES 65
#define TK_STABLES 66
#define TK_VGROUPS 67
#define TK_DROP 68
#define TK_TOPIC 69
#define TK_FUNCTION 70
#define TK_DNODE 71
#define TK_USER 72
#define TK_ACCOUNT 73
#define TK_USE 74
#define TK_DESCRIBE 75
#define TK_ALTER 76
#define TK_PASS 77
#define TK_PRIVILEGE 78
#define TK_LOCAL 79
#define TK_COMPACT 80
#define TK_LP 81
#define TK_RP 82
#define TK_IF 83
#define TK_EXISTS 84
#define TK_AS 85
#define TK_OUTPUTTYPE 86
#define TK_AGGREGATE 87
#define TK_BUFSIZE 88
#define TK_PPS 89
#define TK_TSERIES 90
#define TK_DBS 91
#define TK_STORAGE 92
#define TK_QTIME 93
#define TK_CONNS 94
#define TK_STATE 95
#define TK_COMMA 96
#define TK_KEEP 97
#define TK_CACHE 98
#define TK_REPLICA 99
#define TK_QUORUM 100
#define TK_DAYS 101
#define TK_MINROWS 102
#define TK_MAXROWS 103
#define TK_BLOCKS 104
#define TK_CTIME 105
#define TK_WAL 106
#define TK_FSYNC 107
#define TK_COMP 108
#define TK_PRECISION 109
#define TK_UPDATE 110
#define TK_CACHELAST 111
#define TK_PARTITIONS 112
#define TK_UNSIGNED 113
#define TK_TAGS 114
#define TK_USING 115
#define TK_NULL 116
#define TK_NOW 117
#define TK_SELECT 118
#define TK_UNION 119
#define TK_ALL 120
#define TK_DISTINCT 121
#define TK_FROM 122
#define TK_VARIABLE 123
#define TK_INTERVAL 124
#define TK_SESSION 125
#define TK_STATE_WINDOW 126
#define TK_FILL 127
#define TK_SLIDING 128
#define TK_ORDER 129
#define TK_BY 130
#define TK_ASC 131
#define TK_DESC 132
#define TK_GROUP 133
#define TK_HAVING 134
#define TK_LIMIT 135
#define TK_OFFSET 136
#define TK_SLIMIT 137
#define TK_SOFFSET 138
#define TK_WHERE 139
#define TK_RESET 140
#define TK_QUERY 141
#define TK_SYNCDB 142
#define TK_ADD 143
#define TK_COLUMN 144
#define TK_MODIFY 145
#define TK_TAG 146
#define TK_CHANGE 147
#define TK_SET 148
#define TK_KILL 149
#define TK_CONNECTION 150
#define TK_STREAM 151
#define TK_COLON 152
#define TK_ABORT 153
#define TK_AFTER 154
#define TK_ATTACH 155
#define TK_BEFORE 156
#define TK_BEGIN 157
#define TK_CASCADE 158
#define TK_CLUSTER 159
#define TK_CONFLICT 160
#define TK_COPY 161
#define TK_DEFERRED 162
#define TK_DELIMITERS 163
#define TK_DETACH 164
#define TK_EACH 165
#define TK_END 166
#define TK_EXPLAIN 167
#define TK_FAIL 168
#define TK_FOR 169
#define TK_IGNORE 170
#define TK_IMMEDIATE 171
#define TK_INITIALLY 172
#define TK_INSTEAD 173
#define TK_MATCH 174
#define TK_KEY 175
#define TK_OF 176
#define TK_RAISE 177
#define TK_REPLACE 178
#define TK_RESTRICT 179
#define TK_ROW 180
#define TK_STATEMENT 181
#define TK_TRIGGER 182
#define TK_VIEW 183
#define TK_IPTOKEN 184
#define TK_SEMI 185
#define TK_NONE 186
#define TK_PREV 187

@ -47,7 +47,8 @@ typedef struct {
// this data type is internally used only in 'in' query to hold the values
#define TSDB_DATA_TYPE_ARRAY (1000)
#define TSDB_DATA_TYPE_POINTER_ARRAY (1000)
#define TSDB_DATA_TYPE_VALUE_ARRAY (1001)

#define GET_TYPED_DATA(_v, _finalType, _type, _data) \
do { \

@ -181,6 +182,8 @@ typedef struct tDataTypeDescriptor {
int16_t nameLen;
int32_t bytes;
char * name;
int64_t minValue;
int64_t maxValue;
int (*compFunc)(const char *const input, int inputSize, const int nelements, char *const output, int outputSize,
char algorithm, char *const buffer, int bufferSize);
int (*decompFunc)(const char *const input, int compressedSize, const int nelements, char *const output,

@ -200,6 +203,9 @@ const void *getNullValue(int32_t type);
void assignVal(char *val, const char *src, int32_t len, int32_t type);
void tsDataSwap(void *pLeft, void *pRight, int32_t type, int32_t size, void* buf);
void operateVal(void *dst, void *s1, void *s2, int32_t optr, int32_t type);
void* getDataMin(int32_t type);
void* getDataMax(int32_t type);

int32_t tStrToInteger(const char* z, int16_t type, int32_t n, int64_t* value, bool issigned);

@ -98,7 +98,6 @@ TAOS *shellInit(SShellArguments *_args) {
}

if (con == NULL) {
printf("taos connect failed, reason: %s.\n\n", tstrerror(terrno));
fflush(stdout);
return con;
}

@ -80,6 +80,7 @@ typedef struct SMnodeObj {
int8_t updateEnd[4];
int32_t refCount;
int8_t role;
int64_t roleTime;
int8_t reserved2[3];
} SMnodeObj;

@ -1181,7 +1181,7 @@ static int32_t mnodeGetVnodeMeta(STableMetaMsg *pMeta, SShowObj *pShow, void *pC
pShow->bytes[cols] = 4;
pSchema[cols].type = TSDB_DATA_TYPE_INT;
strcpy(pSchema[cols].name, "vnode");
strcpy(pSchema[cols].name, "vgId");
pSchema[cols].bytes = htons(pShow->bytes[cols]);
cols++;

@ -1243,8 +1243,10 @@ static int32_t mnodeRetrieveVnodes(SShowObj *pShow, char *data, int32_t rows, vo
cols++;

pWrite = data + pShow->offset[cols] * rows + pShow->bytes[cols] * numOfRows;
strcpy(pWrite, syncRole[pVgid->role]);
STR_TO_VARSTR(pWrite, syncRole[pVgid->role]);
cols++;

numOfRows++;
}
}

@ -122,6 +122,7 @@ static int32_t mnodeMnodeActionRestored() {
void *pIter = mnodeGetNextMnode(NULL, &pMnode);
if (pMnode != NULL) {
pMnode->role = TAOS_SYNC_ROLE_MASTER;
pMnode->roleTime = taosGetTimestampMs();
mnodeDecMnodeRef(pMnode);
}
mnodeCancelGetNextMnode(pIter);

@ -496,7 +497,13 @@ static int32_t mnodeGetMnodeMeta(STableMetaMsg *pMeta, SShowObj *pShow, void *pC
strcpy(pSchema[cols].name, "role");
pSchema[cols].bytes = htons(pShow->bytes[cols]);
cols++;

pShow->bytes[cols] = 8;
pSchema[cols].type = TSDB_DATA_TYPE_TIMESTAMP;
strcpy(pSchema[cols].name, "role_time");
pSchema[cols].bytes = htons(pShow->bytes[cols]);
cols++;

pShow->bytes[cols] = 8;
pSchema[cols].type = TSDB_DATA_TYPE_TIMESTAMP;
strcpy(pSchema[cols].name, "create_time");

@ -552,6 +559,10 @@ static int32_t mnodeRetrieveMnodes(SShowObj *pShow, char *data, int32_t rows, vo
STR_WITH_MAXSIZE_TO_VARSTR(pWrite, roles, pShow->bytes[cols]);
cols++;

pWrite = data + pShow->offset[cols] * rows + pShow->bytes[cols] * numOfRows;
*(int64_t *)pWrite = pMnode->roleTime;
cols++;

pWrite = data + pShow->offset[cols] * rows + pShow->bytes[cols] * numOfRows;
*(int64_t *)pWrite = pMnode->createdTime;
cols++;

@ -227,6 +227,7 @@ void sdbUpdateMnodeRoles() {
SMnodeObj *pMnode = mnodeGetMnode(roles.nodeId[i]);
if (pMnode != NULL) {
if (pMnode->role != roles.role[i]) {
pMnode->roleTime = taosGetTimestampMs();
bnNotify();
}

@ -72,36 +72,55 @@ int64_t user_mktime64(const unsigned int year0, const unsigned int mon0,
// ==== mktime() kernel code =================//
static int64_t m_deltaUtc = 0;
void deltaToUtcInitOnce() {
void deltaToUtcInitOnce() {
struct tm tm = {0};

(void)strptime("1970-01-01 00:00:00", (const char *)("%Y-%m-%d %H:%M:%S"), &tm);
m_deltaUtc = (int64_t)mktime(&tm);
//printf("====delta:%lld\n\n", seconds);
//printf("====delta:%lld\n\n", seconds);
return;
}

static int64_t parseFraction(char* str, char** end, int32_t timePrec);
static int32_t parseTimeWithTz(char* timestr, int64_t* time, int32_t timePrec);
static int32_t parseTimeWithTz(char* timestr, int64_t* time, int32_t timePrec, char delim);
static int32_t parseLocaltime(char* timestr, int64_t* time, int32_t timePrec);
static int32_t parseLocaltimeWithDst(char* timestr, int64_t* time, int32_t timePrec);
static char* forwardToTimeStringEnd(char* str);
static bool checkTzPresent(char *str, int32_t len);

static int32_t (*parseLocaltimeFp[]) (char* timestr, int64_t* time, int32_t timePrec) = {
parseLocaltime,
parseLocaltimeWithDst
};
};

int32_t taosGetTimestampSec() { return (int32_t)time(NULL); }

int32_t taosParseTime(char* timestr, int64_t* time, int32_t len, int32_t timePrec, int8_t day_light) {
/* parse datatime string in with tz */
if (strnchr(timestr, 'T', len, false) != NULL) {
return parseTimeWithTz(timestr, time, timePrec);
return parseTimeWithTz(timestr, time, timePrec, 'T');
} else if (checkTzPresent(timestr, len)) {
return parseTimeWithTz(timestr, time, timePrec, 0);
} else {
return (*parseLocaltimeFp[day_light])(timestr, time, timePrec);
}
}

bool checkTzPresent(char *str, int32_t len) {
char *seg = forwardToTimeStringEnd(str);
int32_t seg_len = len - (int32_t)(seg - str);

char *c = &seg[seg_len - 1];
for (int i = 0; i < seg_len; ++i) {
if (*c == 'Z' || *c == 'z' || *c == '+' || *c == '-') {
return true;
}
c--;
}
return false;
}

char* forwardToTimeStringEnd(char* str) {
int32_t i = 0;
int32_t numOfSep = 0;

@ -187,6 +206,13 @@ int32_t parseTimezone(char* str, int64_t* tzOffset) {
i += 2;
}

//return error if there're illegal charaters after min(2 Digits)
char *minStr = &str[i];
if (minStr[1] != '\0' && minStr[2] != '\0') {
return -1;
}

int64_t minute = strnatoi(&str[i], 2);
if (minute > 59) {
return -1;

@ -213,14 +239,23 @@ int32_t parseTimezone(char* str, int64_t* tzOffset) {
* 2013-04-12T15:52:01+0800
* 2013-04-12T15:52:01.123+0800
*/
int32_t parseTimeWithTz(char* timestr, int64_t* time, int32_t timePrec) {
int32_t parseTimeWithTz(char* timestr, int64_t* time, int32_t timePrec, char delim) {

int64_t factor = (timePrec == TSDB_TIME_PRECISION_MILLI) ? 1000 :
(timePrec == TSDB_TIME_PRECISION_MICRO ? 1000000 : 1000000000);
int64_t tzOffset = 0;

struct tm tm = {0};
char* str = strptime(timestr, "%Y-%m-%dT%H:%M:%S", &tm);

char* str;
if (delim == 'T') {
str = strptime(timestr, "%Y-%m-%dT%H:%M:%S", &tm);
} else if (delim == 0) {
str = strptime(timestr, "%Y-%m-%d %H:%M:%S", &tm);
} else {
str = NULL;
}

if (str == NULL) {
return -1;
}

@ -236,7 +271,7 @@ int32_t parseTimeWithTz(char* timestr, int64_t* time, int32_t timePrec) {
int64_t fraction = 0;
str = forwardToTimeStringEnd(timestr);

if (str[0] == 'Z' || str[0] == 'z') {
if ((str[0] == 'Z' || str[0] == 'z') && str[1] == '\0') {
/* utc time, no millisecond, return directly*/
*time = seconds * factor;
} else if (str[0] == '.') {

@ -250,6 +285,8 @@ int32_t parseTimeWithTz(char* timestr, int64_t* time, int32_t timePrec) {
char seg = str[0];
if (seg != 'Z' && seg != 'z' && seg != '+' && seg != '-') {
return -1;
} else if ((seg == 'Z' || seg == 'z') && str[1] != '\0') {
return -1;
} else if (seg == '+' || seg == '-') {
// parse the timezone
if (parseTimezone(str, &tzOffset) == -1) {

@ -252,8 +252,10 @@ typedef struct SQueryAttr {
int32_t numOfFilterCols;
int64_t* fillVal;
SOrderedPrjQueryInfo prjInfo; // limit value for each vgroup, only available in global order projection query.
SSingleColumnFilterInfo* pFilterInfo;

SSingleColumnFilterInfo* pFilterInfo;
SFilterInfo *pFilters;

void* tsdb;
SMemRef memRef;
STableGroupInfo tableGroupInfo; // table <tid, last_key> list SArray<STableKeyInfo>

@ -384,6 +386,7 @@ typedef struct SQInfo {
typedef struct SQueryParam {
char *sql;
char *tagCond;
char *colCond;
char *tbnameCond;
char *prevResult;
SArray *pTableIdList;

@ -392,6 +395,8 @@ typedef struct SQueryParam {
SExprInfo *pExprs;
SExprInfo *pSecExprs;

SFilterInfo *pFilters;

SColIndex *pGroupColIndex;
SColumnInfo *pTagColumnInfo;
SGroupbyExpr *pGroupbyExpr;

@ -577,6 +582,7 @@ SSDataBlock* doSLimit(void* param, bool* newgroup);
int32_t doCreateFilterInfo(SColumnInfo* pCols, int32_t numOfCols, int32_t numOfFilterCols, SSingleColumnFilterInfo** pFilterInfo, uint64_t qId);
void doSetFilterColumnInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, SSDataBlock* pBlock);
void doSetFilterColInfo(SFilterInfo *pFilters, SSDataBlock* pBlock);
bool doFilterDataBlock(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, int32_t numOfRows, int8_t* p);
void doCompactSDataBlock(SSDataBlock* pBlock, int32_t numOfRows, int8_t* p);

@ -598,9 +604,11 @@ int32_t createQueryFunc(SQueriedTableInfo* pTableInfo, int32_t numOfOutput, SExp
int32_t createIndirectQueryFuncExprFromMsg(SQueryTableMsg *pQueryMsg, int32_t numOfOutput, SExprInfo **pExprInfo,
SSqlExpr **pExpr, SExprInfo *prevExpr, SUdfInfo *pUdfInfo);

int32_t createQueryFilter(char *data, uint16_t len, SFilterInfo** pFilters);

SGroupbyExpr *createGroupbyExprFromMsg(SQueryTableMsg *pQueryMsg, SColIndex *pColIndex, int32_t *code);
SQInfo *createQInfoImpl(SQueryTableMsg *pQueryMsg, SGroupbyExpr *pGroupbyExpr, SExprInfo *pExprs,
SExprInfo *pSecExprs, STableGroupInfo *pTableGroupInfo, SColumnInfo* pTagCols, int32_t vgId, char* sql, uint64_t qId, SUdfInfo* pUdfInfo);
SExprInfo *pSecExprs, STableGroupInfo *pTableGroupInfo, SColumnInfo* pTagCols, SFilterInfo* pFilters, int32_t vgId, char* sql, uint64_t qId, SUdfInfo* pUdfInfo);

int32_t initQInfo(STsBufInfo* pTsBufInfo, void* tsdb, void* sourceOptr, SQInfo* pQInfo, SQueryParam* param, char* start,
int32_t prevResultLen, void* merger);

@ -0,0 +1,324 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

#ifndef TDENGINE_QFILTER_H
#define TDENGINE_QFILTER_H

#ifdef __cplusplus
extern "C" {
#endif

#include "texpr.h"
#include "hash.h"
#include "tname.h"

#define FILTER_DEFAULT_GROUP_SIZE 4
#define FILTER_DEFAULT_UNIT_SIZE 4
#define FILTER_DEFAULT_FIELD_SIZE 4
#define FILTER_DEFAULT_VALUE_SIZE 4
#define FILTER_DEFAULT_GROUP_UNIT_SIZE 2

#define FILTER_DUMMY_EMPTY_OPTR 127
#define FILTER_DUMMY_RANGE_OPTR 126

#define MAX_NUM_STR_SIZE 40

enum {
FLD_TYPE_COLUMN = 1,
FLD_TYPE_VALUE = 2,
FLD_TYPE_MAX = 3,
FLD_DESC_NO_FREE = 4,
FLD_DATA_NO_FREE = 8,
FLD_DATA_IS_HASH = 16,
};

enum {
MR_ST_START = 1,
MR_ST_FIN = 2,
MR_ST_ALL = 4,
MR_ST_EMPTY = 8,
};

enum {
RANGE_FLG_EXCLUDE = 1,
RANGE_FLG_INCLUDE = 2,
RANGE_FLG_NULL = 4,
};

enum {
FI_OPTION_NO_REWRITE = 1,
FI_OPTION_TIMESTAMP = 2,
FI_OPTION_NEED_UNIQE = 4,
};

enum {
FI_STATUS_ALL = 1,
FI_STATUS_EMPTY = 2,
FI_STATUS_REWRITE = 4,
FI_STATUS_CLONED = 8,
};

enum {
RANGE_TYPE_UNIT = 1,
RANGE_TYPE_VAR_HASH = 2,
RANGE_TYPE_MR_CTX = 3,
};

typedef struct OptrStr {
uint16_t optr;
char *str;
} OptrStr;

typedef struct SFilterRange {
int64_t s;
int64_t e;
char sflag;
char eflag;
} SFilterRange;

typedef struct SFilterColRange {
uint16_t idx; //column field idx
bool isNull;
bool notNull;
bool isRange;
SFilterRange ra;
} SFilterColRange;

typedef bool (*rangeCompFunc) (const void *, const void *, const void *, const void *, __compar_fn_t);
typedef int32_t(*filter_desc_compare_func)(const void *, const void *);
typedef bool(*filter_exec_func)(void *, int32_t, int8_t*);

typedef struct SFilterRangeCompare {
int64_t s;
int64_t e;
rangeCompFunc func;
} SFilterRangeCompare;

typedef struct SFilterRangeNode {
struct SFilterRangeNode* prev;
struct SFilterRangeNode* next;
union {
SFilterRange ra;
SFilterRangeCompare rc;
};
} SFilterRangeNode;

typedef struct SFilterRangeCtx {
int32_t type;
int32_t options;
int8_t status;
bool isnull;
bool notnull;
bool isrange;
int16_t colId;
__compar_fn_t pCompareFunc;
SFilterRangeNode *rf; //freed
SFilterRangeNode *rs;
} SFilterRangeCtx ;

typedef struct SFilterVarCtx {
int32_t type;
int32_t options;
int8_t status;
bool isnull;
bool notnull;
bool isrange;
SHashObj *wild;
SHashObj *value;
} SFilterVarCtx;

typedef struct SFilterField {
uint16_t flag;
void* desc;
void* data;
} SFilterField;

typedef struct SFilterFields {
uint16_t size;
uint16_t num;
SFilterField *fields;
} SFilterFields;

typedef struct SFilterFieldId {
uint16_t type;
uint16_t idx;
} SFilterFieldId;

typedef struct SFilterGroup {
uint16_t unitSize;
uint16_t unitNum;
uint16_t *unitIdxs;
uint8_t *unitFlags; // !unit result
} SFilterGroup;

typedef struct SFilterColInfo {
uint8_t type;
int32_t dataType;
void *info;
} SFilterColInfo;

typedef struct SFilterGroupCtx {
uint16_t colNum;
uint16_t *colIdx;
SFilterColInfo *colInfo;
} SFilterGroupCtx;

typedef struct SFilterColCtx {
uint16_t colIdx;
void* ctx;
} SFilterColCtx;

typedef struct SFilterCompare {
uint8_t type;
uint8_t optr;
uint8_t optr2;
} SFilterCompare;

typedef struct SFilterUnit {
SFilterCompare compare;
SFilterFieldId left;
SFilterFieldId right;
SFilterFieldId right2;
} SFilterUnit;

typedef struct SFilterComUnit {
void *colData;
void *valData;
void *valData2;
uint16_t dataSize;
uint8_t dataType;
uint8_t optr;
int8_t func;
int8_t rfunc;
} SFilterComUnit;

typedef struct SFilterPCtx {
SHashObj *valHash;
SHashObj *unitHash;
} SFilterPCtx;

typedef struct SFilterInfo {
uint32_t options;
uint32_t status;
uint16_t unitSize;
uint16_t unitNum;
uint16_t groupNum;
uint16_t colRangeNum;
SFilterFields fields[FLD_TYPE_MAX];
SFilterGroup *groups;
uint16_t *cgroups;
SFilterUnit *units;
SFilterComUnit *cunits;
uint8_t *unitRes; // result
uint8_t *unitFlags; // got result
SFilterRangeCtx **colRange;
filter_exec_func func;

SFilterPCtx pctx;
} SFilterInfo;

#define COL_FIELD_SIZE (sizeof(SFilterField) + 2 * sizeof(int64_t))

#define FILTER_NO_MERGE_DATA_TYPE(t) ((t) == TSDB_DATA_TYPE_BINARY || (t) == TSDB_DATA_TYPE_NCHAR)
#define FILTER_NO_MERGE_OPTR(o) ((o) == TSDB_RELATION_ISNULL || (o) == TSDB_RELATION_NOTNULL || (o) == FILTER_DUMMY_EMPTY_OPTR)

#define MR_EMPTY_RES(ctx) (ctx->rs == NULL)

#define SET_AND_OPTR(ctx, o) do {if (o == TSDB_RELATION_ISNULL) { (ctx)->isnull = true; } else if (o == TSDB_RELATION_NOTNULL) { if (!(ctx)->isrange) { (ctx)->notnull = true; } } else if (o != FILTER_DUMMY_EMPTY_OPTR) { (ctx)->isrange = true; (ctx)->notnull = false; } } while (0)
#define SET_OR_OPTR(ctx,o) do {if (o == TSDB_RELATION_ISNULL) { (ctx)->isnull = true; } else if (o == TSDB_RELATION_NOTNULL) { (ctx)->notnull = true; (ctx)->isrange = false; } else if (o != FILTER_DUMMY_EMPTY_OPTR) { if (!(ctx)->notnull) { (ctx)->isrange = true; } } } while (0)
#define CHK_OR_OPTR(ctx) ((ctx)->isnull == true && (ctx)->notnull == true)
#define CHK_AND_OPTR(ctx) ((ctx)->isnull == true && (((ctx)->notnull == true) || ((ctx)->isrange == true)))

#define FILTER_GET_FLAG(st, f) (st & f)
#define FILTER_SET_FLAG(st, f) st |= (f)
#define FILTER_CLR_FLAG(st, f) st &= (~f)

#define SIMPLE_COPY_VALUES(dst, src) *((int64_t *)dst) = *((int64_t *)src)
#define FILTER_PACKAGE_UNIT_HASH_KEY(v, optr, idx1, idx2) do { char *_t = (char *)v; _t[0] = optr; *(uint16_t *)(_t + 1) = idx1; *(uint16_t *)(_t + 3) = idx2; } while (0)
#define FILTER_GREATER(cr,sflag,eflag) ((cr > 0) || ((cr == 0) && (FILTER_GET_FLAG(sflag,RANGE_FLG_EXCLUDE) || FILTER_GET_FLAG(eflag,RANGE_FLG_EXCLUDE))))
#define FILTER_COPY_RA(dst, src) do { (dst)->sflag = (src)->sflag; (dst)->eflag = (src)->eflag; (dst)->s = (src)->s; (dst)->e = (src)->e; } while (0)

#define RESET_RANGE(ctx, r) do { (r)->next = (ctx)->rf; (ctx)->rf = r; } while (0)
#define FREE_RANGE(ctx, r) do { if ((r)->prev) { (r)->prev->next = (r)->next; } else { (ctx)->rs = (r)->next;} if ((r)->next) { (r)->next->prev = (r)->prev; } RESET_RANGE(ctx, r); } while (0)
#define FREE_FROM_RANGE(ctx, r) do { SFilterRangeNode *_r = r; if ((_r)->prev) { (_r)->prev->next = NULL; } else { (ctx)->rs = NULL;} while (_r) {SFilterRangeNode *n = (_r)->next; RESET_RANGE(ctx, _r); _r = n; } } while (0)
#define INSERT_RANGE(ctx, r, ra) do { SFilterRangeNode *n = filterNewRange(ctx, ra); n->prev = (r)->prev; if ((r)->prev) { (r)->prev->next = n; } else { (ctx)->rs = n; } (r)->prev = n; n->next = r; } while (0)
#define APPEND_RANGE(ctx, r, ra) do { SFilterRangeNode *n = filterNewRange(ctx, ra); n->prev = (r); if (r) { (r)->next = n; } else { (ctx)->rs = n; } } while (0)

#define ERR_RET(c) do { int32_t _code = c; if (_code != TSDB_CODE_SUCCESS) { return _code; } } while (0)
#define ERR_LRET(c,...) do { int32_t _code = c; if (_code != TSDB_CODE_SUCCESS) { qError(__VA_ARGS__); return _code; } } while (0)
#define ERR_JRET(c) do { code = c; if (code != TSDB_CODE_SUCCESS) { goto _return; } } while (0)

#define CHK_RETV(c) do { if (c) { return; } } while (0)
#define CHK_RET(c, r) do { if (c) { return r; } } while (0)
#define CHK_JMP(c) do { if (c) { goto _return; } } while (0)
#define CHK_LRETV(c,...) do { if (c) { qError(__VA_ARGS__); return; } } while (0)
#define CHK_LRET(c, r,...) do { if (c) { qError(__VA_ARGS__); return r; } } while (0)

#define FILTER_GET_FIELD(i, id) (&((i)->fields[(id).type].fields[(id).idx]))
#define FILTER_GET_COL_FIELD(i, idx) (&((i)->fields[FLD_TYPE_COLUMN].fields[idx]))
#define FILTER_GET_COL_FIELD_TYPE(fi) (((SSchema *)((fi)->desc))->type)
#define FILTER_GET_COL_FIELD_SIZE(fi) (((SSchema *)((fi)->desc))->bytes)
#define FILTER_GET_COL_FIELD_DESC(fi) ((SSchema *)((fi)->desc))
#define FILTER_GET_COL_FIELD_DATA(fi, ri) ((char *)(fi)->data + ((SSchema *)((fi)->desc))->bytes * (ri))
#define FILTER_GET_VAL_FIELD_TYPE(fi) (((tVariant *)((fi)->desc))->nType)
#define FILTER_GET_VAL_FIELD_DATA(fi) ((char *)(fi)->data)
#define FILTER_GET_TYPE(fl) ((fl) & FLD_TYPE_MAX)

#define FILTER_GROUP_UNIT(i, g, uid) ((i)->units + (g)->unitIdxs[uid])
#define FILTER_UNIT_LEFT_FIELD(i, u) FILTER_GET_FIELD(i, (u)->left)
#define FILTER_UNIT_RIGHT_FIELD(i, u) FILTER_GET_FIELD(i, (u)->right)
#define FILTER_UNIT_DATA_TYPE(u) ((u)->compare.type)
#define FILTER_UNIT_COL_DESC(i, u) FILTER_GET_COL_FIELD_DESC(FILTER_UNIT_LEFT_FIELD(i, u))
#define FILTER_UNIT_COL_DATA(i, u, ri) FILTER_GET_COL_FIELD_DATA(FILTER_UNIT_LEFT_FIELD(i, u), ri)
#define FILTER_UNIT_COL_SIZE(i, u) FILTER_GET_COL_FIELD_SIZE(FILTER_UNIT_LEFT_FIELD(i, u))
#define FILTER_UNIT_VAL_DATA(i, u) FILTER_GET_VAL_FIELD_DATA(FILTER_UNIT_RIGHT_FIELD(i, u))
#define FILTER_UNIT_COL_IDX(u) ((u)->left.idx)
#define FILTER_UNIT_OPTR(u) ((u)->compare.optr)
#define FILTER_UNIT_COMP_FUNC(u) ((u)->compare.func)

#define FILTER_UNIT_CLR_F(i) memset((i)->unitFlags, 0, (i)->unitNum * sizeof(*info->unitFlags))
#define FILTER_UNIT_SET_F(i, idx) (i)->unitFlags[idx] = 1
#define FILTER_UNIT_GET_F(i, idx) ((i)->unitFlags[idx])
#define FILTER_UNIT_GET_R(i, idx) ((i)->unitRes[idx])
#define FILTER_UNIT_SET_R(i, idx, v) (i)->unitRes[idx] = (v)

#define FILTER_PUSH_UNIT(colInfo, u) do { (colInfo).type = RANGE_TYPE_UNIT; (colInfo).dataType = FILTER_UNIT_DATA_TYPE(u);taosArrayPush((SArray *)((colInfo).info), &u);} while (0)
#define FILTER_PUSH_VAR_HASH(colInfo, ha) do { (colInfo).type = RANGE_TYPE_VAR_HASH; (colInfo).info = ha;} while (0)
#define FILTER_PUSH_CTX(colInfo, ctx) do { (colInfo).type = RANGE_TYPE_MR_CTX; (colInfo).info = ctx;} while (0)

#define FILTER_COPY_IDX(dst, src, n) do { *(dst) = malloc(sizeof(uint16_t) * n); memcpy(*(dst), src, sizeof(uint16_t) * n);} while (0)

#define FILTER_ADD_CTX_TO_GRES(gres, idx, ctx) do { if ((gres)->colCtxs == NULL) { (gres)->colCtxs = taosArrayInit(gres->colNum, sizeof(SFilterColCtx)); } SFilterColCtx cCtx = {idx, ctx}; taosArrayPush((gres)->colCtxs, &cCtx); } while (0)
|
||||
|
||||
|
||||
#define FILTER_ALL_RES(i) FILTER_GET_FLAG((i)->status, FI_STATUS_ALL)
|
||||
#define FILTER_EMPTY_RES(i) FILTER_GET_FLAG((i)->status, FI_STATUS_EMPTY)
|
||||
|
||||
|
||||
extern int32_t filterInitFromTree(tExprNode* tree, SFilterInfo **pinfo, uint32_t options);
|
||||
extern bool filterExecute(SFilterInfo *info, int32_t numOfRows, int8_t* p);
|
||||
extern int32_t filterSetColFieldData(SFilterInfo *info, int16_t colId, void *data);
|
||||
extern int32_t filterGetTimeRange(SFilterInfo *info, STimeWindow *win);
|
||||
extern int32_t filterConverNcharColumns(SFilterInfo* pFilterInfo, int32_t rows, bool *gotNchar);
|
||||
extern int32_t filterFreeNcharColumns(SFilterInfo* pFilterInfo);
|
||||
extern void filterFreeInfo(SFilterInfo *info);
|
||||
extern bool filterRangeExecute(SFilterInfo *info, SDataStatis *pDataStatis, int32_t numOfCols, int32_t numOfRows);
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
#endif
|
||||
|
||||
#endif // TDENGINE_QFILTER_H
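The extern declarations above are the public surface of the new filter module. A minimal usage sketch of the intended call sequence follows; it is not engine code, `pExprTree`, `colId`, `colData`, `numOfRows` and `rowMask` are assumed inputs from the query layer, and error handling is trimmed:

```c
// Sketch only: build a filter from a parsed WHERE expression, bind one column's
// data from the current data block, evaluate it, and release it.
static int32_t runColumnFilter(tExprNode *pExprTree, int16_t colId, void *colData,
                               int32_t numOfRows, int8_t *rowMask) {
  SFilterInfo *pFilters = NULL;

  // build the internal unit/group representation from the expression tree
  int32_t code = filterInitFromTree(pExprTree, &pFilters, 0);
  if (code != TSDB_CODE_SUCCESS) {
    return code;
  }

  // bind the raw column data of the current data block to the filter column
  filterSetColFieldData(pFilters, colId, colData);

  // evaluate the filter: rowMask[i] is set for rows that pass, and the return
  // value reports whether every row qualified
  bool allRowsQualified = filterExecute(pFilters, numOfRows, rowMask);
  (void)allRowsQualified;

  filterFreeInfo(pFilters);
  return TSDB_CODE_SUCCESS;
}
```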
@@ -3,6 +3,7 @@
#include "tsdb.h" //todo tsdb should not be here
#include "qSqlparser.h"
#include "qFilter.h"

typedef struct SFieldInfo {
  int16_t numOfOutput;   // number of column in result

@@ -16,6 +17,14 @@ typedef struct SCond {
  char *  cond;
} SCond;

typedef struct STblCond {
  uint64_t uid;
  int16_t  idx;  //table index
  int32_t  len;  // length of tag query condition data
  char *   cond;
} STblCond;

typedef struct SJoinNode {
  uint64_t uid;
  int16_t  tagColId;

@@ -89,6 +98,11 @@ typedef struct STableMetaInfo {
struct SQInfo;      // global merge operator
struct SQueryAttr;  // query object

typedef struct STableFilter {
  uint64_t    uid;
  SFilterInfo info;
} STableFilter;

typedef struct SQueryInfo {
  int16_t  command;  // the command may be different for each subclause, so keep it seperately.
  uint32_t type;     // query/insert type

@@ -106,8 +120,11 @@ typedef struct SQueryInfo {
  SLimitVal        slimit;
  STagCond         tagCond;

  SArray *         colCond;

  SOrderVal        order;
  int16_t          numOfTables;
  int16_t          curTableIdx;
  STableMetaInfo **pTableMetaInfo;
  struct STSBuf   *tsBuf;
@@ -80,7 +80,7 @@ cmd ::= SHOW SCORES.   { setShowOptions(pInfo, TSDB_MGMT_TABLE_SCORES, 0, 0);
cmd ::= SHOW GRANTS.   { setShowOptions(pInfo, TSDB_MGMT_TABLE_GRANTS, 0, 0);   }

cmd ::= SHOW VNODES.                { setShowOptions(pInfo, TSDB_MGMT_TABLE_VNODES, 0, 0); }
cmd ::= SHOW VNODES IPTOKEN(X).     { setShowOptions(pInfo, TSDB_MGMT_TABLE_VNODES, &X, 0); }
cmd ::= SHOW VNODES ids(X).         { setShowOptions(pInfo, TSDB_MGMT_TABLE_VNODES, &X, 0); }

%type dbPrefix {SStrToken}
@@ -4079,9 +4079,9 @@ void block_func_merge(SQLFunctionCtx* pCtx) {
  STableBlockDist info = {0};
  int32_t len = *(int32_t*) pCtx->pInput;
  blockDistInfoFromBinary(((char*)pCtx->pInput) + sizeof(int32_t), len, &info);

  SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx);
  mergeTableBlockDist(pResInfo, &info);
  taosArrayDestroy(info.dataBlockInfos);

  pResInfo->numOfRes   = 1;
  pResInfo->hasResult  = DATA_SET_FLAG;
@ -2718,77 +2718,14 @@ static void getIntermediateBufInfo(SQueryRuntimeEnv* pRuntimeEnv, int32_t* ps, i
|
|||
|
||||
#define IS_PREFILTER_TYPE(_t) ((_t) != TSDB_DATA_TYPE_BINARY && (_t) != TSDB_DATA_TYPE_NCHAR)
|
||||
|
||||
static bool doFilterByBlockStatistics(SQueryRuntimeEnv* pRuntimeEnv, SDataStatis *pDataStatis, SQLFunctionCtx *pCtx, int32_t numOfRows) {
|
||||
static FORCE_INLINE bool doFilterByBlockStatistics(SQueryRuntimeEnv* pRuntimeEnv, SDataStatis *pDataStatis, SQLFunctionCtx *pCtx, int32_t numOfRows) {
|
||||
SQueryAttr* pQueryAttr = pRuntimeEnv->pQueryAttr;
|
||||
|
||||
if (pDataStatis == NULL || pQueryAttr->numOfFilterCols == 0) {
|
||||
if (pDataStatis == NULL || pQueryAttr->pFilters == NULL) {
|
||||
return true;
|
||||
}
|
||||
bool ret = true;
|
||||
for (int32_t k = 0; k < pQueryAttr->numOfFilterCols; ++k) {
|
||||
SSingleColumnFilterInfo *pFilterInfo = &pQueryAttr->pFilterInfo[k];
|
||||
|
||||
int32_t index = -1;
|
||||
for(int32_t i = 0; i < pQueryAttr->numOfCols; ++i) {
|
||||
if (pDataStatis[i].colId == pFilterInfo->info.colId) {
|
||||
index = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
// no statistics data, load the true data block
|
||||
if (index == -1) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// not support pre-filter operation on binary/nchar data type
|
||||
if (!IS_PREFILTER_TYPE(pFilterInfo->info.type)) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// all data in current column are NULL, no need to check its boundary value
|
||||
if (pDataStatis[index].numOfNull == numOfRows) {
|
||||
|
||||
// if isNULL query exists, load the null data column
|
||||
for (int32_t j = 0; j < pFilterInfo->numOfFilters; ++j) {
|
||||
SColumnFilterElem *pFilterElem = &pFilterInfo->pFilters[j];
|
||||
if (pFilterElem->fp == isNullOperator) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
continue;
|
||||
}
|
||||
|
||||
SDataStatis* pDataBlockst = &pDataStatis[index];
|
||||
|
||||
if (pFilterInfo->info.type == TSDB_DATA_TYPE_FLOAT) {
|
||||
float minval = (float)(*(double *)(&pDataBlockst->min));
|
||||
float maxval = (float)(*(double *)(&pDataBlockst->max));
|
||||
|
||||
for (int32_t i = 0; i < pFilterInfo->numOfFilters; ++i) {
|
||||
if (pFilterInfo->pFilters[i].filterInfo.lowerRelOptr == TSDB_RELATION_IN) {
|
||||
continue;
|
||||
}
|
||||
ret &= pFilterInfo->pFilters[i].fp(&pFilterInfo->pFilters[i], (char *)&minval, (char *)&maxval, TSDB_DATA_TYPE_FLOAT);
|
||||
if (ret == false) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
for (int32_t i = 0; i < pFilterInfo->numOfFilters; ++i) {
|
||||
if (pFilterInfo->pFilters[i].filterInfo.lowerRelOptr == TSDB_RELATION_IN) {
|
||||
continue;
|
||||
}
|
||||
ret &= pFilterInfo->pFilters[i].fp(&pFilterInfo->pFilters[i], (char *)&pDataBlockst->min, (char *)&pDataBlockst->max, pFilterInfo->info.type);
|
||||
if (ret == false) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return ret;
|
||||
return filterRangeExecute(pQueryAttr->pFilters, pDataStatis, pQueryAttr->numOfCols, numOfRows);
|
||||
}
|
||||
|
||||
static bool overlapWithTimeWindow(SQueryAttr* pQueryAttr, SDataBlockInfo* pBlockInfo) {
|
||||
|
@@ -3007,6 +2944,49 @@ void filterRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSingleColumnFilterInf
  tfree(p);
}

void filterColRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSDataBlock* pBlock, bool ascQuery) {
  int32_t numOfRows = pBlock->info.rows;

  int8_t *p = calloc(numOfRows, sizeof(int8_t));
  bool    all = true;

  if (pRuntimeEnv->pTsBuf != NULL) {
    SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0);

    TSKEY* k = (TSKEY*) pColInfoData->pData;
    for (int32_t i = 0; i < numOfRows; ++i) {
      int32_t offset = ascQuery? i:(numOfRows - i - 1);
      int32_t ret = doTSJoinFilter(pRuntimeEnv, k[offset], ascQuery);
      if (ret == TS_JOIN_TAG_NOT_EQUALS) {
        break;
      } else if (ret == TS_JOIN_TS_NOT_EQUALS) {
        all = false;
        continue;
      } else {
        assert(ret == TS_JOIN_TS_EQUAL);
        p[offset] = true;
      }

      if (!tsBufNextPos(pRuntimeEnv->pTsBuf)) {
        break;
      }
    }

    // save the cursor status
    pRuntimeEnv->current->cur = tsBufGetCursor(pRuntimeEnv->pTsBuf);
  } else {
    all = filterExecute(pRuntimeEnv->pQueryAttr->pFilters, numOfRows, p);
  }

  if (!all) {
    doCompactSDataBlock(pBlock, numOfRows, p);
  }

  tfree(p);
}

static SColumnInfo* doGetTagColumnInfoById(SColumnInfo* pTagColList, int32_t numOfTags, int16_t colId);
static void doSetTagValueInParam(void* pTable, int32_t tagColId, tVariant *tag, int16_t type, int16_t bytes);
@@ -3048,6 +3028,15 @@ void doSetFilterColumnInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFi
  }
}

void doSetFilterColInfo(SFilterInfo * pFilters, SSDataBlock* pBlock) {
  for (int32_t j = 0; j < pBlock->info.numOfCols; ++j) {
    SColumnInfoData* pColInfo = taosArrayGet(pBlock->pDataBlock, j);

    filterSetColFieldData(pFilters, pColInfo->info.colId, pColInfo->pData);
  }
}

int32_t loadDataBlockOnDemand(SQueryRuntimeEnv* pRuntimeEnv, STableScanInfo* pTableScanInfo, SSDataBlock* pBlock,
                              uint32_t* status) {
  *status = BLK_DATA_NO_NEEDED;
@@ -3084,7 +3073,7 @@ int32_t loadDataBlockOnDemand(SQueryRuntimeEnv* pRuntimeEnv, STableScanInfo* pTa

  // Calculate all time windows that are overlapping or contain current data block.
  // If current data block is contained by all possible time window, do not load current data block.
  if (pQueryAttr->numOfFilterCols > 0 || pQueryAttr->groupbyColumn || pQueryAttr->sw.gap > 0 ||
  if (pQueryAttr->pFilters || pQueryAttr->groupbyColumn || pQueryAttr->sw.gap > 0 ||
      (QUERY_IS_INTERVAL_QUERY(pQueryAttr) && overlapWithTimeWindow(pQueryAttr, &pBlock->info))) {
    (*status) = BLK_DATA_ALL_NEEDED;
  }
@@ -3198,9 +3187,12 @@ int32_t loadDataBlockOnDemand(SQueryRuntimeEnv* pRuntimeEnv, STableScanInfo* pTa
      return terrno;
    }

    doSetFilterColumnInfo(pQueryAttr->pFilterInfo, pQueryAttr->numOfFilterCols, pBlock);
    if (pQueryAttr->numOfFilterCols > 0 || pRuntimeEnv->pTsBuf != NULL) {
      filterRowsInDataBlock(pRuntimeEnv, pQueryAttr->pFilterInfo, pQueryAttr->numOfFilterCols, pBlock, ascQuery);
    if (pQueryAttr->pFilters != NULL) {
      doSetFilterColInfo(pQueryAttr->pFilters, pBlock);
    }

    if (pQueryAttr->pFilters != NULL || pRuntimeEnv->pTsBuf != NULL) {
      filterColRowsInDataBlock(pRuntimeEnv, pBlock, ascQuery);
    }
  }
@ -7253,7 +7245,8 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) {
|
|||
pQueryMsg->numOfCols = htons(pQueryMsg->numOfCols);
|
||||
pQueryMsg->numOfOutput = htons(pQueryMsg->numOfOutput);
|
||||
pQueryMsg->numOfGroupCols = htons(pQueryMsg->numOfGroupCols);
|
||||
pQueryMsg->tagCondLen = htonl(pQueryMsg->tagCondLen);
|
||||
pQueryMsg->tagCondLen = htons(pQueryMsg->tagCondLen);
|
||||
pQueryMsg->colCondLen = htons(pQueryMsg->colCondLen);
|
||||
pQueryMsg->tsBuf.tsOffset = htonl(pQueryMsg->tsBuf.tsOffset);
|
||||
pQueryMsg->tsBuf.tsLen = htonl(pQueryMsg->tsBuf.tsLen);
|
||||
pQueryMsg->tsBuf.tsNumOfBlocks = htonl(pQueryMsg->tsBuf.tsNumOfBlocks);
|
||||
|
@ -7284,7 +7277,7 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) {
|
|||
pColInfo->colId = htons(pColInfo->colId);
|
||||
pColInfo->type = htons(pColInfo->type);
|
||||
pColInfo->bytes = htons(pColInfo->bytes);
|
||||
pColInfo->flist.numOfFilters = htons(pColInfo->flist.numOfFilters);
|
||||
pColInfo->flist.numOfFilters = 0;
|
||||
|
||||
if (!isValidDataType(pColInfo->type)) {
|
||||
qDebug("qmsg:%p, invalid data type in source column, index:%d, type:%d", pQueryMsg, col, pColInfo->type);
|
||||
|
@ -7292,6 +7285,7 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) {
|
|||
goto _cleanup;
|
||||
}
|
||||
|
||||
/*
|
||||
int32_t numOfFilters = pColInfo->flist.numOfFilters;
|
||||
if (numOfFilters > 0) {
|
||||
pColInfo->flist.filterInfo = calloc(numOfFilters, sizeof(SColumnFilterInfo));
|
||||
|
@ -7305,8 +7299,21 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) {
|
|||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _cleanup;
|
||||
}
|
||||
*/
|
||||
}
|
||||
|
||||
if (pQueryMsg->colCondLen > 0) {
|
||||
param->colCond = calloc(1, pQueryMsg->colCondLen);
|
||||
if (param->colCond == NULL) {
|
||||
code = TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
goto _cleanup;
|
||||
}
|
||||
|
||||
memcpy(param->colCond, pMsg, pQueryMsg->colCondLen);
|
||||
pMsg += pQueryMsg->colCondLen;
|
||||
}
|
||||
|
||||
|
||||
param->tableScanOperator = pQueryMsg->tableScanOperator;
|
||||
param->pExpr = calloc(pQueryMsg->numOfOutput, POINTER_BYTES);
|
||||
if (param->pExpr == NULL) {
|
||||
|
@@ -7880,6 +7887,28 @@ int32_t createQueryFunc(SQueriedTableInfo* pTableInfo, int32_t numOfOutput, SExp
  return TSDB_CODE_SUCCESS;
}

int32_t createQueryFilter(char *data, uint16_t len, SFilterInfo** pFilters) {
  tExprNode* expr = NULL;

  TRY(TSDB_MAX_TAG_CONDITIONS) {
    expr = exprTreeFromBinary(data, len);
  } CATCH( code ) {
    CLEANUP_EXECUTE();
    return code;
  } END_TRY

  if (expr == NULL) {
    qError("failed to create expr tree");
    return TSDB_CODE_QRY_APP_ERROR;
  }

  int32_t ret = filterInitFromTree(expr, pFilters, 0);
  tExprTreeDestroy(expr, NULL);

  return ret;
}

// todo refactor
int32_t createIndirectQueryFuncExprFromMsg(SQueryTableMsg* pQueryMsg, int32_t numOfOutput, SExprInfo** pExprInfo,
                                           SSqlExpr** pExpr, SExprInfo* prevExpr, SUdfInfo *pUdfInfo) {
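For context, a hedged sketch of how a caller is expected to use createQueryFilter when the query message carries a serialized column condition; the variable names `serializedCond` and `condLen` are illustrative stand-ins for the colCond/colCondLen fields this commit adds, and the real call site appears in queryMain.c further down in this diff:

```c
// Illustrative only: rebuild the expression tree from the serialized blob and
// turn it into an SFilterInfo that the query object keeps for its lifetime.
SFilterInfo *pFilters = NULL;
if (serializedCond != NULL && condLen > 0) {
  int32_t code = createQueryFilter(serializedCond, condLen, &pFilters);
  if (code != TSDB_CODE_SUCCESS) {
    return code;  // deserialization or filter initialization failed
  }
}
// ... pFilters is later stored into SQueryAttr and released with filterFreeInfo()
```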
@ -8029,7 +8058,7 @@ void* doDestroyFilterInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFil
|
|||
|
||||
int32_t createFilterInfo(SQueryAttr* pQueryAttr, uint64_t qId) {
|
||||
for (int32_t i = 0; i < pQueryAttr->numOfCols; ++i) {
|
||||
if (pQueryAttr->tableCols[i].flist.numOfFilters > 0) {
|
||||
if (pQueryAttr->tableCols[i].flist.numOfFilters > 0 && pQueryAttr->tableCols[i].flist.filterInfo != NULL) {
|
||||
pQueryAttr->numOfFilterCols++;
|
||||
}
|
||||
}
|
||||
|
@ -8112,7 +8141,7 @@ FORCE_INLINE bool checkQIdEqual(void *qHandle, uint64_t qId) {
|
|||
}
|
||||
|
||||
SQInfo* createQInfoImpl(SQueryTableMsg* pQueryMsg, SGroupbyExpr* pGroupbyExpr, SExprInfo* pExprs,
|
||||
SExprInfo* pSecExprs, STableGroupInfo* pTableGroupInfo, SColumnInfo* pTagCols, int32_t vgId,
|
||||
SExprInfo* pSecExprs, STableGroupInfo* pTableGroupInfo, SColumnInfo* pTagCols, SFilterInfo* pFilters, int32_t vgId,
|
||||
char* sql, uint64_t qId, SUdfInfo* pUdfInfo) {
|
||||
int16_t numOfCols = pQueryMsg->numOfCols;
|
||||
int16_t numOfOutput = pQueryMsg->numOfOutput;
|
||||
|
@ -8164,7 +8193,8 @@ SQInfo* createQInfoImpl(SQueryTableMsg* pQueryMsg, SGroupbyExpr* pGroupbyExpr, S
|
|||
pQueryAttr->needReverseScan = pQueryMsg->needReverseScan;
|
||||
pQueryAttr->stateWindow = pQueryMsg->stateWindow;
|
||||
pQueryAttr->vgId = vgId;
|
||||
|
||||
pQueryAttr->pFilters = pFilters;
|
||||
|
||||
pQueryAttr->tableCols = calloc(numOfCols, sizeof(SSingleColumnFilterInfo));
|
||||
if (pQueryAttr->tableCols == NULL) {
|
||||
goto _cleanup;
|
||||
|
@ -8197,10 +8227,6 @@ SQInfo* createQInfoImpl(SQueryTableMsg* pQueryMsg, SGroupbyExpr* pGroupbyExpr, S
|
|||
}
|
||||
|
||||
doUpdateExprColumnIndex(pQueryAttr);
|
||||
int32_t ret = createFilterInfo(pQueryAttr, pQInfo->qId);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
goto _cleanup;
|
||||
}
|
||||
|
||||
if (pSecExprs != NULL) {
|
||||
int32_t resultRowSize = 0;
|
||||
|
@ -8315,6 +8341,8 @@ _cleanup_qinfo:
|
|||
|
||||
tfree(pExprs);
|
||||
|
||||
filterFreeInfo(pFilters);
|
||||
|
||||
_cleanup:
|
||||
freeQInfo(pQInfo);
|
||||
return NULL;
|
||||
|
@ -8670,6 +8698,8 @@ void freeQueryAttr(SQueryAttr* pQueryAttr) {
|
|||
taosArrayDestroy(pQueryAttr->pGroupbyExpr->columnInfo);
|
||||
tfree(pQueryAttr->pGroupbyExpr);
|
||||
}
|
||||
|
||||
filterFreeInfo(pQueryAttr->pFilters);
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -423,6 +423,9 @@ tSqlExpr *tSqlExprClone(tSqlExpr *pSrc) {
|
|||
pExpr->pRight = tSqlExprClone(pSrc->pRight);
|
||||
}
|
||||
|
||||
memset(&pExpr->value, 0, sizeof(pExpr->value));
|
||||
tVariantAssign(&pExpr->value, &pSrc->value);
|
||||
|
||||
//we don't clone paramList now because clone is only used for between/and
|
||||
assert(pSrc->Expr.paramList == NULL);
|
||||
return pExpr;
|
||||
|
@ -478,9 +481,7 @@ static void doDestroySqlExprNode(tSqlExpr *pExpr) {
|
|||
return;
|
||||
}
|
||||
|
||||
if (pExpr->tokenId == TK_STRING) {
|
||||
tVariantDestroy(&pExpr->value);
|
||||
}
|
||||
tVariantDestroy(&pExpr->value);
|
||||
|
||||
tSqlExprListDestroy(pExpr->Expr.paramList);
|
||||
free(pExpr);
|
||||
|
@ -954,6 +955,8 @@ void SqlInfoDestroy(SSqlInfo *pInfo) {
|
|||
taosArrayDestroy(pInfo->pAlterInfo->pAddColumns);
|
||||
tfree(pInfo->pAlterInfo->tagData.data);
|
||||
tfree(pInfo->pAlterInfo);
|
||||
} else if (pInfo->type == TSDB_SQL_COMPACT_VNODE) {
|
||||
tSqlExprListDestroy(pInfo->list);
|
||||
} else {
|
||||
if (pInfo->pMiscInfo != NULL) {
|
||||
taosArrayDestroy(pInfo->pMiscInfo->a);
|
||||
|
|
|
@ -103,6 +103,12 @@ int32_t qCreateQueryInfo(void* tsdb, int32_t vgId, SQueryTableMsg* pQueryMsg, qi
|
|||
}
|
||||
}
|
||||
|
||||
if (param.colCond != NULL) {
|
||||
if ((code = createQueryFilter(param.colCond, pQueryMsg->colCondLen, ¶m.pFilters)) != TSDB_CODE_SUCCESS) {
|
||||
goto _over;
|
||||
}
|
||||
}
|
||||
|
||||
param.pGroupbyExpr = createGroupbyExprFromMsg(pQueryMsg, param.pGroupColIndex, &code);
|
||||
if ((param.pGroupbyExpr == NULL && pQueryMsg->numOfGroupCols != 0) || code != TSDB_CODE_SUCCESS) {
|
||||
goto _over;
|
||||
|
@ -162,13 +168,19 @@ int32_t qCreateQueryInfo(void* tsdb, int32_t vgId, SQueryTableMsg* pQueryMsg, qi
|
|||
|
||||
assert(pQueryMsg->stableQuery == isSTableQuery);
|
||||
(*pQInfo) = createQInfoImpl(pQueryMsg, param.pGroupbyExpr, param.pExprs, param.pSecExprs, &tableGroupInfo,
|
||||
param.pTagColumnInfo, vgId, param.sql, qId, param.pUdfInfo);
|
||||
param.pTagColumnInfo, param.pFilters, vgId, param.sql, qId, param.pUdfInfo);
|
||||
|
||||
param.sql = NULL;
|
||||
param.pExprs = NULL;
|
||||
param.pSecExprs = NULL;
|
||||
param.pGroupbyExpr = NULL;
|
||||
param.pTagColumnInfo = NULL;
|
||||
param.pFilters = NULL;
|
||||
|
||||
if ((*pQInfo) == NULL) {
|
||||
code = TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
goto _over;
|
||||
}
|
||||
param.pUdfInfo = NULL;
|
||||
|
||||
code = initQInfo(&pQueryMsg->tsBuf, tsdb, NULL, *pQInfo, ¶m, (char*)pQueryMsg, pQueryMsg->prevResultLen, NULL);
|
||||
|
@ -178,6 +190,8 @@ int32_t qCreateQueryInfo(void* tsdb, int32_t vgId, SQueryTableMsg* pQueryMsg, qi
|
|||
taosArrayDestroy(param.pGroupbyExpr->columnInfo);
|
||||
}
|
||||
|
||||
tfree(param.colCond);
|
||||
|
||||
destroyUdfInfo(param.pUdfInfo);
|
||||
|
||||
taosArrayDestroy(param.pTableIdList);
|
||||
|
@ -190,6 +204,8 @@ int32_t qCreateQueryInfo(void* tsdb, int32_t vgId, SQueryTableMsg* pQueryMsg, qi
|
|||
freeColumnFilterInfo(column->flist.filterInfo, column->flist.numOfFilters);
|
||||
}
|
||||
|
||||
filterFreeInfo(param.pFilters);
|
||||
|
||||
//pQInfo already freed in initQInfo, but *pQInfo may not pointer to null;
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
*pQInfo = NULL;
|
||||
|
|
src/query/src/sql.c: 3095 lines changed (generated parser; file diff suppressed because it is too large)
@@ -27,3 +27,4 @@ SET_SOURCE_FILES_PROPERTIES(./percentileTest.cpp PROPERTIES COMPILE_FLAGS -w)
SET_SOURCE_FILES_PROPERTIES(./resultBufferTest.cpp PROPERTIES COMPILE_FLAGS -w)
SET_SOURCE_FILES_PROPERTIES(./tsBufTest.cpp PROPERTIES COMPILE_FLAGS -w)
SET_SOURCE_FILES_PROPERTIES(./unitTest.cpp PROPERTIES COMPILE_FLAGS -w)
SET_SOURCE_FILES_PROPERTIES(./rangeMergeTest.cpp PROPERTIES COMPILE_FLAGS -w)
@ -0,0 +1,367 @@
|
|||
#include <gtest/gtest.h>
|
||||
#include <iostream>
|
||||
|
||||
#include "qResultbuf.h"
|
||||
#include "taos.h"
|
||||
#include "taosdef.h"
|
||||
|
||||
#include "qFilter.h"
|
||||
|
||||
#pragma GCC diagnostic ignored "-Wunused-function"
|
||||
#pragma GCC diagnostic ignored "-Wunused-variable"
|
||||
|
||||
extern "C" {
|
||||
extern void* filterInitRangeCtx(int32_t type, int32_t options);
|
||||
extern int32_t filterGetRangeNum(void* h, int32_t* num);
|
||||
extern int32_t filterGetRangeRes(void* h, SFilterRange *ra);
|
||||
extern int32_t filterFreeRangeCtx(void* h);
|
||||
extern int32_t filterAddRange(void* h, SFilterRange* ra, int32_t optr);
|
||||
}
|
||||
|
||||
namespace {
|
||||
|
||||
|
||||
void intDataTest() {
|
||||
printf("running %s\n", __FUNCTION__);
|
||||
int32_t asize = 0;
|
||||
SFilterRange ra[10] = {0};
|
||||
int64_t *s =NULL;
|
||||
int64_t *e =NULL;
|
||||
int64_t s0[3] = {-100, 1, 3};
|
||||
int64_t e0[3] = {0 , 2, 4};
|
||||
int64_t s1[3] = {INT64_MIN, 0 , 3};
|
||||
int64_t e1[3] = {100 , 50, 4};
|
||||
int64_t s2[5] = {1 , 3 , 10,30,70};
|
||||
int64_t e2[5] = {10, 100, 20,50,120};
|
||||
int64_t s3[3] = {1 , 20 , 5};
|
||||
int64_t e3[3] = {10, 100, 25};
|
||||
int64_t s4[2] = {10, 0};
|
||||
int64_t e4[2] = {20, 5};
|
||||
int64_t s5[3] = {0, 6 ,7};
|
||||
int64_t e5[3] = {4, 10,20};
|
||||
|
||||
int64_t rs[10];
|
||||
int64_t re[10];
|
||||
|
||||
int32_t num = 0;
|
||||
void *h = NULL;
|
||||
|
||||
s = s0;
|
||||
e = e0;
|
||||
asize = sizeof(s0)/sizeof(s[0]);
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
filterAddRange(h, ra, TSDB_RELATION_AND);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 0);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 3);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, -100);
|
||||
ASSERT_EQ(ra[0].e, 0);
|
||||
ASSERT_EQ(ra[1].s, 1);
|
||||
ASSERT_EQ(ra[1].e, 2);
|
||||
ASSERT_EQ(ra[2].s, 3);
|
||||
ASSERT_EQ(ra[2].e, 4);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, FI_OPTION_TIMESTAMP);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 1);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, -100);
|
||||
ASSERT_EQ(ra[0].e, 4);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
s = s1;
|
||||
e = e1;
|
||||
asize = sizeof(s1)/sizeof(s[0]);
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_AND);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 1);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, 3);
|
||||
ASSERT_EQ(ra[0].e, 4);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 1);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, INT64_MIN);
|
||||
ASSERT_EQ(ra[0].e, 100);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
|
||||
s = s2;
|
||||
e = e2;
|
||||
asize = sizeof(s2)/sizeof(s[0]);
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_AND);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 0);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 1);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, 1);
|
||||
ASSERT_EQ(ra[0].e, 120);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, i % 2 ? TSDB_RELATION_OR : TSDB_RELATION_AND);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 0);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, i % 2 ? TSDB_RELATION_AND : TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 1);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, 70);
|
||||
ASSERT_EQ(ra[0].e, 120);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
s = s3;
|
||||
e = e3;
|
||||
asize = sizeof(s3)/sizeof(s[0]);
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_AND);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 0);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 1);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, 1);
|
||||
ASSERT_EQ(ra[0].e, 100);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
|
||||
|
||||
s = s4;
|
||||
e = e4;
|
||||
asize = sizeof(s4)/sizeof(s[0]);
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_AND);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 0);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 2);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, 0);
|
||||
ASSERT_EQ(ra[0].e, 5);
|
||||
ASSERT_EQ(ra[1].s, 10);
|
||||
ASSERT_EQ(ra[1].e, 20);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
s = s5;
|
||||
e = e5;
|
||||
asize = sizeof(s5)/sizeof(s[0]);
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_AND);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 0);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 2);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, 0);
|
||||
ASSERT_EQ(ra[0].e, 4);
|
||||
ASSERT_EQ(ra[1].s, 6);
|
||||
ASSERT_EQ(ra[1].e, 20);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].s = s[i];
|
||||
ra[0].e = e[i];
|
||||
|
||||
filterAddRange(h, ra, (i == (asize -1)) ? TSDB_RELATION_AND : TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 1);
|
||||
filterGetRangeRes(h, ra);
|
||||
ASSERT_EQ(ra[0].s, 7);
|
||||
ASSERT_EQ(ra[0].e, 10);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
|
||||
int64_t s6[2] = {0, 4};
|
||||
int64_t e6[2] = {4, 6};
|
||||
s = s6;
|
||||
e = e6;
|
||||
asize = sizeof(s6)/sizeof(s[0]);
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].eflag = 1;
|
||||
ra[1].sflag = 4;
|
||||
|
||||
ra[i].s = s[i];
|
||||
ra[i].e = e[i];
|
||||
|
||||
filterAddRange(h, ra + i, TSDB_RELATION_AND);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 0);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
|
||||
|
||||
memset(ra, 0, sizeof(ra));
|
||||
h = filterInitRangeCtx(TSDB_DATA_TYPE_BIGINT, 0);
|
||||
for (int32_t i = 0; i < asize; ++i) {
|
||||
ra[0].eflag = 1;
|
||||
ra[1].sflag = 1;
|
||||
|
||||
ra[i].s = s[i];
|
||||
ra[i].e = e[i];
|
||||
|
||||
filterAddRange(h, ra + i, TSDB_RELATION_OR);
|
||||
}
|
||||
filterGetRangeNum(h, &num);
|
||||
ASSERT_EQ(num, 2);
|
||||
ASSERT_EQ(ra[0].s, 0);
|
||||
ASSERT_EQ(ra[0].e, 4);
|
||||
ASSERT_EQ(ra[0].eflag, 1);
|
||||
ASSERT_EQ(ra[1].s, 4);
|
||||
ASSERT_EQ(ra[1].e, 6);
|
||||
ASSERT_EQ(ra[1].sflag, 1);
|
||||
filterFreeRangeCtx(h);
|
||||
|
||||
}
|
||||
|
||||
|
||||
} // namespace
|
||||
|
||||
TEST(testCase, rangeMergeTest) {
|
||||
intDataTest();
|
||||
|
||||
}
|
|
@ -407,11 +407,56 @@ TEST(testCase, parse_time) {
|
|||
taosParseTime(t41, &time, strlen(t41), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, 852048000999);
|
||||
|
||||
int64_t k = timezone;
|
||||
char t42[] = "1997-1-1T0:0:0.999999999Z";
|
||||
taosParseTime(t42, &time, strlen(t42), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, 852048000999 - timezone * MILLISECOND_PER_SECOND);
|
||||
|
||||
// "%Y-%m-%d %H:%M:%S" format with TimeZone appendix is also treated as legal
|
||||
// and TimeZone will be processed
|
||||
char t60[] = "2017-4-3 1:1:2.980";
|
||||
char t61[] = "2017-4-3 2:1:2.98+9:00";
|
||||
taosParseTime(t60, &time, strlen(t60), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
taosParseTime(t61, &time1, strlen(t61), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, time1);
|
||||
|
||||
char t62[] = "2017-4-3 2:1:2.98+09:00";
|
||||
taosParseTime(t62, &time, strlen(t62), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
taosParseTime(t61, &time1, strlen(t61), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, time1);
|
||||
|
||||
char t63[] = "2017-4-3 2:1:2.98+0900";
|
||||
taosParseTime(t63, &time, strlen(t63), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
taosParseTime(t62, &time1, strlen(t62), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, time1);
|
||||
|
||||
char t64[] = "2017-4-2 17:1:2.98Z";
|
||||
taosParseTime(t63, &time, strlen(t63), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
taosParseTime(t64, &time1, strlen(t64), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, time1);
|
||||
|
||||
// "%Y-%m-%d%H:%M:%S" format with TimeZone appendix is also treated as legal
|
||||
// and TimeZone will be processed
|
||||
char t80[] = "2017-4-51:1:2.980";
|
||||
char t81[] = "2017-4-52:1:2.98+9:00";
|
||||
taosParseTime(t80, &time, strlen(t80), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
taosParseTime(t81, &time1, strlen(t81), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, time1);
|
||||
|
||||
char t82[] = "2017-4-52:1:2.98+09:00";
|
||||
taosParseTime(t82, &time, strlen(t82), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
taosParseTime(t81, &time1, strlen(t81), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, time1);
|
||||
|
||||
char t83[] = "2017-4-52:1:2.98+0900";
|
||||
taosParseTime(t83, &time, strlen(t83), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
taosParseTime(t82, &time1, strlen(t82), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, time1);
|
||||
|
||||
char t84[] = "2017-4-417:1:2.98Z";
|
||||
taosParseTime(t83, &time, strlen(t83), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
taosParseTime(t84, &time1, strlen(t84), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
EXPECT_EQ(time, time1);
|
||||
|
||||
////////////////////////////////////////////////////////////////////
|
||||
// illegal timestamp format
|
||||
char t15[] = "2017-12-33 0:0:0";
|
||||
|
@ -430,8 +475,7 @@ TEST(testCase, parse_time) {
|
|||
EXPECT_EQ(taosParseTime(t19, &time, strlen(t19), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t20[] = "2017-12-31 9:0:0.1+12:99";
|
||||
EXPECT_EQ(taosParseTime(t20, &time, strlen(t20), TSDB_TIME_PRECISION_MILLI, 0), 0);
|
||||
EXPECT_EQ(time, 1514682000100);
|
||||
EXPECT_EQ(taosParseTime(t20, &time, strlen(t20), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t21[] = "2017-12-31T9:0:0.1+12:99";
|
||||
EXPECT_EQ(taosParseTime(t21, &time, strlen(t21), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
@ -441,8 +485,103 @@ TEST(testCase, parse_time) {
|
|||
|
||||
char t23[] = "2017-12-31T9:0:0.1+13:1";
|
||||
EXPECT_EQ(taosParseTime(t23, &time, strlen(t23), TSDB_TIME_PRECISION_MILLI, 0), 0);
|
||||
|
||||
char t24[] = "2017-12-31T9:0:0.1+13:001";
|
||||
EXPECT_EQ(taosParseTime(t24, &time, strlen(t24), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t25[] = "2017-12-31T9:0:0.1+13:00abc";
|
||||
EXPECT_EQ(taosParseTime(t25, &time, strlen(t25), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t26[] = "2017-12-31T9:0:0.1+13001";
|
||||
EXPECT_EQ(taosParseTime(t26, &time, strlen(t26), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t27[] = "2017-12-31T9:0:0.1+1300abc";
|
||||
EXPECT_EQ(taosParseTime(t27, &time, strlen(t27), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t28[] = "2017-12-31T9:0:0Z+12:00";
|
||||
EXPECT_EQ(taosParseTime(t28, &time, strlen(t28), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t29[] = "2017-12-31T9:0:0.123Z+12:00";
|
||||
EXPECT_EQ(taosParseTime(t29, &time, strlen(t29), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t65[] = "2017-12-31 9:0:0.1+13:001";
|
||||
EXPECT_EQ(taosParseTime(t65, &time, strlen(t65), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t66[] = "2017-12-31 9:0:0.1+13:00abc";
|
||||
EXPECT_EQ(taosParseTime(t66, &time, strlen(t66), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t67[] = "2017-12-31 9:0:0.1+13001";
|
||||
EXPECT_EQ(taosParseTime(t67, &time, strlen(t67), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t68[] = "2017-12-31 9:0:0.1+1300abc";
|
||||
EXPECT_EQ(taosParseTime(t68, &time, strlen(t68), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t69[] = "2017-12-31 9:0:0Z+12:00";
|
||||
EXPECT_EQ(taosParseTime(t69, &time, strlen(t69), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
char t70[] = "2017-12-31 9:0:0.123Z+12:00";
|
||||
EXPECT_EQ(taosParseTime(t70, &time, strlen(t70), TSDB_TIME_PRECISION_MILLI, 0), -1);
|
||||
|
||||
}
|
||||
|
||||
/* test parse time profiling */
|
||||
TEST(testCase, parse_time_profile) {
|
||||
taos_options(TSDB_OPTION_TIMEZONE, "GMT-8");
|
||||
char t1[] = "2018-1-8 1:1:1.952";
|
||||
char t2[] = "2018-1-8T1:1:1.952+0800";
|
||||
char t3[] = "2018-1-8 1:1:1.952+0800";
|
||||
char t4[] = "2018-1-81:1:1.952+0800";
|
||||
|
||||
char t5[] = "2018-1-8 1:1:1.952";
|
||||
char t6[] = "2018-1-8T1:1:1.952+08:00";
|
||||
char t7[] = "2018-1-8 1:1:1.952+08:00";
|
||||
char t8[] = "2018-1-81:1:1.952+08:00";
|
||||
|
||||
char t9[] = "2018-1-8 1:1:1.952";
|
||||
char t10[] = "2018-1-8T1:1:1.952Z";
|
||||
char t11[] = "2018-1-8 1:1:1.952z";
|
||||
char t12[] = "2018-1-81:1:1.952Z";
|
||||
|
||||
struct timeval start, end;
|
||||
int64_t time = 0, time1 = 0;
|
||||
|
||||
int32_t total_run = 100000000;
|
||||
long total_time_us;
|
||||
|
||||
gettimeofday(&start, NULL);
|
||||
for (int i = 0; i < total_run; ++i) {
|
||||
taosParseTime(t1, &time, strlen(t1), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
}
|
||||
gettimeofday(&end, NULL);
|
||||
total_time_us = ((end.tv_sec - start.tv_sec)* 1000000) + (end.tv_usec - start.tv_usec);
|
||||
printf("[t1] The elapsed time is %f seconds in %d run, average:%fns\n", total_time_us/1000000.0, total_run, 1000*(float)total_time_us/(float)total_run);
|
||||
|
||||
gettimeofday(&start, NULL);
|
||||
for (int i = 0; i < total_run; ++i) {
|
||||
taosParseTime(t2, &time, strlen(t2), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
}
|
||||
gettimeofday(&end, NULL);
|
||||
total_time_us = ((end.tv_sec - start.tv_sec)* 1000000) + (end.tv_usec - start.tv_usec);
|
||||
printf("[t2] The elapsed time is %f seconds in %d run, average:%fns\n", total_time_us/1000000.0, total_run, 1000*(float)total_time_us/(float)total_run);
|
||||
|
||||
gettimeofday(&start, NULL);
|
||||
for (int i = 0; i < total_run; ++i) {
|
||||
taosParseTime(t3, &time, strlen(t3), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
}
|
||||
gettimeofday(&end, NULL);
|
||||
total_time_us = ((end.tv_sec - start.tv_sec)* 1000000) + (end.tv_usec - start.tv_usec);
|
||||
printf("[t3] The elapsed time is %f seconds in %d run, average:%fns\n", total_time_us/1000000.0, total_run, 1000*(float)total_time_us/(float)total_run);
|
||||
|
||||
gettimeofday(&start, NULL);
|
||||
for (int i = 0; i < total_run; ++i) {
|
||||
taosParseTime(t4, &time, strlen(t4), TSDB_TIME_PRECISION_MILLI, 0);
|
||||
}
|
||||
gettimeofday(&end, NULL);
|
||||
total_time_us = ((end.tv_sec - start.tv_sec)* 1000000) + (end.tv_usec - start.tv_usec);
|
||||
printf("[t4] The elapsed time is %f seconds in %d run, average:%fns\n", total_time_us/1000000.0, total_run, 1000*(float)total_time_us/(float)total_run);
|
||||
}
|
||||
|
||||
|
||||
TEST(testCase, tvariant_convert) {
|
||||
// 1. bool data to all other data types
|
||||
tVariant t = {0};
|
||||
|
|
|
@@ -1133,8 +1133,8 @@ static void rpcNotifyClient(SRpcReqContext *pContext, SRpcMsg *pMsg) {
  } else {
    // for asynchronous API
    SRpcEpSet *pEpSet = NULL;
    if (pContext->epSet.inUse != pContext->oldInUse || pContext->redirect)
      pEpSet = &pContext->epSet;
    //if (pContext->epSet.inUse != pContext->oldInUse || pContext->redirect)
    pEpSet = &pContext->epSet;

    (*pRpc->cfp)(pMsg, pEpSet);
  }
@@ -261,11 +261,20 @@ int tfsMkdirRecurAt(const char *rname, int level, int id) {
    // Try to create upper
    char *s = strdup(rname);

    if (tfsMkdirRecurAt(dirname(s), level, id) < 0) {
      tfree(s);
    // Make a copy of dirname(s) because the implementation of 'dirname' differs on different platforms.
    // Some platform may modify the contents of the string passed into dirname(). Others may return a pointer to
    // internal static storage space that will be overwritten by next call. For case like that, we should not use
    // the pointer directly in this recursion.
    // See https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/dirname.3.html
    char *dir = strdup(dirname(s));

    if (tfsMkdirRecurAt(dir, level, id) < 0) {
      free(s);
      free(dir);
      return -1;
    }
    tfree(s);
    free(s);
    free(dir);

    if (tfsMkdirAt(rname, level, id) < 0) {
      return -1;
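The comment added above is the whole rationale for this change: POSIX allows dirname() to modify its argument or to return a pointer to internal static storage, so the result has to be copied before it is reused across another call. A small standalone illustration of the safe pattern (not TDengine code):

```c
#include <libgen.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
  char *s = strdup("/var/lib/taos/vnode");

  // Unsafe pattern: the pointer returned by dirname() may alias 's' or point
  // into a static buffer that the next dirname() call overwrites.
  // char *parent = dirname(s);

  // Safe pattern (what tfsMkdirRecurAt now does): copy the result immediately.
  char *parent = strdup(dirname(s));
  printf("parent dir: %s\n", parent);

  free(s);
  free(parent);
  return 0;
}
```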
@@ -42,7 +42,7 @@ typedef struct SHashNode {

#define GET_HASH_NODE_KEY(_n)  ((char*)(_n) + sizeof(SHashNode) + (_n)->dataLen)
#define GET_HASH_NODE_DATA(_n) ((char*)(_n) + sizeof(SHashNode))
#define GET_HASH_PNODE(_n) ((char*)(_n) - sizeof(SHashNode));
#define GET_HASH_PNODE(_n) ((SHashNode *)((char*)(_n) - sizeof(SHashNode)))

typedef enum SHashLockTypeE {
  HASH_NO_LOCK     = 0,

@@ -170,6 +170,10 @@ void *taosHashIterate(SHashObj *pHashObj, void *p);

void taosHashCancelIterate(SHashObj *pHashObj, void *p);

void *taosHashGetDataKey(SHashObj *pHashObj, void *data);

uint32_t taosHashGetDataKeyLen(SHashObj *pHashObj, void *data);

#ifdef __cplusplus
}
#endif
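Read together, the three macros encode the node layout: each SHashNode header is immediately followed by the stored data and then the key, so a data pointer handed back to callers can be mapped back to its node. A sketch of that assumed layout and why the macro fix was needed:

```c
/*
 * Assumed SHashNode memory layout behind the macros above:
 *
 *   +------------------+--------------------+------------------+
 *   | SHashNode header | data (dataLen B)   | key (keyLen B)   |
 *   +------------------+--------------------+------------------+
 *   ^ node             ^ GET_HASH_NODE_DATA ^ GET_HASH_NODE_KEY
 *
 * GET_HASH_PNODE() is the inverse of GET_HASH_NODE_DATA(): it steps back
 * sizeof(SHashNode) bytes from a data pointer to recover the node. The old
 * definition yielded a char* and ended with a stray ';', so it could not be
 * used as an expression producing an SHashNode* -- hence the cast added above.
 */
```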
@ -25,7 +25,7 @@ extern "C" {
|
|||
#define TSDB_PATTERN_MATCH 0
|
||||
#define TSDB_PATTERN_NOMATCH 1
|
||||
#define TSDB_PATTERN_NOWILDCARDMATCH 2
|
||||
#define TSDB_PATTERN_STRING_MAX_LEN 20
|
||||
#define TSDB_PATTERN_STRING_MAX_LEN 100
|
||||
|
||||
#define FLT_COMPAR_TOL_FACTOR 4
|
||||
#define FLT_EQUAL(_x, _y) (fabs((_x) - (_y)) <= (FLT_COMPAR_TOL_FACTOR * FLT_EPSILON))
|
||||
|
@ -53,6 +53,38 @@ __compar_fn_t getComparFunc(int32_t type, int32_t optr);
|
|||
|
||||
int32_t taosArrayCompareString(const void* a, const void* b);
|
||||
|
||||
int32_t setCompareBytes1(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t setCompareBytes2(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t setCompareBytes4(const void *pLeft, const void *pRight);
|
||||
int32_t setCompareBytes8(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareInt32Val(const void *pLeft, const void *pRight);
|
||||
int32_t compareInt64Val(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareInt16Val(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareInt8Val(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareUint32Val(const void *pLeft, const void *pRight);
|
||||
int32_t compareUint64Val(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareUint16Val(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareUint8Val(const void* pLeft, const void* pRight);
|
||||
|
||||
int32_t compareFloatVal(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareDoubleVal(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareLenPrefixedStr(const void *pLeft, const void *pRight);
|
||||
|
||||
int32_t compareLenPrefixedWStr(const void *pLeft, const void *pRight);
|
||||
int32_t compareStrPatternComp(const void* pLeft, const void* pRight);
|
||||
int32_t compareFindItemInSet(const void *pLeft, const void* pRight);
|
||||
int32_t compareWStrPatternComp(const void* pLeft, const void* pRight);
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
#endif
|
||||
|
|
|
@@ -776,6 +776,17 @@ size_t taosHashGetMemSize(const SHashObj *pHashObj) {
  return (pHashObj->capacity * (sizeof(SHashEntry) + POINTER_BYTES)) + sizeof(SHashNode) * taosHashGetSize(pHashObj) + sizeof(SHashObj);
}

FORCE_INLINE void *taosHashGetDataKey(SHashObj *pHashObj, void *data) {
  SHashNode * node = GET_HASH_PNODE(data);
  return GET_HASH_NODE_KEY(node);
}

FORCE_INLINE uint32_t taosHashGetDataKeyLen(SHashObj *pHashObj, void *data) {
  SHashNode * node = GET_HASH_PNODE(data);
  return node->keyLen;
}

// release the pNode, return next pNode, and lock the current entry
static void *taosHashReleaseNode(SHashObj *pHashObj, void *p, int *slot) {
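A hedged sketch of what the two new helpers enable: recovering an entry's key while walking a hash with taosHashIterate. The int32_t value type and the printf are illustrative only; the helpers themselves are type-agnostic:

```c
// Iterate a hash table and print each entry's key alongside its value.
void *p = taosHashIterate(pHashObj, NULL);
while (p != NULL) {
  char    *key    = (char *)taosHashGetDataKey(pHashObj, p);
  uint32_t keyLen = taosHashGetDataKeyLen(pHashObj, p);
  int32_t  value  = *(int32_t *)p;   // example value type

  printf("key=%.*s value=%d\n", (int)keyLen, key, value);
  p = taosHashIterate(pHashObj, p);  // also releases the previous node
}
```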
@@ -19,6 +19,22 @@
#include "tarray.h"
#include "hash.h"

int32_t setCompareBytes1(const void *pLeft, const void *pRight) {
  return NULL != taosHashGet((SHashObj *)pRight, pLeft, 1) ? 1 : 0;
}

int32_t setCompareBytes2(const void *pLeft, const void *pRight) {
  return NULL != taosHashGet((SHashObj *)pRight, pLeft, 2) ? 1 : 0;
}

int32_t setCompareBytes4(const void *pLeft, const void *pRight) {
  return NULL != taosHashGet((SHashObj *)pRight, pLeft, 4) ? 1 : 0;
}

int32_t setCompareBytes8(const void *pLeft, const void *pRight) {
  return NULL != taosHashGet((SHashObj *)pRight, pLeft, 8) ? 1 : 0;
}

int32_t compareInt32Val(const void *pLeft, const void *pRight) {
  int32_t left = GET_INT32_VAL(pLeft), right = GET_INT32_VAL(pRight);
  if (left > right) return 1;

@@ -48,21 +64,21 @@ int32_t compareInt8Val(const void *pLeft, const void *pRight) {
}

int32_t compareUint32Val(const void *pLeft, const void *pRight) {
  int32_t left = GET_UINT32_VAL(pLeft), right = GET_UINT32_VAL(pRight);
  uint32_t left = GET_UINT32_VAL(pLeft), right = GET_UINT32_VAL(pRight);
  if (left > right) return 1;
  if (left < right) return -1;
  return 0;
}

int32_t compareUint64Val(const void *pLeft, const void *pRight) {
  int64_t left = GET_UINT64_VAL(pLeft), right = GET_UINT64_VAL(pRight);
  uint64_t left = GET_UINT64_VAL(pLeft), right = GET_UINT64_VAL(pRight);
  if (left > right) return 1;
  if (left < right) return -1;
  return 0;
}

int32_t compareUint16Val(const void *pLeft, const void *pRight) {
  int16_t left = GET_UINT16_VAL(pLeft), right = GET_UINT16_VAL(pRight);
  uint16_t left = GET_UINT16_VAL(pLeft), right = GET_UINT16_VAL(pRight);
  if (left > right) return 1;
  if (left < right) return -1;
  return 0;
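The signed-to-unsigned type change above is a real correctness fix: reading an unsigned value into a signed local of the same width makes large values wrap negative and inverts the comparison. A standalone illustration of the failure mode the fix removes (on the two's-complement platforms TDengine targets):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t a = 3000000000u;  // larger than INT32_MAX
  uint32_t b = 1u;

  int32_t sa = (int32_t)a;   // old code path: wraps to a large negative number
  int32_t sb = (int32_t)b;

  printf("signed   compare: %d\n", sa > sb ? 1 : (sa < sb ? -1 : 0));  // -1, wrong
  printf("unsigned compare: %d\n", a  > b  ? 1 : (a  < b  ? -1 : 0));  //  1, correct
  return 0;
}
```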
@ -262,27 +278,28 @@ int WCSPatternMatch(const wchar_t *patterStr, const wchar_t *str, size_t size, c
|
|||
return (str[j] == 0 || j >= size) ? TSDB_PATTERN_MATCH : TSDB_PATTERN_NOMATCH;
|
||||
}
|
||||
|
||||
static int32_t compareStrPatternComp(const void* pLeft, const void* pRight) {
|
||||
int32_t compareStrPatternComp(const void* pLeft, const void* pRight) {
|
||||
SPatternCompareInfo pInfo = {'%', '_'};
|
||||
|
||||
char pattern[128] = {0};
|
||||
|
||||
assert(varDataLen(pRight) <= TSDB_MAX_FIELD_LEN);
|
||||
char *pattern = calloc(varDataLen(pRight) + 1, sizeof(char));
|
||||
memcpy(pattern, varDataVal(pRight), varDataLen(pRight));
|
||||
assert(varDataLen(pRight) < 128);
|
||||
|
||||
size_t sz = varDataLen(pLeft);
|
||||
char *buf = malloc(sz + 1);
|
||||
memcpy(buf, varDataVal(pLeft), sz);
|
||||
char *buf = malloc(sz + 1);
|
||||
memcpy(buf, varDataVal(pLeft), sz);
|
||||
buf[sz] = 0;
|
||||
|
||||
int32_t ret = patternMatch(pattern, buf, sz, &pInfo);
|
||||
free(buf);
|
||||
free(pattern);
|
||||
return (ret == TSDB_PATTERN_MATCH) ? 0 : 1;
|
||||
}
|
||||
|
||||
int32_t taosArrayCompareString(const void* a, const void* b) {
|
||||
const char* x = *(const char**)a;
|
||||
const char* y = *(const char**)b;
|
||||
|
||||
|
||||
return compareLenPrefixedStr(x, y);
|
||||
}
|
||||
|
||||
|
@ -290,25 +307,48 @@ int32_t taosArrayCompareString(const void* a, const void* b) {
|
|||
// const SArray* arr = (const SArray*) pRight;
|
||||
// return taosArraySearchString(arr, pLeft, taosArrayCompareString, TD_EQ) == NULL ? 0 : 1;
|
||||
//}
|
||||
static int32_t compareFindItemInSet(const void *pLeft, const void* pRight) {
|
||||
return NULL != taosHashGet((SHashObj *)pRight, varDataVal(pLeft), varDataLen(pLeft)) ? 1 : 0;
|
||||
int32_t compareFindItemInSet(const void *pLeft, const void* pRight) {
|
||||
return NULL != taosHashGet((SHashObj *)pRight, varDataVal(pLeft), varDataLen(pLeft)) ? 1 : 0;
|
||||
}
|
||||
|
||||
static int32_t compareWStrPatternComp(const void* pLeft, const void* pRight) {
|
||||
int32_t compareWStrPatternComp(const void* pLeft, const void* pRight) {
|
||||
SPatternCompareInfo pInfo = {'%', '_'};
|
||||
|
||||
wchar_t pattern[128] = {0};
|
||||
assert(TSDB_PATTERN_STRING_MAX_LEN < 128);
|
||||
assert(varDataLen(pRight) <= TSDB_MAX_FIELD_LEN * TSDB_NCHAR_SIZE);
|
||||
wchar_t *pattern = calloc(varDataLen(pRight) + 1, sizeof(wchar_t));
|
||||
|
||||
memcpy(pattern, varDataVal(pRight), varDataLen(pRight));
|
||||
assert(varDataLen(pRight) < 128);
|
||||
|
||||
|
||||
int32_t ret = WCSPatternMatch(pattern, varDataVal(pLeft), varDataLen(pLeft)/TSDB_NCHAR_SIZE, &pInfo);
|
||||
free(pattern);
|
||||
return (ret == TSDB_PATTERN_MATCH) ? 0 : 1;
|
||||
}
|
||||
|
||||
__compar_fn_t getComparFunc(int32_t type, int32_t optr) {
|
||||
__compar_fn_t comparFn = NULL;
|
||||
|
||||
if (optr == TSDB_RELATION_IN && (type != TSDB_DATA_TYPE_BINARY && type != TSDB_DATA_TYPE_NCHAR)) {
|
||||
switch (type) {
|
||||
case TSDB_DATA_TYPE_BOOL:
|
||||
case TSDB_DATA_TYPE_TINYINT:
|
||||
case TSDB_DATA_TYPE_UTINYINT:
|
||||
return setCompareBytes1;
|
||||
case TSDB_DATA_TYPE_SMALLINT:
|
||||
case TSDB_DATA_TYPE_USMALLINT:
|
||||
return setCompareBytes2;
|
||||
case TSDB_DATA_TYPE_INT:
|
||||
case TSDB_DATA_TYPE_UINT:
|
||||
case TSDB_DATA_TYPE_FLOAT:
|
||||
return setCompareBytes4;
|
||||
case TSDB_DATA_TYPE_BIGINT:
|
||||
case TSDB_DATA_TYPE_UBIGINT:
|
||||
case TSDB_DATA_TYPE_DOUBLE:
|
||||
case TSDB_DATA_TYPE_TIMESTAMP:
|
||||
return setCompareBytes8;
|
||||
default:
|
||||
assert(0);
|
||||
}
|
||||
}
|
||||
|
||||
switch (type) {
|
||||
case TSDB_DATA_TYPE_BOOL:
|
||||
|
@ -334,6 +374,8 @@ __compar_fn_t getComparFunc(int32_t type, int32_t optr) {
|
|||
case TSDB_DATA_TYPE_NCHAR: {
|
||||
if (optr == TSDB_RELATION_LIKE) {
|
||||
comparFn = compareWStrPatternComp;
|
||||
} else if (optr == TSDB_RELATION_IN) {
|
||||
comparFn = compareFindItemInSet;
|
||||
} else {
|
||||
comparFn = compareLenPrefixedWStr;
|
||||
}
|
||||
|
|
|
@@ -280,6 +280,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_QRY_IN_EXEC,              "Multiple retrieval of
TAOS_DEFINE_ERROR(TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW,  "Too many time window in query")
TAOS_DEFINE_ERROR(TSDB_CODE_QRY_NOT_ENOUGH_BUFFER,    "Query buffer limit has reached")
TAOS_DEFINE_ERROR(TSDB_CODE_QRY_INCONSISTAN,          "File inconsistance in replica")
TAOS_DEFINE_ERROR(TSDB_CODE_QRY_INVALID_TIME_CONDITION, "One valid time range condition expected")
TAOS_DEFINE_ERROR(TSDB_CODE_QRY_SYS_ERROR,            "System error")
@ -0,0 +1,176 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
import subprocess
import time
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import datetime


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        tdSql.prepare()
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        tdSql.execute("create database timezone")
        tdSql.execute("use timezone")
        tdSql.execute("create stable st (ts timestamp, id int ) tags (index int)")

        tdSql.execute("insert into tb0 using st tags (1) values ('2021-07-01 00:00:00.000',0)")
        tdSql.query("select ts from tb0")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb1 using st tags (1) values ('2021-07-01T00:00:00.000+07:50',1)")
        tdSql.query("select ts from tb1")
        tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")

        tdSql.execute("insert into tb2 using st tags (1) values ('2021-07-01T00:00:00.000+08:00',2)")
        tdSql.query("select ts from tb2")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb3 using st tags (1) values ('2021-07-01T00:00:00.000Z',3)")
        tdSql.query("select ts from tb3")
        tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")

        tdSql.execute("insert into tb4 using st tags (1) values ('2021-07-01 00:00:00.000+07:50',4)")
        tdSql.query("select ts from tb4")
        tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")

        tdSql.execute("insert into tb5 using st tags (1) values ('2021-07-01 00:00:00.000Z',5)")
        tdSql.query("select ts from tb5")
        tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")

        tdSql.execute("insert into tb6 using st tags (1) values ('2021-07-01T00:00:00.000+0800',6)")
        tdSql.query("select ts from tb6")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb7 using st tags (1) values ('2021-07-01 00:00:00.000+0800',7)")
        tdSql.query("select ts from tb7")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb8 using st tags (1) values ('2021-07-0100:00:00.000',8)")
        tdSql.query("select ts from tb8")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb9 using st tags (1) values ('2021-07-0100:00:00.000+0800',9)")
        tdSql.query("select ts from tb9")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb10 using st tags (1) values ('2021-07-0100:00:00.000+08:00',10)")
        tdSql.query("select ts from tb10")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb11 using st tags (1) values ('2021-07-0100:00:00.000+07:00',11)")
        tdSql.query("select ts from tb11")
        tdSql.checkData(0, 0, "2021-07-01 01:00:00.000")

        tdSql.execute("insert into tb12 using st tags (1) values ('2021-07-0100:00:00.000+0700',12)")
        tdSql.query("select ts from tb12")
        tdSql.checkData(0, 0, "2021-07-01 01:00:00.000")

        tdSql.execute("insert into tb13 using st tags (1) values ('2021-07-0100:00:00.000+07:12',13)")
        tdSql.query("select ts from tb13")
        tdSql.checkData(0, 0, "2021-07-01 00:48:00.000")

        tdSql.execute("insert into tb14 using st tags (1) values ('2021-07-0100:00:00.000+712',14)")
        tdSql.query("select ts from tb14")
        tdSql.checkData(0, 0, "2021-06-28 08:58:00.000")

        tdSql.execute("insert into tb15 using st tags (1) values ('2021-07-0100:00:00.000Z',15)")
        tdSql.query("select ts from tb15")
        tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")

        tdSql.execute("insert into tb16 using st tags (1) values ('2021-7-1 00:00:00.000Z',16)")
        tdSql.query("select ts from tb16")
        tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")

        tdSql.execute("insert into tb17 using st tags (1) values ('2021-07-0100:00:00.000+0750',17)")
        tdSql.query("select ts from tb17")
        tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")

        tdSql.execute("insert into tb18 using st tags (1) values ('2021-07-0100:00:00.000+0752',18)")
        tdSql.query("select ts from tb18")
        tdSql.checkData(0, 0, "2021-07-01 00:08:00.000")

        tdSql.execute("insert into tb19 using st tags (1) values ('2021-07-0100:00:00.000+075',19)")
        tdSql.query("select ts from tb19")
        tdSql.checkData(0, 0, "2021-07-01 00:55:00.000")

        tdSql.execute("insert into tb20 using st tags (1) values ('2021-07-0100:00:00.000+75',20)")
        tdSql.query("select ts from tb20")
        tdSql.checkData(0, 0, "2021-06-28 05:00:00.000")

        tdSql.execute("insert into tb21 using st tags (1) values ('2021-7-1 1:1:1.234+075',21)")
        tdSql.query("select ts from tb21")
        tdSql.checkData(0, 0, "2021-07-01 01:56:01.234")

        tdSql.execute("insert into tb22 using st tags (1) values ('2021-7-1T1:1:1.234+075',22)")
        tdSql.query("select ts from tb22")
        tdSql.checkData(0, 0, "2021-07-01 01:56:01.234")

        tdSql.execute("insert into tb23 using st tags (1) values ('2021-7-131:1:1.234+075',22)")
        tdSql.query("select ts from tb23")
        tdSql.checkData(0, 0, "2021-07-13 01:56:01.234")

        tdSql.error("insert into tberror using st tags (1) values ('20210701 00:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('2021070100:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('202171 00:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('2021 07 01 00:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('2021 -07-0100:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('2021-7-11:1:1.234+075',0)")

        os.system("rm -rf ./TimeZone/*.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
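The expected values in the checkData() calls above assume the node running the test uses UTC+8 (so '+07:50' lands 10 minutes after local midnight and 'Z' lands at 08:00). As a rough cross-check, not part of the committed test, the RFC 3339 cases can be reproduced with the standard datetime module; expected_local and LOCAL_TZ are illustrative names, and the UTC+8 setting is an assumption about the test environment.

# Illustration only: reproduce the offset arithmetic behind the checkData() expectations.
from datetime import datetime, timedelta, timezone

LOCAL_TZ = timezone(timedelta(hours=8))  # assumption: test node runs in UTC+8, no DST

def expected_local(ts_str):
    # Parse an RFC 3339 timestamp and render it in the assumed UTC+8 local time.
    ts = datetime.fromisoformat(ts_str.replace("Z", "+00:00"))
    return ts.astimezone(LOCAL_TZ).strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]

print(expected_local("2021-07-01T00:00:00.000+07:50"))  # 2021-07-01 00:10:00.000
print(expected_local("2021-07-01T00:00:00.000Z"))       # 2021-07-01 08:00:00.000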
@ -0,0 +1,174 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import datetime

class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def checkCommunity(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            return False
        else:
            return True

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosdump" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        # clear envs

        tdSql.execute(" create database ZoneTime precision 'us' ")
        tdSql.execute(" use ZoneTime ")
        tdSql.execute(" create stable st (ts timestamp , id int , val float) tags (tag1 timestamp ,tag2 int) ")

        # standard case for Timestamp

        tdSql.execute(" insert into tb1 using st tags (\"2021-07-01 00:00:00.000\" , 2) values( \"2021-07-01 00:00:00.000\" , 1 , 1.0 ) ")
        case1 = (tdSql.getResult("select * from tb1"))
        print(case1)
        if case1 == [(datetime.datetime(2021, 7, 1, 0, 0), 1, 1.0)]:
            print("check pass! ")
        else:
            print("check failed about timestamp '2021-07-01 00:00:00.000' ")

        # RFC 3339: it allows "T" to be replaced by " "

        tdSql.execute(" insert into tb2 using st tags (\"2021-07-01T00:00:00.000+07:50\" , 2) values( \"2021-07-01T00:00:00.000+07:50\" , 2 , 2.0 ) ")
        case2 = (tdSql.getResult("select * from tb2"))
        print(case2)
        if case2 == [(datetime.datetime(2021, 7, 1, 0, 10), 2, 2.0)]:
            print("check pass! ")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000+07:50'! ")

        tdSql.execute(" insert into tb3 using st tags (\"2021-07-01T00:00:00.000+08:00\" , 3) values( \"2021-07-01T00:00:00.000+08:00\" , 3 , 3.0 ) ")
        case3 = (tdSql.getResult("select * from tb3"))
        print(case3)
        if case3 == [(datetime.datetime(2021, 7, 1, 0, 0), 3, 3.0)]:
            print("check pass! ")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000+08:00'! ")

        tdSql.execute(" insert into tb4 using st tags (\"2021-07-01T00:00:00.000Z\" , 4) values( \"2021-07-01T00:00:00.000Z\" , 4 , 4.0 ) ")
        case4 = (tdSql.getResult("select * from tb4"))
        print(case4)
        if case4 == [(datetime.datetime(2021, 7, 1, 8, 0), 4, 4.0)]:
            print("check pass! ")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000Z'! ")

        tdSql.execute(" insert into tb5 using st tags (\"2021-07-01 00:00:00.000+07:50\" , 5) values( \"2021-07-01 00:00:00.000+07:50\" , 5 , 5.0 ) ")
        case5 = (tdSql.getResult("select * from tb5"))
        print(case5)
        if case5 == [(datetime.datetime(2021, 7, 1, 0, 10), 5, 5.0)]:
            print("check pass! ")
        else:
            print("check failed about timestamp '2021-07-01 00:00:00.000+07:50'! ")

        tdSql.execute(" insert into tb6 using st tags (\"2021-07-01 00:00:00.000Z\" , 6) values( \"2021-07-01 00:00:00.000Z\" , 6 , 6.0 ) ")
        case6 = (tdSql.getResult("select * from tb6"))
        print(case6)
        if case6 == [(datetime.datetime(2021, 7, 1, 8, 0), 6, 6.0)]:
            print("check pass! ")
        else:
            print("check failed about timestamp '2021-07-01 00:00:00.000Z'! ")

        # ISO 8601 timestamp format: date and time must be separated by "T"

        tdSql.execute(" insert into tb7 using st tags (\"2021-07-01T00:00:00.000+0800\" , 7) values( \"2021-07-01T00:00:00.000+0800\" , 7 , 7.0 ) ")
        case7 = (tdSql.getResult("select * from tb7"))
        print(case7)
        if case7 == [(datetime.datetime(2021, 7, 1, 0, 0), 7, 7.0)]:
            print("check pass! ")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000+0800'! ")

        tdSql.execute(" insert into tb8 using st tags (\"2021-07-01T00:00:00.000+08\" , 8) values( \"2021-07-01T00:00:00.000+08\" , 8 , 8.0 ) ")
        case8 = (tdSql.getResult("select * from tb8"))
        print(case8)
        if case8 == [(datetime.datetime(2021, 7, 1, 0, 0), 8, 8.0)]:
            print("check pass! ")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000+08'! ")

        # Non-standard case for Timestamp

        tdSql.execute(" insert into tb9 using st tags (\"2021-07-01 00:00:00.000+0800\" , 9) values( \"2021-07-01 00:00:00.000+0800\" , 9 , 9.0 ) ")
        case9 = (tdSql.getResult("select * from tb9"))
        print(case9)

        tdSql.execute(" insert into tb10 using st tags (\"2021-07-0100:00:00.000\" , 10) values( \"2021-07-0100:00:00.000\" , 10 , 10.0 ) ")
        case10 = (tdSql.getResult("select * from tb10"))
        print(case10)

        tdSql.execute(" insert into tb11 using st tags (\"2021-07-0100:00:00.000+0800\" , 11) values( \"2021-07-0100:00:00.000+0800\" , 11 , 11.0 ) ")
        case11 = (tdSql.getResult("select * from tb11"))
        print(case11)

        tdSql.execute(" insert into tb12 using st tags (\"2021-07-0100:00:00.000+08:00\" , 12) values( \"2021-07-0100:00:00.000+08:00\" , 12 , 12.0 ) ")
        case12 = (tdSql.getResult("select * from tb12"))
        print(case12)

        tdSql.execute(" insert into tb13 using st tags (\"2021-07-0100:00:00.000Z\" , 13) values( \"2021-07-0100:00:00.000Z\" , 13 , 13.0 ) ")
        case13 = (tdSql.getResult("select * from tb13"))
        print(case13)

        tdSql.execute(" insert into tb14 using st tags (\"2021-07-0100:00:00.000Z\" , 14) values( \"2021-07-0100:00:00.000Z\" , 14 , 14.0 ) ")
        case14 = (tdSql.getResult("select * from tb14"))
        print(case14)

        tdSql.execute(" insert into tb15 using st tags (\"2021-07-0100:00:00.000+08\" , 15) values( \"2021-07-0100:00:00.000+08\" , 15 , 15.0 ) ")
        case15 = (tdSql.getResult("select * from tb15"))
        print(case15)

        tdSql.execute(" insert into tb16 using st tags (\"2021-07-0100:00:00.000+07:50\" , 16) values( \"2021-07-0100:00:00.000+07:50\" , 16 , 16.0 ) ")
        case16 = (tdSql.getResult("select * from tb16"))
        print(case16)

        os.system("rm -rf *.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
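A note on the design of this second file: the cases compare the query result and print pass/fail, but nothing aborts on a mismatch, so a regression would not fail the run. A minimal sketch of how one case could be hardened, assuming the same tdSql/tdLog helpers this commit already uses (expected_tb2 and result_tb2 are illustrative names, not part of the commit):

# Sketch only: turn one print-based check into a hard failure via tdLog.exit().
import datetime

expected_tb2 = [(datetime.datetime(2021, 7, 1, 0, 10), 2, 2.0)]
result_tb2 = tdSql.getResult("select * from tb2")
if result_tb2 != expected_tb2:
    tdLog.exit("tb2 returned %s, expected %s" % (result_tb2, expected_tb2))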
@ -16,6 +16,7 @@ from util.log import *
from util.cases import *
from util.sql import *

from datetime import timedelta

class TDTestCase:
    def init(self, conn, logSql):
@ -36,15 +37,36 @@ class TDTestCase:

        ret = tdSql.query('show dnodes')

        ret = tdSql.execute('alter dnode "%s" debugFlag 135' % tdSql.getData(0,0))
        tdLog.info('alter dnode "%s" debugFlag 135 -> ret: %d' % (tdSql.getData(0, 0), ret))
        dnodeId = tdSql.getData(0, 0)
        dnodeEndpoint = tdSql.getData(0, 1)

        ret = tdSql.execute('alter dnode "%s" debugFlag 135' % dnodeId)
        tdLog.info('alter dnode "%s" debugFlag 135 -> ret: %d' % (dnodeId, ret))


        ret = tdSql.query('show mnodes')
        tdSql.checkRows(1)
        tdSql.checkData(0, 2, "master")

        role_time = tdSql.getData(0, 3)
        create_time = tdSql.getData(0, 4)
        time_delta = timedelta(milliseconds=100)

        if create_time-time_delta < role_time < create_time+time_delta:
            tdLog.info("role_time {} and create_time {} expected within range".format(role_time, create_time))
        else:
            tdLog.exit("role_time {} and create_time {} not expected within range".format(role_time, create_time))

        ret = tdSql.query('show vgroups')
        tdSql.checkRows(0)

        tdSql.execute('create stable st (ts timestamp, f int) tags(t int)')
        tdSql.execute('create table ct1 using st tags(1)')
        tdSql.execute('create table ct2 using st tags(2)')
        ret = tdSql.query('show vnodes "{}"'.format(dnodeEndpoint))
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 2)
        tdSql.checkData(0, 1, "master")

    def stop(self):
        tdSql.close()
@ -18108,4 +18108,4 @@
   fun:_PyEval_EvalFrameDefault
   fun:_PyEval_EvalCodeWithName
   fun:_PyFunction_Vectorcall
}
}
@ -29,6 +29,10 @@ python3 ./test.py -f insert/in_function.py
python3 ./test.py -f insert/modify_column.py
python3 ./test.py -f insert/line_insert.py

# timezone
python3 ./test.py -f TimeZone/TestCaseTimeZone.py

#table
python3 ./test.py -f table/alter_wal0.py
python3 ./test.py -f table/column_name.py
@ -25,7 +25,7 @@ class TDTestCase:

    def run(self):
        tdSql.query("show variables")
        tdSql.checkData(53, 1, 864000)
        tdSql.checkData(54, 1, 864000)

    def stop(self):
        tdSql.close()
@ -101,7 +101,7 @@ class TDTestCase:
        # tdSql.query(f"select * from t1 where c2 between {pow(10,38)*3.4} and {pow(10,38)*3.4+1}")
        # tdSql.checkRows(1)
        tdSql.query(f"select * from t2 where c2 between {-3.4*10**38-1} and {-3.4*10**38}")
        tdSql.checkRows(0)
        tdSql.checkRows(2)
        tdSql.error(f"select * from t2 where c2 between null and {-3.4*10**38}")
        # tdSql.checkRows(3)
@ -203,4 +203,4 @@ class TDTestCase:
        tdLog.success(f"{__file__} successfully executed")

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
@ -0,0 +1,207 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-
from copy import deepcopy
import string
import random
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def cleanTb(self):
        query_sql = "show stables"
        res_row_list = tdSql.query(query_sql, True)
        stb_list = map(lambda x: x[0], res_row_list)
        for stb in stb_list:
            tdSql.execute(f'drop table if exists {stb}')

        query_sql = "show tables"
        res_row_list = tdSql.query(query_sql, True)
        tb_list = map(lambda x: x[0], res_row_list)
        for tb in tb_list:
            tdSql.execute(f'drop table if exists {tb}')

    def getLongWildcardStr(self, len=None):
        """
        generate long wildcard str
        """
        maxWildCardsLength = int(tdSql.getVariable('maxWildCardsLength')[0])
        if len:
            chars = ''.join(random.choice(string.ascii_letters.lower()) for i in range(len))
        else:
            chars = ''.join(random.choice(string.ascii_letters.lower()) for i in range(maxWildCardsLength+1))
        return chars

    def genTableName(self):
        '''
        generate table name
        hp_name--->'%str'
        lp_name--->'str%'
        ul_name--->'st_r'
        '''
        table_name = self.getLongWildcardStr()
        table_name_list = list(table_name)
        table_name_list.pop(-1)

        if len(table_name_list) > 1:
            lp_name = deepcopy(table_name_list)
            lp_name[-1] = '%'
            lp_name = ''.join(lp_name)

            ul_name = list(lp_name)
            ul_name[int(len(ul_name)/2)] = '_'
            ul_name = ''.join(ul_name)

            table_name_list = list(table_name)
            hp_name = deepcopy(table_name_list)
            hp_name.pop(1)
            hp_name[0] = '%'
            hp_name = ''.join(hp_name)
        else:
            hp_name = '%'
            lp_name = '%'
            ul_name = '_'
        return table_name, hp_name, lp_name, ul_name

    def checkRegularTableWildcardLength(self):
        '''
        check regular table wildcard length with % and _
        '''
        self.cleanTb()
        table_name, hp_name, lp_name, ul_name = self.genTableName()
        tdSql.execute(f"CREATE TABLE {table_name} (ts timestamp, a1 int)")
        sql_list = [f'show tables like "{hp_name}"', f'show tables like "{lp_name}"', f'show tables like "{ul_name}"']
        for sql in sql_list:
            tdSql.query(sql)
            if len(table_name) >= 1:
                tdSql.checkRows(1)
            else:
                tdSql.error(sql)

        exceed_sql_list = [f'show tables like "%{hp_name}"', f'show tables like "{lp_name}%"', f'show tables like "{ul_name}%"']
        for sql in exceed_sql_list:
            tdSql.error(sql)

    def checkSuperTableWildcardLength(self):
        '''
        check super table wildcard length with % and _
        '''
        self.cleanTb()
        table_name, hp_name, lp_name, ul_name = self.genTableName()
        tdSql.execute(f"CREATE TABLE {table_name} (ts timestamp, c1 int) tags (t1 int)")
        sql_list = [f'show stables like "{hp_name}"', f'show stables like "{lp_name}"', f'show stables like "{ul_name}"']
        for sql in sql_list:
            tdSql.query(sql)
            if len(table_name) >= 1:
                tdSql.checkRows(1)
            else:
                tdSql.error(sql)

        exceed_sql_list = [f'show stables like "%{hp_name}"', f'show stables like "{lp_name}%"', f'show stables like "{ul_name}%"']
        for sql in exceed_sql_list:
            tdSql.error(sql)

    def checkRegularWildcardSelectLength(self):
        '''
        check regular table wildcard select length with % and _
        '''
        self.cleanTb()
        table_name, hp_name, lp_name, ul_name = self.genTableName()
        tdSql.execute(f"CREATE TABLE {table_name} (ts timestamp, bi1 binary(200), nc1 nchar(200))")
        tdSql.execute(f'insert into {table_name} values (now, "{table_name}", "{table_name}")')
        sql_list = [f'select * from {table_name} where bi1 like "{hp_name}"',
                    f'select * from {table_name} where bi1 like "{lp_name}"',
                    f'select * from {table_name} where bi1 like "{ul_name}"',
                    f'select * from {table_name} where nc1 like "{hp_name}"',
                    f'select * from {table_name} where nc1 like "{lp_name}"',
                    f'select * from {table_name} where nc1 like "{ul_name}"']
        for sql in sql_list:
            tdSql.query(sql)
            if len(table_name) >= 1:
                tdSql.checkRows(1)
            else:
                tdSql.error(sql)

        exceed_sql_list = [f'select * from {table_name} where bi1 like "%{hp_name}"',
                           f'select * from {table_name} where bi1 like "{lp_name}%"',
                           f'select * from {table_name} where bi1 like "{ul_name}%"',
                           f'select * from {table_name} where nc1 like "%{hp_name}"',
                           f'select * from {table_name} where nc1 like "{lp_name}%"',
                           f'select * from {table_name} where nc1 like "{ul_name}%"']
        for sql in exceed_sql_list:
            tdSql.error(sql)

    def checkStbWildcardSelectLength(self):
        '''
        check stb wildcard select length with % and _
        '''
        self.cleanTb()
        table_name, hp_name, lp_name, ul_name = self.genTableName()

        tdSql.execute(f'CREATE TABLE {table_name} (ts timestamp, bi1 binary(200), nc1 nchar(200)) tags (si1 binary(200), sc1 nchar(200))')
        tdSql.execute(f'create table {table_name}_sub1 using {table_name} tags ("{table_name}", "{table_name}")')
        tdSql.execute(f'insert into {table_name}_sub1 values (now, "{table_name}", "{table_name}");')

        sql_list = [f'select * from {table_name} where bi1 like "{hp_name}"',
                    f'select * from {table_name} where bi1 like "{lp_name}"',
                    f'select * from {table_name} where bi1 like "{ul_name}"',
                    f'select * from {table_name} where nc1 like "{hp_name}"',
                    f'select * from {table_name} where nc1 like "{lp_name}"',
                    f'select * from {table_name} where nc1 like "{ul_name}"',
                    f'select * from {table_name} where si1 like "{hp_name}"',
                    f'select * from {table_name} where si1 like "{lp_name}"',
                    f'select * from {table_name} where si1 like "{ul_name}"',
                    f'select * from {table_name} where sc1 like "{hp_name}"',
                    f'select * from {table_name} where sc1 like "{lp_name}"',
                    f'select * from {table_name} where sc1 like "{ul_name}"']

        for sql in sql_list:
            tdSql.query(sql)
            if len(table_name) >= 1:
                tdSql.checkRows(1)
            else:
                tdSql.error(sql)
        exceed_sql_list = [f'select * from {table_name} where bi1 like "%{hp_name}"',
                           f'select * from {table_name} where bi1 like "{lp_name}%"',
                           f'select * from {table_name} where bi1 like "{ul_name}%"',
                           f'select * from {table_name} where nc1 like "%{hp_name}"',
                           f'select * from {table_name} where nc1 like "{lp_name}%"',
                           f'select * from {table_name} where nc1 like "{ul_name}%"',
                           f'select * from {table_name} where si1 like "%{hp_name}"',
                           f'select * from {table_name} where si1 like "{lp_name}%"',
                           f'select * from {table_name} where si1 like "{ul_name}%"',
                           f'select * from {table_name} where sc1 like "%{hp_name}"',
                           f'select * from {table_name} where sc1 like "{lp_name}%"',
                           f'select * from {table_name} where sc1 like "{ul_name}%"']
        for sql in exceed_sql_list:
            tdSql.error(sql)

    def run(self):
        tdSql.prepare()
        self.checkRegularTableWildcardLength()
        self.checkSuperTableWildcardLength()
        self.checkRegularWildcardSelectLength()
        self.checkStbWildcardSelectLength()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)

tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
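For orientation, genTableName() above derives three LIKE patterns from one random table name so that each pattern matches exactly that table. A worked example with a hypothetical 4-character name (values chosen purely for illustration; the committed test uses a random string of maxWildCardsLength characters):

# Worked example (illustrative only) of the pattern derivation in genTableName().
table_name = "abcd"
lp_name = "ab%"   # name minus its last char, then last char replaced by '%'  -> prefix match
ul_name = "a_%"   # lp_name with its middle char replaced by '_'              -> single-char wildcard
hp_name = "%cd"   # full name with the 2nd char dropped and the 1st set to '%' -> suffix match
# All three patterns match "abcd" in a LIKE filter, which is why the checks expect exactly one row.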
@ -81,6 +81,22 @@ class TDSql:
            return self.queryResult
        return self.queryRows

    def getVariable(self, search_attr):
        '''
        get the value of search_attr from "show variables"
        '''
        try:
            sql = 'show variables'
            param_list = self.query(sql, row_tag=True)
            for param in param_list:
                if param[0] == search_attr:
                    return param[1], param_list
        except Exception as e:
            caller = inspect.getframeinfo(inspect.stack()[1][0])
            args = (caller.filename, caller.lineno, sql, repr(e))
            tdLog.notice("%s(%d) failed: sql:%s, %s" % args)
            raise Exception(repr(e))

    def getColNameList(self, sql, col_tag=None):
        self.sql = sql
        try:
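The new helper returns a (value, param_list) tuple rather than the bare value, which is why its caller in the wildcard test takes element [0]. A minimal usage sketch, assuming an initialized tdSql connection as in the test files above:

# Usage sketch only: read one server variable via the new getVariable() helper.
max_wildcards = int(tdSql.getVariable('maxWildCardsLength')[0])
tdLog.info("maxWildCardsLength is %d" % max_wildcards)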
@ -0,0 +1,111 @@
system sh/stop_dnodes.sh

system sh/deploy.sh -n dnode1 -i 1
system sh/cfg.sh -n dnode1 -c walLevel -v 1
system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4
system sh/exec.sh -n dnode1 -s start

sleep 100
sql connect

sql drop database if exists cdb
sql create database if not exists cdb
sql use cdb
sql create table stb1 (ts timestamp, c1 int, c2 float, c3 bigint, c4 smallint, c5 tinyint, c6 double, c7 bool, c8 binary(10), c9 nchar(9)) TAGS(t1 int, t2 binary(10), t3 double)

sql create table tb1 using stb1 tags(1,'1',1.0)
sql create table tb2 using stb1 tags(2,'2',2.0)
sql create table tb3 using stb1 tags(3,'3',3.0)
sql create table tb4 using stb1 tags(4,'4',4.0)
sql create table tb5 using stb1 tags(5,'5',5.0)
sql create table tb6 using stb1 tags(6,'6',6.0)

sql insert into tb1 values ('2021-05-05 18:19:00',1,1.0,1,1,1,1.0,true ,'1','1')
sql insert into tb1 values ('2021-05-05 18:19:01',2,2.0,2,2,2,2.0,true ,'2','2')
sql insert into tb1 values ('2021-05-05 18:19:02',3,3.0,3,3,3,3.0,false,'3','3')
sql insert into tb1 values ('2021-05-05 18:19:03',4,4.0,4,4,4,4.0,false,'4','4')
sql insert into tb1 values ('2021-05-05 18:19:04',11,11.0,11,11,11,11.0,true ,'11','11')
sql insert into tb1 values ('2021-05-05 18:19:05',12,12.0,12,12,12,12.0,true ,'12','12')
sql insert into tb1 values ('2021-05-05 18:19:06',13,13.0,13,13,13,13.0,false,'13','13')
sql insert into tb1 values ('2021-05-05 18:19:07',14,14.0,14,14,14,14.0,false,'14','14')
sql insert into tb2 values ('2021-05-05 18:19:08',21,21.0,21,21,21,21.0,true ,'21','21')
sql insert into tb2 values ('2021-05-05 18:19:09',22,22.0,22,22,22,22.0,true ,'22','22')
sql insert into tb2 values ('2021-05-05 18:19:10',23,23.0,23,23,23,23.0,false,'23','23')
sql insert into tb2 values ('2021-05-05 18:19:11',24,24.0,24,24,24,24.0,false,'24','24')
sql insert into tb3 values ('2021-05-05 18:19:12',31,31.0,31,31,31,31.0,true ,'31','31')
sql insert into tb3 values ('2021-05-05 18:19:13',32,32.0,32,32,32,32.0,true ,'32','32')
sql insert into tb3 values ('2021-05-05 18:19:14',33,33.0,33,33,33,33.0,false,'33','33')
sql insert into tb3 values ('2021-05-05 18:19:15',34,34.0,34,34,34,34.0,false,'34','34')
sql insert into tb4 values ('2021-05-05 18:19:16',41,41.0,41,41,41,41.0,true ,'41','41')
sql insert into tb4 values ('2021-05-05 18:19:17',42,42.0,42,42,42,42.0,true ,'42','42')
sql insert into tb4 values ('2021-05-05 18:19:18',43,43.0,43,43,43,43.0,false,'43','43')
sql insert into tb4 values ('2021-05-05 18:19:19',44,44.0,44,44,44,44.0,false,'44','44')
sql insert into tb5 values ('2021-05-05 18:19:20',51,51.0,51,51,51,51.0,true ,'51','51')
sql insert into tb5 values ('2021-05-05 18:19:21',52,52.0,52,52,52,52.0,true ,'52','52')
sql insert into tb5 values ('2021-05-05 18:19:22',53,53.0,53,53,53,53.0,false,'53','53')
sql insert into tb5 values ('2021-05-05 18:19:23',54,54.0,54,54,54,54.0,false,'54','54')
sql insert into tb6 values ('2021-05-05 18:19:24',61,61.0,61,61,61,61.0,true ,'61','61')
sql insert into tb6 values ('2021-05-05 18:19:25',62,62.0,62,62,62,62.0,true ,'62','62')
sql insert into tb6 values ('2021-05-05 18:19:26',63,63.0,63,63,63,63.0,false,'63','63')
sql insert into tb6 values ('2021-05-05 18:19:27',64,64.0,64,64,64,64.0,false,'64','64')
sql insert into tb6 values ('2021-05-05 18:19:28',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)

sql create table stb2 (ts timestamp, u1 int unsigned, u2 bigint unsigned, u3 smallint unsigned, u4 tinyint unsigned, ts2 timestamp) TAGS(t1 int unsigned, t2 bigint unsigned, t3 timestamp, t4 int)

sql create table tb2_1 using stb2 tags(1,1,'2021-05-05 18:38:38',1)
sql create table tb2_2 using stb2 tags(2,2,'2021-05-05 18:58:58',2)

sql insert into tb2_1 values ('2021-05-05 18:19:00',1,2,3,4,'2021-05-05 18:28:01')
sql insert into tb2_1 values ('2021-05-05 18:19:01',5,6,7,8,'2021-05-05 18:28:02')
sql insert into tb2_1 values ('2021-05-05 18:19:02',2,2,3,4,'2021-05-05 18:28:03')
sql insert into tb2_1 values ('2021-05-05 18:19:03',5,6,7,8,'2021-05-05 18:28:04')
sql insert into tb2_1 values ('2021-05-05 18:19:04',3,2,3,4,'2021-05-05 18:28:05')
sql insert into tb2_1 values ('2021-05-05 18:19:05',5,6,7,8,'2021-05-05 18:28:06')
sql insert into tb2_1 values ('2021-05-05 18:19:06',4,2,3,4,'2021-05-05 18:28:07')
sql insert into tb2_1 values ('2021-05-05 18:19:07',5,6,7,8,'2021-05-05 18:28:08')
sql insert into tb2_1 values ('2021-05-05 18:19:08',5,2,3,4,'2021-05-05 18:28:09')
sql insert into tb2_1 values ('2021-05-05 18:19:09',5,6,7,8,'2021-05-05 18:28:10')
sql insert into tb2_1 values ('2021-05-05 18:19:10',6,2,3,4,'2021-05-05 18:28:11')
sql insert into tb2_2 values ('2021-05-05 18:19:11',5,6,7,8,'2021-05-05 18:28:12')
sql insert into tb2_2 values ('2021-05-05 18:19:12',7,2,3,4,'2021-05-05 18:28:13')
sql insert into tb2_2 values ('2021-05-05 18:19:13',5,6,7,8,'2021-05-05 18:28:14')
sql insert into tb2_2 values ('2021-05-05 18:19:14',8,2,3,4,'2021-05-05 18:28:15')
sql insert into tb2_2 values ('2021-05-05 18:19:15',5,6,7,8,'2021-05-05 18:28:16')

sql create table stb3 (ts timestamp, c1 int, c2 float, c3 bigint, c4 smallint, c5 tinyint, c6 double, c7 bool, c8 binary(10), c9 nchar(9)) TAGS(t1 int, t2 binary(10), t3 double)

sql create table tb3_1 using stb3 tags(1,'1',1.0)
sql create table tb3_2 using stb3 tags(2,'2',2.0)

sql insert into tb3_1 values ('2021-01-05 18:19:00',1,1.0,1,1,1,1.0,true ,'1','1')
sql insert into tb3_1 values ('2021-02-05 18:19:01',2,2.0,2,2,2,2.0,true ,'2','2')
sql insert into tb3_1 values ('2021-03-05 18:19:02',3,3.0,3,3,3,3.0,false,'3','3')
sql insert into tb3_1 values ('2021-04-05 18:19:03',4,4.0,4,4,4,4.0,false,'4','4')
sql insert into tb3_1 values ('2021-05-05 18:19:28',5,NULL,5,NULL,5,NULL,true,NULL,'5')
sql insert into tb3_1 values ('2021-06-05 18:19:28',NULL,6.0,NULL,6,NULL,6.0,NULL,'6',NULL)
sql insert into tb3_1 values ('2021-07-05 18:19:28',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)

sql insert into tb3_2 values ('2021-01-06 18:19:00',11,11.0,11,11,11,11.0,true ,'11','11')
sql insert into tb3_2 values ('2021-02-06 18:19:01',12,12.0,12,12,12,12.0,true ,'12','12')
sql insert into tb3_2 values ('2021-03-06 18:19:02',13,13.0,13,13,13,13.0,false,'13','13')
sql insert into tb3_2 values ('2021-04-06 18:19:03',14,14.0,14,14,14,14.0,false,'14','14')
sql insert into tb3_2 values ('2021-05-06 18:19:28',15,NULL,15,NULL,15,NULL,true,NULL,'15')
sql insert into tb3_2 values ('2021-06-06 18:19:28',NULL,16.0,NULL,16,NULL,16.0,NULL,'16',NULL)
sql insert into tb3_2 values ('2021-07-06 18:19:28',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)

sleep 100

sql connect

run general/parser/condition_query.sim

print ================== restart server to commit data into disk
system sh/exec.sh -n dnode1 -s stop -x SIGINT
sleep 100
system sh/exec.sh -n dnode1 -s start
print ================== server restart completed
sql connect
sleep 100

run general/parser/condition_query.sim
File diff suppressed because it is too large
@ -83,10 +83,7 @@ while $i < $tbNum
endw

print ================== all tags have been changed!
sql select tbname from $stb where t3 = 'NULL'
if $rows != 0 then
  return -1
endi
sql_error select tbname from $stb where t3 = 'NULL'

print ================== set tag to NULL
sql create table stb1_tg (ts timestamp, c1 int) tags(t1 int,t2 bigint,t3 double,t4 float,t5 smallint,t6 tinyint)

@ -227,4 +224,4 @@ if $data04 != NULL then
  return -1
endi

system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode1 -s stop -x SIGINT
@ -65,7 +65,14 @@ $tb = $tbPrefix . $i
sql_error select * from $tb where c7

# TBASE-654: invalid filter expression caused server crash
sql_error select count(*) from $tb where c1<10 and c1<>2
sql select count(*) from $tb where c1<10 and c1<>2
if $rows != 1 then
  return -1
endi
if $data00 != 900 then
  return -1
endi


sql select * from $tb where c7 = false
$val = $rowNum / 100
@ -253,30 +260,11 @@ sql insert into tb_where_NULL values ('2019-01-01 09:00:02.000', 2, 'val2')
sql_error select * from tb_where_NULL where c1 = NULL
sql_error select * from tb_where_NULL where c1 <> NULL
sql_error select * from tb_where_NULL where c1 < NULL
sql select * from tb_where_NULL where c1 = "NULL"
if $rows != 0 then
  return -1
endi

sql select * from tb_where_NULL where c1 <> "NULL"
if $rows != 2 then
  return -1
endi
sql select * from tb_where_NULL where c1 <> "nulL"
if $rows != 2 then
  return -1
endi

sql select * from tb_where_NULL where c1 > "NULL"
if $rows != 0 then
  return -1
endi

sql select * from tb_where_NULL where c1 >= "NULL"
if $rows != 0 then
  return -1
endi

sql_error select * from tb_where_NULL where c1 = "NULL"
sql_error select * from tb_where_NULL where c1 <> "NULL"
sql_error select * from tb_where_NULL where c1 <> "nulL"
sql_error select * from tb_where_NULL where c1 > "NULL"
sql_error select * from tb_where_NULL where c1 >= "NULL"
sql select * from tb_where_NULL where c2 = "NULL"
if $rows != 0 then
  return -1
@ -121,6 +121,7 @@ echo "rpcDebugFlag 143" >> $TAOS_CFG
|
|||
echo "tmrDebugFlag 131" >> $TAOS_CFG
|
||||
echo "cDebugFlag 143" >> $TAOS_CFG
|
||||
echo "udebugFlag 143" >> $TAOS_CFG
|
||||
echo "debugFlag 143" >> $TAOS_CFG
|
||||
echo "wal 0" >> $TAOS_CFG
|
||||
echo "asyncLog 0" >> $TAOS_CFG
|
||||
echo "locale en_US.UTF-8" >> $TAOS_CFG
|
||||
|
|