Merge branch '3.0' of https://github.com/taosdata/TDengine into enh/tsdb_optimize
commit e2de2d1286
@@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 41affde
GIT_TAG d11f210
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

@ -23,7 +23,7 @@ By subscribing to a topic, a consumer can obtain the latest data in that topic i
|
|||
|
||||
To implement these features, TDengine indexes its write-ahead log (WAL) file for fast random access and provides configurable methods for replacing and retaining this file. You can define a retention period and size for this file. For information, see the CREATE DATABASE statement. In this way, the WAL file is transformed into a persistent storage engine that remembers the order in which events occur. However, note that configuring an overly long retention period for your WAL files makes database compression inefficient. TDengine then uses the WAL file instead of the time-series database as its storage engine for queries in the form of topics. TDengine reads the data from the WAL file; uses a unified query engine instance to perform filtering, transformations, and other operations; and finally pushes the data to consumers.
|
||||
|
||||
|
||||
Note: By default, data subscription consumes data from the WAL. If the WAL has been deleted, the consumed data will be incomplete. In this case, you can set the parameter `experimental.snapshot.enable` to `true` to obtain all data from the TSDB, but the consumption order of the data can then no longer be guaranteed. Therefore, it is recommended to set a reasonable WAL retention policy based on your consumption scenario so that all data can be subscribed to from the WAL, as illustrated below.
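As a hedged sketch (the database name and retention value below are illustrative; see the CREATE DATABASE statement for the full list of options), WAL retention can be extended when the database is created:

```sql
-- Illustrative only: keep WAL files for 4 days (WAL_RETENTION_PERIOD is in seconds)
-- so that consumers have enough time to subscribe to all data from the WAL.
CREATE DATABASE power WAL_RETENTION_PERIOD 345600;
```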
|
||||
|
||||
## Data Schema and API
|
||||
|
||||
|
@ -285,18 +285,17 @@ You configure the following parameters when creating a consumer:
|
|||
|
||||
| Parameter | Type | Description | Remarks |
|
||||
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
|
||||
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection |
|
||||
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection |
|
||||
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection |
|
||||
| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection |
|
||||
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
|
||||
| `client.id` | string | Client ID | Maximum length: 192. |
|
||||
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
|
||||
| `enable.auto.commit` | boolean | Commit automatically | Specify `true` or `false`. |
|
||||
| `enable.auto.commit` | boolean | Commit automatically; `true`: the user application does not need to commit explicitly; `false`: the user application must handle commits itself | Default value is `true` |
|
||||
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds |
|
||||
| `enable.heartbeat.background` | boolean | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | |
|
||||
| `experimental.snapshot.enable` | boolean | Specify whether to consume messages from TSDB | |
|
||||
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages |
|
||||
| `experimental.snapshot.enable` | boolean | Specify whether to consume data in the TSDB; `true`: data in both the WAL and the TSDB can be consumed; `false`: only data in the WAL can be consumed | Default value: `false` |
|
||||
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | Default value: `false` |
|
||||
|
||||
The method of specifying these parameters depends on the language used:
|
||||
|
||||
|
|
|
@ -179,6 +179,14 @@ TRIM DATABASE db_name;
|
|||
|
||||
The preceding SQL statement deletes data that has expired and orders the remaining data in accordance with the storage configuration.
|
||||
|
||||
## Flush Data
|
||||
|
||||
```sql
|
||||
FLUSH DATABASE db_name;
|
||||
```
|
||||
|
||||
Flushes data from memory to disk. Executing this command before shutting down a node avoids data replay after the restart and speeds up the startup process.
|
||||
|
||||
## Redistribute Vgroup
|
||||
|
||||
```sql
|
||||
|
|
|
@ -75,6 +75,16 @@ These pseudocolumns occur after the aggregation clause.
|
|||
5. LINEAR: Fill by linear interpolation based on the nearest non-NULL values before and after, `FILL(LINEAR)`
6. NEXT: Fill with the next non-NULL value, `FILL(NEXT)`
|
||||
|
||||
In the above filling modes, except for the `NONE` mode, the `fill` clause is ignored if there is no data in the specified time range, i.e. no data is filled and the query result is empty. This behavior is reasonable for the `PREV`, `NEXT`, and `LINEAR` modes, because filling cannot be performed when there is no data at all. For the `NULL` and `VALUE` modes, however, filling can still be performed even when there is no data, and whether to fill depends on the application. To support this forced filling behavior without breaking the behavior of the existing filling modes, TDengine added two new filling modes in version 3.0.3.0.
|
||||
|
||||
1. NULL_F: Fill `NULL` by force
|
||||
2. VALUE_F: Fill `VALUE` by force
|
||||
|
||||
The detailed behaviors of `NULL`, `NULL_F`, `VALUE`, and `VALUE_F` are described below (see also the example after this list):

- When used with `INTERVAL`: `NULL_F` and `VALUE_F` fill by force; `NULL` and `VALUE` do not. The behavior of each filling mode matches its name.
- When used with `INTERVAL` in stream processing: `NULL_F` behaves the same as `NULL` and `VALUE_F` behaves the same as `VALUE`, i.e. neither fills by force. In other words, there is no forced filling in stream processing.
- When used with `INTERP`: `NULL` behaves the same as `NULL_F` and `VALUE` behaves the same as `VALUE_F`, i.e. both fill by force. In other words, filling is always forced with `INTERP`.
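As a hedged illustration (the `meters` table, its `current` column, and the time range are hypothetical), the difference between forced and non-forced filling only shows when the query range contains no data at all:

```sql
-- With FILL(NULL), windows are filled only if the query range contains at least some data;
-- with FILL(NULL_F), every window in the range is emitted and filled with NULL even if the range is empty.
SELECT _wstart, AVG(current)
FROM meters
WHERE ts >= '2023-01-01 00:00:00' AND ts < '2023-01-01 01:00:00'
INTERVAL(10m)
FILL(NULL_F);
```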
|
||||
|
||||
:::info
|
||||
|
||||
1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
|
||||
|
|
|
@ -308,9 +308,11 @@ Query OK, 24 row(s) in set (0.002444s)
|
|||
</code></pre>
|
||||
</details>
|
||||
|
||||
The above shows the block distribution percentage according to the number of rows in each block. From the above example we can derive the following information:
- `_block_dist: 3483 ||||||||||||||||| 1 (20.00%)` means there is one block whose row count is between 3,483 and 3,681.
- `_block_dist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)` means there are 4 blocks whose row counts are between 3,881 and 4,096.
- The number of blocks whose row counts fall into any other range is zero.

Note that only information about the data blocks in the data files is displayed here; information about the data in the stt files is not displayed.
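As a hedged reminder of how this output is obtained (the table name `d0` is illustrative), the distribution shown above is produced by a statement of the form:

```sql
-- Illustrative: show the data-block distribution of a hypothetical table d0
SHOW TABLE DISTRIBUTED d0;
```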
|
||||
|
||||
## SHOW TAGS
|
||||
|
||||
|
|
|
@ -111,8 +111,20 @@ taos tools is uninstalled successfully!
|
|||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Windows uninstall" value="windows">
|
||||
Run C:\TDengine\unins000.exe to uninstall TDengine on a Windows system.
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Mac uninstall" value="mac">
|
||||
|
||||
TDengine can be uninstalled as below:
|
||||
|
||||
```
|
||||
$ rmtaos
|
||||
TDengine is removed successfully!
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
|
@ -150,9 +162,9 @@ There are two aspects in upgrade operation: upgrade installation package and upg
|
|||
|
||||
To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version.
|
||||
|
||||
Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections, only if the first 3 sections match can the old version be upgraded to the new version. The steps of upgrading a running server are as below:
|
||||
Upgrading a running server is much more complex. First, check the version numbers of the old and new versions. A TDengine version number consists of 4 sections; the old version can be upgraded to the new version only if the first 2 sections match. The steps for upgrading a running server are as follows:
|
||||
- Stop inserting data
|
||||
- Make sure all data is persisted to disk
|
||||
- Make sure all data is persisted to disk by running the `FLUSH DATABASE` command (see the sketch after this list)
|
||||
- Stop the cluster of TDengine
|
||||
- Uninstall old version and install new version
|
||||
- Start the cluster of TDengine
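A hedged sketch of persisting data before stopping the cluster (the database names are illustrative; run the command once for each database):

```sql
-- Write all in-memory data to disk so that no WAL replay is needed after the upgrade.
FLUSH DATABASE db1;
FLUSH DATABASE db2;
```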
|
||||
|
|
|
@ -725,7 +725,7 @@ consumer.close()
|
|||
|
||||
For more information, see [Data Subscription](../../../develop/tmq).
|
||||
|
||||
### Usage examples
|
||||
#### Full Sample Code
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
|
|
@ -403,11 +403,11 @@ See [General Configuration Parameters](#General Configuration Parameters) for de
|
|||
|
||||
#### Configuration parameters for executing the specified query statement
|
||||
|
||||
The configuration parameters for querying the sub-tables or the normal tables are set in `specified_table_query`.
|
||||
The configuration parameters for querying a specified table (which can be a supertable, a subtable, or a normal table) are set in `specified_table_query`.
|
||||
|
||||
- **query_interval** : The query interval in seconds, the default value is 0.
|
||||
|
||||
- **threads**: The number of threads to execute the query SQL, the default value is 1.
|
||||
- **threads/concurrent**: The number of threads to execute the query SQL, the default value is 1.
|
||||
|
||||
- **sqls**.
|
||||
- **sql**: the SQL command to be executed.
|
||||
|
@ -434,9 +434,9 @@ The configuration parameters of the super table query are set in `super_table_qu
|
|||
|
||||
#### Configuration parameters for executing the specified subscription statement
|
||||
|
||||
The configuration parameters for subscribing to a sub-table or a generic table are set in `specified_table_query`.
|
||||
The configuration parameters for subscribing to a specified table (which can be a supertable, a subtable, or a generic table) are set in `specified_table_query`.
|
||||
|
||||
- **threads**: The number of threads to execute SQL, default is 1.
|
||||
- **threads/concurrent**: The number of threads to execute SQL, default is 1.
|
||||
|
||||
- **interval**: The time interval to execute the subscription, in seconds, default is 0.
|
||||
|
||||
|
|
|
@ -61,12 +61,14 @@ And many more parameters.
|
|||
- -c CONFIGDIR: Specify the directory where configuration file exists. The default is `/etc/taos`, and the default name of the configuration file in this directory is `taos.cfg`
|
||||
- -C: Print the configuration parameters of `taos.cfg` in the default directory or specified by -c
|
||||
- -d DATABASE: Specify the database to use when connecting to the server
|
||||
- -E dsn: Connect to TDengine Cloud or a server that provides a WebSocket connection
|
||||
- -f FILE: Execute the SQL script file in non-interactive mode. Note that each SQL statement in the script file must occupy a single line.
|
||||
- -k: Test the operational status of the server. 0: unavailable; 1: network ok; 2: service ok; 3: service degraded; 4: exiting
|
||||
- -l PKTLEN: Test package size to be used for network testing
|
||||
- -n NETROLE: test scope for network connection test, default is `client`. The value can be `client` or `server`.
|
||||
- -N PKTNUM: Number of packets used for network testing
|
||||
- -r: Output timestamps as unsigned 64-bit integers (uint64_t in C)
|
||||
- -R: Use RESTful mode when connecting
|
||||
- -s COMMAND: execute SQL commands in non-interactive mode
|
||||
- -t: Test the boot status of the server. The statuses of -k apply.
|
||||
- -w DISPLAYWIDTH: Specify the column display width of the client.
|
||||
|
|
|
@ -19,17 +19,6 @@ import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
|
|||
<DocCardList items={useCurrentSidebarCategory().items}/>
|
||||
```
|
||||
|
||||
## Learn the TDengine Knowledge Map

The TDengine Knowledge Map covers the various TDengine knowledge points and reveals the calling relationships and data flows among the concepts. Studying the TDengine Knowledge Map helps you quickly master the TDengine knowledge system.
|
||||
|
||||
<figure>
|
||||
<center>
|
||||
<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
|
||||
<figcaption>Figure 1. TDengine Knowledge Map</figcaption>
|
||||
</center>
|
||||
</figure>
|
||||
|
||||
## Join the Official TDengine Community

Scan the QR codes below with WeChat to learn about the latest TDengine technologies and to discuss IoT big data applications, TDengine usage questions, tips, and more with the community.
|
||||
|
@ -41,7 +30,7 @@ TDengine 知识地图中涵盖了 TDengine 的各种知识点,揭示了各概
|
|||
<td style={{padding:'1em 3em',border:0}}><img src={official_account} alt="TDengine 微信公众号" width="200" /></td>
|
||||
</tr>
|
||||
<tr align="center">
|
||||
<td style={{padding:'1em 3em',border:0}}>Join the "IoT Big Data Technology Group"<br/>to exchange technical ideas</td>
<td style={{padding:'1em 3em',border:0}}>Join the TDengine WeChat group<br/>to learn about the latest IoT technologies</td>
<td style={{padding:'1em 3em',border:0}}>Follow the TDengine video channel<br/>to watch live technical streams and tutorial videos</td>
<td style={{padding:'1em 3em',border:0}}>Follow the TDengine official account<br/>to read technical articles and industry case studies</td>
|
||||
</tr>
|
||||
|
|
|
@ -25,6 +25,7 @@ import CDemo from "./_sub_c.mdx";
|
|||
|
||||
This document does not cover the basics of message queues themselves; please look them up separately if needed.

Note: By default, data is consumed from the WAL. If the WAL has been deleted, the consumed data will be incomplete. In this case you can set the parameter experimental.snapshot.enable to true to obtain all data from the TSDB, but the consumption order of the data can then no longer be guaranteed. Therefore, it is recommended to set a reasonable WAL retention policy based on your consumption scenario so that all data can be subscribed to from the WAL.

## Main Data Structures and APIs

The TMQ subscription APIs and data structures for each language are as follows:
|
||||
|
@ -283,18 +284,17 @@ CREATE TOPIC topic_name AS DATABASE db_name;
|
|||
|
||||
| Parameter | Type | Description | Remarks |
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.port` | integer | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing a native connection |
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing a native connection |
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing a native connection |
| `td.connect.port` | integer | Used in establishing a connection; same as `taos_connect` | Only valid for establishing a native connection |
| `group.id` | string | Consumer group ID; consumers in the same group share the consumption progress | **Required**. Maximum length: 192. |
| `client.id` | string | Client ID | Maximum length: 192. |
| `auto.offset.reset` | enum | Initial offset of the consumer group subscription | <br />`earliest`: default; subscribe from the beginning; <br/>`latest`: subscribe from the latest data only; <br/>`none`: cannot subscribe without a committed offset |
| `enable.auto.commit` | boolean | Whether to enable automatic offset commits | Valid values: `true`, `false`. |
| `auto.commit.interval.ms` | integer | Interval in milliseconds for automatically committing consumed offsets | Default: 5000 ms |
| `enable.heartbeat.background` | boolean | Enable background heartbeat; when enabled, the consumer does not go offline even if it does not poll for a long time | Enabled by default |
| `experimental.snapshot.enable` | boolean | Whether to allow consuming data from the TSDB | Experimental feature; disabled by default |
| `msg.with.table.name` | boolean | Whether to allow parsing the table name from messages; not applicable to column subscriptions (for column subscriptions, tbname can be written as a column in the subquery) | |
| `enable.auto.commit` | boolean | Whether to enable automatic offset commits; true: commit automatically, the client application does not need to commit; false: the client application must commit by itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatically committing consumed offsets, in milliseconds | Default value is 5000 |
| `experimental.snapshot.enable` | boolean | Whether to allow consuming data from the TSDB. When disabled, only data still retained in the WAL according to the WAL retention policy can be consumed; when enabled, data that has been removed from the WAL but persisted to the TSDB can be consumed as well | Experimental feature; disabled by default |
| `msg.with.table.name` | boolean | Whether to allow parsing the table name from messages; not applicable to column subscriptions (for column subscriptions, tbname can be written as a column in the subquery) | Disabled by default |

The method of specifying these parameters depends on the language used:
|
||||
|
||||
|
|
|
@ -728,7 +728,7 @@ consumer.close()
|
|||
|
||||
For more information, see [Data Subscription](../../../develop/tmq).

### Usage examples
#### Full Sample Code
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="原生连接">
|
||||
|
|
|
@ -179,6 +179,14 @@ TRIM DATABASE db_name;
|
|||
|
||||
Deletes expired data and reorganizes the remaining data according to the multi-tier storage configuration.

## Flush Memory Data to Disk

```sql
FLUSH DATABASE db_name;
```

Flushes in-memory data to disk. Executing this command before shutting down a node avoids data replay after the restart and speeds up the startup process.

## Adjust the Distribution of VNODEs in a VGROUP
|
||||
|
||||
```sql
|
||||
|
|
|
@ -69,6 +69,16 @@ FILL 语句指定某一窗口区间数据缺失的情况下的填充模式。填
|
|||
5. LINEAR: Fill by linear interpolation based on the nearest non-NULL values before and after. For example: FILL(LINEAR).
6. NEXT: Fill with the next non-NULL value. For example: FILL(NEXT).

Among the above filling modes, apart from the NONE mode, which does not fill by default, the other modes ignore the FILL clause if there is no data in the entire query time range, i.e. no filled data is produced and the query result is empty. This behavior is reasonable for some modes (PREV, NEXT, LINEAR), because no data means no fill values can be generated. For other modes (NULL, VALUE), fill values could in principle be generated, and whether to output them depends on the application. To meet the needs of applications that require forcibly filling data or NULL, while keeping the existing filling modes compatible, two new filling modes were added starting from version 3.0.3.0:

7. NULL_F: Force-fill with NULL
8. VALUE_F: Force-fill with VALUE

The differences among the NULL, NULL_F, VALUE, and VALUE_F filling modes in different scenarios are as follows:
- INTERVAL clause: NULL_F and VALUE_F are forced filling modes; NULL and VALUE are non-forced. In this scenario the semantics of each mode match its name.
- INTERVAL clause in stream processing: NULL_F behaves the same as NULL and VALUE_F behaves the same as VALUE, i.e. both are non-forced. That is, there is no forced filling for INTERVAL in stream processing.
- INTERP clause: NULL behaves the same as NULL_F and VALUE behaves the same as VALUE_F, i.e. both are forced. That is, there is no non-forced filling for INTERP.
|
||||
|
||||
:::info
|
||||
|
||||
1. Using the FILL clause may generate a huge volume of filled output, so be sure to specify the time range of the query. For each query, the system returns no more than 10 million interpolated results.
|
||||
|
|
|
@ -253,8 +253,9 @@ Query OK, 24 row(s) in set (0.002444s)
|
|||
</code></pre>
|
||||
</details>
|
||||
|
||||
The above is a distribution chart of the number of rows contained in each block; the numbers 0100, 0299, 0498, ... indicate the number of rows contained in a block. It means that of this table's 5 blocks, 1 block (20% of all blocks) contains between 3,483 and 3,681 rows, 4 blocks (80% of all blocks) contain between 3,881 and 4,096 rows (the maximum), and the number of blocks in the other ranges is 0.

Note that only information about the data blocks in the data files is displayed here; information about the data in the stt files is not displayed.
|
||||
|
||||
## SHOW TAGS
|
||||
|
||||
|
|
|
@ -400,11 +400,11 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
|
|||
|
||||
#### Configuration parameters for executing the specified query statement

The configuration parameters for querying subtables or normal tables are set in `specified_table_query`.
The configuration parameters for querying a specified table (which can be a supertable, a subtable, or a normal table) are set in `specified_table_query`.

- **query_interval**: The query interval in seconds; the default value is 0.

- **threads**: The number of threads executing the query SQL; the default value is 1.
- **threads/concurrent**: The number of threads executing the query SQL; the default value is 1.

- **sqls**:
  - **sql**: The SQL command to execute; required.
|
||||
|
@ -431,7 +431,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
|
|||
|
||||
#### Configuration parameters for executing the specified subscription statement

The configuration parameters for subscribing to subtables or normal tables are set in `specified_table_query`.
The configuration parameters for subscribing to a specified table (which can be a supertable, a subtable, or a normal table) are set in `specified_table_query`.

- **threads/concurrent**: The number of threads executing the SQL; the default value is 1.
|
||||
|
||||
|
|
|
@ -61,12 +61,14 @@ taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
|
|||
- -c CONFIGDIR: Specify the configuration file directory. On Linux the default is `/etc/taos`, and the default configuration file name in that directory is `taos.cfg`
- -C: Print the configuration parameters of `taos.cfg` in the directory specified by -c
- -d DATABASE: Specify the database to use when connecting to the server
- -E dsn: Use a WebSocket DSN to connect to TDengine Cloud or a server that provides a WebSocket connection
- -f FILE: Execute the SQL script file in non-interactive mode. Each SQL statement in the file must occupy a single line
- -k: Test the running status of the server. 0: unavailable, 1: network ok, 2: service ok, 3: service degraded, 4: exiting
- -l PKTLEN: Test packet size used for network testing
- -n NETROLE: Test scope of the network connection test; the default is `client`, and the value can be `client` or `server`
- -N PKTNUM: Number of packets used for network testing
- -r: Output timestamps as unsigned 64-bit integers (uint64_t in C)
- -R: Connect to the server in RESTful mode
- -s COMMAND: Execute SQL commands in non-interactive mode
- -t: Test the startup status of the server; the statuses are the same as for -k
- -w DISPLAYWIDTH: Client column display width
|
||||
|
|
|
@ -141,8 +141,20 @@ taos tools is uninstalled successfully!
|
|||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Windows 卸载" value="windows">
|
||||
在 C:\TDengine 目录下,通过运行 unins000.exe 卸载程序来卸载 TDengine。
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Mac 卸载" value="mac">
|
||||
|
||||
卸载 TDengine 命令如下:
|
||||
|
||||
```
|
||||
$ rmtaos
|
||||
TDengine is removed successfully!
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
|
|
|
@ -4,5 +4,5 @@ if [ "$lua_header_installed" = "0" ]; then
|
|||
sudo apt install -y liblua5.3-dev
|
||||
fi
|
||||
|
||||
gcc -std=c99 lua_connector.c -fPIC -shared -o luaconnector.so -Wall -ltaos -I/usr/include/lua5.3 -I../../include/client
|
||||
gcc -g -std=c99 lua_connector.c -fPIC -shared -o luaconnector.so -Wall -ltaos -I/usr/include/lua5.3 -I../../include/client
|
||||
|
||||
|
|
|
@ -67,7 +67,7 @@ static int l_connect(lua_State *L){
|
|||
|
||||
taos = taos_connect(host, user,password,database, port);
|
||||
if (taos == NULL) {
|
||||
printf("failed to connect server, reason:%s\n", taos_errstr(taos));
|
||||
//printf("failed to connect server, reason:%s\n", taos_errstr(taos));
|
||||
|
||||
lua_pushinteger(L, -1);
|
||||
lua_setfield(L, table_index, "code");
|
||||
|
@ -79,7 +79,7 @@ static int l_connect(lua_State *L){
|
|||
// printf("success to connect server\n");
|
||||
lua_pushinteger(L, 0);
|
||||
lua_setfield(L, table_index, "code");
|
||||
lua_pushstring(L, taos_errstr(taos));
|
||||
lua_pushstring(L, "success");
|
||||
lua_setfield(L, table_index, "error");
|
||||
lua_pushlightuserdata(L,taos);
|
||||
lua_setfield(L, table_index, "conn");
|
||||
|
|
|
@ -67,8 +67,7 @@ static int l_connect(lua_State *L){
|
|||
|
||||
taos = taos_connect(host, user,password,database, port);
|
||||
if (taos == NULL) {
|
||||
printf("failed to connect server, reason:%s\n", taos_errstr(taos));
|
||||
|
||||
// printf("failed to connect server, reason:%s\n", taos_errstr(NULL));
|
||||
lua_pushinteger(L, -1);
|
||||
lua_setfield(L, table_index, "code");
|
||||
lua_pushstring(L, taos_errstr(taos));
|
||||
|
@ -79,7 +78,7 @@ static int l_connect(lua_State *L){
|
|||
// printf("success to connect server\n");
|
||||
lua_pushinteger(L, 0);
|
||||
lua_setfield(L, table_index, "code");
|
||||
lua_pushstring(L, taos_errstr(taos));
|
||||
lua_pushstring(L, "success");
|
||||
lua_setfield(L, table_index, "error");
|
||||
lua_pushlightuserdata(L,taos);
|
||||
lua_setfield(L, table_index, "conn");
|
||||
|
|
|
@ -21,7 +21,7 @@ import json
|
|||
import random
|
||||
import time
|
||||
import datetime
|
||||
from multiprocessing import Manager, Pool, Lock
|
||||
from multiprocessing import Manager, Pool
|
||||
from multipledispatch import dispatch
|
||||
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED
|
||||
|
||||
|
@ -102,12 +102,7 @@ def restful_execute(host: str, port: int, user: str, password: str, cmd: str):
|
|||
v_print("resp status: %d", resp.status_code)
|
||||
|
||||
if debug:
|
||||
v_print(
|
||||
"resp text: %s",
|
||||
json.dumps(
|
||||
resp.json(),
|
||||
sort_keys=True,
|
||||
indent=2))
|
||||
v_print("resp text: %s", json.dumps(resp.json(), sort_keys=True, indent=2))
|
||||
else:
|
||||
print("resp: %s" % json.dumps(resp.json()))
|
||||
|
||||
|
@ -115,34 +110,29 @@ def restful_execute(host: str, port: int, user: str, password: str, cmd: str):
|
|||
def query_func(process: int, thread: int, cmd: str):
|
||||
v_print("%d process %d thread cmd: %s", process, thread, cmd)
|
||||
|
||||
if oneMoreHost != "NotSupported" and random.randint(
|
||||
0, 1) == 1:
|
||||
if oneMoreHost != "NotSupported" and random.randint(0, 1) == 1:
|
||||
v_print("%s", "Send to second host")
|
||||
if native:
|
||||
cursor2.execute(cmd)
|
||||
cursor.execute(cmd)
|
||||
else:
|
||||
restful_execute(
|
||||
oneMoreHost, port, user, password, cmd)
|
||||
restful_execute(oneMoreHost, port, user, password, cmd)
|
||||
else:
|
||||
v_print("%s%s%s", "Send ", cmd, " to the host")
|
||||
if native:
|
||||
pass
|
||||
# cursor.execute(cmd)
|
||||
# cursor.execute(cmd)
|
||||
else:
|
||||
restful_execute(
|
||||
host, port, user, password, cmd)
|
||||
restful_execute(host, port, user, password, cmd)
|
||||
|
||||
|
||||
def query_data_process(cmd: str):
|
||||
# establish connection if native
|
||||
if native:
|
||||
v_print("host:%s, user:%s passwd:%s configDir:%s ", host, user, password, configDir)
|
||||
v_print("host:%s, user:%s passwd:xxxxxx configDir:%s ", host, user, configDir)
|
||||
try:
|
||||
conn = taos.connect(
|
||||
host=host,
|
||||
user=user,
|
||||
password=password,
|
||||
config=configDir)
|
||||
host=host, user=user, password=password, config=configDir
|
||||
)
|
||||
v_print("conn: %s", str(conn.__class__))
|
||||
except Exception as e:
|
||||
print("Error: %s" % e.args[0])
|
||||
|
@ -160,6 +150,7 @@ def query_data_process(cmd: str):
|
|||
try:
|
||||
cursor.execute(cmd)
|
||||
cols = cursor.description
|
||||
print(cols)
|
||||
data = cursor.fetchall()
|
||||
|
||||
for col in data:
|
||||
|
@ -170,12 +161,7 @@ def query_data_process(cmd: str):
|
|||
sys.exit(1)
|
||||
|
||||
else:
|
||||
restful_execute(
|
||||
host,
|
||||
port,
|
||||
user,
|
||||
password,
|
||||
cmd)
|
||||
restful_execute(host, port, user, password, cmd)
|
||||
|
||||
if native:
|
||||
cursor.close()
|
||||
|
@ -186,21 +172,21 @@ def create_stb():
|
|||
for i in range(0, numOfStb):
|
||||
if native:
|
||||
cursor.execute(
|
||||
"CREATE TABLE IF NOT EXISTS %s%d (ts timestamp, value float) TAGS (uuid binary(50))" %
|
||||
(stbName, i))
|
||||
"CREATE TABLE IF NOT EXISTS %s%d (ts timestamp, value float) TAGS (uuid binary(50))"
|
||||
% (stbName, i)
|
||||
)
|
||||
else:
|
||||
restful_execute(
|
||||
host,
|
||||
port,
|
||||
user,
|
||||
password,
|
||||
"CREATE TABLE IF NOT EXISTS %s%d (ts timestamp, value float) TAGS (uuid binary(50))" %
|
||||
(stbName, i)
|
||||
"CREATE TABLE IF NOT EXISTS %s%d (ts timestamp, value float) TAGS (uuid binary(50))"
|
||||
% (stbName, i),
|
||||
)
|
||||
|
||||
|
||||
def use_database():
|
||||
|
||||
if native:
|
||||
cursor.execute("USE %s" % current_db)
|
||||
else:
|
||||
|
@ -212,15 +198,15 @@ def create_databases():
|
|||
v_print("will create database db%d", int(i))
|
||||
|
||||
if native:
|
||||
cursor.execute(
|
||||
"CREATE DATABASE IF NOT EXISTS %s%d" % (dbName, i))
|
||||
cursor.execute("CREATE DATABASE IF NOT EXISTS %s%d" % (dbName, i))
|
||||
else:
|
||||
restful_execute(
|
||||
host,
|
||||
port,
|
||||
user,
|
||||
password,
|
||||
"CREATE DATABASE IF NOT EXISTS %s%d" % (dbName, i))
|
||||
"CREATE DATABASE IF NOT EXISTS %s%d" % (dbName, i),
|
||||
)
|
||||
|
||||
|
||||
def drop_tables():
|
||||
|
@ -243,17 +229,11 @@ def drop_databases():
|
|||
v_print("will drop database db%d", int(i))
|
||||
|
||||
if native:
|
||||
cursor.execute(
|
||||
"DROP DATABASE IF EXISTS %s%d" %
|
||||
(dbName, i))
|
||||
cursor.execute("DROP DATABASE IF EXISTS %s%d" % (dbName, i))
|
||||
else:
|
||||
restful_execute(
|
||||
host,
|
||||
port,
|
||||
user,
|
||||
password,
|
||||
"DROP DATABASE IF EXISTS %s%d" %
|
||||
(dbName, i))
|
||||
host, port, user, password, "DROP DATABASE IF EXISTS %s%d" % (dbName, i)
|
||||
)
|
||||
|
||||
|
||||
def insert_func(process: int, thread: int):
|
||||
|
@ -266,13 +246,11 @@ def insert_func(process: int, thread: int):
|
|||
|
||||
# establish connection if native
|
||||
if native:
|
||||
v_print("host:%s, user:%s passwd:%s configDir:%s ", host, user, password, configDir)
|
||||
v_print("host:%s, user:%s passwd:xxxxxx configDir:%s ", host, user, configDir)
|
||||
try:
|
||||
conn = taos.connect(
|
||||
host=host,
|
||||
user=user,
|
||||
password=password,
|
||||
config=configDir)
|
||||
host=host, user=user, password=password, config=configDir
|
||||
)
|
||||
v_print("conn: %s", str(conn.__class__))
|
||||
except Exception as e:
|
||||
print("Error: %s" % e.args[0])
|
||||
|
@ -291,26 +269,29 @@ def insert_func(process: int, thread: int):
|
|||
row = 0
|
||||
while row < numOfRec:
|
||||
v_print("row: %d", row)
|
||||
sqlCmd = ['INSERT INTO ']
|
||||
sqlCmd = ["INSERT INTO "]
|
||||
try:
|
||||
sqlCmd.append(
|
||||
"%s.%s%d " % (current_db, tbName, thread))
|
||||
sqlCmd.append("%s.%s%d " % (current_db, tbName, thread))
|
||||
|
||||
if (numOfStb > 0 and autosubtable):
|
||||
sqlCmd.append("USING %s.%s%d TAGS('%s') " %
|
||||
(current_db, stbName, numOfStb - 1, uuid))
|
||||
if numOfStb > 0 and autosubtable:
|
||||
sqlCmd.append(
|
||||
"USING %s.%s%d TAGS('%s') "
|
||||
% (current_db, stbName, numOfStb - 1, uuid)
|
||||
)
|
||||
|
||||
start_time = datetime.datetime(
|
||||
2021, 1, 25) + datetime.timedelta(seconds=row)
|
||||
start_time = datetime.datetime(2021, 1, 25) + datetime.timedelta(
|
||||
seconds=row
|
||||
)
|
||||
|
||||
sqlCmd.append("VALUES ")
|
||||
for batchIter in range(0, batch):
|
||||
sqlCmd.append("('%s', %f) " %
|
||||
(
|
||||
start_time +
|
||||
datetime.timedelta(
|
||||
milliseconds=batchIter),
|
||||
random.random()))
|
||||
sqlCmd.append(
|
||||
"('%s', %f) "
|
||||
% (
|
||||
start_time + datetime.timedelta(milliseconds=batchIter),
|
||||
random.random(),
|
||||
)
|
||||
)
|
||||
row = row + 1
|
||||
if row >= numOfRec:
|
||||
v_print("BREAK, row: %d numOfRec:%d", row, numOfRec)
|
||||
|
@ -319,23 +300,21 @@ def insert_func(process: int, thread: int):
|
|||
except Exception as e:
|
||||
print("Error: %s" % e.args[0])
|
||||
|
||||
cmd = ' '.join(sqlCmd)
|
||||
cmd = " ".join(sqlCmd)
|
||||
|
||||
if measure:
|
||||
exec_start_time = datetime.datetime.now()
|
||||
|
||||
if native:
|
||||
affectedRows = cursor.execute(cmd)
|
||||
print("affectedRows: %d" % affectedRows)
|
||||
else:
|
||||
restful_execute(
|
||||
host, port, user, password, cmd)
|
||||
restful_execute(host, port, user, password, cmd)
|
||||
|
||||
if measure:
|
||||
exec_end_time = datetime.datetime.now()
|
||||
exec_delta = exec_end_time - exec_start_time
|
||||
v_print(
|
||||
"consume %d microseconds",
|
||||
exec_delta.microseconds)
|
||||
v_print("consume %d microseconds", exec_delta.microseconds)
|
||||
|
||||
v_print("cmd: %s, length:%d", cmd, len(cmd))
|
||||
|
||||
|
@ -355,51 +334,39 @@ def create_tb():
|
|||
if native:
|
||||
cursor.execute("USE %s%d" % (dbName, i))
|
||||
else:
|
||||
restful_execute(
|
||||
host, port, user, password, "USE %s%d" %
|
||||
(dbName, i))
|
||||
restful_execute(host, port, user, password, "USE %s%d" % (dbName, i))
|
||||
|
||||
for j in range(0, numOfTb):
|
||||
if native:
|
||||
cursor.execute(
|
||||
"CREATE TABLE %s%d (ts timestamp, value float)" %
|
||||
(tbName, j))
|
||||
"CREATE TABLE %s%d (ts timestamp, value float)" % (tbName, j)
|
||||
)
|
||||
else:
|
||||
restful_execute(
|
||||
host,
|
||||
port,
|
||||
user,
|
||||
password,
|
||||
"CREATE TABLE %s%d (ts timestamp, value float)" %
|
||||
(tbName, j))
|
||||
"CREATE TABLE %s%d (ts timestamp, value float)" % (tbName, j),
|
||||
)
|
||||
|
||||
|
||||
def insert_data_process(lock, i: int, begin: int, end: int):
|
||||
lock.acquire()
|
||||
tasks = end - begin
|
||||
v_print("insert_data_process:%d table from %d to %d, tasks %d", i, begin, end, tasks)
|
||||
v_print(
|
||||
"insert_data_process:%d table from %d to %d, tasks %d", i, begin, end, tasks
|
||||
)
|
||||
|
||||
if (threads < (end - begin)):
|
||||
if threads < (end - begin):
|
||||
for j in range(begin, end, threads):
|
||||
with ThreadPoolExecutor(max_workers=threads) as executor:
|
||||
k = end if ((j + threads) > end) else (j + threads)
|
||||
workers = [
|
||||
executor.submit(
|
||||
insert_func,
|
||||
i,
|
||||
n) for n in range(
|
||||
j,
|
||||
k)]
|
||||
workers = [executor.submit(insert_func, i, n) for n in range(j, k)]
|
||||
wait(workers, return_when=ALL_COMPLETED)
|
||||
else:
|
||||
with ThreadPoolExecutor(max_workers=threads) as executor:
|
||||
workers = [
|
||||
executor.submit(
|
||||
insert_func,
|
||||
i,
|
||||
j) for j in range(
|
||||
begin,
|
||||
end)]
|
||||
workers = [executor.submit(insert_func, i, j) for j in range(begin, end)]
|
||||
wait(workers, return_when=ALL_COMPLETED)
|
||||
|
||||
lock.release()
|
||||
|
@ -409,22 +376,18 @@ def query_db(i):
|
|||
if native:
|
||||
cursor.execute("USE %s%d" % (dbName, i))
|
||||
else:
|
||||
restful_execute(
|
||||
host, port, user, password, "USE %s%d" %
|
||||
(dbName, i))
|
||||
restful_execute(host, port, user, password, "USE %s%d" % (dbName, i))
|
||||
|
||||
for j in range(0, numOfTb):
|
||||
if native:
|
||||
cursor.execute(
|
||||
"SELECT COUNT(*) FROM %s%d" % (tbName, j))
|
||||
cursor.execute("SELECT COUNT(*) FROM %s%d" % (tbName, j))
|
||||
else:
|
||||
restful_execute(
|
||||
host, port, user, password, "SELECT COUNT(*) FROM %s%d" %
|
||||
(tbName, j))
|
||||
host, port, user, password, "SELECT COUNT(*) FROM %s%d" % (tbName, j)
|
||||
)
|
||||
|
||||
|
||||
def printConfig():
|
||||
|
||||
print("###################################################################")
|
||||
print("# Use native interface: %s" % native)
|
||||
print("# Server IP: %s" % host)
|
||||
|
@ -435,7 +398,6 @@ def printConfig():
|
|||
|
||||
print("# Configuration Dir: %s" % configDir)
|
||||
print("# User: %s" % user)
|
||||
print("# Password: %s" % password)
|
||||
print("# Number of Columns per record: %s" % colsPerRecord)
|
||||
print("# Number of Threads: %s" % threads)
|
||||
print("# Number of Processes: %s" % processes)
|
||||
|
@ -455,13 +417,14 @@ def printConfig():
|
|||
print("# Query command: %s" % queryCmd)
|
||||
print("# Insert Only: %s" % insertOnly)
|
||||
print("# Verbose output %s" % verbose)
|
||||
print("# Test time: %s" %
|
||||
datetime.datetime.now().strftime("%d/%m/%Y %H:%M:%S"))
|
||||
print(
|
||||
"# Test time: %s"
|
||||
% datetime.datetime.now().strftime("%d/%m/%Y %H:%M:%S")
|
||||
)
|
||||
print("###################################################################")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
native = False
|
||||
verbose = False
|
||||
debug = False
|
||||
|
@ -497,73 +460,131 @@ if __name__ == "__main__":
|
|||
skipPrompt = False
|
||||
|
||||
try:
|
||||
opts, args = getopt.gnu_getopt(sys.argv[1:],
|
||||
'Nh:p:u:P:d:a:m:Ms:Q:T:C:r:l:t:n:c:xOR:D:vgyH',
|
||||
[
|
||||
'native', 'host', 'port', 'user', 'password', 'dbname', 'replica', 'tbname',
|
||||
'stable', 'stbname', 'query', 'threads', 'processes',
|
||||
'recPerReq', 'colsPerRecord', 'numOfTb', 'numOfRec', 'config',
|
||||
'insertOnly', 'outOfOrder', 'rateOOOO', 'deleteMethod',
|
||||
'verbose', 'debug', 'skipPrompt', 'help'
|
||||
])
|
||||
opts, args = getopt.gnu_getopt(
|
||||
sys.argv[1:],
|
||||
"Nh:p:u:P:d:a:m:Ms:Q:T:C:r:l:t:n:c:xOR:D:vgyH",
|
||||
[
|
||||
"native",
|
||||
"host",
|
||||
"port",
|
||||
"user",
|
||||
"password",
|
||||
"dbname",
|
||||
"replica",
|
||||
"tbname",
|
||||
"stable",
|
||||
"stbname",
|
||||
"query",
|
||||
"threads",
|
||||
"processes",
|
||||
"recPerReq",
|
||||
"colsPerRecord",
|
||||
"numOfTb",
|
||||
"numOfRec",
|
||||
"config",
|
||||
"insertOnly",
|
||||
"outOfOrder",
|
||||
"rateOOOO",
|
||||
"deleteMethod",
|
||||
"verbose",
|
||||
"debug",
|
||||
"skipPrompt",
|
||||
"help",
|
||||
],
|
||||
)
|
||||
except getopt.GetoptError as err:
|
||||
print('ERROR:', err)
|
||||
print('Try `taosdemo.py --help` for more options.')
|
||||
print("ERROR:", err)
|
||||
print("Try `taosdemo.py --help` for more options.")
|
||||
sys.exit(1)
|
||||
|
||||
if bool(opts) is False:
|
||||
print('Try `taosdemo.py --help` for more options.')
|
||||
print("Try `taosdemo.py --help` for more options.")
|
||||
sys.exit(1)
|
||||
|
||||
for key, value in opts:
|
||||
if key in ['-H', '--help']:
|
||||
print('')
|
||||
print(
|
||||
'taosdemo.py for TDengine')
|
||||
print('')
|
||||
print('Author: Shuduo Sang <sangshuduo@gmail.com>')
|
||||
print('')
|
||||
if key in ["-H", "--help"]:
|
||||
print("")
|
||||
print("taosdemo.py for TDengine")
|
||||
print("")
|
||||
print("Author: Shuduo Sang <sangshuduo@gmail.com>")
|
||||
print("")
|
||||
|
||||
print('\t-H, --help Show usage.')
|
||||
print('')
|
||||
print("\t-H, --help Show usage.")
|
||||
print("")
|
||||
|
||||
print('\t-N, --native flag, Use native interface if set. Default is using RESTful interface.')
|
||||
print('\t-h, --host <hostname> host, The host to connect to TDengine. Default is localhost.')
|
||||
print('\t-p, --port <port> port, The TCP/IP port number to use for the connection. Default is 0.')
|
||||
print('\t-u, --user <username> user, The user name to use when connecting to the server. Default is \'root\'.')
|
||||
print('\t-P, --password <password> password, The password to use when connecting to the server. Default is \'taosdata\'.')
|
||||
print('\t-l, --colsPerRec <number> num_of_columns_per_record, The number of columns per record. Default is 3.')
|
||||
print(
|
||||
'\t-d, --dbname <dbname> database, Destination database. Default is \'test\'.')
|
||||
print('\t-a, --replica <replications> replica, Set the replica parameters of the database, Default 1, min: 1, max: 5.')
|
||||
"\t-N, --native flag, Use native interface if set. Default is using RESTful interface."
|
||||
)
|
||||
print(
|
||||
'\t-m, --tbname <table prefix> table_prefix, Table prefix name. Default is \'t\'.')
|
||||
"\t-h, --host <hostname> host, The host to connect to TDengine. Default is localhost."
|
||||
)
|
||||
print(
|
||||
'\t-M, --stable flag, Use super table. Default is no')
|
||||
"\t-p, --port <port> port, The TCP/IP port number to use for the connection. Default is 0."
|
||||
)
|
||||
print(
|
||||
'\t-s, --stbname <stable prefix> stable_prefix, STable prefix name. Default is \'st\'')
|
||||
print('\t-Q, --query [NO|EACHTB|command] query, Execute query command. set \'EACHTB\' means select * from each table')
|
||||
"\t-u, --user <username> user, The user name to use when connecting to the server. Default is 'root'."
|
||||
)
|
||||
print(
|
||||
'\t-T, --threads <number> num_of_threads, The number of threads. Default is 1.')
|
||||
"\t-P, --password <password> password, The password to use when connecting to the server. Default is 'taosdata'."
|
||||
)
|
||||
print(
|
||||
'\t-C, --processes <number> num_of_processes, The number of threads. Default is 1.')
|
||||
print('\t-r, --batch <number> num_of_records_per_req, The number of records per request. Default is 1000.')
|
||||
"\t-l, --colsPerRec <number> num_of_columns_per_record, The number of columns per record. Default is 3."
|
||||
)
|
||||
print(
|
||||
'\t-t, --numOfTb <number> num_of_tables, The number of tables. Default is 1.')
|
||||
print('\t-n, --numOfRec <number> num_of_records_per_table, The number of records per table. Default is 1.')
|
||||
print('\t-c, --config <path> config_directory, Configuration directory. Default is \'/etc/taos/\'.')
|
||||
print('\t-x, --inserOnly flag, Insert only flag.')
|
||||
print('\t-O, --outOfOrder out of order data insert, 0: In order, 1: Out of order. Default is in order.')
|
||||
print('\t-R, --rateOOOO <number> rate, Out of order data\'s rate--if order=1 Default 10, min: 0, max: 50.')
|
||||
print('\t-D, --deleteMethod <number> Delete data methods 0: don\'t delete, 1: delete by table, 2: delete by stable, 3: delete by database.')
|
||||
print('\t-v, --verbose Print verbose output')
|
||||
print('\t-g, --debug Print debug output')
|
||||
"\t-d, --dbname <dbname> database, Destination database. Default is 'test'."
|
||||
)
|
||||
print(
|
||||
'\t-y, --skipPrompt Skip read key for continous test, default is not skip')
|
||||
print('')
|
||||
"\t-a, --replica <replications> replica, Set the replica parameters of the database, Default 1, min: 1, max: 5."
|
||||
)
|
||||
print(
|
||||
"\t-m, --tbname <table prefix> table_prefix, Table prefix name. Default is 't'."
|
||||
)
|
||||
print(
|
||||
"\t-M, --stable flag, Use super table. Default is no"
|
||||
)
|
||||
print(
|
||||
"\t-s, --stbname <stable prefix> stable_prefix, STable prefix name. Default is 'st'"
|
||||
)
|
||||
print(
|
||||
"\t-Q, --query [NO|EACHTB|command] query, Execute query command. set 'EACHTB' means select * from each table"
|
||||
)
|
||||
print(
|
||||
"\t-T, --threads <number> num_of_threads, The number of threads. Default is 1."
|
||||
)
|
||||
print(
|
||||
"\t-C, --processes <number> num_of_processes, The number of threads. Default is 1."
|
||||
)
|
||||
print(
|
||||
"\t-r, --batch <number> num_of_records_per_req, The number of records per request. Default is 1000."
|
||||
)
|
||||
print(
|
||||
"\t-t, --numOfTb <number> num_of_tables, The number of tables. Default is 1."
|
||||
)
|
||||
print(
|
||||
"\t-n, --numOfRec <number> num_of_records_per_table, The number of records per table. Default is 1."
|
||||
)
|
||||
print(
|
||||
"\t-c, --config <path> config_directory, Configuration directory. Default is '/etc/taos/'."
|
||||
)
|
||||
print("\t-x, --inserOnly flag, Insert only flag.")
|
||||
print(
|
||||
"\t-O, --outOfOrder out of order data insert, 0: In order, 1: Out of order. Default is in order."
|
||||
)
|
||||
print(
|
||||
"\t-R, --rateOOOO <number> rate, Out of order data's rate--if order=1 Default 10, min: 0, max: 50."
|
||||
)
|
||||
print(
|
||||
"\t-D, --deleteMethod <number> Delete data methods 0: don't delete, 1: delete by table, 2: delete by stable, 3: delete by database."
|
||||
)
|
||||
print("\t-v, --verbose Print verbose output")
|
||||
print("\t-g, --debug Print debug output")
|
||||
print(
|
||||
"\t-y, --skipPrompt Skip read key for continous test, default is not skip"
|
||||
)
|
||||
print("")
|
||||
sys.exit(0)
|
||||
|
||||
if key in ['-N', '--native']:
|
||||
if key in ["-N", "--native"]:
|
||||
try:
|
||||
import taos
|
||||
except Exception as e:
|
||||
|
@ -571,104 +592,104 @@ if __name__ == "__main__":
|
|||
sys.exit(1)
|
||||
native = True
|
||||
|
||||
if key in ['-h', '--host']:
|
||||
if key in ["-h", "--host"]:
|
||||
host = value
|
||||
|
||||
if key in ['-p', '--port']:
|
||||
if key in ["-p", "--port"]:
|
||||
port = int(value)
|
||||
|
||||
if key in ['-u', '--user']:
|
||||
if key in ["-u", "--user"]:
|
||||
user = value
|
||||
|
||||
if key in ['-P', '--password']:
|
||||
if key in ["-P", "--password"]:
|
||||
password = value
|
||||
else:
|
||||
password = defaultPass
|
||||
|
||||
if key in ['-d', '--dbname']:
|
||||
if key in ["-d", "--dbname"]:
|
||||
dbName = value
|
||||
|
||||
if key in ['-a', '--replica']:
|
||||
if key in ["-a", "--replica"]:
|
||||
replica = int(value)
|
||||
if replica < 1:
|
||||
print("FATAL: number of replica need > 0")
|
||||
sys.exit(1)
|
||||
|
||||
if key in ['-m', '--tbname']:
|
||||
if key in ["-m", "--tbname"]:
|
||||
tbName = value
|
||||
|
||||
if key in ['-M', '--stable']:
|
||||
if key in ["-M", "--stable"]:
|
||||
useStable = True
|
||||
numOfStb = 1
|
||||
|
||||
if key in ['-s', '--stbname']:
|
||||
if key in ["-s", "--stbname"]:
|
||||
stbName = value
|
||||
|
||||
if key in ['-Q', '--query']:
|
||||
if key in ["-Q", "--query"]:
|
||||
queryCmd = str(value)
|
||||
|
||||
if key in ['-T', '--threads']:
|
||||
if key in ["-T", "--threads"]:
|
||||
threads = int(value)
|
||||
if threads < 1:
|
||||
print("FATAL: number of threads must be larger than 0")
|
||||
sys.exit(1)
|
||||
|
||||
if key in ['-C', '--processes']:
|
||||
if key in ["-C", "--processes"]:
|
||||
processes = int(value)
|
||||
if processes < 1:
|
||||
print("FATAL: number of processes must be larger than 0")
|
||||
sys.exit(1)
|
||||
|
||||
if key in ['-r', '--batch']:
|
||||
if key in ["-r", "--batch"]:
|
||||
batch = int(value)
|
||||
|
||||
if key in ['-l', '--colsPerRec']:
|
||||
if key in ["-l", "--colsPerRec"]:
|
||||
colsPerRec = int(value)
|
||||
|
||||
if key in ['-t', '--numOfTb']:
|
||||
if key in ["-t", "--numOfTb"]:
|
||||
numOfTb = int(value)
|
||||
v_print("numOfTb is %d", numOfTb)
|
||||
|
||||
if key in ['-n', '--numOfRec']:
|
||||
if key in ["-n", "--numOfRec"]:
|
||||
numOfRec = int(value)
|
||||
v_print("numOfRec is %d", numOfRec)
|
||||
if numOfRec < 1:
|
||||
print("FATAL: number of records must be larger than 0")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if key in ['-c', '--config']:
|
||||
if key in ["-c", "--config"]:
|
||||
configDir = value
|
||||
v_print("config dir: %s", configDir)
|
||||
|
||||
if key in ['-x', '--insertOnly']:
|
||||
if key in ["-x", "--insertOnly"]:
|
||||
insertOnly = True
|
||||
v_print("insert only: %d", insertOnly)
|
||||
|
||||
if key in ['-O', '--outOfOrder']:
|
||||
if key in ["-O", "--outOfOrder"]:
|
||||
outOfOrder = int(value)
|
||||
v_print("out of order is %d", outOfOrder)
|
||||
|
||||
if key in ['-R', '--rateOOOO']:
|
||||
if key in ["-R", "--rateOOOO"]:
|
||||
rateOOOO = int(value)
|
||||
v_print("the rate of out of order is %d", rateOOOO)
|
||||
|
||||
if key in ['-D', '--deleteMethod']:
|
||||
if key in ["-D", "--deleteMethod"]:
|
||||
deleteMethod = int(value)
|
||||
if (deleteMethod < 0) or (deleteMethod > 3):
|
||||
print(
|
||||
"inputed delete method is %d, valid value is 0~3, set to default 0" %
|
||||
deleteMethod)
|
||||
"inputed delete method is %d, valid value is 0~3, set to default 0"
|
||||
% deleteMethod
|
||||
)
|
||||
deleteMethod = 0
|
||||
v_print("the delete method is %d", deleteMethod)
|
||||
|
||||
if key in ['-v', '--verbose']:
|
||||
if key in ["-v", "--verbose"]:
|
||||
verbose = True
|
||||
|
||||
if key in ['-g', '--debug']:
|
||||
if key in ["-g", "--debug"]:
|
||||
debug = True
|
||||
|
||||
if key in ['-y', '--skipPrompt']:
|
||||
if key in ["-y", "--skipPrompt"]:
|
||||
skipPrompt = True
|
||||
|
||||
if verbose:
|
||||
|
@ -679,13 +700,11 @@ if __name__ == "__main__":
|
|||
|
||||
# establish connection first if native
|
||||
if native:
|
||||
v_print("host:%s, user:%s passwd:%s configDir:%s ", host, user, password, configDir)
|
||||
v_print("host:%s, user:%s passwd:xxxxxx configDir:%s ", host, user, configDir)
|
||||
try:
|
||||
conn = taos.connect(
|
||||
host=host,
|
||||
user=user,
|
||||
password=password,
|
||||
config=configDir)
|
||||
host=host, user=user, password=password, config=configDir
|
||||
)
|
||||
v_print("conn: %s", str(conn.__class__))
|
||||
except Exception as e:
|
||||
print("Error: %s" % e.args[0])
|
||||
|
@ -705,7 +724,7 @@ if __name__ == "__main__":
|
|||
drop_tables()
|
||||
print("Drop tables done.")
|
||||
elif deleteMethod == 2:
|
||||
drop_stables()
|
||||
drop_stable()
|
||||
print("Drop super tables done.")
|
||||
elif deleteMethod == 3:
|
||||
drop_databases()
|
||||
|
@ -725,7 +744,7 @@ if __name__ == "__main__":
|
|||
|
||||
if numOfStb > 0:
|
||||
create_stb()
|
||||
if (autosubtable == False):
|
||||
if autosubtable is False:
|
||||
create_tb_using_stb()
|
||||
else:
|
||||
create_tb()
|
||||
|
@ -734,7 +753,9 @@ if __name__ == "__main__":
|
|||
end_time = time.time()
|
||||
print(
|
||||
"Total time consumed {} seconds for create table.".format(
|
||||
(end_time - start_time_begin)))
|
||||
(end_time - start_time_begin)
|
||||
)
|
||||
)
|
||||
|
||||
if native:
|
||||
cursor.close()
|
||||
|
@ -758,10 +779,8 @@ if __name__ == "__main__":
|
|||
|
||||
remainder = numOfTb % processes
|
||||
v_print(
|
||||
"num of tables: %d, quotient: %d, remainder: %d",
|
||||
numOfTb,
|
||||
quotient,
|
||||
remainder)
|
||||
"num of tables: %d, quotient: %d, remainder: %d", numOfTb, quotient, remainder
|
||||
)
|
||||
|
||||
for i in range(processes):
|
||||
begin = end
|
||||
|
@ -770,7 +789,15 @@ if __name__ == "__main__":
|
|||
end = begin + quotient + 1
|
||||
else:
|
||||
end = begin + quotient
|
||||
pool.apply_async(insert_data_process, args=(lock, i, begin, end,))
|
||||
pool.apply_async(
|
||||
insert_data_process,
|
||||
args=(
|
||||
lock,
|
||||
i,
|
||||
begin,
|
||||
end,
|
||||
),
|
||||
)
|
||||
|
||||
pool.close()
|
||||
pool.join()
|
||||
|
@ -780,8 +807,9 @@ if __name__ == "__main__":
|
|||
end_time = time.time()
|
||||
print(
|
||||
"Total time consumed {} seconds for insert data.".format(
|
||||
(end_time - start_time)))
|
||||
|
||||
(end_time - start_time)
|
||||
)
|
||||
)
|
||||
|
||||
# query data
|
||||
if queryCmd != "NO":
|
||||
|
@ -790,8 +818,6 @@ if __name__ == "__main__":
|
|||
|
||||
if measure:
|
||||
end_time = time.time()
|
||||
print(
|
||||
"Total time consumed {} seconds.".format(
|
||||
(end_time - start_time_begin)))
|
||||
print("Total time consumed {} seconds.".format((end_time - start_time_begin)))
|
||||
|
||||
print("done")
|
||||
|
|
|
@ -116,6 +116,8 @@ typedef struct {
|
|||
int32_t vnodes_total;
|
||||
int32_t vnodes_alive;
|
||||
int32_t connections_total;
|
||||
int32_t topics_toal;
|
||||
int32_t streams_total;
|
||||
SArray *dnodes; // array of SMonDnodeDesc
|
||||
SArray *mnodes; // array of SMonMnodeDesc
|
||||
} SMonClusterInfo;
|
||||
|
|
|
@ -25,7 +25,7 @@ sourcePath="nas"
|
|||
cpuType="x64"
|
||||
lite="true"
|
||||
packageType="tar"
|
||||
subFile="taos.tar.gz"
|
||||
subFile="package.tar.gz"
|
||||
while getopts "m:c:f:l:s:o:t:v:h" opt; do
|
||||
case $opt in
|
||||
m)
|
||||
|
|
|
@ -24,7 +24,7 @@ productName="TDengine"
|
|||
emailName="taosdata.com"
|
||||
uninstallScript="rmtaos"
|
||||
historyFile="taos_history"
|
||||
tarName="taos.tar.gz"
|
||||
tarName="package.tar.gz"
|
||||
dataDir="/var/lib/taos"
|
||||
logDir="/var/log/taos"
|
||||
configDir="/etc/taos"
|
||||
|
|
|
@ -17,7 +17,7 @@ serverName="taosd"
|
|||
clientName="taos"
|
||||
uninstallScript="rmtaos"
|
||||
configFile="taos.cfg"
|
||||
tarName="taos.tar.gz"
|
||||
tarName="package.tar.gz"
|
||||
|
||||
osType=Linux
|
||||
pagMode=full
|
||||
|
|
|
@ -124,9 +124,18 @@ call :stop_delete
|
|||
call :check_svc taosd
|
||||
call :check_svc taosadapter
|
||||
|
||||
copy /y C:\\TDengine\\driver\\taos.dll C:\\Windows\\System32 > nul
|
||||
if exist C:\\TDengine\\driver\\taosws.dll (
|
||||
copy /y C:\\TDengine\\driver\\taosws.dll C:\\Windows\\System32 > nul
|
||||
if exist c:\\windows\\sysnative (
|
||||
echo x86
|
||||
copy /y C:\\TDengine\\driver\\taos.dll %windir%\\sysnative > nul
|
||||
if exist C:\\TDengine\\driver\\taosws.dll (
|
||||
copy /y C:\\TDengine\\driver\\taosws.dll %windir%\\sysnative > nul
|
||||
)
|
||||
) else (
|
||||
echo x64
|
||||
copy /y C:\\TDengine\\driver\\taos.dll C:\\Windows\\System32 > nul
|
||||
if exist C:\\TDengine\\driver\\taosws.dll (
|
||||
copy /y C:\\TDengine\\driver\\taosws.dll C:\\Windows\\System32 > nul
|
||||
)
|
||||
)
|
||||
|
||||
rem // create services
|
||||
|
|
|
@ -24,7 +24,7 @@ clientName2="${12}"
|
|||
productName="TDengine"
|
||||
clientName="taos"
|
||||
configFile="taos.cfg"
|
||||
tarName="taos.tar.gz"
|
||||
tarName="package.tar.gz"
|
||||
|
||||
if [ "$osType" != "Darwin" ]; then
|
||||
script_dir="$(dirname $(readlink -f $0))"
|
||||
|
|
|
@ -28,7 +28,7 @@ productName="TDengine"
|
|||
serverName="taosd"
|
||||
clientName="taos"
|
||||
configFile="taos.cfg"
|
||||
tarName="taos.tar.gz"
|
||||
tarName="package.tar.gz"
|
||||
dumpName="taosdump"
|
||||
benchmarkName="taosBenchmark"
|
||||
toolsName="taostools"
|
||||
|
|
|
@ -75,6 +75,14 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_connectImp(JNIEn
|
|||
*/
|
||||
JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_executeQueryImp(JNIEnv *, jobject, jbyteArray, jlong);
|
||||
|
||||
/*
|
||||
* Class: com_taosdata_jdbc_TSDBJNIConnector
|
||||
* Method: executeQueryWithReqId
|
||||
* Signature: ([BJJ)I
|
||||
*/
|
||||
JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_executeQueryWithReqId(JNIEnv *, jobject, jbyteArray,
|
||||
jlong, jlong);
|
||||
|
||||
/*
|
||||
* Class: com_taosdata_jdbc_TSDBJNIConnector
|
||||
* Method: getErrCodeImp
|
||||
|
@ -186,6 +194,14 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_validateCreateTab
|
|||
*/
|
||||
JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_prepareStmtImp(JNIEnv *, jobject, jbyteArray, jlong);
|
||||
|
||||
/*
|
||||
* Class: com_taosdata_jdbc_TSDBJNIConnector
|
||||
* Method: prepareStmtWithReqId
|
||||
* Signature: ([BJJ)I
|
||||
*/
|
||||
JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_prepareStmtWithReqId(JNIEnv *, jobject, jbyteArray,
|
||||
jlong, jlong);
|
||||
|
||||
/*
|
||||
* Class: com_taosdata_jdbc_TSDBJNIConnector
|
||||
* Method: setBindTableNameImp
|
||||
|
@ -260,6 +276,32 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_insertLinesImp(JN
|
|||
JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertImp(JNIEnv *, jobject, jobjectArray,
|
||||
jlong, jint, jint);
|
||||
|
||||
JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertWithReqId(JNIEnv *, jobject, jlong,
|
||||
jobjectArray, jint, jint,
|
||||
jlong);
|
||||
|
||||
JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertWithTtl(JNIEnv *, jobject, jlong,
|
||||
jobjectArray, jint, jint, jint);
|
||||
|
||||
JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertWithTtlAndReqId(JNIEnv *, jobject,
|
||||
jlong, jobjectArray,
|
||||
jint, jint, jint,
|
||||
jlong);
|
||||
|
||||
JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertRaw(JNIEnv *, jobject, jlong, jstring,
|
||||
jint, jint);
|
||||
|
||||
JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertRawWithReqId(JNIEnv *, jobject, jlong,
|
||||
jstring, jint, jint,
|
||||
jlong);
|
||||
|
||||
JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertRawWithTtl(JNIEnv *, jobject, jlong,
|
||||
jstring, jint, jint, jint);
|
||||
|
||||
JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertRawWithTtlAndReqId(JNIEnv *, jobject,
|
||||
jlong, jstring,
|
||||
jint, jint, jint,
|
||||
jlong);
|
||||
/**
|
||||
* Class: com_taosdata_jdbc_TSDBJNIConnector
|
||||
* Method: getTableVgID
|
||||
|
|
|
@ -55,7 +55,7 @@ jclass g_tmqClass;
|
|||
jmethodID g_createConsumerErrorCallback;
|
||||
jmethodID g_topicListCallback;
|
||||
|
||||
jclass g_consumerClass;
|
||||
jclass g_consumerClass;
|
||||
// deprecated
|
||||
jmethodID g_commitCallback;
|
||||
|
||||
|
@@ -331,13 +331,58 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_executeQueryImp(
  int32_t code = taos_errno(tres);

  if (code != TSDB_CODE_SUCCESS) {
    jniError("jobj:%p, conn:%p, code:%s, msg:%s", jobj, tscon, tstrerror(code), taos_errstr(tres));
    jniError("jobj:%p, conn:%p, code:0x%x, msg:%s", jobj, tscon, code, taos_errstr(tres));
  } else {
    if (taos_is_update_query(tres)) {
      int32_t affectRows = taos_affected_rows(tres);
      jniDebug("jobj:%p, conn:%p, code:%s, affect rows:%d", jobj, tscon, tstrerror(code), affectRows);
      jniDebug("jobj:%p, conn:%p, code:0x%x, affect rows:%d", jobj, tscon, code, affectRows);
    } else {
      jniDebug("jobj:%p, conn:%p, code:%s", jobj, tscon, tstrerror(code));
      jniDebug("jobj:%p, conn:%p, code:0x%x", jobj, tscon, code);
    }
  }

  taosMemoryFreeClear(str);
  return (jlong)tres;
}

JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_executeQueryWithReqId(JNIEnv *env, jobject jobj, jbyteArray jsql, jlong con, jlong reqId) {
  TAOS *tscon = (TAOS *)con;
  if (tscon == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    return JNI_CONNECTION_NULL;
  }

  if (jsql == NULL) {
    jniError("jobj:%p, conn:%p, empty sql string", jobj, tscon);
    return JNI_SQL_NULL;
  }

  jsize len = (*env)->GetArrayLength(env, jsql);

  char *str = (char *)taosMemoryCalloc(1, sizeof(char) * (len + 1));
  if (str == NULL) {
    jniError("jobj:%p, conn:%p, alloc memory failed", jobj, tscon);
    return JNI_OUT_OF_MEMORY;
  }

  (*env)->GetByteArrayRegion(env, jsql, 0, len, (jbyte *)str);
  if ((*env)->ExceptionCheck(env)) {
    // todo handle error
  }

  TAOS_RES *tres = taos_query_with_reqid(tscon, str, reqId);
  int32_t code = taos_errno(tres);

  if (code != TSDB_CODE_SUCCESS) {
    jniError("jobj:%p, conn:%p, code:0x%x, msg:%s", jobj, tscon, code, taos_errstr(tres));
  } else {
    if (taos_is_update_query(tres)) {
      int32_t affectRows = taos_affected_rows(tres);
      jniDebug("jobj:%p, conn:%p, code:0x%x, affect rows:%d", jobj, tscon, code, affectRows);
    } else {
      jniDebug("jobj:%p, conn:%p, code:0x%x", jobj, tscon, code);
    }
  }

@@ -489,7 +534,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_fetchRowImp(JNIEn
             numOfFields);
    return JNI_FETCH_END;
  } else {
    jniDebug("jobj:%p, conn:%p, interrupted query. fetch row error code: %d, msg:%s", jobj, tscon, code,
    jniDebug("jobj:%p, conn:%p, interrupted query. fetch row error code: 0x%x, msg:%s", jobj, tscon, code,
             taos_errstr(result));
    return JNI_RESULT_SET_NULL;
  }

@@ -575,9 +620,9 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_fetchBlockImp(JNI
  TAOS_RES *tres = (TAOS_RES *)res;

  int32_t numOfFields = taos_num_fields(tres);
  if(numOfFields <= 0){
    jniError("jobj:%p, conn:%p, query interrupted. taos_num_fields error code:%d, msg:%s", jobj, tscon, numOfFields,
             taos_errstr(tres));
  if (numOfFields <= 0) {
    jniError("jobj:%p, conn:%p, query interrupted. taos_num_fields error code: 0x%x, msg:%s", jobj, tscon,
             taos_errno(tres), taos_errstr(tres));
    return JNI_RESULT_SET_NULL;
  }

@@ -589,7 +634,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_fetchBlockImp(JNI
    jniDebug("jobj:%p, conn:%p, resultset:%p, no data to retrieve", jobj, tscon, (void *)res);
    return JNI_FETCH_END;
  } else {
    jniError("jobj:%p, conn:%p, query interrupted. fetch block error code:%d, msg:%s", jobj, tscon, error_code,
    jniError("jobj:%p, conn:%p, query interrupted. fetch block error code: 0x%x, msg:%s", jobj, tscon, error_code,
             taos_errstr(tres));
    return JNI_RESULT_SET_NULL;
  }

@@ -639,7 +684,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_validateCreateTab
  }

  int code = taos_validate_sql(tscon, str);
  jniDebug("jobj:%p, conn:%p, code is %d", jobj, tscon, code);
  jniDebug("jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);

  taosMemoryFreeClear(str);
  return code;

@@ -704,7 +749,45 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_prepareStmtImp(J
  int32_t code = taos_stmt_prepare(pStmt, str, len);
  taosMemoryFreeClear(str);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("prepareStmt jobj:%p, conn:%p, code:%s", jobj, tscon, tstrerror(code));
    jniError("prepareStmt jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
    return JNI_TDENGINE_ERROR;
  }

  return (jlong)pStmt;
}

JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_prepareStmtWithReqId(JNIEnv *env, jobject jobj, jbyteArray jsql, jlong con, jlong reqId) {
  TAOS *tscon = (TAOS *)con;
  if (tscon == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    return JNI_CONNECTION_NULL;
  }

  if (jsql == NULL) {
    jniError("jobj:%p, conn:%p, empty sql string", jobj, tscon);
    return JNI_SQL_NULL;
  }

  jsize len = (*env)->GetArrayLength(env, jsql);

  char *str = (char *)taosMemoryCalloc(1, sizeof(char) * (len + 1));
  if (str == NULL) {
    jniError("jobj:%p, conn:%p, alloc memory failed", jobj, tscon);
    return JNI_OUT_OF_MEMORY;
  }

  (*env)->GetByteArrayRegion(env, jsql, 0, len, (jbyte *)str);
  if ((*env)->ExceptionCheck(env)) {
    // todo handle error
  }

  TAOS_STMT *pStmt = taos_stmt_init_with_reqid(tscon, reqId);
  int32_t code = taos_stmt_prepare(pStmt, str, len);
  taosMemoryFreeClear(str);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("prepareStmtWithReqId jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
    return JNI_TDENGINE_ERROR;
  }

@@ -732,7 +815,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setBindTableNameI
  if (code != TSDB_CODE_SUCCESS) {
    (*env)->ReleaseStringUTFChars(env, jname, name);

    jniError("bindTableName jobj:%p, conn:%p, code:%s", jobj, tsconn, tstrerror(code));
    jniError("bindTableName jobj:%p, conn:%p, code: 0x%x", jobj, tsconn, code);
    return JNI_TDENGINE_ERROR;
  }

@@ -807,7 +890,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setTableNameTagsI
  (*env)->ReleaseStringUTFChars(env, tableName, name);

  if (code != TSDB_CODE_SUCCESS) {
    jniError("tableNameTags jobj:%p, conn:%p, code:%s", jobj, tsconn, tstrerror(code));
    jniError("tableNameTags jobj:%p, conn:%p, code: 0x%x", jobj, tsconn, code);
    return JNI_TDENGINE_ERROR;
  }
  return JNI_SUCCESS;

@@ -873,7 +956,7 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_bindColDataImp(
  taosMemoryFreeClear(b);

  if (code != TSDB_CODE_SUCCESS) {
    jniError("bindColData jobj:%p, conn:%p, code:%s", jobj, tscon, tstrerror(code));
    jniError("bindColData jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
    return JNI_TDENGINE_ERROR;
  }

@@ -896,7 +979,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_addBatchImp(JNIEn

  int32_t code = taos_stmt_add_batch(pStmt);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("add batch jobj:%p, conn:%p, code:%s", jobj, tscon, tstrerror(code));
    jniError("add batch jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
    return JNI_TDENGINE_ERROR;
  }

@@ -920,7 +1003,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_executeBatchImp(J

  int32_t code = taos_stmt_execute(pStmt);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("excute batch jobj:%p, conn:%p, code:%s", jobj, tscon, tstrerror(code));
    jniError("excute batch jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
    return JNI_TDENGINE_ERROR;
  }

@@ -944,7 +1027,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_closeStmt(JNIEnv

  int32_t code = taos_stmt_close(pStmt);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("close stmt jobj:%p, conn:%p, code:%s", jobj, tscon, tstrerror(code));
    jniError("close stmt jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
    return JNI_TDENGINE_ERROR;
  }

@@ -1006,12 +1089,9 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_insertLinesImp(JN

  TAOS_RES *tres = schemalessInsert(env, jobj, lines, taos, protocol, precision);

  if (tres == NULL) {
    return JNI_OUT_OF_MEMORY;
  }
  int code = taos_errno(tres);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("jobj:%p, conn:%p, code:%s, msg:%s", jobj, taos, tstrerror(code), taos_errstr(tres));
    jniError("jobj:%p, conn:%p, code: 0x%x, msg:%s", jobj, taos, code, taos_errstr(tres));
    taos_free_result(tres);
    return JNI_TDENGINE_ERROR;
  }

@@ -1030,12 +1110,247 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsert
  }

  TAOS_RES *tres = schemalessInsert(env, jobj, lines, taos, protocol, precision);
  if (tres == NULL) {

  return (jlong)tres;
}

JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertWithReqId(
    JNIEnv *env, jobject jobj, jlong conn, jobjectArray lines, jint protocol, jint precision, jlong reqId) {
  TAOS *taos = (TAOS *)conn;
  if (taos == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    return JNI_CONNECTION_NULL;
  }

  int numLines = (*env)->GetArrayLength(env, lines);
  char **c_lines = taosMemoryCalloc(numLines, sizeof(char *));
  if (c_lines == NULL) {
    jniError("c_lines:%p, alloc memory failed", c_lines);
    return JNI_OUT_OF_MEMORY;
  }

  for (int i = 0; i < numLines; ++i) {
    jstring line = (jstring)((*env)->GetObjectArrayElement(env, lines, i));
    c_lines[i] = (char *)(*env)->GetStringUTFChars(env, line, 0);
  }

  TAOS_RES *tres = taos_schemaless_insert_with_reqid(taos, c_lines, numLines, protocol, precision, reqId);

  for (int i = 0; i < numLines; ++i) {
    jstring line = (jstring)((*env)->GetObjectArrayElement(env, lines, i));
    (*env)->ReleaseStringUTFChars(env, line, c_lines[i]);
  }

  taosMemoryFreeClear(c_lines);

  return (jlong)tres;
}

JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertWithTtl(JNIEnv *env, jobject jobj, jlong conn, jobjectArray lines, jint protocol, jint precision, jint ttl) {
  TAOS *taos = (TAOS *)conn;
  if (taos == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    return JNI_CONNECTION_NULL;
  }

  int numLines = (*env)->GetArrayLength(env, lines);
  char **c_lines = taosMemoryCalloc(numLines, sizeof(char *));
  if (c_lines == NULL) {
    jniError("c_lines:%p, alloc memory failed", c_lines);
    return JNI_OUT_OF_MEMORY;
  }

  for (int i = 0; i < numLines; ++i) {
    jstring line = (jstring)((*env)->GetObjectArrayElement(env, lines, i));
    c_lines[i] = (char *)(*env)->GetStringUTFChars(env, line, 0);
  }

  TAOS_RES *tres = taos_schemaless_insert_ttl(taos, c_lines, numLines, protocol, precision, ttl);

  for (int i = 0; i < numLines; ++i) {
    jstring line = (jstring)((*env)->GetObjectArrayElement(env, lines, i));
    (*env)->ReleaseStringUTFChars(env, line, c_lines[i]);
  }

  taosMemoryFreeClear(c_lines);

  return (jlong)tres;
}

JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertWithTtlAndReqId(
    JNIEnv *env, jobject jobj, jlong conn, jobjectArray lines, jint protocol, jint precision, jint ttl, jlong reqId) {
  TAOS *taos = (TAOS *)conn;
  if (taos == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    return JNI_CONNECTION_NULL;
  }

  int numLines = (*env)->GetArrayLength(env, lines);
  char **c_lines = taosMemoryCalloc(numLines, sizeof(char *));
  if (c_lines == NULL) {
    jniError("c_lines:%p, alloc memory failed", c_lines);
    return JNI_OUT_OF_MEMORY;
  }

  for (int i = 0; i < numLines; ++i) {
    jstring line = (jstring)((*env)->GetObjectArrayElement(env, lines, i));
    c_lines[i] = (char *)(*env)->GetStringUTFChars(env, line, 0);
  }

  TAOS_RES *tres = taos_schemaless_insert_ttl_with_reqid(taos, c_lines, numLines, protocol, precision, ttl, reqId);

  for (int i = 0; i < numLines; ++i) {
    jstring line = (jstring)((*env)->GetObjectArrayElement(env, lines, i));
    (*env)->ReleaseStringUTFChars(env, line, c_lines[i]);
  }

  taosMemoryFreeClear(c_lines);

  return (jlong)tres;
}

JNIEXPORT jobject createSchemalessResp(JNIEnv *env, int totalRows, int code, const char *msg) {
  // find class
  jclass schemaless_clazz = (*env)->FindClass(env, "com/taosdata/jdbc/SchemalessResp");
  // find methods
  jmethodID init_method = (*env)->GetMethodID(env, schemaless_clazz, "<init>", "()V");
  jmethodID setCode_method = (*env)->GetMethodID(env, schemaless_clazz, "setCode", "(I)V");
  jmethodID setMsg_method = (*env)->GetMethodID(env, schemaless_clazz, "setMsg", "(Ljava/lang/String;)V");
  jmethodID setTotalRows_method = (*env)->GetMethodID(env, schemaless_clazz, "setTotalRows", "(I)V");
  // new schemaless
  jobject schemaless_obj = (*env)->NewObject(env, schemaless_clazz, init_method);
  // set code
  (*env)->CallVoidMethod(env, schemaless_obj, setCode_method, code);
  // set totalRows
  (*env)->CallVoidMethod(env, schemaless_obj, setTotalRows_method, totalRows);
  // set message
  jstring message = (*env)->NewStringUTF(env, msg);
  (*env)->CallVoidMethod(env, schemaless_obj, setMsg_method, message);

  return schemaless_obj;
}

JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertRaw(JNIEnv *env, jobject jobj, jlong conn, jstring data, jint protocol, jint precision) {
  TAOS *taos = (TAOS *)conn;
  if (taos == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    char *msg = "JNI connection is NULL";
    return createSchemalessResp(env, 0, JNI_CONNECTION_NULL, msg);
  }

  char *line = (char *)(*env)->GetStringUTFChars(env, data, NULL);
  jint len = (*env)->GetStringUTFLength(env, data);
  int32_t totalRows;
  TAOS_RES *tres = taos_schemaless_insert_raw(taos, line, len, &totalRows, protocol, precision);

  (*env)->ReleaseStringUTFChars(env, data, line);

  // if (tres == NULL) {
  // jniError("jobj:%p, schemaless raw insert failed", jobj);
  // char *msg = "JNI schemaless raw insert return null";
  // return createSchemalessResp(env, 0, JNI_TDENGINE_ERROR, msg);
  // }

  int code = taos_errno(tres);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("jobj:%p, conn:%p, code: 0x%x, msg:%s", jobj, taos, code, taos_errstr(tres));
    taos_free_result(tres);
    return createSchemalessResp(env, 0, code, taos_errstr(tres));
  }
  taos_free_result(tres);

  return createSchemalessResp(env, totalRows, JNI_SUCCESS, NULL);
}

JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertRawWithReqId(
    JNIEnv *env, jobject jobj, jlong conn, jstring data, jint protocol, jint precision, jlong reqId) {
  TAOS *taos = (TAOS *)conn;
  if (taos == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    char *msg = "JNI connection is NULL";
    return createSchemalessResp(env, 0, JNI_CONNECTION_NULL, msg);
  }

  char *line = (char *)(*env)->GetStringUTFChars(env, data, NULL);
  jint len = (*env)->GetStringUTFLength(env, data);
  int32_t totalRows;
  TAOS_RES *tres = taos_schemaless_insert_raw_with_reqid(taos, line, len, &totalRows, protocol, precision, reqId);

  (*env)->ReleaseStringUTFChars(env, data, line);

  int code = taos_errno(tres);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("jobj:%p, conn:%p, code: 0x%x, msg:%s", jobj, taos, code, taos_errstr(tres));
    taos_free_result(tres);
    return createSchemalessResp(env, 0, code, taos_errstr(tres));
  }
  taos_free_result(tres);

  return createSchemalessResp(env, totalRows, JNI_SUCCESS, NULL);
}

JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertRawWithTtl(JNIEnv *env, jobject jobj, jlong conn, jstring data, jint protocol, jint precision, jint ttl) {
  TAOS *taos = (TAOS *)conn;
  if (taos == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    char *msg = "JNI connection is NULL";
    return createSchemalessResp(env, 0, JNI_CONNECTION_NULL, msg);
  }

  char *line = (char *)(*env)->GetStringUTFChars(env, data, NULL);
  jint len = (*env)->GetStringUTFLength(env, data);
  int32_t totalRows;
  TAOS_RES *tres = taos_schemaless_insert_raw_ttl(taos, line, len, &totalRows, protocol, precision, ttl);

  (*env)->ReleaseStringUTFChars(env, data, line);

  int code = taos_errno(tres);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("jobj:%p, conn:%p, code: 0x%x, msg:%s", jobj, taos, code, taos_errstr(tres));
    taos_free_result(tres);
    return createSchemalessResp(env, 0, code, taos_errstr(tres));
  }
  taos_free_result(tres);

  return createSchemalessResp(env, totalRows, JNI_SUCCESS, NULL);
}

JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_schemalessInsertRawWithTtlAndReqId(
    JNIEnv *env, jobject jobj, jlong conn, jstring data, jint protocol, jint precision, jint ttl, jlong reqId) {
  TAOS *taos = (TAOS *)conn;
  if (taos == NULL) {
    jniError("jobj:%p, connection already closed", jobj);
    char *msg = "JNI connection is NULL";
    return createSchemalessResp(env, 0, JNI_CONNECTION_NULL, msg);
  }

  char *line = (char *)(*env)->GetStringUTFChars(env, data, NULL);
  jint len = (*env)->GetStringUTFLength(env, data);
  int32_t totalRows;
  TAOS_RES *tres = taos_schemaless_insert_raw_ttl_with_reqid(taos, line, len, &totalRows, protocol, precision, ttl, reqId);

  (*env)->ReleaseStringUTFChars(env, data, line);

  int code = taos_errno(tres);
  if (code != TSDB_CODE_SUCCESS) {
    jniError("jobj:%p, conn:%p, code: 0x%x, msg:%s", jobj, taos, code, taos_errstr(tres));
    taos_free_result(tres);
    return createSchemalessResp(env, 0, code, taos_errstr(tres));
  }
  taos_free_result(tres);

  return createSchemalessResp(env, totalRows, JNI_SUCCESS, NULL);
}

// TABLE_VG_ID_FID_CACHE cache resp object for getTableVgID
typedef struct TABLE_VG_ID_FIELD_CACHE {
  int cached;

@ -176,6 +176,7 @@ typedef struct {
|
|||
char opername[TSDB_TRANS_OPER_LEN];
|
||||
SArray* pRpcArray;
|
||||
SRWLatch lockRpcArray;
|
||||
int64_t mTraceId;
|
||||
} STrans;
|
||||
|
||||
typedef struct {
|
||||
|
|
|
@ -52,6 +52,8 @@ typedef struct {
|
|||
int32_t contLen;
|
||||
void *pCont;
|
||||
SSdbRaw *pRaw;
|
||||
|
||||
int64_t mTraceId;
|
||||
} STransAction;
|
||||
|
||||
typedef void (*TransCbFp)(SMnode *pMnode, void *param, int32_t paramLen);
|
||||
|
|
|
@ -1417,7 +1417,7 @@ int32_t mndValidateDbInfo(SMnode *pMnode, SDbVgVersion *pDbs, int32_t numOfDbs,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int32_t mndTrimDb(SMnode *pMnode, SDbObj *pDb) {
|
||||
static int32_t mndTrimDb(SMnode *pMnode, SDbObj *pDb, SRpcMsg *pReq) {
|
||||
SSdb *pSdb = pMnode->pSdb;
|
||||
SVgObj *pVgroup = NULL;
|
||||
void *pIter = NULL;
|
||||
|
@ -1439,7 +1439,7 @@ static int32_t mndTrimDb(SMnode *pMnode, SDbObj *pDb) {
|
|||
pHead->vgId = htonl(pVgroup->vgId);
|
||||
tSerializeSVTrimDbReq((char *)pHead + sizeof(SMsgHead), contLen, &trimReq);
|
||||
|
||||
SRpcMsg rpcMsg = {.msgType = TDMT_VND_TRIM, .pCont = pHead, .contLen = contLen};
|
||||
SRpcMsg rpcMsg = {.msgType = TDMT_VND_TRIM, .pCont = pHead, .contLen = contLen, .info = pReq->info};
|
||||
SEpSet epSet = mndGetVgroupEpset(pMnode, pVgroup);
|
||||
int32_t code = tmsgSendReq(&epSet, &rpcMsg);
|
||||
if (code != 0) {
|
||||
|
@ -1475,7 +1475,7 @@ static int32_t mndProcessTrimDbReq(SRpcMsg *pReq) {
|
|||
goto _OVER;
|
||||
}
|
||||
|
||||
code = mndTrimDb(pMnode, pDb);
|
||||
code = mndTrimDb(pMnode, pDb, pReq);
|
||||
|
||||
_OVER:
|
||||
if (code != 0) {
|
||||
|
|
|
@ -440,7 +440,8 @@ static int32_t mndProcessStatusReq(SRpcMsg *pReq) {
|
|||
if (roleChanged) {
|
||||
SDbObj *pDb = mndAcquireDb(pMnode, pVgroup->dbName);
|
||||
if (pDb != NULL && pDb->stateTs != curMs) {
|
||||
mInfo("db:%s, stateTs changed by status msg, old stateTs:%" PRId64 " new stateTs:%" PRId64, pDb->name, pDb->stateTs, curMs);
|
||||
mInfo("db:%s, stateTs changed by status msg, old stateTs:%" PRId64 " new stateTs:%" PRId64, pDb->name,
|
||||
pDb->stateTs, curMs);
|
||||
pDb->stateTs = curMs;
|
||||
}
|
||||
mndReleaseDb(pMnode, pDb);
|
||||
|
@ -954,7 +955,7 @@ static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
|
|||
tSerializeSDCfgDnodeReq(pBuf, bufLen, &dcfgReq);
|
||||
mInfo("dnode:%d, send config req to dnode, app:%p config:%s value:%s", cfgReq.dnodeId, pReq->info.ahandle,
|
||||
dcfgReq.config, dcfgReq.value);
|
||||
SRpcMsg rpcMsg = {.msgType = TDMT_DND_CONFIG_DNODE, .pCont = pBuf, .contLen = bufLen};
|
||||
SRpcMsg rpcMsg = {.msgType = TDMT_DND_CONFIG_DNODE, .pCont = pBuf, .contLen = bufLen, .info = pReq->info};
|
||||
tmsgSendReq(&epSet, &rpcMsg);
|
||||
code = 0;
|
||||
}
|
||||
|
|
|
@ -759,6 +759,8 @@ int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgr
|
|||
pClusterInfo->connections_total = mndGetNumOfConnections(pMnode);
|
||||
pClusterInfo->dbs_total = sdbGetSize(pSdb, SDB_DB);
|
||||
pClusterInfo->stbs_total = sdbGetSize(pSdb, SDB_STB);
|
||||
pClusterInfo->topics_toal = sdbGetSize(pSdb, SDB_TOPIC);
|
||||
pClusterInfo->streams_total = sdbGetSize(pSdb, SDB_STREAM);
|
||||
|
||||
void *pIter = NULL;
|
||||
while (1) {
|
||||
|
|
|
@ -470,6 +470,7 @@ static int32_t mndSetCreateSmaVgroupRedoActions(SMnode *pMnode, STrans *pTrans,
|
|||
return -1;
|
||||
}
|
||||
|
||||
action.mTraceId = pTrans->mTraceId;
|
||||
action.pCont = pReq;
|
||||
action.contLen = contLen;
|
||||
action.msgType = TDMT_DND_CREATE_VNODE;
|
||||
|
@ -693,6 +694,8 @@ static int32_t mndProcessCreateSmaReq(SRpcMsg *pReq) {
|
|||
SDbObj *pDb = NULL;
|
||||
SMCreateSmaReq createReq = {0};
|
||||
|
||||
int64_t mTraceId = TRACE_GET_ROOTID(&pReq->info.traceId);
|
||||
|
||||
if (tDeserializeSMCreateSmaReq(pReq->pCont, pReq->contLen, &createReq) != 0) {
|
||||
terrno = TSDB_CODE_INVALID_MSG;
|
||||
goto _OVER;
|
||||
|
|
|
@ -668,6 +668,7 @@ static int32_t mndSetCreateStbRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj
|
|||
}
|
||||
|
||||
STransAction action = {0};
|
||||
action.mTraceId = pTrans->mTraceId;
|
||||
action.epSet = mndGetVgroupEpset(pMnode, pVgroup);
|
||||
action.pCont = pReq;
|
||||
action.contLen = contLen;
|
||||
|
@ -877,7 +878,7 @@ static int32_t mndProcessTtlTimer(SRpcMsg *pReq) {
|
|||
pHead->vgId = htonl(pVgroup->vgId);
|
||||
tSerializeSVDropTtlTableReq((char *)pHead + sizeof(SMsgHead), contLen, &ttlReq);
|
||||
|
||||
SRpcMsg rpcMsg = {.msgType = TDMT_VND_DROP_TTL_TABLE, .pCont = pHead, .contLen = contLen};
|
||||
SRpcMsg rpcMsg = {.msgType = TDMT_VND_DROP_TTL_TABLE, .pCont = pHead, .contLen = contLen, .info = pReq->info};
|
||||
SEpSet epSet = mndGetVgroupEpset(pMnode, pVgroup);
|
||||
int32_t code = tmsgSendReq(&epSet, &rpcMsg);
|
||||
if (code != 0) {
|
||||
|
|
|
@ -434,6 +434,7 @@ int32_t mndPersistTaskDeployReq(STrans *pTrans, const SStreamTask *pTask) {
|
|||
tEncoderClear(&encoder);
|
||||
|
||||
STransAction action = {0};
|
||||
action.mTraceId = pTrans->mTraceId;
|
||||
memcpy(&action.epSet, &pTask->epSet, sizeof(SEpSet));
|
||||
action.pCont = buf;
|
||||
action.contLen = tlen;
|
||||
|
|
|
@ -505,8 +505,8 @@ static TransCbFp mndTransGetCbFp(ETrnFunc ftype) {
|
|||
}
|
||||
|
||||
static int32_t mndTransActionInsert(SSdb *pSdb, STrans *pTrans) {
|
||||
mInfo("trans:%d, perform insert action, row:%p stage:%s, callfunc:1, startFunc:%d", pTrans->id, pTrans, mndTransStr(pTrans->stage),
|
||||
pTrans->startFunc);
|
||||
mInfo("trans:%d, perform insert action, row:%p stage:%s, callfunc:1, startFunc:%d", pTrans->id, pTrans,
|
||||
mndTransStr(pTrans->stage), pTrans->startFunc);
|
||||
|
||||
if (pTrans->startFunc > 0) {
|
||||
TransCbFp fp = mndTransGetCbFp(pTrans->startFunc);
|
||||
|
@ -577,8 +577,7 @@ static void mndTransUpdateActions(SArray *pOldArray, SArray *pNewArray) {
|
|||
|
||||
static int32_t mndTransActionUpdate(SSdb *pSdb, STrans *pOld, STrans *pNew) {
|
||||
mInfo("trans:%d, perform update action, old row:%p stage:%s create:%" PRId64 ", new row:%p stage:%s create:%" PRId64,
|
||||
pOld->id, pOld, mndTransStr(pOld->stage), pOld->createdTime, pNew, mndTransStr(pNew->stage),
|
||||
pNew->createdTime);
|
||||
pOld->id, pOld, mndTransStr(pOld->stage), pOld->createdTime, pNew, mndTransStr(pNew->stage), pNew->createdTime);
|
||||
|
||||
if (pOld->createdTime != pNew->createdTime) {
|
||||
mError("trans:%d, failed to perform update action since createTime not match, old row:%p stage:%s create:%" PRId64
|
||||
|
@ -650,6 +649,7 @@ STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnConflct conflict,
|
|||
pTrans->undoActions = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(STransAction));
|
||||
pTrans->commitActions = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(STransAction));
|
||||
pTrans->pRpcArray = taosArrayInit(1, sizeof(SRpcHandleInfo));
|
||||
pTrans->mTraceId = pReq ? TRACE_GET_ROOTID(&pReq->info.traceId) : 0;
|
||||
taosInitRWLatch(&pTrans->lockRpcArray);
|
||||
|
||||
if (pTrans->redoActions == NULL || pTrans->undoActions == NULL || pTrans->commitActions == NULL ||
|
||||
|
@ -706,7 +706,8 @@ static int32_t mndTransAppendAction(SArray *pArray, STransAction *pAction) {
|
|||
}
|
||||
|
||||
int32_t mndTransAppendRedolog(STrans *pTrans, SSdbRaw *pRaw) {
|
||||
STransAction action = {.stage = TRN_STAGE_REDO_ACTION, .actionType = TRANS_ACTION_RAW, .pRaw = pRaw};
|
||||
STransAction action = {
|
||||
.stage = TRN_STAGE_REDO_ACTION, .actionType = TRANS_ACTION_RAW, .pRaw = pRaw, .mTraceId = pTrans->mTraceId};
|
||||
return mndTransAppendAction(pTrans->redoActions, &action);
|
||||
}
|
||||
|
||||
|
@ -728,6 +729,7 @@ int32_t mndTransAppendCommitlog(STrans *pTrans, SSdbRaw *pRaw) {
|
|||
int32_t mndTransAppendRedoAction(STrans *pTrans, STransAction *pAction) {
|
||||
pAction->stage = TRN_STAGE_REDO_ACTION;
|
||||
pAction->actionType = TRANS_ACTION_MSG;
|
||||
pAction->mTraceId = pTrans->mTraceId;
|
||||
return mndTransAppendAction(pTrans->redoActions, pAction);
|
||||
}
|
||||
|
||||
|
@ -875,7 +877,6 @@ int32_t mndTrancCheckConflict(SMnode *pMnode, STrans *pTrans) {
|
|||
}
|
||||
}
|
||||
|
||||
|
||||
if (mndCheckTransConflict(pMnode, pTrans)) {
|
||||
terrno = TSDB_CODE_MND_TRANS_CONFLICT;
|
||||
mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
|
||||
|
@ -912,6 +913,7 @@ int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans) {
|
|||
pNew->pRpcArray = pTrans->pRpcArray;
|
||||
pNew->rpcRsp = pTrans->rpcRsp;
|
||||
pNew->rpcRspLen = pTrans->rpcRspLen;
|
||||
pNew->mTraceId = pTrans->mTraceId;
|
||||
pTrans->pRpcArray = NULL;
|
||||
pTrans->rpcRsp = NULL;
|
||||
pTrans->rpcRspLen = 0;
|
||||
|
@ -1167,6 +1169,8 @@ static int32_t mndTransSendSingleMsg(SMnode *pMnode, STrans *pTrans, STransActio
|
|||
terrno = TSDB_CODE_OUT_OF_MEMORY;
|
||||
return -1;
|
||||
}
|
||||
rpcMsg.info.traceId.rootId = pTrans->mTraceId;
|
||||
|
||||
memcpy(rpcMsg.pCont, pAction->pCont, pAction->contLen);
|
||||
|
||||
char detail[1024] = {0};
|
||||
|
@ -1457,7 +1461,7 @@ static bool mndTransPerformCommitActionStage(SMnode *pMnode, STrans *pTrans) {
|
|||
|
||||
if (code == 0) {
|
||||
pTrans->code = 0;
|
||||
pTrans->stage = TRN_STAGE_FINISHED; // TRN_STAGE_PRE_FINISH is not necessary
|
||||
pTrans->stage = TRN_STAGE_FINISHED; // TRN_STAGE_PRE_FINISH is not necessary
|
||||
mInfo("trans:%d, stage from commitAction to finished", pTrans->id);
|
||||
continueExec = true;
|
||||
} else {
|
||||
|
|
|
@ -15,6 +15,10 @@
|
|||
|
||||
#include "tq.h"
|
||||
|
||||
// 0: not init
|
||||
// 1: already inited
|
||||
// 2: wait to be inited or cleanup
|
||||
|
||||
int32_t tqInit() {
|
||||
int8_t old;
|
||||
while (1) {
|
||||
|
@ -213,12 +217,14 @@ int32_t tqPushDataRsp(STQ* pTq, STqPushEntry* pPushEntry) {
|
|||
|
||||
tmsgSendRsp(&rsp);
|
||||
|
||||
char buf1[80] = {0};
|
||||
char buf2[80] = {0};
|
||||
tFormatOffset(buf1, tListLen(buf1), &pRsp->reqOffset);
|
||||
tFormatOffset(buf2, tListLen(buf2), &pRsp->rspOffset);
|
||||
tqDebug("vgId:%d, from consumer:0x%" PRIx64 " (epoch %d) push rsp, block num: %d, req:%s, rsp:%s",
|
||||
TD_VID(pTq->pVnode), pRsp->head.consumerId, pRsp->head.epoch, pRsp->blockNum, buf1, buf2);
|
||||
if (tqDebugFlag & DEBUG_DEBUG) {
|
||||
char buf1[80] = {0};
|
||||
char buf2[80] = {0};
|
||||
tFormatOffset(buf1, tListLen(buf1), &pRsp->reqOffset);
|
||||
tFormatOffset(buf2, tListLen(buf2), &pRsp->rspOffset);
|
||||
tqDebug("vgId:%d, from consumer:0x%" PRIx64 " (epoch %d) push rsp, block num: %d, req:%s, rsp:%s",
|
||||
TD_VID(pTq->pVnode), pRsp->head.consumerId, pRsp->head.epoch, pRsp->blockNum, buf1, buf2);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -271,13 +277,14 @@ int32_t tqSendDataRsp(STQ* pTq, const SRpcMsg* pMsg, const SMqPollReq* pReq, con
|
|||
};
|
||||
tmsgSendRsp(&rsp);
|
||||
|
||||
char buf1[80] = {0};
|
||||
char buf2[80] = {0};
|
||||
tFormatOffset(buf1, 80, &pRsp->reqOffset);
|
||||
tFormatOffset(buf2, 80, &pRsp->rspOffset);
|
||||
tqDebug("vgId:%d consumer:0x%" PRIx64 " (epoch %d), block num:%d, req:%s, rsp:%s",
|
||||
TD_VID(pTq->pVnode), pReq->consumerId, pReq->epoch, pRsp->blockNum, buf1, buf2);
|
||||
|
||||
if (tqDebugFlag & DEBUG_DEBUG) {
|
||||
char buf1[80] = {0};
|
||||
char buf2[80] = {0};
|
||||
tFormatOffset(buf1, 80, &pRsp->reqOffset);
|
||||
tFormatOffset(buf2, 80, &pRsp->rspOffset);
|
||||
tqDebug("vgId:%d consumer:0x%" PRIx64 " (epoch %d), block num:%d, req:%s, rsp:%s", TD_VID(pTq->pVnode),
|
||||
pReq->consumerId, pReq->epoch, pRsp->blockNum, buf1, buf2);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -332,12 +339,14 @@ int32_t tqSendTaosxRsp(STQ* pTq, const SRpcMsg* pMsg, const SMqPollReq* pReq, co
|
|||
};
|
||||
tmsgSendRsp(&rsp);
|
||||
|
||||
char buf1[80] = {0};
|
||||
char buf2[80] = {0};
|
||||
tFormatOffset(buf1, 80, &pRsp->reqOffset);
|
||||
tFormatOffset(buf2, 80, &pRsp->rspOffset);
|
||||
tqDebug("taosx rsp, vgId:%d, from consumer:0x%" PRIx64 " (epoch %d) send rsp, numOfBlks:%d, req:%s, rsp:%s",
|
||||
TD_VID(pTq->pVnode), pReq->consumerId, pReq->epoch, pRsp->blockNum, buf1, buf2);
|
||||
if (tqDebugFlag & DEBUG_DEBUG) {
|
||||
char buf1[80] = {0};
|
||||
char buf2[80] = {0};
|
||||
tFormatOffset(buf1, 80, &pRsp->reqOffset);
|
||||
tFormatOffset(buf2, 80, &pRsp->rspOffset);
|
||||
tqDebug("taosx rsp, vgId:%d, from consumer:0x%" PRIx64 " (epoch %d) send rsp, numOfBlks:%d, req:%s, rsp:%s",
|
||||
TD_VID(pTq->pVnode), pReq->consumerId, pReq->epoch, pRsp->blockNum, buf1, buf2);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -497,7 +506,8 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
|
|||
// update epoch if need
|
||||
int32_t savedEpoch = atomic_load_32(&pHandle->epoch);
|
||||
while (savedEpoch < reqEpoch) {
|
||||
tqDebug("tmq poll: consumer:0x%"PRIx64 " epoch update from %d to %d by poll req", consumerId, savedEpoch, reqEpoch);
|
||||
tqDebug("tmq poll: consumer:0x%" PRIx64 " epoch update from %d to %d by poll req", consumerId, savedEpoch,
|
||||
reqEpoch);
|
||||
savedEpoch = atomic_val_compare_exchange_32(&pHandle->epoch, savedEpoch, reqEpoch);
|
||||
}
|
||||
|
||||
|
@ -602,7 +612,8 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
|
|||
code = -1;
|
||||
}
|
||||
|
||||
tqDebug("tmq poll: consumer:0x%" PRIx64 ", subkey %s, vgId:%d, rsp block:%d, offset type:%d, uid/version:%" PRId64 ", ts:%" PRId64 "",
|
||||
tqDebug("tmq poll: consumer:0x%" PRIx64 ", subkey %s, vgId:%d, rsp block:%d, offset type:%d, uid/version:%" PRId64
|
||||
", ts:%" PRId64 "",
|
||||
consumerId, pHandle->subKey, TD_VID(pTq->pVnode), dataRsp.blockNum, dataRsp.rspOffset.type,
|
||||
dataRsp.rspOffset.uid, dataRsp.rspOffset.ts);
|
||||
|
||||
|
@ -612,7 +623,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
|
|||
|
||||
// for taosx
|
||||
SMqMetaRsp metaRsp = {0};
|
||||
STaosxRsp taosxRsp = {0};
|
||||
STaosxRsp taosxRsp = {0};
|
||||
tqInitTaosxRsp(&taosxRsp, &req);
|
||||
|
||||
if (fetchOffsetNew.type != TMQ_OFFSET__LOG) {
|
||||
|
@ -887,14 +898,15 @@ int32_t tqProcessSubscribeReq(STQ* pTq, int64_t sversion, char* msg, int32_t msg
|
|||
}
|
||||
|
||||
taosHashPut(pTq->pHandle, req.subKey, strlen(req.subKey), pHandle, sizeof(STqHandle));
|
||||
tqDebug("try to persist handle %s consumer:0x%" PRIx64" , old consumer:0x%"PRIx64, req.subKey, pHandle->consumerId,
|
||||
oldConsumerId);
|
||||
tqDebug("try to persist handle %s consumer:0x%" PRIx64 " , old consumer:0x%" PRIx64, req.subKey,
|
||||
pHandle->consumerId, oldConsumerId);
|
||||
if (tqMetaSaveHandle(pTq, req.subKey, pHandle) < 0) {
|
||||
return -1;
|
||||
}
|
||||
} else {
|
||||
// TODO handle qmsg and exec modification
|
||||
tqInfo("update the consumer info, old consumer id:0x%"PRIx64", new Id:0x%"PRIx64, pHandle->consumerId, req.newConsumerId);
|
||||
tqInfo("update the consumer info, old consumer id:0x%" PRIx64 ", new Id:0x%" PRIx64, pHandle->consumerId,
|
||||
req.newConsumerId);
|
||||
atomic_store_32(&pHandle->epoch, -1);
|
||||
atomic_store_64(&pHandle->consumerId, req.newConsumerId);
|
||||
atomic_add_fetch_32(&pHandle->epoch, 1);
|
||||
|
@ -983,9 +995,9 @@ int32_t tqExpandTask(STQ* pTq, SStreamTask* pTask, int64_t ver) {
|
|||
pTask->tbSink.vnode = pTq->pVnode;
|
||||
pTask->tbSink.tbSinkFunc = tqSinkToTablePipeline2;
|
||||
|
||||
int32_t version = 1;
|
||||
int32_t version = 1;
|
||||
SMetaInfo info = {0};
|
||||
int32_t code = metaGetInfo(pTq->pVnode->pMeta, pTask->tbSink.stbUid, &info, NULL);
|
||||
int32_t code = metaGetInfo(pTq->pVnode->pMeta, pTask->tbSink.stbUid, &info, NULL);
|
||||
if (code == TSDB_CODE_SUCCESS) {
|
||||
version = info.skmVer;
|
||||
}
|
||||
|
|
|
@ -241,7 +241,7 @@ typedef struct SUvUdfWork {
|
|||
struct SUvUdfWork *pWorkNext;
|
||||
} SUvUdfWork;
|
||||
|
||||
typedef enum { UDF_STATE_INIT = 0, UDF_STATE_LOADING, UDF_STATE_READY} EUdfState;
|
||||
typedef enum { UDF_STATE_INIT = 0, UDF_STATE_LOADING, UDF_STATE_READY } EUdfState;
|
||||
|
||||
typedef struct SUdf {
|
||||
char name[TSDB_FUNC_NAME_LEN + 1];
|
||||
|
@ -439,7 +439,7 @@ int32_t udfdInitScriptPlugin(int8_t scriptType) {
|
|||
return err;
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
default:
|
||||
fnError("udf script type %d not supported", scriptType);
|
||||
taosMemoryFree(plugin);
|
||||
|
@ -509,15 +509,15 @@ int32_t udfdRenameUdfFile(SUdf *udf) {
|
|||
char newPath[PATH_MAX];
|
||||
if (udf->scriptType == TSDB_FUNC_SCRIPT_BIN_LIB) {
|
||||
snprintf(newPath, PATH_MAX, "%s/lib%s.so", tsTempDir, udf->name);
|
||||
taosRenameFile(udf->path, newPath);
|
||||
sprintf(udf->path, "%s", newPath);
|
||||
} else if (udf->scriptType == TSDB_FUNC_SCRIPT_PYTHON) {
|
||||
snprintf(newPath, PATH_MAX, "%s/%s.py", tsTempDir, udf->name);
|
||||
taosRenameFile(udf->path, newPath);
|
||||
sprintf(udf->path, "%s", newPath);
|
||||
} else {
|
||||
return TSDB_CODE_UDF_SCRIPT_NOT_SUPPORTED;
|
||||
}
|
||||
int32_t code = taosRenameFile(udf->path, newPath);
|
||||
if (code == 0) {
|
||||
sprintf(udf->path, "%s", newPath);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -554,6 +554,8 @@ int32_t udfdInitUdf(char *udfName, SUdf *udf) {
|
|||
fnError("udf name %s init failed. error %d", udfName, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
fnInfo("udf init succeeded. name %s type %d context %p", udf->name, udf->scriptType, (void*)udf->scriptUdfCtx);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -763,6 +765,7 @@ void udfdProcessTeardownRequest(SUvUdfWork *uvUdf, SUdfRequest *request) {
|
|||
}
|
||||
uv_mutex_unlock(&global.udfsMutex);
|
||||
if (unloadUdf) {
|
||||
fnInfo("udf teardown. udf name: %s type %d: context %p", udf->name, udf->scriptType, (void*)(udf->scriptUdfCtx));
|
||||
uv_cond_destroy(&udf->condReady);
|
||||
uv_mutex_destroy(&udf->lock);
|
||||
code = udf->scriptPlugin->udfDestroyFunc(udf->scriptUdfCtx);
|
||||
|
@ -1358,7 +1361,7 @@ int32_t udfdDeinitResidentFuncs() {
|
|||
char *funcName = taosArrayGet(global.residentFuncs, i);
|
||||
SUdf **udfInHash = taosHashGet(global.udfsHash, funcName, strlen(funcName));
|
||||
if (udfInHash) {
|
||||
SUdf *udf = *udfInHash;
|
||||
SUdf *udf = *udfInHash;
|
||||
int32_t code = udf->scriptPlugin->udfDestroyFunc(udf->scriptUdfCtx);
|
||||
fnDebug("udfd destroy function returns %d", code);
|
||||
taosHashRemove(global.udfsHash, funcName, strlen(funcName));
|
||||
|
|
|
@ -210,6 +210,8 @@ static void monGenClusterJson(SMonInfo *pMonitor) {
|
|||
tjsonAddDoubleToObject(pJson, "vnodes_total", pInfo->vnodes_total);
|
||||
tjsonAddDoubleToObject(pJson, "vnodes_alive", pInfo->vnodes_alive);
|
||||
tjsonAddDoubleToObject(pJson, "connections_total", pInfo->connections_total);
|
||||
tjsonAddDoubleToObject(pJson, "topics_total", pInfo->topics_toal);
|
||||
tjsonAddDoubleToObject(pJson, "streams_total", pInfo->streams_total);
|
||||
|
||||
SJson *pDnodesJson = tjsonAddArrayToObject(pJson, "dnodes");
|
||||
if (pDnodesJson == NULL) return;
|
||||
|
|
|
@ -18,6 +18,7 @@
|
|||
#include "ttimer.h"
|
||||
|
||||
SStreamMeta* streamMetaOpen(const char* path, void* ahandle, FTaskExpand expandFunc, int32_t vgId) {
|
||||
int32_t code = -1;
|
||||
SStreamMeta* pMeta = taosMemoryCalloc(1, sizeof(SStreamMeta));
|
||||
if (pMeta == NULL) {
|
||||
terrno = TSDB_CODE_OUT_OF_MEMORY;
|
||||
|
@ -33,7 +34,12 @@ SStreamMeta* streamMetaOpen(const char* path, void* ahandle, FTaskExpand expandF
|
|||
}
|
||||
|
||||
sprintf(streamPath, "%s/%s", pMeta->path, "checkpoints");
|
||||
taosMulModeMkDir(streamPath, 0755);
|
||||
code = taosMulModeMkDir(streamPath, 0755);
|
||||
if (code != 0) {
|
||||
terrno = TAOS_SYSTEM_ERROR(code);
|
||||
taosMemoryFree(streamPath);
|
||||
goto _err;
|
||||
}
|
||||
taosMemoryFree(streamPath);
|
||||
|
||||
if (tdbTbOpen("task.db", sizeof(int32_t), -1, NULL, pMeta->db, &pMeta->pTaskDb, 0) < 0) {
|
||||
|
|
|
@ -25,7 +25,7 @@ int32_t qsortHelper(const void* p1, const void* p2, const void* param) {
|
|||
|
||||
// todo refactor: 1) move away; 2) use merge sort instead; 3) qsort is not a stable sort actually.
|
||||
void taosSort(void* base, int64_t sz, int64_t width, __compar_fn_t compar) {
|
||||
#ifdef _ALPINE
|
||||
#if defined(WINDOWS) || defined(_ALPINE)
|
||||
void* param = compar;
|
||||
taosqsort(base, sz, width, param, qsortHelper);
|
||||
#else
|
||||
|
|
|
@ -31,6 +31,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor())
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -33,6 +33,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -30,6 +30,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -29,25 +29,23 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if "community" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("community")]
|
||||
elif "src" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("src")]
|
||||
elif "/tools/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tools/")]
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
projPath = selfPath[: selfPath.find("tests")]
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
paths = []
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if (tool) in files:
|
||||
if ((tool) in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if "packaging" not in rootRealPath:
|
||||
if ("packaging" not in rootRealPath):
|
||||
paths.append(os.path.join(root, tool))
|
||||
break
|
||||
if len(paths) == 0:
|
||||
if (len(paths) == 0):
|
||||
tdLog.exit("taosBenchmark not found!")
|
||||
return
|
||||
else:
|
||||
|
|
|
@ -33,6 +33,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if "community" in selfPath:
|
||||
|
|
|
@ -30,6 +30,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -30,6 +30,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -30,6 +30,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -30,6 +30,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -35,6 +35,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -30,6 +30,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -30,6 +30,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -29,28 +29,23 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if "community" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("community")]
|
||||
elif "src" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("src")]
|
||||
elif "/tools/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tools/")]
|
||||
elif "/tests/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tests/")]
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
tdLog.info("cannot found %s in path: %s, use system's" % (tool, selfPath))
|
||||
projPath = "/usr/local/taos/bin/"
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
paths = []
|
||||
for root, dummy, files in os.walk(projPath):
|
||||
if (tool) in files:
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if ((tool) in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if "packaging" not in rootRealPath:
|
||||
if ("packaging" not in rootRealPath):
|
||||
paths.append(os.path.join(root, tool))
|
||||
break
|
||||
if len(paths) == 0:
|
||||
if (len(paths) == 0):
|
||||
tdLog.exit("taosBenchmark not found!")
|
||||
return
|
||||
else:
|
||||
|
|
|
@ -30,6 +30,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -31,6 +31,8 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
|
|
|
@ -32,30 +32,28 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if "community" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("community")]
|
||||
elif "src" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("src")]
|
||||
elif "/tools/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tools/")]
|
||||
elif "/tests/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tests/")]
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
tdLog.info("cannot found %s in path: %s, use system's" % (tool, selfPath))
|
||||
projPath = "/usr/local/taos/bin/"
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
paths = []
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if (tool) in files:
|
||||
if ((tool) in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if "packaging" not in rootRealPath:
|
||||
if ("packaging" not in rootRealPath):
|
||||
paths.append(os.path.join(root, tool))
|
||||
break
|
||||
if len(paths) == 0:
|
||||
return ""
|
||||
return paths[0]
|
||||
if (len(paths) == 0):
|
||||
tdLog.exit("taosBenchmark not found!")
|
||||
return
|
||||
else:
|
||||
tdLog.info("taosBenchmark found in %s" % paths[0])
|
||||
return paths[0]
|
||||
|
||||
# 获取taosc接口查询的结果文件中的内容,返回每行数据,并断言数据的第一列内容。
|
||||
def assertfileDataTaosc(self, filename, expectResult):
|
||||
|
|
|
@ -29,27 +29,23 @@ class TDTestCase:
|
|||
tdSql.init(conn.cursor(), logSql)
|
||||
|
||||
def getPath(self, tool="taosBenchmark"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if "community" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("community")]
|
||||
elif "src" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("src")]
|
||||
elif "/tools/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tools/")]
|
||||
elif "/tests/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tests/")]
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
tdLog.exit("cannot found %s in path: %s, use system's" % (tool, selfPath))
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
paths = []
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if (tool) in files:
|
||||
if ((tool) in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if "packaging" not in rootRealPath:
|
||||
if ("packaging" not in rootRealPath):
|
||||
paths.append(os.path.join(root, tool))
|
||||
break
|
||||
if len(paths) == 0:
|
||||
if (len(paths) == 0):
|
||||
tdLog.exit("taosBenchmark not found!")
|
||||
return
|
||||
else:
|
||||
|
|
|
@ -30,30 +30,28 @@ class TDTestCase:
|
|||
self.tmpdir = "tmp"
|
||||
|
||||
def getPath(self, tool="taosdump"):
|
||||
if (platform.system().lower() == 'windows'):
|
||||
tool = tool + ".exe"
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if "community" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("community")]
|
||||
elif "src" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("src")]
|
||||
elif "/tools/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tools/")]
|
||||
elif "/tests/" in selfPath:
|
||||
projPath = selfPath[: selfPath.find("/tests/")]
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
tdLog.info("cannot found %s in path: %s, use system's" % (tool, selfPath))
|
||||
projPath = "/usr/local/taos/bin"
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
paths = []
|
||||
for root, dummy, files in os.walk(projPath):
|
||||
if (tool) in files:
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if ((tool) in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if "packaging" not in rootRealPath:
|
||||
if ("packaging" not in rootRealPath):
|
||||
paths.append(os.path.join(root, tool))
|
||||
break
|
||||
if len(paths) == 0:
|
||||
return ""
|
||||
return paths[0]
|
||||
if (len(paths) == 0):
|
||||
tdLog.exit("taosBenchmark not found!")
|
||||
return
|
||||
else:
|
||||
tdLog.info("taosBenchmark found in %s" % paths[0])
|
||||
return paths[0]
|
||||
|
||||
def run(self):
|
||||
tdSql.prepare()
|
||||
|
|
|
@ -316,9 +316,9 @@ if __name__ == "__main__":
|
|||
print(r)
|
||||
else:
|
||||
pass
|
||||
if restful:
|
||||
tAdapter.init(deployPath, masterIp)
|
||||
tAdapter.stop(force_kill=True)
|
||||
|
||||
tAdapter.init(deployPath, masterIp)
|
||||
tAdapter.stop(force_kill=True)
|
||||
|
||||
if dnodeNums == 1:
|
||||
tdDnodes.deploy(1, updateCfgDict)
|
||||
|
|
|
@ -0,0 +1,13 @@
|
|||
python3 ./test.py -f 2-query/table_count_scan.py
|
||||
python3 ./test.py -f 2-query/show_create_db.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/auto_create_table_json.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/custom_col_tag.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/default_json.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/demo.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/insert_alltypes_json.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/invalid_commandline.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/json_tag.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/query_json.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/sample_csv_json.py
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/taosdemoTestQueryWithJson.py -R
|
||||
python3 ./test.py -f 5-taos-tools/taosbenchmark/telnet_tcp.py -R
|
|
@ -6,15 +6,10 @@ cd ../../docs/examples/java
|
|||
|
||||
mvn clean test > jdbc-out.log 2>&1
|
||||
tail -n 20 jdbc-out.log
|
||||
|
||||
cases=`grep 'Tests run' jdbc-out.log | awk 'END{print $3}'`
|
||||
totalJDBCCases=`echo ${cases/%,}`
|
||||
failed=`grep 'Tests run' jdbc-out.log | awk 'END{print $5}'`
|
||||
JDBCFailed=`echo ${failed/%,}`
|
||||
error=`grep 'Tests run' jdbc-out.log | awk 'END{print $7}'`
|
||||
JDBCError=`echo ${error/%,}`
|
||||
|
||||
totalJDBCFailed=`expr $JDBCFailed + $JDBCError`
|
||||
totalJDBCCases=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $2 }'`
|
||||
failed=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $4 }'`
|
||||
error=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $6 }'`
|
||||
totalJDBCFailed=`expr $failed + $error`
|
||||
totalJDBCSuccess=`expr $totalJDBCCases - $totalJDBCFailed`
|
||||
|
||||
if [ "$totalJDBCSuccess" -gt "0" ]; then
|
||||
|
|
|
@ -177,6 +177,7 @@
|
|||
,,y,script,./test.sh -f tsim/query/scalarNull.sim
|
||||
,,y,script,./test.sh -f tsim/query/session.sim
|
||||
,,y,script,./test.sh -f tsim/query/udf.sim
|
||||
,,n,script,./test.sh -f tsim/query/udfpy.sim
|
||||
,,y,script,./test.sh -f tsim/query/udf_with_const.sim
|
||||
,,y,script,./test.sh -f tsim/query/sys_tbname.sim
|
||||
,,y,script,./test.sh -f tsim/query/groupby.sim
|
||||
|
|
|
@ -51,6 +51,9 @@ if [ $ent -eq 0 ]; then
|
|||
ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so 2>/dev/null
|
||||
ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so.1 2>/dev/null
|
||||
ln -s /home/TDengine/include/client/taos.h /usr/include/taos.h 2>/dev/null
|
||||
ln -s /home/TDengine/include/common/taosdef.h /usr/include/taosdef.h 2>/dev/null
|
||||
ln -s /home/TDengine/include/util/taoserror.h /usr/include/taoserror.h 2>/dev/null
|
||||
ln -s /home/TDengine/include/libs/function/taosudf.h /usr/include/taosudf.h 2>/dev/null
|
||||
CONTAINER_TESTDIR=/home/TDengine
|
||||
else
|
||||
export PATH=$PATH:/home/TDinternal/debug/build/bin
|
||||
|
@ -58,6 +61,9 @@ else
|
|||
ln -s /home/TDinternal/debug/build/lib/libtaos.so /usr/lib/libtaos.so 2>/dev/null
|
||||
ln -s /home/TDinternal/debug/build/lib/libtaos.so /usr/lib/libtaos.so.1 2>/dev/null
|
||||
ln -s /home/TDinternal/community/include/client/taos.h /usr/include/taos.h 2>/dev/null
|
||||
ln -s /home/TDinternal/community/include/common/taosdef.h /usr/include/taosdef.h 2>/dev/null
|
||||
ln -s /home/TDinternal/community/include/util/taoserror.h /usr/include/taoserror.h 2>/dev/null
|
||||
ln -s /home/TDinternal/community/include/libs/function/taosudf.h /usr/include/taosudf.h 2>/dev/null
|
||||
CONTAINER_TESTDIR=/home/TDinternal/community
|
||||
fi
|
||||
mkdir -p /var/lib/taos/subscribe
|
||||
|
|
|
@ -0,0 +1,58 @@
|
|||
case_file="cases_temp_file"
|
||||
|
||||
parm_path=$(dirname $0)
|
||||
parm_path=$(pwd ${parm_path})
|
||||
echo "execute path:${parm_path}"
|
||||
cd ${parm_path}
|
||||
cp cases.task ${case_file}
|
||||
sed -i '/^$/d' ${case_file}
|
||||
sed -i '$a\%%FINISHED%%' ${case_file}
|
||||
|
||||
utest="unit-test"
|
||||
tsimtest="script"
|
||||
systest="system-test"
|
||||
devtest="develop-test"
|
||||
doctest="docs-examples-test"
|
||||
rm -rf win-${utest}.log win-${tsimtest}.log win-${systest}.log win-${devtest}.log win-${doctest}.log
|
||||
rm -rf ${parm_path}/../${utest}/win-test-file ${parm_path}/../${tsimtest}/win-test-file ${parm_path}/../${systest}/win-test-file ${parm_path}/../${devtest}/win-test-file
|
||||
while read -r line
|
||||
do
|
||||
echo "$line"|grep -q "^#"
|
||||
if [ $? -eq 0 ]; then
|
||||
continue
|
||||
fi
|
||||
exec_dir=$(echo "$line"|cut -d ',' -f4)
|
||||
case_cmd=$(echo "$line"|cut -d ',' -f5)
|
||||
if [[ "${exec_dir}" == "${utest}" ]]; then
|
||||
echo ${case_cmd} >> win-${utest}.log
|
||||
continue
|
||||
fi
|
||||
if [[ "${exec_dir}" == "${tsimtest}" ]]; then
|
||||
echo ${case_cmd} >> win-${tsimtest}.log
|
||||
continue
|
||||
fi
|
||||
if [[ "${exec_dir}" == "${systest}" ]]; then
|
||||
if [[ "${case_cmd}" =~ "pytest.sh" ]]; then
|
||||
case_cmd=$(echo "$case_cmd"|cut -d ' ' -f 2-)
|
||||
echo ${case_cmd} >> win-${systest}.log
|
||||
else
|
||||
echo ${case_cmd} >> win-${systest}.log
|
||||
fi
|
||||
continue
|
||||
fi
|
||||
if [[ "${exec_dir}" == "${devtest}" ]]; then
|
||||
echo ${case_cmd} >> win-${devtest}.log
|
||||
continue
|
||||
fi
|
||||
if [[ "${exec_dir}" == "${doctest}" ]]; then
|
||||
echo ${case_cmd} >> win-${doctest}.log
|
||||
continue
|
||||
fi
|
||||
done < ${case_file}
|
||||
mv win-${utest}.log ${parm_path}/../${utest}/win-test-file
|
||||
mv win-${tsimtest}.log ${parm_path}/../${tsimtest}/win-test-file
|
||||
mv win-${systest}.log ${parm_path}/../${systest}/win-test-file
|
||||
mv win-${devtest}.log ${parm_path}/../${devtest}/win-test-file
|
||||
|
||||
|
||||
rm -rf ${case_file}
|
|
@ -94,14 +94,14 @@ class TDCases:
|
|||
|
||||
tdLog.notice("total %d Windows test case(s) executed" % (runNum))
|
||||
|
||||
def runOneWindows(self, conn, fileName):
|
||||
def runOneWindows(self, conn, fileName, replicaVar=1):
|
||||
testModule = self.__dynamicLoadModule(fileName)
|
||||
|
||||
runNum = 0
|
||||
for tmp in self.windowsCases:
|
||||
if tmp.name.find(fileName) != -1:
|
||||
case = testModule.TDTestCase()
|
||||
case.init(conn, self._logSql)
|
||||
case.init(conn, self._logSql,replicaVar)
|
||||
try:
|
||||
case.run()
|
||||
except Exception as e:
|
||||
|
|
|
@ -46,11 +46,11 @@ class TDSimClient:
|
|||
self.path = path
|
||||
|
||||
def getLogDir(self):
|
||||
self.logDir = "%s/sim/psim/log" % (self.path)
|
||||
self.logDir = os.path.join(self.path,"sim","psim","log")
|
||||
return self.logDir
|
||||
|
||||
def getCfgDir(self):
|
||||
self.cfgDir = "%s/sim/psim/cfg" % (self.path)
|
||||
self.cfgDir = os.path.join(self.path,"sim","psim","cfg")
|
||||
return self.cfgDir
|
||||
|
||||
def setTestCluster(self, value):
|
||||
|
@ -65,9 +65,9 @@ class TDSimClient:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def deploy(self):
|
||||
self.logDir = "%s/sim/psim/log" % (self.path)
|
||||
self.cfgDir = "%s/sim/psim/cfg" % (self.path)
|
||||
self.cfgPath = "%s/sim/psim/cfg/taos.cfg" % (self.path)
|
||||
self.logDir = os.path.join(self.path,"sim","psim","log")
|
||||
self.cfgDir = os.path.join(self.path,"sim","psim","cfg")
|
||||
self.cfgPath = os.path.join(self.path,"sim","psim","cfg","taos.cfg")
|
||||
|
||||
cmd = "rm -rf " + self.logDir
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -131,11 +131,10 @@ class TDDnode:
|
|||
return totalSize
|
||||
|
||||
def deploy(self):
|
||||
self.logDir = "%s/sim/dnode%d/log" % (self.path, self.index)
|
||||
self.dataDir = "%s/sim/dnode%d/data" % (self.path, self.index)
|
||||
self.cfgDir = "%s/sim/dnode%d/cfg" % (self.path, self.index)
|
||||
self.cfgPath = "%s/sim/dnode%d/cfg/taos.cfg" % (
|
||||
self.path, self.index)
|
||||
self.logDir = os.path.join(self.path,"sim","dnode%d" % self.index, "log")
|
||||
self.dataDir = os.path.join(self.path,"sim","dnode%d" % self.index, "data")
|
||||
self.cfgDir = os.path.join(self.path,"sim","dnode%d" % self.index, "cfg")
|
||||
self.cfgPath = os.path.join(self.path,"sim","dnode%d" % self.index, "cfg","taos.cfg")
|
||||
|
||||
cmd = "rm -rf " + self.dataDir
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -325,11 +324,11 @@ class TDDnode:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def getDnodeRootDir(self, index):
|
||||
dnodeRootDir = "%s/sim/psim/dnode%d" % (self.path, index)
|
||||
dnodeRootDir = os.path.join(self.path,"sim","psim","dnode%d" % index)
|
||||
return dnodeRootDir
|
||||
|
||||
def getDnodesRootDir(self):
|
||||
dnodesRootDir = "%s/sim/psim" % (self.path)
|
||||
dnodesRootDir = os.path.join(self.path,"sim","psim")
|
||||
return dnodesRootDir
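
The recurring change in these dnodes hunks replaces hand-built, slash-separated path strings with os.path.join so the simulator directories resolve correctly on Windows as well as Linux. A minimal before/after illustration (the path and index values below are made up):

    import os

    path, index = "/home/TDinternal/sim_root", 1   # hypothetical values

    # Old style: hard-coded forward slashes, wrong separator on Windows
    old_cfg = "%s/sim/dnode%d/cfg/taos.cfg" % (path, index)

    # New style: os.path.join picks the platform's separator automatically
    new_cfg = os.path.join(path, "sim", "dnode%d" % index, "cfg", "taos.cfg")

    print(old_cfg)
    print(new_cfg)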
|
||||
|
||||
|
||||
|
|
|
@ -44,11 +44,11 @@ class TDSimClient:
|
|||
self.path = path
|
||||
|
||||
def getLogDir(self):
|
||||
self.logDir = "%s/sim/psim/log" % (self.path)
|
||||
self.logDir = os.path.join(self.path,"sim","psim","log")
|
||||
return self.logDir
|
||||
|
||||
def getCfgDir(self):
|
||||
self.cfgDir = "%s/sim/psim/cfg" % (self.path)
|
||||
self.cfgDir = os.path.join(self.path,"sim","psim","cfg")
|
||||
return self.cfgDir
|
||||
|
||||
def setTestCluster(self, value):
|
||||
|
@ -63,9 +63,9 @@ class TDSimClient:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def deploy(self):
|
||||
self.logDir = "%s/sim/psim/log" % (self.path)
|
||||
self.cfgDir = "%s/sim/psim/cfg" % (self.path)
|
||||
self.cfgPath = "%s/sim/psim/cfg/taos.cfg" % (self.path)
|
||||
self.logDir = os.path.join(self.path,"sim","psim","log")
|
||||
self.cfgDir = os.path.join(self.path,"sim","psim","cfg")
|
||||
self.cfgPath = os.path.join(self.path,"sim","psim","cfg","taos.cfg")
|
||||
|
||||
cmd = "rm -rf " + self.logDir
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -129,11 +129,10 @@ class TDDnode:
|
|||
return totalSize
|
||||
|
||||
def deploy(self):
|
||||
self.logDir = "%s/sim/dnode%d/log" % (self.path, self.index)
|
||||
self.dataDir = "%s/sim/dnode%d/data" % (self.path, self.index)
|
||||
self.cfgDir = "%s/sim/dnode%d/cfg" % (self.path, self.index)
|
||||
self.cfgPath = "%s/sim/dnode%d/cfg/taos.cfg" % (
|
||||
self.path, self.index)
|
||||
self.logDir = os.path.join(self.path,"sim","dnode%d" % self.index, "log")
|
||||
self.dataDir = os.path.join(self.path,"sim","dnode%d" % self.index, "data")
|
||||
self.cfgDir = os.path.join(self.path,"sim","dnode%d" % self.index, "cfg")
|
||||
self.cfgPath = os.path.join(self.path,"sim","dnode%d" % self.index, "cfg","taos.cfg")
|
||||
|
||||
cmd = "rm -rf " + self.dataDir
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -323,11 +322,11 @@ class TDDnode:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def getDnodeRootDir(self, index):
|
||||
dnodeRootDir = "%s/sim/psim/dnode%d" % (self.path, index)
|
||||
dnodeRootDir = os.path.join(self.path,"sim","psim","dnode%d" % index)
|
||||
return dnodeRootDir
|
||||
|
||||
def getDnodesRootDir(self):
|
||||
dnodesRootDir = "%s/sim/psim" % (self.path)
|
||||
dnodesRootDir = os.path.join(self.path,"sim","psim")
|
||||
return dnodesRootDir
|
||||
|
||||
|
||||
|
|
|
@ -41,11 +41,11 @@ class TDSimClient:
|
|||
self.path = path
|
||||
|
||||
def getLogDir(self):
|
||||
self.logDir = "%s/sim/psim/log" % (self.path)
|
||||
self.logDir = os.path.join(self.path,"sim","psim","log")
|
||||
return self.logDir
|
||||
|
||||
def getCfgDir(self):
|
||||
self.cfgDir = "%s/sim/psim/cfg" % (self.path)
|
||||
self.cfgDir = os.path.join(self.path,"sim","psim","cfg")
|
||||
return self.cfgDir
|
||||
|
||||
def setTestCluster(self, value):
|
||||
|
@ -60,9 +60,9 @@ class TDSimClient:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def deploy(self):
|
||||
self.logDir = "%s/sim/psim/log" % (self.path)
|
||||
self.cfgDir = "%s/sim/psim/cfg" % (self.path)
|
||||
self.cfgPath = "%s/sim/psim/cfg/taos.cfg" % (self.path)
|
||||
self.logDir = os.path.join(self.path,"sim","psim","log")
|
||||
self.cfgDir = os.path.join(self.path,"sim","psim","cfg")
|
||||
self.cfgPath = os.path.join(self.path,"sim","psim","cfg","taos.cfg")
|
||||
|
||||
cmd = "rm -rf " + self.logDir
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -126,11 +126,10 @@ class TDDnode:
|
|||
return totalSize
|
||||
|
||||
def deploy(self):
|
||||
self.logDir = "%s/sim/dnode%d/log" % (self.path, self.index)
|
||||
self.dataDir = "%s/sim/dnode%d/data" % (self.path, self.index)
|
||||
self.cfgDir = "%s/sim/dnode%d/cfg" % (self.path, self.index)
|
||||
self.cfgPath = "%s/sim/dnode%d/cfg/taos.cfg" % (
|
||||
self.path, self.index)
|
||||
self.logDir = os.path.join(self.path,"sim","dnode%d" % self.index, "log")
|
||||
self.dataDir = os.path.join(self.path,"sim","dnode%d" % self.index, "data")
|
||||
self.cfgDir = os.path.join(self.path,"sim","dnode%d" % self.index, "cfg")
|
||||
self.cfgPath = os.path.join(self.path,"sim","dnode%d" % self.index, "cfg","taos.cfg")
|
||||
|
||||
cmd = "rm -rf " + self.dataDir
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -320,11 +319,11 @@ class TDDnode:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def getDnodeRootDir(self, index):
|
||||
dnodeRootDir = "%s/sim/psim/dnode%d" % (self.path, index)
|
||||
dnodeRootDir = os.path.join(self.path,"sim","psim","dnode%d" % index)
|
||||
return dnodeRootDir
|
||||
|
||||
def getDnodesRootDir(self):
|
||||
dnodesRootDir = "%s/sim/psim" % (self.path)
|
||||
dnodesRootDir = os.path.join(self.path,"sim","psim")
|
||||
return dnodesRootDir
|
||||
|
||||
|
||||
|
|
|
@ -50,11 +50,11 @@ class TDSimClient:
|
|||
}
|
||||
|
||||
def getLogDir(self):
|
||||
self.logDir = "%s/sim/psim/log" % (self.path)
|
||||
self.logDir = os.path.join(self.path,"sim","psim","log")
|
||||
return self.logDir
|
||||
|
||||
def getCfgDir(self):
|
||||
self.cfgDir = "%s/sim/psim/cfg" % (self.path)
|
||||
self.cfgDir = os.path.join(self.path,"sim","psim","cfg")
|
||||
return self.cfgDir
|
||||
|
||||
def setTestCluster(self, value):
|
||||
|
@ -69,9 +69,9 @@ class TDSimClient:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def deploy(self, *updatecfgDict):
|
||||
self.logDir = "%s/sim/psim/log" % (self.path)
|
||||
self.cfgDir = "%s/sim/psim/cfg" % (self.path)
|
||||
self.cfgPath = "%s/sim/psim/cfg/taos.cfg" % (self.path)
|
||||
self.logDir = os.path.join(self.path,"sim","psim","log")
|
||||
self.cfgDir = os.path.join(self.path,"sim","psim","cfg")
|
||||
self.cfgPath = os.path.join(self.path,"sim","psim","cfg","taos.cfg")
|
||||
|
||||
cmd = "rm -rf " + self.logDir
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -207,11 +207,10 @@ class TDDnode:
|
|||
self.remote_conn.run("python3 ./test.py %s -d %s -e %s"%(valgrindStr,remoteCfgDictStr,execCmdStr))
|
||||
|
||||
def deploy(self, *updatecfgDict):
|
||||
self.logDir = "%s/sim/dnode%d/log" % (self.path, self.index)
|
||||
self.dataDir = "%s/sim/dnode%d/data" % (self.path, self.index)
|
||||
self.cfgDir = "%s/sim/dnode%d/cfg" % (self.path, self.index)
|
||||
self.cfgPath = "%s/sim/dnode%d/cfg/taos.cfg" % (
|
||||
self.path, self.index)
|
||||
self.logDir = os.path.join(self.path,"sim","dnode%d" % self.index, "log")
|
||||
self.dataDir = os.path.join(self.path,"sim","dnode%d" % self.index, "data")
|
||||
self.cfgDir = os.path.join(self.path,"sim","dnode%d" % self.index, "cfg")
|
||||
self.cfgPath = os.path.join(self.path,"sim","dnode%d" % self.index, "cfg","taos.cfg")
|
||||
|
||||
cmd = "rm -rf " + self.dataDir
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -472,20 +471,25 @@ class TDDnode:
|
|||
tdLog.exit("dnode:%d is not deployed" % (self.index))
|
||||
|
||||
if self.valgrind == 0:
|
||||
if self.asan:
|
||||
asanDir = "%s/sim/asan/dnode%d.asan" % (
|
||||
self.path, self.index)
|
||||
cmd = "nohup %s -c %s > /dev/null 2> %s & " % (
|
||||
binPath, self.cfgDir, asanDir)
|
||||
if platform.system().lower() == 'windows':
|
||||
cmd = "mintty -h never %s -c %s" % (binPath, self.cfgDir)
|
||||
else:
|
||||
cmd = "nohup %s -c %s > /dev/null 2>&1 & " % (
|
||||
binPath, self.cfgDir)
|
||||
if self.asan:
|
||||
asanDir = "%s/sim/asan/dnode%d.asan" % (
|
||||
self.path, self.index)
|
||||
cmd = "nohup %s -c %s > /dev/null 2> %s & " % (
|
||||
binPath, self.cfgDir, asanDir)
|
||||
else:
|
||||
cmd = "nohup %s -c %s > /dev/null 2>&1 & " % (
|
||||
binPath, self.cfgDir)
|
||||
else:
|
||||
valgrindCmdline = "valgrind --log-file=\"%s/../log/valgrind.log\" --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all -v --workaround-gcc296-bugs=yes"%self.cfgDir
|
||||
|
||||
cmd = "nohup %s %s -c %s 2>&1 & " % (
|
||||
valgrindCmdline, binPath, self.cfgDir)
|
||||
|
||||
if platform.system().lower() == 'windows':
|
||||
cmd = "mintty -h never %s %s -c %s" % (
|
||||
valgrindCmdline, binPath, self.cfgDir)
|
||||
else:
|
||||
cmd = "nohup %s %s -c %s 2>&1 & " % (
|
||||
valgrindCmdline, binPath, self.cfgDir)
|
||||
print(cmd)
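
The hunk above reworks how the taosd start command is assembled: on Windows the binary is launched through mintty, elsewhere it is backgrounded with nohup, optionally redirecting stderr to a per-dnode .asan file or wrapping the binary in valgrind. A simplified, hypothetical sketch of that decision tree (bin_path and cfg_dir are placeholders and the valgrind flags are condensed):

    import platform

    def build_start_cmd(bin_path, cfg_dir, use_valgrind=False, use_asan=False, asan_log=None):
        """Sketch of the platform-dependent command assembly shown above."""
        if use_valgrind:
            vg = f'valgrind --log-file="{cfg_dir}/../log/valgrind.log" --tool=memcheck --leak-check=full'
            if platform.system().lower() == "windows":
                return f"mintty -h never {vg} {bin_path} -c {cfg_dir}"
            return f"nohup {vg} {bin_path} -c {cfg_dir} 2>&1 &"
        if platform.system().lower() == "windows":
            return f"mintty -h never {bin_path} -c {cfg_dir}"
        if use_asan and asan_log:
            # keep sanitizer reports by redirecting stderr to the per-dnode .asan file
            return f"nohup {bin_path} -c {cfg_dir} > /dev/null 2> {asan_log} &"
        return f"nohup {bin_path} -c {cfg_dir} > /dev/null 2>&1 &"

    print(build_start_cmd("/usr/bin/taosd", "/tmp/cfg"))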
|
||||
|
||||
if (self.remoteIP == ""):
|
||||
|
@ -568,6 +572,8 @@ class TDDnode:
|
|||
while(processID):
|
||||
if not platform.system().lower() == 'windows' or (onlyKillOnceWindows == 0 and platform.system().lower() == 'windows'):
|
||||
killCmd = "kill -INT %s > /dev/null 2>&1" % processID
|
||||
if platform.system().lower() == 'windows':
|
||||
killCmd = "kill -INT %s > nul 2>&1" % processID
|
||||
os.system(killCmd)
|
||||
onlyKillOnceWindows = 1
|
||||
time.sleep(1)
|
||||
|
@ -635,11 +641,11 @@ class TDDnode:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def getDnodeRootDir(self, index):
|
||||
dnodeRootDir = "%s/sim/psim/dnode%d" % (self.path, index)
|
||||
dnodeRootDir = os.path.join(self.path,"sim","psim","dnode%d" % index)
|
||||
return dnodeRootDir
|
||||
|
||||
def getDnodesRootDir(self):
|
||||
dnodesRootDir = "%s/sim/psim" % (self.path)
|
||||
dnodesRootDir = os.path.join(self.path,"sim","psim")
|
||||
return dnodesRootDir
|
||||
|
||||
|
||||
|
@ -789,15 +795,21 @@ class TDDnodes:
|
|||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
while(processID):
|
||||
print(processID)
|
||||
if platform.system().lower() == 'windows':
|
||||
killCmd = "kill -9 %s > nul 2>&1" % processID
|
||||
else:
|
||||
killCmd = "kill -9 %s > /dev/null 2>&1" % processID
|
||||
killCmd = "kill -9 %s > /dev/null 2>&1" % processID
|
||||
os.system(killCmd)
|
||||
time.sleep(1)
|
||||
processID = subprocess.check_output(
|
||||
psCmd, shell=True).decode("utf-8").strip()
|
||||
elif platform.system().lower() == 'windows':
|
||||
psCmd = "for /f %a in ('wmic process where \"name='taosd.exe'\" get processId ^| xargs echo ^| awk '{print $2}' ^&^& echo aa') do @(ps | grep %a | awk '{print $1}' | xargs)"
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
while(processID):
|
||||
print(processID)
|
||||
killCmd = "kill -9 %s > nul 2>&1" % processID
|
||||
os.system(killCmd)
|
||||
time.sleep(1)
|
||||
processID = subprocess.check_output(
|
||||
psCmd, shell=True).decode("utf-8").strip()
|
||||
|
||||
else:
|
||||
psCmd = "ps -ef | grep -w taosd | grep 'root' | grep -v grep| grep -v defunct | awk '{print $2}' | xargs"
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
|
@ -809,10 +821,7 @@ class TDDnodes:
|
|||
psCmd = "ps -ef|grep -w taosd| grep -v grep| grep -v defunct | awk '{print $2}' | xargs"
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
while(processID):
|
||||
if platform.system().lower() == 'windows':
|
||||
killCmd = "kill -9 %s > nul 2>&1" % processID
|
||||
else:
|
||||
killCmd = "kill -9 %s > /dev/null 2>&1" % processID
|
||||
killCmd = "kill -9 %s > /dev/null 2>&1" % processID
|
||||
os.system(killCmd)
|
||||
time.sleep(1)
|
||||
processID = subprocess.check_output(
|
||||
|
|
|
@ -18,6 +18,9 @@ class GetTime:
|
|||
|
||||
def get_ms_timestamp(self,ts_str):
|
||||
_ts_str = ts_str
|
||||
if "+" in _ts_str:
|
||||
timestamp = datetime.fromisoformat(_ts_str)
|
||||
return int((timestamp-datetime.fromtimestamp(0,timestamp.tzinfo)).total_seconds())*1000+int(timestamp.microsecond / 1000)
|
||||
if " " in ts_str:
|
||||
p = ts_str.split(" ")[1]
|
||||
if len(p) > 15 :
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
import socket
|
||||
import requests
|
||||
from fabric2 import Connection
|
||||
from util.log import *
|
||||
from util.common import *
|
||||
|
@ -132,9 +132,9 @@ class TAdapter:
|
|||
tdLog.exit(cmd)
|
||||
|
||||
def deploy(self, *update_cfg_dict):
|
||||
self.log_dir = f"{self.path}/sim/dnode1/log"
|
||||
self.cfg_dir = f"{self.path}/sim/dnode1/cfg"
|
||||
self.cfg_path = f"{self.cfg_dir}/taosadapter.toml"
|
||||
self.log_dir = os.path.join(self.path,"sim","dnode1","log")
|
||||
self.cfg_dir = os.path.join(self.path,"sim","dnode1","cfg")
|
||||
self.cfg_path = os.path.join(self.cfg_dir,"taosadapter.toml")
|
||||
|
||||
cmd = f"touch {self.cfg_path}"
|
||||
if os.system(cmd) != 0:
|
||||
|
@ -162,7 +162,7 @@ class TAdapter:
|
|||
tdLog.info(f"taosadapter found: {bin_path}")
|
||||
|
||||
if platform.system().lower() == 'windows':
|
||||
cmd = f"mintty -h never {bin_path} -c {self.cfg_dir}"
|
||||
cmd = f"mintty -h never {bin_path} -c {self.cfg_path}"
|
||||
else:
|
||||
cmd = f"nohup {bin_path} -c {self.cfg_path} > /dev/null & "
|
||||
|
||||
|
@ -170,7 +170,7 @@ class TAdapter:
|
|||
self.remote_exec(self.taosadapter_cfg_dict, f"tAdapter.deployed=1\ntAdapter.log_dir={self.log_dir}\ntAdapter.cfg_dir={self.cfg_dir}\ntAdapter.start()")
|
||||
self.running = 1
|
||||
else:
|
||||
os.system(f"rm -rf {self.log_dir}/taosadapter*")
|
||||
os.system(f"rm -rf {self.log_dir}{os.sep}taosadapter*")
|
||||
if os.system(cmd) != 0:
|
||||
tdLog.exit(cmd)
|
||||
self.running = 1
|
||||
|
@ -179,22 +179,19 @@ class TAdapter:
|
|||
time.sleep(0.1)
|
||||
|
||||
taosadapter_port = self.taosadapter_cfg_dict["port"]
|
||||
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
s.settimeout(3)
|
||||
try:
|
||||
res = s.connect_ex((self.remoteIP, taosadapter_port))
|
||||
s.shutdown(2)
|
||||
if res == 0:
|
||||
tdLog.info(f"the taosadapter has been started, using port:{taosadapter_port}")
|
||||
else:
|
||||
tdLog.info(f"the taosadapter do not started!!!")
|
||||
except socket.error as e:
|
||||
tdLog.notice("socket connect error!")
|
||||
finally:
|
||||
if s:
|
||||
s.close()
|
||||
# tdLog.debug("the taosadapter has been started.")
|
||||
time.sleep(1)
|
||||
for i in range(5):
|
||||
ip = 'localhost'
|
||||
if self.remoteIP != "":
|
||||
ip = self.remoteIP
|
||||
url = f'http://{ip}:{taosadapter_port}/-/ping'
|
||||
try:
|
||||
r = requests.get(url)
|
||||
if r.status_code == 200:
|
||||
tdLog.info(f"the taosadapter has been started, using port:{taosadapter_port}")
|
||||
break
|
||||
except Exception:
|
||||
tdLog.info(f"the taosadapter do not started!!!")
|
||||
time.sleep(1)
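
This change replaces the one-shot TCP connect probe with an HTTP health check: the test now polls the adapter's /-/ping endpoint for up to five attempts before moving on. A standalone sketch of such a readiness wait (host, port, and retry count mirror the diff but are otherwise assumptions):

    import time
    import requests

    def wait_for_adapter(host="localhost", port=6041, retries=5):
        """Poll the taosadapter ping endpoint until it answers or retries run out."""
        url = f"http://{host}:{port}/-/ping"
        for _ in range(retries):
            try:
                if requests.get(url, timeout=3).status_code == 200:
                    return True
            except requests.RequestException:
                pass  # adapter not up yet; fall through to sleep and retry
            time.sleep(1)
        return False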
|
||||
|
||||
def start_taosadapter(self):
|
||||
"""
|
||||
|
@ -228,33 +225,36 @@ class TAdapter:
|
|||
|
||||
def stop(self, force_kill=False):
|
||||
signal = "-9" if force_kill else "-15"
|
||||
|
||||
if self.remoteIP:
|
||||
self.remote_exec(self.taosadapter_cfg_dict, "tAdapter.running=1\ntAdapter.stop()")
|
||||
tdLog.info("stop taosadapter")
|
||||
return
|
||||
|
||||
toBeKilled = "taosadapter"
|
||||
|
||||
if self.running != 0:
|
||||
if platform.system().lower() == 'windows':
|
||||
psCmd = f"ps -ef|grep -w {toBeKilled}| grep -v grep | awk '{{print $2}}'"
|
||||
# psCmd = f"pgrep {toBeKilled}"
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
while(processID):
|
||||
killCmd = "kill %s %s > /dev/null 2>&1" % (signal, processID)
|
||||
while(processID):
|
||||
killCmd = "kill %s %s > nul 2>&1" % (signal, processID)
|
||||
os.system(killCmd)
|
||||
time.sleep(1)
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
if not platform.system().lower() == 'windows':
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
self.running = 0
|
||||
tdLog.debug(f"taosadapter is stopped by kill {signal}")
|
||||
|
||||
else:
|
||||
if self.running != 0:
|
||||
psCmd = f"ps -ef|grep -w {toBeKilled}| grep -v grep | awk '{{print $2}}'"
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
while(processID):
|
||||
killCmd = "kill %s %s > /dev/null 2>&1" % (signal, processID)
|
||||
os.system(killCmd)
|
||||
time.sleep(1)
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8").strip()
|
||||
port = 6041
|
||||
fuserCmd = f"fuser -k -n tcp {port} > /dev/null"
|
||||
os.system(fuserCmd)
|
||||
# for port in range(6030, 6041):
|
||||
# fuserCmd = f"fuser -k -n tcp {port} > /dev/null"
|
||||
# os.system(fuserCmd)
|
||||
|
||||
self.running = 0
|
||||
tdLog.debug(f"taosadapter is stopped by kill {signal}")
|
||||
self.running = 0
|
||||
tdLog.debug(f"taosadapter is stopped by kill {signal}")
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,61 +1,47 @@
|
|||
#include <string.h>
|
||||
#include <stdlib.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include "taosudf.h"
|
||||
|
||||
DLL_EXPORT int32_t bit_and_init() { return 0; }
|
||||
|
||||
DLL_EXPORT int32_t bit_and_init() {
|
||||
return 0;
|
||||
}
|
||||
DLL_EXPORT int32_t bit_and_destroy() { return 0; }
|
||||
|
||||
DLL_EXPORT int32_t bit_and_destroy() {
|
||||
return 0;
|
||||
}
|
||||
DLL_EXPORT int32_t bit_and(SUdfDataBlock* block, SUdfColumn* resultCol) {
|
||||
if (block->numOfCols < 2) {
|
||||
return TSDB_CODE_UDF_INVALID_INPUT;
|
||||
}
|
||||
|
||||
DLL_EXPORT int32_t bit_and(SUdfDataBlock* block, SUdfColumn *resultCol) {
|
||||
|
||||
if (block->numOfCols < 2) {
|
||||
return TSDB_CODE_UDF_INVALID_INPUT;
|
||||
for (int32_t i = 0; i < block->numOfCols; ++i) {
|
||||
SUdfColumn* col = block->udfCols[i];
|
||||
if (!(col->colMeta.type == TSDB_DATA_TYPE_INT)) {
|
||||
return TSDB_CODE_UDF_INVALID_INPUT;
|
||||
}
|
||||
}
|
||||
|
||||
for (int32_t i = 0; i < block->numOfCols; ++i) {
|
||||
SUdfColumn* col = block->udfCols[i];
|
||||
if (!(col->colMeta.type == TSDB_DATA_TYPE_INT)) {
|
||||
return TSDB_CODE_UDF_INVALID_INPUT;
|
||||
}
|
||||
SUdfColumnData* resultData = &resultCol->colData;
|
||||
|
||||
for (int32_t i = 0; i < block->numOfRows; ++i) {
|
||||
if (udfColDataIsNull(block->udfCols[0], i)) {
|
||||
udfColDataSetNull(resultCol, i);
|
||||
continue;
|
||||
}
|
||||
int32_t result = *(int32_t*)udfColDataGetData(block->udfCols[0], i);
|
||||
int j = 1;
|
||||
for (; j < block->numOfCols; ++j) {
|
||||
if (udfColDataIsNull(block->udfCols[j], i)) {
|
||||
udfColDataSetNull(resultCol, i);
|
||||
break;
|
||||
}
|
||||
|
||||
SUdfColumnMeta *meta = &resultCol->colMeta;
|
||||
meta->bytes = 4;
|
||||
meta->type = TSDB_DATA_TYPE_INT;
|
||||
meta->scale = 0;
|
||||
meta->precision = 0;
|
||||
|
||||
|
||||
SUdfColumnData *resultData = &resultCol->colData;
|
||||
|
||||
resultData->numOfRows = block->numOfRows;
|
||||
|
||||
for (int32_t i = 0; i < resultData->numOfRows; ++i) {
|
||||
if (udfColDataIsNull(block->udfCols[0], i)) {
|
||||
udfColDataSetNull(resultCol, i);
|
||||
continue;
|
||||
}
|
||||
int32_t result = *(int32_t*)udfColDataGetData(block->udfCols[0], i);
|
||||
int j = 1;
|
||||
for (; j < block->numOfCols; ++j) {
|
||||
if (udfColDataIsNull(block->udfCols[j], i)) {
|
||||
udfColDataSetNull(resultCol, i);
|
||||
break;
|
||||
}
|
||||
|
||||
char* colData = udfColDataGetData(block->udfCols[j], i);
|
||||
result &= *(int32_t*)colData;
|
||||
}
|
||||
if (j == block->numOfCols) {
|
||||
udfColDataSet(resultCol, i, (char*)&result, false);
|
||||
}
|
||||
|
||||
char* colData = udfColDataGetData(block->udfCols[j], i);
|
||||
result &= *(int32_t*)colData;
|
||||
}
|
||||
return TSDB_CODE_SUCCESS;
|
||||
if (j == block->numOfCols) {
|
||||
udfColDataSet(resultCol, i, (char*)&result, false);
|
||||
}
|
||||
}
|
||||
resultData->numOfRows = block->numOfRows;
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
|
|
@ -0,0 +1,2 @@
|
|||
@echo off
|
||||
echo 0
|
|
@ -54,7 +54,7 @@ if %NODE% == 6 set NODE=7600
|
|||
if %NODE% == 7 set NODE=7700
|
||||
if %NODE% == 8 set NODE=7800
|
||||
|
||||
rem set "fqdn="
|
||||
set "fqdn=localhost"
|
||||
for /f "skip=1" %%A in (
|
||||
'wmic computersystem get caption'
|
||||
) do if not defined fqdn set "fqdn=%%A"
|
||||
|
|
|
@ -50,7 +50,7 @@ if %EXEC_OPTON% == start (
|
|||
goto :finish
|
||||
)
|
||||
echo check taosd online
|
||||
tail -n +0 %TAOS_LOG% | grep -q "TDengine initialized successfully" || goto :check_online
|
||||
tail -n +0 %TAOS_LOG% | grep -E "TDengine initialized successfully|from offline to online" || goto :check_online
|
||||
echo finish
|
||||
goto :finish
|
||||
)
|
||||
|
|
|
@ -16,7 +16,7 @@ DLL_EXPORT int32_t l2norm_destroy() {
|
|||
DLL_EXPORT int32_t l2norm_start(SUdfInterBuf *buf) {
|
||||
*(int64_t*)(buf->buf) = 0;
|
||||
buf->bufLen = sizeof(double);
|
||||
buf->numOfResult = 0;
|
||||
buf->numOfResult = 1;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -58,20 +58,11 @@ DLL_EXPORT int32_t l2norm(SUdfDataBlock* block, SUdfInterBuf *interBuf, SUdfInte
|
|||
|
||||
*(double*)(newInterBuf->buf) = sumSquares;
|
||||
newInterBuf->bufLen = sizeof(double);
|
||||
|
||||
if (interBuf->numOfResult == 0 && numNotNull == 0) {
|
||||
newInterBuf->numOfResult = 0;
|
||||
} else {
|
||||
newInterBuf->numOfResult = 1;
|
||||
}
|
||||
newInterBuf->numOfResult = 1;
|
||||
return 0;
|
||||
}
|
||||
|
||||
DLL_EXPORT int32_t l2norm_finish(SUdfInterBuf* buf, SUdfInterBuf *resultData) {
|
||||
if (buf->numOfResult == 0) {
|
||||
resultData->numOfResult = 0;
|
||||
return 0;
|
||||
}
|
||||
double sumSquares = *(double*)(buf->buf);
|
||||
*(double*)(resultData->buf) = sqrt(sumSquares);
|
||||
resultData->bufLen = sizeof(double);
|
||||
|
|
|
@ -0,0 +1,16 @@
|
|||
#!/bin/bash
|
||||
set +e
|
||||
|
||||
FILE=/usr/local/lib/libtaospyudf.so
|
||||
if [ ! -f "$FILE" ]; then
|
||||
echo "$FILE does not exist."
|
||||
apt install -y python3 python3-dev python3-venv
|
||||
/usr/bin/python3 -m venv /udfenv
|
||||
source /udfenv/bin/activate
|
||||
pip3 install taospyudf
|
||||
ldconfig
|
||||
deactivate
|
||||
else
|
||||
echo "show dependencies of $FILE"
|
||||
ldd $FILE
|
||||
fi
|
|
@ -0,0 +1,21 @@
|
|||
def init():
|
||||
pass
|
||||
|
||||
def process(block):
|
||||
(rows, cols) = block.shape()
|
||||
result = []
|
||||
for i in range(rows):
|
||||
r = 2 ** 32 - 1
|
||||
for j in range(cols):
|
||||
cell = block.data(i,j)
|
||||
if cell is None:
|
||||
result.append(None)
|
||||
break
|
||||
else:
|
||||
r = r & cell
|
||||
else:
|
||||
result.append(r)
|
||||
return result
|
||||
|
||||
def destroy():
|
||||
pass
|
|
@ -0,0 +1,27 @@
|
|||
import json
|
||||
import math
|
||||
|
||||
def init():
|
||||
pass
|
||||
|
||||
def destroy():
|
||||
pass
|
||||
|
||||
def start():
|
||||
return json.dumps(0.0).encode('utf-8')
|
||||
|
||||
def finish(buf):
|
||||
sum_squares = json.loads(buf)
|
||||
result = math.sqrt(sum_squares)
|
||||
return result
|
||||
|
||||
def reduce(datablock, buf):
|
||||
(rows, cols) = datablock.shape()
|
||||
sum_squares = json.loads(buf)
|
||||
|
||||
for i in range(rows):
|
||||
for j in range(cols):
|
||||
cell = datablock.data(i,j)
|
||||
if cell is not None:
|
||||
sum_squares += cell * cell
|
||||
return json.dumps(sum_squares).encode('utf-8')
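
The aggregate Python UDF above keeps its intermediate state as a JSON-encoded running sum of squares and takes the square root at finish time. The start/reduce/finish trio can be exercised locally with a small stand-in for the engine's data block (FakeBlock below is a made-up stub, not part of the taospyudf API):

    import json, math

    class FakeBlock:
        """Minimal stand-in for the engine's data block: a list of rows of values."""
        def __init__(self, rows):
            self.rows = rows
        def shape(self):
            return (len(self.rows), len(self.rows[0]) if self.rows else 0)
        def data(self, i, j):
            return self.rows[i][j]

    def start():
        return json.dumps(0.0).encode('utf-8')

    def reduce(block, buf):            # name mirrors the UDF entry point
        total = json.loads(buf)
        rows, cols = block.shape()
        for i in range(rows):
            for j in range(cols):
                cell = block.data(i, j)
                if cell is not None:
                    total += cell * cell
        return json.dumps(total).encode('utf-8')

    def finish(buf):
        return math.sqrt(json.loads(buf))

    buf = start()
    buf = reduce(FakeBlock([[1], [2]]), buf)   # two rows, one column
    print(finish(buf))                         # ~2.236, matching the l2norm value checked in the sim test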
|
|
@ -0,0 +1,67 @@
|
|||
@echo off
|
||||
SETLOCAL EnableDelayedExpansion
|
||||
for /F "tokens=1,2 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do ( set "DEL=%%a")
|
||||
set /a a=0
|
||||
echo Windows Taosd Full Test
|
||||
set /a exitNum=0
|
||||
rm -rf failed.txt
|
||||
set caseFile="win-test-file"
|
||||
if not "%2" == "" (
|
||||
set caseFile="%2"
|
||||
)
|
||||
for /F "usebackq tokens=*" %%i in (!caseFile!) do (
|
||||
set line=%%i
|
||||
call :CheckSkipCase %%i
|
||||
if !skipCase! == false (
|
||||
if "!line:~,9!" == "./test.sh" (
|
||||
set /a a+=1
|
||||
echo !a! Processing %%i
|
||||
call :GetTimeSeconds !time!
|
||||
set time1=!_timeTemp!
|
||||
echo Start at !time!
|
||||
call !line:./test.sh=wtest.bat! > result_!a!.txt 2>error_!a!.txt || set /a errorlevel=8
|
||||
if errorlevel 1 ( call :colorEcho 0c "failed" &echo. && set /a exitNum=8 && echo %%i >>failed.txt ) else ( call :colorEcho 0a "Success" &echo. )
|
||||
)
|
||||
)
|
||||
)
|
||||
exit /b !exitNum!
|
||||
|
||||
:colorEcho
|
||||
set timeNow=%time%
|
||||
call :GetTimeSeconds %timeNow%
|
||||
set time2=%_timeTemp%
|
||||
set /a interTime=%time2% - %time1%
|
||||
echo End at %timeNow% , cast %interTime%s
|
||||
echo off
|
||||
<nul set /p ".=%DEL%" > "%~2"
|
||||
findstr /v /a:%1 /R "^$" "%~2" nul
|
||||
del "%~2" > nul 2>&1i
|
||||
goto :eof
|
||||
|
||||
:GetTimeSeconds
|
||||
set tt=%1
|
||||
set tt=%tt:.= %
|
||||
set tt=%tt::= %
|
||||
set tt=%tt: 0= %
|
||||
set /a index=1
|
||||
for %%a in (%tt%) do (
|
||||
if !index! EQU 1 (
|
||||
set /a hh=%%a
|
||||
)^
|
||||
else if !index! EQU 2 (
|
||||
set /a mm=%%a
|
||||
|
||||
)^
|
||||
else if !index! EQU 3 (
|
||||
set /a ss=%%a
|
||||
)
|
||||
set /a index=index+1
|
||||
)
|
||||
set /a _timeTemp=(%hh%*60+%mm%)*60+%ss%
|
||||
goto :eof
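
The GetTimeSeconds routine above converts a %time%-style value such as 14:03:27.55 into whole seconds so the runner can report how long each case took. The same conversion in Python, for reference:

    def hms_to_seconds(t):
        """Convert a time string like '14:03:27.55' to whole seconds,
        mirroring the batch GetTimeSeconds routine above."""
        h, m, s = t.split(".")[0].split(":")
        return (int(h) * 60 + int(m)) * 60 + int(s)

    print(hms_to_seconds("14:03:27.55"))  # 50607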
|
||||
|
||||
:CheckSkipCase
|
||||
set skipCase=false
|
||||
@REM if "%*" == "./test.sh -f tsim/query/scalarFunction.sim" ( set skipCase=true )
|
||||
echo %* | grep valgrind && set skipCase=true
|
||||
goto :eof
|
|
@ -50,6 +50,21 @@ endi
|
|||
print =============== step2 create database
|
||||
sql create database d1 vgroups 1 replica 3
|
||||
sql use d1
|
||||
|
||||
$wt = 0
|
||||
stepwt1:
|
||||
$wt = $wt + 1
|
||||
sleep 1000
|
||||
if $wt == 200 then
|
||||
print ====> dnode not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show transactions
|
||||
if $rows != 0 then
|
||||
print wait 1 seconds to alter
|
||||
goto stepwt1
|
||||
endi
|
||||
|
||||
sql create table d1.st0 (ts timestamp, i int) tags (j int)
|
||||
sql create table d1.c0 using st0 tags(0)
|
||||
sql create table d1.c1 using st0 tags(1)
|
||||
|
|
|
@ -170,6 +170,20 @@ if $leaderExist != 1 then
|
|||
goto step35
|
||||
endi
|
||||
|
||||
$wt = 0
|
||||
stepwt1:
|
||||
$wt = $wt + 1
|
||||
sleep 1000
|
||||
if $wt == 200 then
|
||||
print ====> dnode not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show transactions
|
||||
if $rows != 0 then
|
||||
print wait 1 seconds to alter
|
||||
goto stepwt1
|
||||
endi
|
||||
|
||||
print =============== step36: create table
|
||||
sql use d1
|
||||
sql create table d1.st (ts timestamp, i int) tags (j int)
|
||||
|
|
|
@ -0,0 +1,279 @@
|
|||
system_content printf %OS%
|
||||
if $system_content == Windows_NT then
|
||||
return 0;
|
||||
endi
|
||||
|
||||
system sh/stop_dnodes.sh
|
||||
system sh/deploy.sh -n dnode1 -i 1
|
||||
system sh/cfg.sh -n dnode1 -c udf -v 1
|
||||
system sh/exec.sh -n dnode1 -s start
|
||||
sql connect
|
||||
|
||||
print ======== step1 udf
|
||||
system sh/compile_udf.sh
|
||||
system sh/prepare_pyudf.sh
|
||||
system mkdir -p /tmp/pyudf
|
||||
system cp sh/pybitand.py /tmp/pyudf/
|
||||
system cp sh/pyl2norm.py /tmp/pyudf/
|
||||
system ls /tmp/pyudf
|
||||
|
||||
sql create database udf vgroups 3;
|
||||
sql use udf;
|
||||
sql select * from information_schema.ins_databases;
|
||||
|
||||
sql create table t (ts timestamp, f int);
|
||||
sql insert into t values(now, 1)(now+1s, 2);
|
||||
|
||||
system_content printf %OS%
|
||||
if $system_content == Windows_NT then
|
||||
return 0;
|
||||
endi
|
||||
if $system_content == Windows_NT then
|
||||
sql create function bit_and as 'C:\\Windows\\Temp\\bitand.dll' outputtype int bufSize 8;
|
||||
sql create aggregate function l2norm as 'C:\\Windows\\Temp\\l2norm.dll' outputtype double bufSize 8;
|
||||
else
|
||||
sql create function bit_and as '/tmp/udf/libbitand.so' outputtype int bufSize 8;
|
||||
sql create aggregate function l2norm as '/tmp/udf/libl2norm.so' outputtype double bufSize 8;
|
||||
endi
|
||||
sql create function pybitand as '/tmp/pyudf/pybitand.py' outputtype int language 'python';
|
||||
sql create aggregate function pyl2norm as '/tmp/pyudf/pyl2norm.py' outputtype double bufSize 128 language 'python';
|
||||
|
||||
sql show functions;
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
sql select bit_and(f, f) from t;
|
||||
if $rows != 2 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data10 != 2 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pybitand(f, f) from t;
|
||||
if $rows != 2 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data10 != 2 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select l2norm(f) from t;
|
||||
if $rows != 1 then
|
||||
print expect 1, actual $rows
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 2.236067977 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pyl2norm(f) from t;
|
||||
if $rows != 1 then
|
||||
print expect 1, actual $rows
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 2.236067977 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql create table t2 (ts timestamp, f1 int, f2 int);
|
||||
sql insert into t2 values(now, 0, 0)(now+1s, 1, 1);
|
||||
sql select bit_and(f1, f2) from t2;
|
||||
if $rows != 2 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 0 then
|
||||
return -1
|
||||
endi
|
||||
if $data10 != 1 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select l2norm(f1, f2) from t2;
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 1.414213562 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pybitand(f1, f2) from t2;
|
||||
if $rows != 2 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 0 then
|
||||
return -1
|
||||
endi
|
||||
if $data10 != 1 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pyl2norm(f1, f2) from t2;
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 1.414213562 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql insert into t2 values(now+2s, 1, null)(now+3s, null, 2);
|
||||
sql select bit_and(f1, f2) from t2;
|
||||
print $rows , $data00 , $data10 , $data20 , $data30
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 0 then
|
||||
return -1
|
||||
endi
|
||||
if $data10 != 1 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
if $data20 != NULL then
|
||||
return -1
|
||||
endi
|
||||
|
||||
if $data30 != NULL then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select l2norm(f1, f2) from t2;
|
||||
print $rows, $data00
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 2.645751311 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pybitand(f1, f2) from t2;
|
||||
print $rows , $data00 , $data10 , $data20 , $data30
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 0 then
|
||||
return -1
|
||||
endi
|
||||
if $data10 != 1 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
if $data20 != NULL then
|
||||
return -1
|
||||
endi
|
||||
|
||||
if $data30 != NULL then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pyl2norm(f1, f2) from t2;
|
||||
print $rows, $data00
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 2.645751311 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
|
||||
sql insert into t2 values(now+4s, 4, 8)(now+5s, 5, 9);
|
||||
sql select l2norm(f1-f2), l2norm(f1+f2) from t2;
|
||||
print $rows , $data00 , $data01
|
||||
if $rows != 1 then
|
||||
return -1;
|
||||
endi
|
||||
if $data00 != 5.656854249 then
|
||||
return -1
|
||||
endi
|
||||
if $data01 != 18.547236991 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select l2norm(bit_and(f2, f1)), l2norm(bit_and(f1, f2)) from t2;
|
||||
print $rows , $data00 , $data01
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 1.414213562 then
|
||||
return -1
|
||||
endi
|
||||
if $data01 != 1.414213562 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select l2norm(f2) from udf.t2 group by 1-bit_and(f1, f2) order by 1-bit_and(f1,f2);
|
||||
print $rows , $data00 , $data10 , $data20
|
||||
if $rows != 3 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 2.000000000 then
|
||||
return -1
|
||||
endi
|
||||
if $data10 != 9.055385138 then
|
||||
return -1
|
||||
endi
|
||||
if $data20 != 8.000000000 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pyl2norm(f1-f2), pyl2norm(f1+f2) from t2;
|
||||
print $rows , $data00 , $data01
|
||||
if $rows != 1 then
|
||||
return -1;
|
||||
endi
|
||||
if $data00 != 5.656854249 then
|
||||
return -1
|
||||
endi
|
||||
if $data01 != 18.547236991 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pyl2norm(pybitand(f2, f1)), pyl2norm(pybitand(f1, f2)) from t2;
|
||||
print $rows , $data00 , $data01
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 1.414213562 then
|
||||
return -1
|
||||
endi
|
||||
if $data01 != 1.414213562 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql select pyl2norm(f2) from udf.t2 group by 1-pybitand(f1, f2) order by 1-pybitand(f1,f2);
|
||||
print $rows , $data00 , $data10 , $data20
|
||||
if $rows != 3 then
|
||||
return -1
|
||||
endi
|
||||
if $data00 != 2.000000000 then
|
||||
return -1
|
||||
endi
|
||||
if $data10 != 9.055385138 then
|
||||
return -1
|
||||
endi
|
||||
if $data20 != 8.000000000 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
|
||||
#sql drop function bit_and;
|
||||
#sql show functions;
|
||||
#if $rows != 1 then
|
||||
# return -1
|
||||
#endi
|
||||
#if $data00 != @l2norm@ then
|
||||
# return -1
|
||||
# endi
|
||||
#sql drop function l2norm;
|
||||
#sql show functions;
|
||||
#if $rows != 0 then
|
||||
# return -1
|
||||
#endi
|
||||
|
||||
system sh/exec.sh -n dnode1 -s stop -x SIGINT
|
|
@ -55,7 +55,7 @@ $loop_cnt = 0
|
|||
check_db_ready:
|
||||
$loop_cnt = $loop_cnt + 1
|
||||
sleep 200
|
||||
if $loop_cnt == 100 then
|
||||
if $loop_cnt == 500 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
|
|
|
@ -0,0 +1,402 @@
|
|||
./test.sh -f tsim/user/basic.sim
|
||||
./test.sh -f tsim/user/password.sim
|
||||
./test.sh -f tsim/user/privilege_db.sim
|
||||
./test.sh -f tsim/user/privilege_sysinfo.sim
|
||||
./test.sh -f tsim/user/privilege_topic.sim
|
||||
./test.sh -f tsim/db/alter_option.sim
|
||||
rem ./test.sh -f tsim/db/alter_replica_13.sim
|
||||
./test.sh -f tsim/db/alter_replica_31.sim
|
||||
./test.sh -f tsim/db/basic1.sim
|
||||
./test.sh -f tsim/db/basic2.sim
|
||||
./test.sh -f tsim/db/basic3.sim
|
||||
./test.sh -f tsim/db/basic4.sim
|
||||
./test.sh -f tsim/db/basic5.sim
|
||||
./test.sh -f tsim/db/basic6.sim
|
||||
./test.sh -f tsim/db/commit.sim
|
||||
./test.sh -f tsim/db/create_all_options.sim
|
||||
./test.sh -f tsim/db/delete_reuse1.sim
|
||||
./test.sh -f tsim/db/delete_reuse2.sim
|
||||
./test.sh -f tsim/db/delete_reusevnode.sim
|
||||
./test.sh -f tsim/db/delete_reusevnode2.sim
|
||||
./test.sh -f tsim/db/delete_writing1.sim
|
||||
./test.sh -f tsim/db/delete_writing2.sim
|
||||
./test.sh -f tsim/db/error1.sim
|
||||
./test.sh -f tsim/db/keep.sim
|
||||
./test.sh -f tsim/db/len.sim
|
||||
./test.sh -f tsim/db/repeat.sim
|
||||
./test.sh -f tsim/db/show_create_db.sim
|
||||
./test.sh -f tsim/db/show_create_table.sim
|
||||
./test.sh -f tsim/db/tables.sim
|
||||
./test.sh -f tsim/db/taosdlog.sim
|
||||
./test.sh -f tsim/dnode/balance_replica1.sim
|
||||
./test.sh -f tsim/dnode/balance_replica3.sim
|
||||
./test.sh -f tsim/dnode/balance1.sim
|
||||
./test.sh -f tsim/dnode/balance2.sim
|
||||
./test.sh -f tsim/dnode/balance3.sim
|
||||
./test.sh -f tsim/dnode/balancex.sim
|
||||
./test.sh -f tsim/dnode/create_dnode.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_mnode.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_qnode_snode.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_vnode_replica1.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_vnode_replica3.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_multi_vnode_replica1.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_multi_vnode_replica3.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_force.sim
|
||||
./test.sh -f tsim/dnode/offline_reason.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica1.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v1_leader.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v1_follower.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v2.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v3.sim
|
||||
./test.sh -f tsim/dnode/vnode_clean.sim
|
||||
./test.sh -f tsim/dnode/use_dropped_dnode.sim
|
||||
./test.sh -f tsim/dnode/split_vgroup_replica1.sim
|
||||
./test.sh -f tsim/dnode/split_vgroup_replica3.sim
|
||||
./test.sh -f tsim/import/basic.sim
|
||||
./test.sh -f tsim/import/commit.sim
|
||||
./test.sh -f tsim/import/large.sim
|
||||
./test.sh -f tsim/import/replica1.sim
|
||||
./test.sh -f tsim/insert/backquote.sim
|
||||
./test.sh -f tsim/insert/basic.sim
|
||||
./test.sh -f tsim/insert/basic0.sim
|
||||
./test.sh -f tsim/insert/basic1.sim
|
||||
./test.sh -f tsim/insert/basic2.sim
|
||||
./test.sh -f tsim/insert/commit-merge0.sim
|
||||
./test.sh -f tsim/insert/insert_drop.sim
|
||||
./test.sh -f tsim/insert/insert_select.sim
|
||||
./test.sh -f tsim/insert/null.sim
|
||||
./test.sh -f tsim/insert/query_block1_file.sim
|
||||
./test.sh -f tsim/insert/query_block1_memory.sim
|
||||
./test.sh -f tsim/insert/query_block2_file.sim
|
||||
./test.sh -f tsim/insert/query_block2_memory.sim
|
||||
./test.sh -f tsim/insert/query_file_memory.sim
|
||||
./test.sh -f tsim/insert/query_multi_file.sim
|
||||
./test.sh -f tsim/insert/tcp.sim
|
||||
./test.sh -f tsim/insert/update0.sim
|
||||
./test.sh -f tsim/insert/update1_sort_merge.sim
|
||||
./test.sh -f tsim/insert/update2.sim
|
||||
./test.sh -f tsim/parser/alter__for_community_version.sim
|
||||
./test.sh -f tsim/parser/alter_column.sim
|
||||
./test.sh -f tsim/parser/alter_stable.sim
|
||||
./test.sh -f tsim/parser/alter.sim
|
||||
./test.sh -f tsim/parser/alter1.sim
|
||||
./test.sh -f tsim/parser/auto_create_tb_drop_tb.sim
|
||||
./test.sh -f tsim/parser/auto_create_tb.sim
|
||||
./test.sh -f tsim/parser/between_and.sim
|
||||
./test.sh -f tsim/parser/binary_escapeCharacter.sim
|
||||
./test.sh -f tsim/parser/col_arithmetic_operation.sim
|
||||
./test.sh -f tsim/parser/columnValue_bigint.sim
|
||||
./test.sh -f tsim/parser/columnValue_bool.sim
|
||||
./test.sh -f tsim/parser/columnValue_double.sim
|
||||
./test.sh -f tsim/parser/columnValue_float.sim
|
||||
./test.sh -f tsim/parser/columnValue_int.sim
|
||||
./test.sh -f tsim/parser/columnValue_smallint.sim
|
||||
./test.sh -f tsim/parser/columnValue_tinyint.sim
|
||||
./test.sh -f tsim/parser/columnValue_unsign.sim
|
||||
./test.sh -f tsim/parser/commit.sim
|
||||
./test.sh -f tsim/parser/condition.sim
|
||||
./test.sh -f tsim/parser/constCol.sim
|
||||
./test.sh -f tsim/parser/create_db.sim
|
||||
./test.sh -f tsim/parser/create_mt.sim
|
||||
./test.sh -f tsim/parser/create_tb_with_tag_name.sim
|
||||
./test.sh -f tsim/parser/create_tb.sim
|
||||
./test.sh -f tsim/parser/dbtbnameValidate.sim
|
||||
./test.sh -f tsim/parser/distinct.sim
|
||||
./test.sh -f tsim/parser/fill_us.sim
|
||||
./test.sh -f tsim/parser/fill.sim
|
||||
./test.sh -f tsim/parser/first_last.sim
|
||||
./test.sh -f tsim/parser/fill_stb.sim
|
||||
./test.sh -f tsim/parser/interp.sim
|
||||
./test.sh -f tsim/parser/fourArithmetic-basic.sim
|
||||
./test.sh -f tsim/parser/function.sim
|
||||
./test.sh -f tsim/parser/groupby-basic.sim
|
||||
./test.sh -f tsim/parser/groupby.sim
|
||||
./test.sh -f tsim/parser/having_child.sim
|
||||
./test.sh -f tsim/parser/having.sim
|
||||
./test.sh -f tsim/parser/import_commit1.sim
|
||||
./test.sh -f tsim/parser/import_commit2.sim
|
||||
./test.sh -f tsim/parser/import_commit3.sim
|
||||
./test.sh -f tsim/parser/import_file.sim
|
||||
./test.sh -f tsim/parser/import.sim
|
||||
./test.sh -f tsim/parser/insert_multiTbl.sim
|
||||
./test.sh -f tsim/parser/insert_tb.sim
|
||||
./test.sh -f tsim/parser/join_manyblocks.sim
|
||||
./test.sh -f tsim/parser/join_multitables.sim
|
||||
./test.sh -f tsim/parser/join_multivnode.sim
|
||||
./test.sh -f tsim/parser/join.sim
|
||||
./test.sh -f tsim/parser/last_cache.sim
|
||||
./test.sh -f tsim/parser/last_groupby.sim
|
||||
./test.sh -f tsim/parser/lastrow.sim
|
||||
./test.sh -f tsim/parser/lastrow2.sim
|
||||
./test.sh -f tsim/parser/like.sim
|
||||
./test.sh -f tsim/parser/limit.sim
|
||||
./test.sh -f tsim/parser/limit1.sim
|
||||
./test.sh -f tsim/parser/mixed_blocks.sim
|
||||
./test.sh -f tsim/parser/nchar.sim
|
||||
./test.sh -f tsim/parser/nestquery.sim
|
||||
./test.sh -f tsim/parser/null_char.sim
|
||||
./test.sh -f tsim/parser/precision_ns.sim
|
||||
./test.sh -f tsim/parser/projection_limit_offset.sim
|
||||
./test.sh -f tsim/parser/regex.sim
|
||||
./test.sh -f tsim/parser/regressiontest.sim
|
||||
./test.sh -f tsim/parser/select_across_vnodes.sim
|
||||
./test.sh -f tsim/parser/select_distinct_tag.sim
|
||||
./test.sh -f tsim/parser/select_from_cache_disk.sim
|
||||
./test.sh -f tsim/parser/select_with_tags.sim
|
||||
./test.sh -f tsim/parser/selectResNum.sim
|
||||
./test.sh -f tsim/parser/set_tag_vals.sim
|
||||
./test.sh -f tsim/parser/single_row_in_tb.sim
|
||||
./test.sh -f tsim/parser/sliding.sim
|
||||
./test.sh -f tsim/parser/slimit_alter_tags.sim
|
||||
./test.sh -f tsim/parser/slimit.sim
|
||||
./test.sh -f tsim/parser/slimit1.sim
|
||||
./test.sh -f tsim/parser/stableOp.sim
|
||||
./test.sh -f tsim/parser/tags_dynamically_specifiy.sim
|
||||
./test.sh -f tsim/parser/tags_filter.sim
|
||||
./test.sh -f tsim/parser/tbnameIn.sim
|
||||
./test.sh -f tsim/parser/timestamp.sim
|
||||
./test.sh -f tsim/parser/top_groupby.sim
|
||||
./test.sh -f tsim/parser/topbot.sim
|
||||
./test.sh -f tsim/parser/union.sim
|
||||
./test.sh -f tsim/parser/union_sysinfo.sim
|
||||
./test.sh -f tsim/parser/where.sim
|
||||
./test.sh -f tsim/query/tagLikeFilter.sim
|
||||
./test.sh -f tsim/query/charScalarFunction.sim
|
||||
./test.sh -f tsim/query/explain.sim
|
||||
./test.sh -f tsim/query/interval-offset.sim
|
||||
./test.sh -f tsim/query/interval.sim
|
||||
./test.sh -f tsim/query/scalarFunction.sim
|
||||
./test.sh -f tsim/query/scalarNull.sim
|
||||
./test.sh -f tsim/query/session.sim
|
||||
./test.sh -f tsim/query/sys_tbname.sim
|
||||
./test.sh -f tsim/query/groupby.sim
|
||||
./test.sh -f tsim/query/event.sim
|
||||
./test.sh -f tsim/query/forceFill.sim
|
||||
./test.sh -f tsim/query/emptyTsRange.sim
|
||||
./test.sh -f tsim/query/partitionby.sim
|
||||
./test.sh -f tsim/qnode/basic1.sim
|
||||
./test.sh -f tsim/snode/basic1.sim
|
||||
./test.sh -f tsim/mnode/basic1.sim
|
||||
./test.sh -f tsim/mnode/basic2.sim
|
||||
./test.sh -f tsim/mnode/basic3.sim
|
||||
./test.sh -f tsim/mnode/basic4.sim
|
||||
./test.sh -f tsim/mnode/basic5.sim
|
||||
./test.sh -f tsim/show/basic.sim
|
||||
./test.sh -f tsim/table/autocreate.sim
|
||||
./test.sh -f tsim/table/basic1.sim
|
||||
./test.sh -f tsim/table/basic2.sim
|
||||
./test.sh -f tsim/table/basic3.sim
|
||||
./test.sh -f tsim/table/bigint.sim
|
||||
./test.sh -f tsim/table/binary.sim
|
||||
./test.sh -f tsim/table/bool.sim
|
||||
./test.sh -f tsim/table/column_name.sim
|
||||
./test.sh -f tsim/table/column_num.sim
|
||||
./test.sh -f tsim/table/column_value.sim
|
||||
./test.sh -f tsim/table/column2.sim
|
||||
./test.sh -f tsim/table/createmulti.sim
|
||||
./test.sh -f tsim/table/date.sim
|
||||
./test.sh -f tsim/table/db.table.sim
|
||||
./test.sh -f tsim/table/delete_reuse1.sim
|
||||
./test.sh -f tsim/table/delete_reuse2.sim
|
||||
./test.sh -f tsim/table/delete_writing.sim
|
||||
./test.sh -f tsim/table/describe.sim
|
||||
./test.sh -f tsim/table/double.sim
|
||||
./test.sh -f tsim/table/float.sim
|
||||
./test.sh -f tsim/table/hash.sim
|
||||
./test.sh -f tsim/table/int.sim
|
||||
./test.sh -f tsim/table/limit.sim
|
||||
./test.sh -f tsim/table/smallint.sim
|
||||
./test.sh -f tsim/table/table_len.sim
|
||||
./test.sh -f tsim/table/table.sim
|
||||
./test.sh -f tsim/table/tinyint.sim
|
||||
./test.sh -f tsim/table/vgroup.sim
|
||||
./test.sh -f tsim/stream/basic0.sim -g
|
||||
./test.sh -f tsim/stream/basic1.sim
|
||||
./test.sh -f tsim/stream/basic2.sim
|
||||
./test.sh -f tsim/stream/drop_stream.sim
|
||||
./test.sh -f tsim/stream/fillHistoryBasic1.sim
|
||||
./test.sh -f tsim/stream/fillHistoryBasic2.sim
|
||||
./test.sh -f tsim/stream/fillHistoryBasic3.sim
|
||||
./test.sh -f tsim/stream/distributeInterval0.sim
|
||||
./test.sh -f tsim/stream/distributeIntervalRetrive0.sim
|
||||
./test.sh -f tsim/stream/distributeSession0.sim
|
||||
./test.sh -f tsim/stream/session0.sim
|
||||
./test.sh -f tsim/stream/session1.sim
|
||||
./test.sh -f tsim/stream/state0.sim
|
||||
./test.sh -f tsim/stream/triggerInterval0.sim
|
||||
./test.sh -f tsim/stream/triggerSession0.sim
|
||||
./test.sh -f tsim/stream/partitionby.sim
|
||||
./test.sh -f tsim/stream/partitionby1.sim
|
||||
./test.sh -f tsim/stream/schedSnode.sim
|
||||
./test.sh -f tsim/stream/windowClose.sim
|
||||
./test.sh -f tsim/stream/ignoreExpiredData.sim
|
||||
./test.sh -f tsim/stream/sliding.sim
|
||||
./test.sh -f tsim/stream/partitionbyColumnInterval.sim
|
||||
./test.sh -f tsim/stream/partitionbyColumnSession.sim
|
||||
./test.sh -f tsim/stream/partitionbyColumnState.sim
|
||||
./test.sh -f tsim/stream/deleteInterval.sim
|
||||
./test.sh -f tsim/stream/deleteSession.sim
|
||||
./test.sh -f tsim/stream/deleteState.sim
|
||||
./test.sh -f tsim/stream/fillIntervalDelete0.sim
|
||||
./test.sh -f tsim/stream/fillIntervalDelete1.sim
|
||||
./test.sh -f tsim/stream/fillIntervalLinear.sim
|
||||
./test.sh -f tsim/stream/fillIntervalPartitionBy.sim
|
||||
./test.sh -f tsim/stream/fillIntervalPrevNext.sim
|
||||
./test.sh -f tsim/stream/fillIntervalValue.sim
|
||||
./test.sh -f tsim/stream/udTableAndTag0.sim
|
||||
./test.sh -f tsim/stream/udTableAndTag1.sim
|
||||
./test.sh -f tsim/trans/lossdata1.sim
|
||||
./test.sh -f tsim/trans/create_db.sim
|
||||
./test.sh -f tsim/tmq/basic1.sim
|
||||
./test.sh -f tsim/tmq/basic2.sim
|
||||
./test.sh -f tsim/tmq/basic3.sim
|
||||
./test.sh -f tsim/tmq/basic4.sim
|
||||
./test.sh -f tsim/tmq/basic1Of2Cons.sim
|
||||
./test.sh -f tsim/tmq/basic2Of2Cons.sim
|
||||
./test.sh -f tsim/tmq/basic3Of2Cons.sim
|
||||
./test.sh -f tsim/tmq/basic4Of2Cons.sim
|
||||
./test.sh -f tsim/tmq/basic2Of2ConsOverlap.sim
|
||||
./test.sh -f tsim/tmq/topic.sim
|
||||
./test.sh -f tsim/tmq/snapshot.sim
|
||||
./test.sh -f tsim/tmq/snapshot1.sim
|
||||
./test.sh -f tsim/stable/alter_comment.sim
|
||||
./test.sh -f tsim/stable/alter_count.sim
|
||||
./test.sh -f tsim/stable/alter_import.sim
|
||||
./test.sh -f tsim/stable/alter_insert1.sim
|
||||
./test.sh -f tsim/stable/alter_insert2.sim
|
||||
./test.sh -f tsim/stable/alter_metrics.sim
|
||||
./test.sh -f tsim/stable/column_add.sim
|
||||
./test.sh -f tsim/stable/column_drop.sim
|
||||
./test.sh -f tsim/stable/column_modify.sim
|
||||
./test.sh -f tsim/stable/disk.sim
|
||||
./test.sh -f tsim/stable/dnode3.sim
|
||||
./test.sh -f tsim/stable/metrics.sim
|
||||
./test.sh -f tsim/stable/refcount.sim
|
||||
./test.sh -f tsim/stable/tag_add.sim
|
||||
./test.sh -f tsim/stable/tag_drop.sim
|
||||
./test.sh -f tsim/stable/tag_filter.sim
|
||||
./test.sh -f tsim/stable/tag_modify.sim
|
||||
./test.sh -f tsim/stable/tag_rename.sim
|
||||
./test.sh -f tsim/stable/values.sim
|
||||
./test.sh -f tsim/stable/vnode3.sim
|
||||
./test.sh -f tsim/stable/metrics_idx.sim
|
||||
./test.sh -f tsim/sma/drop_sma.sim
|
||||
./test.sh -f tsim/sma/sma_leak.sim
|
||||
./test.sh -f tsim/sma/tsmaCreateInsertQuery.sim
|
||||
./test.sh -f tsim/sma/rsmaCreateInsertQuery.sim
|
||||
./test.sh -f tsim/sma/rsmaPersistenceRecovery.sim
|
||||
./test.sh -f tsim/valgrind/checkError1.sim
|
||||
./test.sh -f tsim/valgrind/checkError2.sim
|
||||
./test.sh -f tsim/valgrind/checkError3.sim
|
||||
./test.sh -f tsim/valgrind/checkError4.sim
|
||||
./test.sh -f tsim/valgrind/checkError5.sim
|
||||
./test.sh -f tsim/valgrind/checkError6.sim
|
||||
./test.sh -f tsim/valgrind/checkError7.sim
|
||||
./test.sh -f tsim/valgrind/checkError8.sim
|
||||
./test.sh -f tsim/vnode/replica3_basic.sim
|
||||
./test.sh -f tsim/vnode/replica3_repeat.sim
|
||||
./test.sh -f tsim/vnode/replica3_vgroup.sim
|
||||
./test.sh -f tsim/vnode/replica3_many.sim
|
||||
./test.sh -f tsim/vnode/replica3_import.sim
|
||||
./test.sh -f tsim/vnode/stable_balance_replica1.sim
|
||||
./test.sh -f tsim/vnode/stable_dnode2_stop.sim
|
||||
./test.sh -f tsim/vnode/stable_dnode2.sim
|
||||
./test.sh -f tsim/vnode/stable_dnode3.sim
|
||||
./test.sh -f tsim/vnode/stable_replica3_dnode6.sim
|
||||
./test.sh -f tsim/vnode/stable_replica3_vnode3.sim
|
||||
./test.sh -f tsim/sync/3Replica1VgElect.sim
|
||||
./test.sh -f tsim/sync/3Replica5VgElect.sim
|
||||
./test.sh -f tsim/sync/oneReplica1VgElect.sim
|
||||
./test.sh -f tsim/sync/oneReplica5VgElect.sim
|
||||
./test.sh -f tsim/catalog/alterInCurrent.sim
|
||||
./test.sh -f tsim/scalar/in.sim
|
||||
./test.sh -f tsim/scalar/scalar.sim
|
||||
./test.sh -f tsim/scalar/filter.sim
|
||||
./test.sh -f tsim/scalar/caseWhen.sim
|
||||
./test.sh -f tsim/scalar/tsConvert.sim
|
||||
./test.sh -f tsim/alter/cached_schema_after_alter.sim
|
||||
./test.sh -f tsim/alter/dnode.sim
|
||||
./test.sh -f tsim/alter/table.sim
|
||||
./test.sh -f tsim/cache/new_metrics.sim
|
||||
./test.sh -f tsim/cache/restart_table.sim
|
||||
./test.sh -f tsim/cache/restart_metrics.sim
|
||||
./test.sh -f tsim/column/commit.sim
|
||||
./test.sh -f tsim/column/metrics.sim
|
||||
./test.sh -f tsim/column/table.sim
|
||||
./test.sh -f tsim/compress/commitlog.sim
|
||||
./test.sh -f tsim/compress/compress2.sim
|
||||
./test.sh -f tsim/compress/compress.sim
|
||||
./test.sh -f tsim/compress/uncompress.sim
|
||||
./test.sh -f tsim/compute/avg.sim
|
||||
./test.sh -f tsim/compute/block_dist.sim
|
||||
./test.sh -f tsim/compute/bottom.sim
|
||||
./test.sh -f tsim/compute/count.sim
|
||||
./test.sh -f tsim/compute/diff.sim
|
||||
./test.sh -f tsim/compute/diff2.sim
|
||||
./test.sh -f tsim/compute/first.sim
|
||||
./test.sh -f tsim/compute/interval.sim
|
||||
./test.sh -f tsim/compute/last_row.sim
|
||||
./test.sh -f tsim/compute/last.sim
|
||||
./test.sh -f tsim/compute/leastsquare.sim
|
||||
./test.sh -f tsim/compute/max.sim
|
||||
./test.sh -f tsim/compute/min.sim
|
||||
./test.sh -f tsim/compute/null.sim
|
||||
./test.sh -f tsim/compute/percentile.sim
|
||||
./test.sh -f tsim/compute/stddev.sim
|
||||
./test.sh -f tsim/compute/sum.sim
|
||||
./test.sh -f tsim/compute/top.sim
|
||||
./test.sh -f tsim/field/2.sim
|
||||
./test.sh -f tsim/field/3.sim
|
||||
./test.sh -f tsim/field/4.sim
|
||||
./test.sh -f tsim/field/5.sim
|
||||
./test.sh -f tsim/field/6.sim
|
||||
./test.sh -f tsim/field/binary.sim
|
||||
./test.sh -f tsim/field/bigint.sim
|
||||
./test.sh -f tsim/field/bool.sim
|
||||
./test.sh -f tsim/field/double.sim
|
||||
./test.sh -f tsim/field/float.sim
|
||||
./test.sh -f tsim/field/int.sim
|
||||
./test.sh -f tsim/field/single.sim
|
||||
./test.sh -f tsim/field/smallint.sim
|
||||
./test.sh -f tsim/field/tinyint.sim
|
||||
./test.sh -f tsim/field/unsigined_bigint.sim
|
||||
./test.sh -f tsim/vector/metrics_field.sim
|
||||
./test.sh -f tsim/vector/metrics_mix.sim
|
||||
./test.sh -f tsim/vector/metrics_query.sim
|
||||
./test.sh -f tsim/vector/metrics_tag.sim
|
||||
./test.sh -f tsim/vector/metrics_time.sim
|
||||
./test.sh -f tsim/vector/multi.sim
|
||||
./test.sh -f tsim/vector/single.sim
|
||||
./test.sh -f tsim/vector/table_field.sim
|
||||
./test.sh -f tsim/vector/table_mix.sim
|
||||
./test.sh -f tsim/vector/table_query.sim
|
||||
./test.sh -f tsim/vector/table_time.sim
|
||||
./test.sh -f tsim/wal/kill.sim
|
||||
./test.sh -f tsim/tag/3.sim
|
||||
./test.sh -f tsim/tag/4.sim
|
||||
./test.sh -f tsim/tag/5.sim
|
||||
./test.sh -f tsim/tag/6.sim
|
||||
./test.sh -f tsim/tag/add.sim
|
||||
./test.sh -f tsim/tag/bigint.sim
|
||||
./test.sh -f tsim/tag/binary_binary.sim
|
||||
./test.sh -f tsim/tag/binary.sim
|
||||
./test.sh -f tsim/tag/bool_binary.sim
|
||||
./test.sh -f tsim/tag/bool_int.sim
|
||||
./test.sh -f tsim/tag/bool.sim
|
||||
./test.sh -f tsim/tag/change.sim
|
||||
./test.sh -f tsim/tag/column.sim
|
||||
./test.sh -f tsim/tag/commit.sim
|
||||
./test.sh -f tsim/tag/create.sim
|
||||
./test.sh -f tsim/tag/delete.sim
|
||||
./test.sh -f tsim/tag/double.sim
|
||||
./test.sh -f tsim/tag/filter.sim
|
||||
./test.sh -f tsim/tag/float.sim
|
||||
./test.sh -f tsim/tag/int_binary.sim
|
||||
./test.sh -f tsim/tag/int_float.sim
|
||||
./test.sh -f tsim/tag/int.sim
|
||||
./test.sh -f tsim/tag/set.sim
|
||||
./test.sh -f tsim/tag/smallint.sim
|
||||
./test.sh -f tsim/tag/tinyint.sim
|
||||
./test.sh -f tsim/tag/drop_tag.sim
|
||||
./test.sh -f tsim/tag/tbNameIn.sim
|
||||
./test.sh -f tmp/monitor.sim
|
|
@ -33,7 +33,7 @@ if exist %LOG_DIR% rmdir /s/q %LOG_DIR%
|
|||
if not exist %CFG_DIR% mkdir %CFG_DIR%
|
||||
if not exist %LOG_DIR% mkdir %LOG_DIR%
|
||||
|
||||
rem set "fqdn="
|
||||
set "fqdn=localhost"
|
||||
for /f "skip=1" %%A in (
|
||||
'wmic computersystem get caption'
|
||||
) do if not defined fqdn set "fqdn=%%A"
|
||||
|
|
|
@ -3,6 +3,7 @@ import taos
|
|||
import sys
|
||||
import os
|
||||
import time
|
||||
import platform
|
||||
import inspect
|
||||
from taos.tmq import Consumer
|
||||
|
||||
|
@ -106,6 +107,9 @@ class TDTestCase:
|
|||
if distro_id == "alpine":
|
||||
tdLog.info(f"alpine skip compatibility test")
|
||||
return True
|
||||
if platform.system().lower() == 'windows':
|
||||
tdLog.info(f"Windows skip compatibility test")
|
||||
return True
|
||||
bPath = self.getBuildPath()
|
||||
cPath = self.getCfgPath()
|
||||
dbname = "test"
|
||||
|
@ -163,7 +167,7 @@ class TDTestCase:
|
|||
|
||||
tdLog.printNoPrefix(f"==========step3:prepare and check data in new version-{nowServerVersion}")
|
||||
tdsql.query(f"select count(*) from {stb}")
|
||||
tdsql.checkData(0,0,tableNumbers*recordNumbers1)
|
||||
tdsql.checkData(0,0,tableNumbers*recordNumbers1)
|
||||
# tdsql.query("show streams;")
|
||||
# os.system(f"taosBenchmark -t {tableNumbers} -n {recordNumbers2} -y ")
|
||||
# tdsql.query("show streams;")
|
||||
|
@ -192,15 +196,15 @@ class TDTestCase:
|
|||
tdsql.query("describe information_schema.ins_databases;")
|
||||
qRows=tdsql.queryRows
|
||||
comFlag=True
|
||||
j=0
|
||||
while comFlag:
|
||||
j=0
|
||||
while comFlag:
|
||||
for i in range(qRows) :
|
||||
if tdsql.queryResult[i][0] == "retentions" :
|
||||
print("parameters include retentions")
|
||||
comFlag=False
|
||||
break
|
||||
else :
|
||||
comFlag=True
|
||||
comFlag=True
|
||||
j=j+1
|
||||
if j == qRows:
|
||||
print("parameters don't include retentions")
|
||||
|
|
|
@ -49,8 +49,6 @@ class TDTestCase:
|
|||
#!for bug
|
||||
tdDnodes.stoptaosd(1)
|
||||
sleep(self.delaytime * 5)
|
||||
if platform.system().lower() == 'windows':
|
||||
sleep(10)
|
||||
tdSql.error('select server_status()')
|
||||
|
||||
def run(self):
|
||||
|
|
|
@@ -4,6 +4,7 @@ import sys
import time
from datetime import datetime
import socket
import psutil
import os
import platform
if platform.system().lower() == 'windows':

@@ -67,19 +68,25 @@ class TDTestCase:
return buildPath

def get_process_pid(self,processname):
#origin artical link:https://blog.csdn.net/weixin_45623536/article/details/122099062
process_info_list = []
process = os.popen('ps -A | grep %s'% processname)
process_info = process.read()
for i in process_info.split(' '):
if i != "":
process_info_list.append(i)
print(process_info_list)
if len(process_info_list) != 0 :
pid = int(process_info_list[0])
else :
pid = 0
return pid
if platform.system().lower() == 'windows':
pids = psutil.process_iter()
for pid in pids:
if(pid.name() == processname):
return pid.pid
return 0
else:
process_info_list = []
process = os.popen('ps -A | grep %s'% processname)
process_info = process.read()
for i in process_info.split(' '):
if i != "":
process_info_list.append(i)
print(process_info_list)
if len(process_info_list) != 0 :
pid = int(process_info_list[0])
else :
pid = 0
return pid

def checkAndstopPro(self,processName,startAction):
i = 1

@@ -88,23 +95,29 @@ class TDTestCase:
taosdPid=self.get_process_pid(processName)
if taosdPid != 0 and taosdPid != "" :
tdLog.info("stop taosd %s ,kill pid :%s "%(startAction,taosdPid))
os.system("kill -9 %d"%taosdPid)
os.system("kill -9 %d"%taosdPid)
break
else:
tdLog.info( "wait start taosd ,times: %d "%i)
time.sleep(1)
i+= 1
else :
tdLog.exit("taosd %s is not running "%startAction)
tdLog.exit("taosd %s is not running "%startAction)

def taosdCommandStop(self,startAction,taosdCmdRun):
processName="taosd"
if platform.system().lower() == 'windows':
processName="taosd.exe"
taosdCmd = taosdCmdRun + startAction
tdLog.printNoPrefix("%s"%taosdCmd)
logTime=datetime.now().strftime('%Y%m%d_%H%M%S_%f')
os.system(f"nohup {taosdCmd} > {logTime}.log 2>&1 & ")
self.checkAndstopPro(processName,startAction)
os.system(f"rm -rf {logTime}.log")
if platform.system().lower() == 'windows':
cmd = f"mintty -h never {taosdCmd}"
os.system(cmd)
else:
logTime=datetime.now().strftime('%Y%m%d_%H%M%S_%f')
os.system(f"nohup {taosdCmd} > {logTime}.log 2>&1 & ")
self.checkAndstopPro(processName,startAction)
os.system(f"rm -rf {logTime}.log")


def taosdCommandExe(self,startAction,taosdCmdRun):

@@ -139,7 +152,7 @@ class TDTestCase:
tdSql.query("create stream s1 into source_db.output_stb as select _wstart AS startts, min(k), max(k), sum(k) from source_db.stb interval(10m);")


#TD-19944 -Q=3
#TD-19944 -Q=3
tdsqlN=tdCom.newTdSql()

tdsqlN.query("select * from source_db.stb")

@@ -186,7 +199,11 @@ class TDTestCase:

startAction=" -a jsonFile:./taosdCaseTmp.json"
tdLog.printNoPrefix("================================ parameter: %s"%startAction)
os.system("echo \'{\"queryPolicy\":\"3\"}\' > taosdCaseTmp.json")

if platform.system().lower() == 'windows':
os.system("echo {\"queryPolicy\":\"3\"} > taosdCaseTmp.json")
else:
os.system("echo \'{\"queryPolicy\":\"3\"}\' > taosdCaseTmp.json")
self.taosdCommandStop(startAction,taosdCmdRun)

startAction = " -a jsonFile:./taosdCaseTmp.json -C "

@@ -206,12 +223,12 @@ class TDTestCase:
self.taosdCommandStop(startAction,taosdCmdRun)


startAction=" -E taosdCaseTmp/.env"
startAction=f" -E taosdCaseTmp{os.sep}.env"
tdLog.printNoPrefix("================================ parameter: %s"%startAction)
os.system(" mkdir -p taosdCaseTmp ")
os.system("echo \'TAOS_QUERY_POLICY=3\' > taosdCaseTmp/.env ")
os.system(" mkdir -p taosdCaseTmp ")
os.system("echo TAOS_QUERY_POLICY=3 > taosdCaseTmp/.env ")
self.taosdCommandStop(startAction,taosdCmdRun)
os.system(" rm -rf taosdCaseTmp ")
os.system(" rm -rf taosdCaseTmp ")

startAction = " -V"
tdLog.printNoPrefix("================================ parameter: %s"%startAction)
@@ -75,7 +75,7 @@ class TDTestCase:
tdLog.exit(cmd)

def cfg_str(self, filename, update_str):
cmd = f'echo "{update_str}" >> {filename}'
cmd = f'echo {update_str} >> {filename}'
if os.system(cmd) != 0:
tdLog.exit(cmd)

@@ -94,41 +94,41 @@ class TDTestCase:
def __err_cfg(self):
cfg_list = []
err_case1 = [
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE1}1 {L1} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}"
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE1}1 {L1} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}"
]
err_case2 = [
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {PRIMARY_DIR}"
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE2}2 {L2} {PRIMARY_DIR}"
]
err_case3 = [
f"dataDir {self.taos_data_dir}/data33 3 {NON_PRIMARY_DIR}"
f"dataDir {self.taos_data_dir}{os.sep}data33 3 {NON_PRIMARY_DIR}"
]
err_case4 = [
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L1} {NON_PRIMARY_DIR}"
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE2}2 {L1} {NON_PRIMARY_DIR}"
]
err_case5 = [f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}"]
err_case5 = [f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 {L0} {PRIMARY_DIR}"]
for i in range(16):
err_case5.append(f"dataDir {self.taos_data_dir}/{DATA_PRE0}{i+1} {L0} {NON_PRIMARY_DIR}")
err_case5.append(f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}{i+1} {L0} {NON_PRIMARY_DIR}")

err_case6 = [
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE0}1 {L0} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}1 {L0} {PRIMARY_DIR}",
]
err_case7 = [
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE2}2 {L2} {PRIMARY_DIR}",
]
err_case8 = [
f"dataDir {self.taos_data_dir}/data33 3 {PRIMARY_DIR}"
f"dataDir {self.taos_data_dir}{os.sep}data33 3 {PRIMARY_DIR}"
]
err_case9 = [
f"dataDir {self.taos_data_dir}/data33 -1 {NON_PRIMARY_DIR}"
f"dataDir {self.taos_data_dir}{os.sep}data33 -1 {NON_PRIMARY_DIR}"
]

cfg_list.append(err_case1)

@@ -147,23 +147,23 @@ class TDTestCase:
def __current_cfg(self):
cfg_list = []
current_case1 = [
#f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE0}1 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}"
#f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}1 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}"
]

#current_case2 = [f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}"]
#current_case2 = [f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 {L0} {PRIMARY_DIR}"]
current_case2 = []
for i in range(9):
current_case2.append(f"dataDir {self.taos_data_dir}/{DATA_PRE0}{i+1} {L0} {NON_PRIMARY_DIR}")
current_case2.append(f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}{i+1} {L0} {NON_PRIMARY_DIR}")

# TD-17773bug
current_case3 = [
#f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 ",
f"dataDir {self.taos_data_dir}/{DATA_PRE0}1 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE1}0 {L1} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}/{DATA_PRE2}0 {L2} {NON_PRIMARY_DIR}",
#f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}0 ",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE0}1 {L0} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE1}0 {L1} {NON_PRIMARY_DIR}",
f"dataDir {self.taos_data_dir}{os.sep}{DATA_PRE2}0 {L2} {NON_PRIMARY_DIR}",
]
cfg_list.append(current_case1)
cfg_list.append(current_case3)
@@ -567,7 +567,7 @@ class TDTestCase:
if data_ct4_c10[i] is None:
tdSql.checkData( i, 0, None )
else:
time2str = str(int((data_ct4_c10[i]-datetime.datetime.fromtimestamp(0)).total_seconds()*1000))
time2str = str(int((data_ct4_c10[i]-datetime.datetime.fromtimestamp(0,data_ct4_c10[i].tzinfo)).total_seconds())*1000+int(data_ct4_c10[i].microsecond / 1000))
tdSql.checkData( i, 0, time2str )
tdSql.query(f"select cast(c10 as nchar(32)) as b from {self.dbname}.t1")
for i in range(len(data_t1_c10)):

@@ -576,7 +576,7 @@ class TDTestCase:
elif i == 10:
continue
else:
time2str = str(int((data_t1_c10[i]-datetime.datetime.fromtimestamp(0)).total_seconds()*1000))
time2str = str(int((data_t1_c10[i]-datetime.datetime.fromtimestamp(0,data_t1_c10[i].tzinfo)).total_seconds())*1000+int(data_t1_c10[i].microsecond / 1000))
tdSql.checkData( i, 0, time2str )

tdLog.printNoPrefix("==========step38: cast timestamp to binary, expect no changes ")

@@ -585,7 +585,7 @@ class TDTestCase:
if data_ct4_c10[i] is None:
tdSql.checkData( i, 0, None )
else:
time2str = str(int((data_ct4_c10[i]-datetime.datetime.fromtimestamp(0)).total_seconds()*1000))
time2str = str(int((data_ct4_c10[i]-datetime.datetime.fromtimestamp(0,data_ct4_c10[i].tzinfo)).total_seconds())*1000+int(data_ct4_c10[i].microsecond / 1000))
tdSql.checkData( i, 0, time2str )
tdSql.query(f"select cast(c10 as binary(32)) as b from {self.dbname}.t1")
for i in range(len(data_t1_c10)):

@@ -594,7 +594,7 @@ class TDTestCase:
elif i == 10:
continue
else:
time2str = str(int((data_t1_c10[i]-datetime.datetime.fromtimestamp(0)).total_seconds()*1000))
time2str = str(int((data_t1_c10[i]-datetime.datetime.fromtimestamp(0,data_t1_c10[i].tzinfo)).total_seconds())*1000+int(data_t1_c10[i].microsecond / 1000))
tdSql.checkData( i, 0, time2str )

tdLog.printNoPrefix("==========step39: cast constant operation to bigint, expect change to int ")
@@ -64,61 +64,61 @@ class TDTestCase:
tdSql.query(f"select ts,mode(c1) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, 9)

tdSql.query(f"select ts,mode(c2) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -99999999999999999)

tdSql.query(f"select ts,mode(c3) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -999)

tdSql.query(f"select ts,mode(c4) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -99)

tdSql.query(f"select ts,mode(c5) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -9.99)

tdSql.query(f"select ts,mode(c6) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -1e+21)

tdSql.query(f"select ts,mode(c7) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, True)

tdSql.query(f"select ts,mode(c8) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, 'binary9')

tdSql.query(f"select ts,mode(c9) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, 'nchar9')

tdSql.query(f"select ts,c3,c5,c8,mode(c1) from {dbname}.tb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2022, 12, 31, 1, 1, 36))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2022, 12, 31, 1, 1, 36).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -999)
tdSql.checkData(0, 2, -9.99)
tdSql.checkData(0, 3, 'binary9')

@@ -128,61 +128,61 @@ class TDTestCase:
tdSql.query(f"select ts,mode(c1) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, 9)

tdSql.query(f"select ts,mode(c2) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -99999)

tdSql.query(f"select ts,mode(c3) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -999)

tdSql.query(f"select ts,mode(c4) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 2))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 2).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -99)

tdSql.query(f"select ts,mode(c5) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -9.99)

tdSql.query(f"select ts,mode(c6) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, -99.99)

tdSql.query(f"select ts,mode(c7) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, True)

tdSql.query(f"select ts,mode(c8) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, 'binary9')

tdSql.query(f"select ts,mode(c9) from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, 'nchar9')

tdSql.query(f"select ts,mode(c1),c3,c5,c8 from {dbname}.stb")
tdSql.checkRows(1)
ts = tdSql.getData(0, 0)
tdSql.checkEqual(ts, datetime.datetime(2020, 12, 11, 0, 0, 3))
tdSql.checkEqual(ts.astimezone(datetime.timezone.utc), datetime.datetime(2020, 12, 11, 0, 0, 3).astimezone(datetime.timezone.utc))
tdSql.checkData(0, 1, 9)
tdSql.checkData(0, 2, -999)
tdSql.checkData(0, 3, -9.99)
@@ -35,7 +35,10 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]

for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
taosdFileName = "taosd"
if platform.system().lower() == 'windows':
taosdFileName = "taosd.exe"
if (taosdFileName in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]

@@ -52,7 +55,10 @@ class TDTestCase:
tdLog.info("taosd found in %s" % buildPath)
binPath = buildPath+ "/build/bin/"

os.system("%staosBenchmark -d %s -t %d -n %d -O %d -a %d -b float,double,nchar\(200\),binary\(50\) -T 50 -y " % (binPath,dbname,tables,per_table_num,order,replica))
cmd = "%staosBenchmark -d %s -t %d -n %d -O %d -a %d -b float,double,nchar\(200\),binary\(50\) -T 50 -y " % (binPath,dbname,tables,per_table_num,order,replica)
if platform.system().lower() == 'windows':
cmd = "%staosBenchmark -d %s -t %d -n %d -O %d -a %d -b float,double,nchar(200),binary(50) -T 50 -y " % (binPath,dbname,tables,per_table_num,order,replica)
os.system(cmd)

def sql_base(self,dbname):
self.check_sub(dbname)
@@ -292,9 +292,9 @@ class TDTestCase:
maxQnode=tdSql.getData(0,0)
tdSql.query("select min(c1) from stb11;")
minQnode=tdSql.getData(0,0)
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
unionQnode=tdSql.queryResult
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
unionallQnode=tdSql.queryResult

# tdSql.query("select * from information_schema.ins_qnodes;")

@@ -306,10 +306,10 @@ class TDTestCase:
tdSql.checkData(0, 0, "%s"%maxQnode)
tdSql.query("select min(c1) from stb11;")
tdSql.checkData(0, 0, "%s"%minQnode)
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
unionVnode=tdSql.queryResult
assert unionQnode == unionVnode
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
unionallVnode=tdSql.queryResult
assert unionallQnode == unionallVnode

@@ -340,9 +340,9 @@ class TDTestCase:
assert maxQnode==tdSql.getData(0,0)
tdSql.query("select min(c1) from stb11;")
assert minQnode==tdSql.getData(0,0)
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
assert unionQnode==tdSql.queryResult
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
assert unionallQnode==tdSql.queryResult

# tdSql.query("select * from information_schema.ins_qnodes;")

@@ -354,8 +354,8 @@ class TDTestCase:
assert maxQnode==tdSql.getData(0,0)
tdSql.query("select min(c1) from stb11;")
assert minQnode==tdSql.getData(0,0)
tdSql.error("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.error("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.error("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
tdSql.error("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")

# tdSql.execute("create qnode on dnode %s"%dnodeId)

@@ -387,9 +387,9 @@ class TDTestCase:
assert maxQnode==tdSql.getData(0,0)
tdSql.query("select min(c1) from stb11;")
assert minQnode==tdSql.getData(0,0)
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
assert unionQnode==tdSql.queryResult
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
assert unionallQnode==tdSql.queryResult

queryPolicy=1

@@ -412,9 +412,9 @@ class TDTestCase:
assert maxQnode==tdSql.getData(0,0)
tdSql.query("select min(c1) from stb11;")
assert minQnode==tdSql.getData(0,0)
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
assert unionQnode==tdSql.queryResult
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
assert unionallQnode==tdSql.queryResult

# test case : queryPolicy = 2

@@ -443,8 +443,8 @@ class TDTestCase:
tdSql.execute("use db1;")
tdSql.error("select max(c1) from stb10;")
tdSql.error("select min(c1) from stb11;")
tdSql.error("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.error("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.error("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
tdSql.error("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")

tdSql.query("select max(c1) from stb10_0;")
tdSql.query("select min(c1) from stb11_0;")

@@ -464,9 +464,9 @@ class TDTestCase:
maxQnode=tdSql.getData(0,0)
tdSql.query("select min(c1) from stb11;")
minQnode=tdSql.getData(0,0)
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
unionQnode=tdSql.queryResult
tdSql.query("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.query("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
unionallQnode=tdSql.queryResult

# tdSql.query("select * from information_schema.ins_qnodes;")

@@ -478,8 +478,8 @@ class TDTestCase:

tdSql.error("select max(c1) from stb10;")
tdSql.error("select min(c1) from stb11;")
tdSql.error("select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000;")
tdSql.error("select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000;")
tdSql.error("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")
tdSql.error("select c0,c1 from(select c0,c1 from stb11_1 where (c0>1000) union all select c0,c1 from stb11_1 where c0>2000) order by c0,c1;")

# run case
def run(self):
@@ -18,7 +18,10 @@ class MyDnodes(TDDnodes):
def __init__(self ,dnodes_lists):
super(MyDnodes,self).__init__()
self.dnodes = dnodes_lists # dnode must be TDDnode instance
self.simDeployed = False
if platform.system().lower() == 'windows':
self.simDeployed = True
else:
self.simDeployed = False

class TDTestCase:
noConn = True
Some files were not shown because too many files have changed in this diff.