diff --git a/docs/en/13-operation/07-import.md b/docs/en/13-operation/07-import.md
index e95824e927..be0b988fc0 100644
--- a/docs/en/13-operation/07-import.md
+++ b/docs/en/13-operation/07-import.md
@@ -59,4 +59,4 @@ Query OK, 9 row(s) affected (0.004763s)
 
 ## Import using taosdump
 
-A convenient tool for importing and exporting data is provided by TDengine, `taosdump`, which can be used to export data from one TDengine cluster and import into another one. For the details of using `taosdump` please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
+TDengine provides `taosdump`, a convenient tool for importing and exporting data. It can export data from one TDengine cluster and import it into another. For details on using `taosdump`, please refer to the taosdump documentation.
diff --git a/docs/en/13-operation/08-export.md b/docs/en/13-operation/08-export.md
index bffda36e23..580844cf08 100644
--- a/docs/en/13-operation/08-export.md
+++ b/docs/en/13-operation/08-export.md
@@ -19,4 +19,4 @@ The data of table or STable specified by `tb_name` will be exported into a file
 
 ## Export Using taosdump
 
-With `taosdump`, you can choose to export the data of all databases, a database, a table or a STable, you can also choose to export the data within a time range, or even only export the schema definition of a table. For the details of using `taosdump` please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
+With `taosdump`, you can export the data of all databases, a single database, a table, or a STable; you can also export only the data within a time range, or even just the schema definition of a table. For details on using `taosdump`, please refer to the taosdump documentation.
diff --git a/docs/en/13-operation/10-monitor.md b/docs/en/13-operation/10-monitor.md
index b08216a9c4..f1be4c5fd3 100644
--- a/docs/en/13-operation/10-monitor.md
+++ b/docs/en/13-operation/10-monitor.md
@@ -11,8 +11,6 @@ The collection of the monitoring information is enabled by default, but can be d
 
 TDinsight is a complete solution which uses the monitoring database `log` mentioned previously, and Grafana, to monitor a TDengine cluster.
 
-Please refer to [TDinsight Grafana Dashboard](../../reference/tdinsight) to learn more details about using TDinsight to monitor TDengine.
-
 A script `TDinsight.sh` is provided to deploy TDinsight automatically.
 
 Download `TDinsight.sh` with the below command:
diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx
index fd43dd67fa..f770ce0d5d 100644
--- a/docs/en/14-reference/03-connector/04-java.mdx
+++ b/docs/en/14-reference/03-connector/04-java.mdx
@@ -36,6 +36,7 @@ REST connection supports all platforms that can run Java.
 
 | taos-jdbcdriver version | major changes | TDengine version |
 | :---------------------: | :----------------------------------------------------------------------------------------: | :--------------: |
+| 3.2.7 | Support VARBINARY and GEOMETRY types, add time zone support for native connections, and support WebSocket auto reconnection | 3.2.0.0 or later |
 | 3.2.5 | Subscription add committed() and assignment() method | 3.1.0.3 or later |
 | 3.2.4 | Subscription add the enable.auto.commit parameter and the unsubscribe() method in the WebSocket connection | - |
 | 3.2.3 | Fixed resultSet data parsing failure in some cases | - |
@@ -178,7 +179,7 @@ Add following dependency in the `pom.xml` file of your Maven project:
 <dependency>
   <groupId>com.taosdata.jdbc</groupId>
   <artifactId>taos-jdbcdriver</artifactId>
-  <version>3.2.2</version>
+  <version>3.2.7</version>
 </dependency>
 ```
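As a quick sanity check for the dependency bump above, the sketch below opens a WebSocket/REST connection with the updated driver and runs a trivial query. It is a minimal illustration, not part of the patch: it assumes a local TDengine instance with the default `root`/`taosdata` credentials on port 6041; the auto-reconnect behavior added in 3.2.7 happens inside the driver, so no extra application code is needed.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WsConnectCheck {
    public static void main(String[] args) throws Exception {
        // REST/WebSocket connection string; batchfetch=true selects the WebSocket data path.
        String url = "jdbc:TAOS-RS://localhost:6041/?user=root&password=taosdata&batchfetch=true";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT server_version()")) {
            while (rs.next()) {
                System.out.println("server version: " + rs.getString(1));
            }
        }
    }
}
```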
diff --git a/docs/en/14-reference/05-taosbenchmark.md b/docs/en/14-reference/05-taosbenchmark.md
index 8e5ee178a4..4744e143fc 100644
--- a/docs/en/14-reference/05-taosbenchmark.md
+++ b/docs/en/14-reference/05-taosbenchmark.md
@@ -13,7 +13,7 @@ taosBenchmark (formerly taosdemo) is a tool for testing the performance of TDen
 
 There are two ways to install taosBenchmark:
 
-- Installing the official TDengine installer will automatically install taosBenchmark. 
+- Installing the official TDengine installer will automatically install taosBenchmark.
 - Compile taos-tools separately and install them. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
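Once taosBenchmark is installed by either route, a one-line smoke test can confirm it works. The flags below (-d database, -t number of child tables, -n rows per table, -y skip the confirmation prompt) reflect recent taos-tools releases and are an assumption on my part; verify them against `taosBenchmark --help` for your version.

```shell
# Write 10,000 rows into each of 100 child tables of the auto-created
# super table in database `test`, without prompting for confirmation.
taosBenchmark -d test -t 100 -n 10000 -y
```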
diff --git a/docs/en/20-third-party/01-grafana.mdx b/docs/en/20-third-party/01-grafana.mdx
index 8475888df5..f7d1a2db7e 100644
--- a/docs/en/20-third-party/01-grafana.mdx
+++ b/docs/en/20-third-party/01-grafana.mdx
@@ -218,7 +218,7 @@ The example to query the average system memory usage for the specified interval
 
 ### Importing the Dashboard
 
-You can install TDinsight dashboard in data source configuration page (like `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for TDengine cluster. Ensure that you use TDinsight for 3.x. Please note TDinsight for 3.x needs to configure and run taoskeeper correctly. Check the [TDinsight User Manual](/reference/tdinsight/) for the details.
+You can install the TDinsight dashboard from the data source configuration page (for example, `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for a TDengine cluster. Make sure you use TDinsight for 3.x, which requires taoskeeper to be configured and running correctly.
 
 ![TDengine Database Grafana plugin import dashboard](./import_dashboard.webp)
diff --git a/docs/en/20-third-party/11-kafka.md b/docs/en/20-third-party/11-kafka.md
index 64c0f0bd48..b865c00bc3 100644
--- a/docs/en/20-third-party/11-kafka.md
+++ b/docs/en/20-third-party/11-kafka.md
@@ -21,7 +21,7 @@ TDengine Source Connector is used to read data from TDengine in real-time and se
 
 1. Linux operating system
 2. Java 8 and Maven installed
 3. Git/curl/vi is installed
-4. TDengine is installed and started.
+4. TDengine is installed and started.
 
 ## Install Kafka
diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx
index 237e3ef8f9..e2a0a85b9a 100644
--- a/docs/zh/08-connector/14-java.mdx
+++ b/docs/zh/08-connector/14-java.mdx
@@ -36,6 +36,7 @@ REST connection supports all platforms that can run Java.
 
 | taos-jdbcdriver version | major changes | TDengine version |
 | :------------------: | :----------------------------------------------------------------------------------------: | :----------------: |
+| 3.2.7 | Support VARBINARY and GEOMETRY types, add time zone support for native connections, and support WebSocket auto reconnection | 3.2.0.0 and later |
 | 3.2.5 | Subscription adds the committed() and assignment() methods | 3.1.0.3 and later |
 | 3.2.4 | Subscription adds the enable.auto.commit parameter and the unsubscribe() method for WebSocket connections | - |
 | 3.2.3 | Fixed ResultSet data parsing failures in some cases | - |
@@ -177,7 +178,7 @@ In a Maven project, add the following dependency in pom.xml:
 <dependency>
   <groupId>com.taosdata.jdbc</groupId>
   <artifactId>taos-jdbcdriver</artifactId>
-  <version>3.2.2</version>
+  <version>3.2.7</version>
 </dependency>
 ```
@@ -1097,7 +1098,6 @@ TaosConsumer<ResultBean> consumer = new TaosConsumer<>(config);
 - httpPoolSize: the maximum number of parallel requests over a single connection. Only effective for WebSocket connections.
 
 For other parameters, please refer to the [Consumer parameter list](../../develop/tmq#创建-consumer-以及consumer-group). Note that starting from TDengine server 3.2.0.0, the default value of auto.offset.reset in message subscription has changed.
-
 #### Subscribe and consume data
 
 ```java
diff --git a/docs/zh/08-connector/05-schemaless-api.mdx b/docs/zh/08-connector/_05-schemaless.mdx
similarity index 100%
rename from docs/zh/08-connector/05-schemaless-api.mdx
rename to docs/zh/08-connector/_05-schemaless.mdx
diff --git a/docs/zh/10-deployment/03-k8s.md b/docs/zh/10-deployment/03-k8s.md
index 16e2be0dfd..31e909f02d 100644
--- a/docs/zh/10-deployment/03-k8s.md
+++ b/docs/zh/10-deployment/03-k8s.md
@@ -105,7 +105,7 @@ spec:
       # TZ for timezone settings, we recommend to always set it.
       - name: TZ
         value: "Asia/Shanghai"
-      # TAOS_ prefix will configured in taos.cfg, strip prefix and camelCase.
+      # Environment variables with the TAOS_ prefix are parsed and converted into the corresponding parameters in taos.cfg. For example, serverPort in taos.cfg is configured via TAOS_SERVER_PORT when deploying on Kubernetes.
      - name: TAOS_SERVER_PORT
        value: "6030"
      # Must set if you want a cluster.
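To make the TAOS_ prefix rule in the comment above concrete, here is an illustrative env fragment; it is not part of the patch, and `TAOS_MONITOR_FQDN` is only an assumed example of mapping a second taos.cfg key (`monitorFqdn`).

```yaml
env:
  # Timezone for the container
  - name: TZ
    value: "Asia/Shanghai"
  # Becomes `serverPort 6030` in taos.cfg
  - name: TAOS_SERVER_PORT
    value: "6030"
  # Becomes `monitorFqdn ...` in taos.cfg (illustrative key)
  - name: TAOS_MONITOR_FQDN
    value: "taosd-monitor.default.svc.cluster.local"
```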
diff --git a/docs/zh/12-taos-sql/02-database.md b/docs/zh/12-taos-sql/02-database.md
index e9ca5405f4..bd33281bc0 100644
--- a/docs/zh/12-taos-sql/02-database.md
+++ b/docs/zh/12-taos-sql/02-database.md
@@ -53,7 +53,7 @@ database_option: {
   - 1: one-stage compression.
   - 2: two-stage compression.
 - DURATION: the time span of the data stored in one data file. A unit suffix may be used, e.g. DURATION 100h or DURATION 10d; the supported units are m (minutes), h (hours), and d (days). Without a unit the default is days, so DURATION 50 means 50 days.
-- WAL_FSYNC_PERIOD: the fsync period when the WAL parameter is set to 2. The default is 3000 (milliseconds). The minimum is 0, meaning every write is flushed to disk immediately; the maximum is 180000, i.e. three minutes.
+- WAL_FSYNC_PERIOD: sets the fsync period when the WAL_LEVEL parameter is set to 2. The default is 3000 (milliseconds). The minimum is 0, meaning every write is flushed to disk immediately; the maximum is 180000, i.e. three minutes.
 - MAXROWS: the maximum number of records in a file block; the default is 4096.
 - MINROWS: the minimum number of records in a file block; the default is 100.
 - KEEP: the number of days data files are kept. The default is 3650, the range is [1, 365000], and the value must be at least 3 times the DURATION value. The database automatically deletes data kept longer than KEEP. KEEP also accepts a unit suffix, e.g. KEEP 100h or KEEP 10d, with units m (minutes), h (hours), and d (days); without a unit, as in KEEP 50, the default unit is days. The Enterprise Edition supports [multi-level storage](https://docs.taosdata.com/tdinternal/arch/#%E5%A4%9A%E7%BA%A7%E5%AD%98%E5%82%A8), so multiple keep values may be set (comma-separated, at most 3, satisfying keep 0 <= keep 1 <= keep 2, e.g. KEEP 100h,100d,3650d); the Community Edition does not support multi-level storage (even if multiple values are configured they do not take effect, and KEEP uses the largest value).
diff --git a/docs/zh/17-operation/04-import.md b/docs/zh/17-operation/04-import.md
index 17945be595..e2c35b36c6 100644
--- a/docs/zh/17-operation/04-import.md
+++ b/docs/zh/17-operation/04-import.md
@@ -59,4 +59,4 @@ Query OK, 9 row(s) affected (0.004763s)
 
 ## Importing with taosdump
 
-TDengine provides taosdump, a convenient data import/export tool. Data exported with taosdump from one system can be imported into another. For usage details, see: [TDengine data backup tool: taosdump](/reference/taosdump).
+TDengine provides taosdump, a convenient data import/export tool. Data exported with taosdump from one system can be imported into another. For usage details, please refer to the taosdump documentation.
diff --git a/docs/zh/17-operation/05-export.md b/docs/zh/17-operation/05-export.md
index 44247e28bd..3d4425a792 100644
--- a/docs/zh/17-operation/05-export.md
+++ b/docs/zh/17-operation/05-export.md
@@ -17,5 +17,4 @@ select * from <tb_name> >> data.csv;
 
 ## Exporting data with taosdump
 
-With taosdump, you can export all databases, a single database, or a single table as needed; all data or only a time range, or even just the table definition. For usage details, see:
-[TDengine data backup tool: taosdump](/reference/taosdump).
+With taosdump, you can export all databases, a single database, or a single table as needed; all data or only a time range, or even just the table definition. For usage details, please refer to the taosdump documentation.
\ No newline at end of file
diff --git a/docs/zh/20-third-party/01-grafana.mdx b/docs/zh/20-third-party/01-grafana.mdx
index 9a1223ab9c..8e17ce4768 100644
--- a/docs/zh/20-third-party/01-grafana.mdx
+++ b/docs/zh/20-third-party/01-grafana.mdx
@@ -218,7 +218,7 @@ docker run -d \
 
 ### Importing the Dashboard
 
-On the data source configuration page, you can import the TDinsight dashboard for this data source as a monitoring visualization tool for the TDengine cluster. If the TDengine server is version 3.0, select `TDinsight for 3.x` when importing. Note that TDinsight for 3.x requires taoskeeper to be configured and running; see the [TDinsight User Manual](/reference/tdinsight/) for instructions.
+On the data source configuration page, you can import the TDinsight dashboard for this data source as a monitoring visualization tool for the TDengine cluster. If the TDengine server is version 3.0, select `TDinsight for 3.x` when importing. Note that TDinsight for 3.x requires taoskeeper to be configured and running.
 
 ![TDengine Database Grafana plugin import dashboard](./import_dashboard.webp)
diff --git a/docs/zh/21-tdinternal/07-tsz.md b/docs/zh/21-tdinternal/07-tsz.md
new file mode 100644
index 0000000000..db1a340ab8
--- /dev/null
+++ b/docs/zh/21-tdinternal/07-tsz.md
@@ -0,0 +1,69 @@
---
title: TSZ Compression Algorithm
description: TDengine's algorithm for efficient compression of floating-point data
---

TSZ is an optional compression algorithm that TDengine provides for floating-point data types. It covers the full range from lossy to lossless compression. Compared with the default algorithm, TSZ achieves a higher compression ratio; even in lossless mode its compression ratio is roughly twice that of the default.

## Suitable scenarios

- TSZ compresses through data prediction, so it is better suited to data that changes in a regular pattern.
- TSZ compression takes longer, so it is a good choice when your servers have plenty of idle CPU but limited storage space.

## Usage

- TDengine 3.2.0.0 or later is required.
- Enable the option:
  add the following to the taos.cfg configuration to enable the TSZ compression algorithm; once enabled, it replaces the default algorithm.
  The line below applies the algorithm to both float and double columns; you can also configure only one of them.

```
 lossyColumns float|double
```

- The configuration takes effect only after the service is restarted.
- The following taosd log output indicates that the feature is active:

```
 02/22 10:49:27.607990 00002933 UTL  lossyColumns     float|double
```

## Configuration parameters

### fPrecision
Controls FLOAT precision:

| Attribute | Description |
| --------- | ----------- |
| Applies to | Server only |
| Meaning | Sets the compression precision for float values |
| Value range | 0.1 ~ 0.00000001 |
| Default | 0.00000001 |
| Note | The mantissa of values below this threshold is truncated |

### dPrecision
Controls DOUBLE precision:

| Attribute | Description |
| --------- | ----------- |
| Applies to | Server only |
| Meaning | Sets the compression precision for double values |
| Value range | 0.1 ~ 0.0000000000000001 |
| Default | 0.0000000000000001 |
| Note | The mantissa of values below this threshold is truncated |

### ifAdtFse
Selects the optional FSE algorithm inside TSZ; the default is HUFFMAN:

| Attribute | Description |
| --------- | ----------- |
| Applies to | Server only |
| Meaning | Replaces HUFFMAN with FSE. FSE compresses faster but decompresses slightly more slowly; choose it when compression speed matters most |
| Value range | 0: off, 1: on |
| Default | 0: off |

## Notes

- Data files written with TSZ enabled cannot be recognized if you roll back to a version earlier than 3.2.0.0.
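Putting the parameters above together, a taos.cfg fragment enabling TSZ might look like the following sketch; the values simply restate the defaults and options documented above.

```
# enable TSZ for both float and double columns (restart taosd afterwards)
lossyColumns  float|double
# float values below this threshold have their mantissa truncated
fPrecision    0.00000001
# double precision threshold
dPrecision    0.0000000000000001
# 1: use FSE instead of HUFFMAN for faster compression
ifAdtFse      1
```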
diff --git a/examples/JDBC/taosdemo/pom.xml b/examples/JDBC/taosdemo/pom.xml
index ff64d3e1df..031d83b084 100644
--- a/examples/JDBC/taosdemo/pom.xml
+++ b/examples/JDBC/taosdemo/pom.xml
@@ -67,7 +67,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.0.0</version>
+            <version>3.2.7</version>
         </dependency>
diff --git a/packaging/tools/install.sh b/packaging/tools/install.sh
index df6fbc6e76..02ebb182fa 100755
--- a/packaging/tools/install.sh
+++ b/packaging/tools/install.sh
@@ -1140,3 +1140,5 @@ elif [ "$verType" == "client" ]; then
 else
   echo "please input correct verType"
 fi
+
+
diff --git a/packaging/tools/install_client.sh b/packaging/tools/install_client.sh
index 5bb7a0cb38..6643363339 100755
--- a/packaging/tools/install_client.sh
+++ b/packaging/tools/install_client.sh
@@ -129,7 +129,7 @@ function install_bin() {
     if [ "$osType" != "Darwin" ]; then
       [ -x ${install_main_dir}/bin/${demoName2} ] && ${csudo}ln -s ${install_main_dir}/bin/${demoName2} ${bin_link_dir}/${demoName2} || :
       [ -x ${install_main_dir}/bin/${benchmarkName2} ] && ${csudo}ln -s ${install_main_dir}/bin/${benchmarkName2} ${bin_link_dir}/${benchmarkName2} || :
-      [ -x ${install_main_dir}/bin/${dumpName2} ] && ${csudo}ln -s ${install_main_dir}/bin/${dumpName2} ${bin_link_dir}/${dumpName2} || : 
+      [ -x ${install_main_dir}/bin/${dumpName2} ] && ${csudo}ln -s ${install_main_dir}/bin/${dumpName2} ${bin_link_dir}/${dumpName2} || :
     fi
     [ -x ${install_main_dir}/bin/remove_client.sh ] && ${csudo}ln -sf ${install_main_dir}/bin/remove_client.sh ${bin_link_dir}/${uninstallScript2} || :
   fi
diff --git a/packaging/tools/makepkg.sh b/packaging/tools/makepkg.sh
index 7684701ea4..4b0faaa958 100755
--- a/packaging/tools/makepkg.sh
+++ b/packaging/tools/makepkg.sh
@@ -281,6 +281,11 @@ fi
 chmod a+x ${install_dir}/install.sh
 
 if [[ $dbName == "taos" ]]; then
+  cp ${top_dir}/../enterprise/packaging/start-all.sh ${install_dir}
+  cp ${top_dir}/../enterprise/packaging/stop-all.sh ${install_dir}
+  cp ${top_dir}/../enterprise/packaging/README.md ${install_dir}
+  chmod a+x ${install_dir}/start-all.sh
+  chmod a+x ${install_dir}/stop-all.sh
   # Copy example code
   mkdir -p ${install_dir}/examples
   examples_dir="${top_dir}/examples"
@@ -360,12 +365,6 @@ if [ "$verMode" == "cluster" ]; then
     git clone --depth 1 https://github.com/taosdata/taos-connector-rust ${install_dir}/connector/rust
     rm -rf ${install_dir}/connector/rust/.git ||:
 
-    cp ${top_dir}/../enterprise/packaging/start-all.sh ${install_dir}
-    cp ${top_dir}/../enterprise/packaging/stop-all.sh ${install_dir}
-    cp ${top_dir}/../enterprise/packaging/README.md ${install_dir}
-    chmod a+x ${install_dir}/start-all.sh
-    chmod a+x ${install_dir}/stop-all.sh
-
     # copy taosx
     if [ -d ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ]; then
       cp -r ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ${install_dir}
@@ -443,4 +442,4 @@ if [ -n "${taostools_bin_files}" ] && [ "$verMode" != "cloud" ]; then
   fi
 fi
 
-cd ${curr_dir}
\ No newline at end of file
+cd ${curr_dir}
diff --git a/source/client/src/clientRawBlockWrite.c b/source/client/src/clientRawBlockWrite.c
index 03cd19fa40..1f9d3c6d8c 100644
--- a/source/client/src/clientRawBlockWrite.c
+++ b/source/client/src/clientRawBlockWrite.c
@@ -203,7 +203,7 @@ static char* processCreateStb(SMqMetaRsp* metaRsp) {
     goto _err;
   }
   string = buildCreateTableJson(&req.schemaRow, &req.schemaTag, req.name, req.suid, TSDB_SUPER_TABLE);
- _err:
+_err:
   uDebug("create stable return, sql json:%s", string);
   tDecoderClear(&coder);
   return string;
@@ -224,7 +224,7 @@ static char* processAlterStb(SMqMetaRsp* metaRsp) {
     goto _err;
   }
   string = buildAlterSTableJson(req.alterOriData, req.alterOriDataLen);
- _err:
+_err:
   uDebug("alter stable return, sql json:%s", string);
   tDecoderClear(&coder);
   return string;
@@ -375,7 +375,7 @@ static char* processCreateTable(SMqMetaRsp* metaRsp) {
     }
   }
 
- _exit:
+_exit:
   uDebug("create table return, sql json:%s", string);
   for (int32_t iReq = 0; iReq < req.nReqs; iReq++) {
     pCreateReq = req.pReqs + iReq;
@@ -416,7 +416,7 @@ static char* processAutoCreateTable(STaosxRsp* rsp) {
     }
   }
   string = buildCreateCTableJson(pCreateReq, rsp->createTableNum);
- _exit:
+_exit:
   uDebug("auto created table return, sql json:%s", string);
   for (int i = 0; i < rsp->createTableNum; i++) {
     tDecoderClear(&decoder[i]);
@@ -549,7 +549,7 @@ static char* processAlterTable(SMqMetaRsp* metaRsp) {
   }
 
   string = cJSON_PrintUnformatted(json);
- _exit:
+_exit:
   uDebug("alter table return, sql json:%s", string);
   cJSON_Delete(json);
   tDecoderClear(&decoder);
@@ -585,7 +585,7 @@ static char* processDropSTable(SMqMetaRsp* metaRsp) {
   cJSON_AddItemToObject(json, "tableName", tableName);
 
   string = cJSON_PrintUnformatted(json);
- _exit:
+_exit:
   uDebug("processDropSTable return, sql json:%s", string);
   cJSON_Delete(json);
   tDecoderClear(&decoder);
@@ -624,7 +624,7 @@ static char* processDeleteTable(SMqMetaRsp* metaRsp) {
   cJSON_AddItemToObject(json, "sql", sqlJson);
 
   string = cJSON_PrintUnformatted(json);
- _exit:
+_exit:
   uDebug("processDeleteTable return, sql json:%s", string);
   cJSON_Delete(json);
   tDecoderClear(&coder);
@@ -669,7 +669,7 @@ static char* processDropTable(SMqMetaRsp* metaRsp) {
   cJSON_AddItemToObject(json, "tableNameList", tableNameList);
 
   string = cJSON_PrintUnformatted(json);
- _exit:
+_exit:
   uDebug("processDropTable return, json sql:%s", string);
   cJSON_Delete(json);
   tDecoderClear(&decoder);
@@ -765,7 +765,7 @@ static int32_t taosCreateStb(TAOS* taos, void* meta, int32_t metaLen) {
   code = pRequest->code;
   taosMemoryFree(pCmdMsg.pMsg);
 
- end:
+end:
   uDebug(LOG_ID_TAG" create stable return, msg:%s", LOG_ID_VALUE, tstrerror(code));
   destroyRequest(pRequest);
   tFreeSMCreateStbReq(&pReq);
@@ -869,7 +869,7 @@ static int32_t taosDropStb(TAOS* taos, void* meta, int32_t metaLen) {
   code = pRequest->code;
   taosMemoryFree(pCmdMsg.pMsg);
 
- end:
+end:
   uDebug(LOG_ID_TAG" drop stable return, msg:%s", LOG_ID_VALUE, tstrerror(code));
   destroyRequest(pRequest);
   tDecoderClear(&coder);
@@ -1023,7 +1023,7 @@ static int32_t taosCreateTable(TAOS* taos, void* meta, int32_t metaLen) {
 
   code = pRequest->code;
 
- end:
+end:
   uDebug(LOG_ID_TAG" create table return, msg:%s", LOG_ID_VALUE, tstrerror(code));
   for (int32_t iReq = 0; iReq < req.nReqs; iReq++) {
     pCreateReq = req.pReqs + iReq;
@@ -1175,7 +1175,7 @@ static int32_t taosDropTable(TAOS* taos, void* meta, int32_t metaLen) {
   }
 
   code = pRequest->code;
- end:
+end:
   uDebug(LOG_ID_TAG" drop table return, msg:%s", LOG_ID_VALUE, tstrerror(code));
   taosHashCleanup(pVgroupHashmap);
   destroyRequest(pRequest);
@@ -1250,7 +1250,7 @@ static int32_t taosDeleteData(TAOS* taos, void* meta, int32_t metaLen) {
   }
   taos_free_result(res);
 
- end:
+end:
   uDebug("connId:0x%"PRIx64" delete data sql:%s, code:%s", *(int64_t*)taos, sql, tstrerror(code));
   tDecoderClear(&coder);
   terrno = code;
@@ -1368,7 +1368,7 @@ static int32_t taosAlterTable(TAOS* taos, void* meta, int32_t metaLen) {
       code = handleAlterTbExecRes(pRes->res, pCatalog);
     }
   }
- end:
+end:
   uDebug(LOG_ID_TAG " alter table return, meta:%p, len:%d, msg:%s", LOG_ID_VALUE, meta, metaLen, tstrerror(code));
   taosArrayDestroy(pArray);
   if (pVgData) taosMemoryFreeClear(pVgData->pData);
@@ -1459,7 +1459,7 @@ int taos_write_raw_block_with_fields_with_reqid(TAOS *taos, int rows, char *pDat
   launchQueryImpl(pRequest, pQuery, true, NULL);
   code = pRequest->code;
 
- end:
+end:
   uDebug(LOG_ID_TAG " write raw block with field return, msg:%s", LOG_ID_VALUE, tstrerror(code));
   taosMemoryFreeClear(pTableMeta);
   qDestroyQuery(pQuery);
@@ -1543,7 +1543,7 @@ int taos_write_raw_block_with_reqid(TAOS* taos, int rows, char* pData, const cha
   launchQueryImpl(pRequest, pQuery, true, NULL);
   code = pRequest->code;
 
- end:
+end:
   uDebug(LOG_ID_TAG " write raw block return, msg:%s", LOG_ID_VALUE, tstrerror(code));
   taosMemoryFreeClear(pTableMeta);
   qDestroyQuery(pQuery);
@@ -1669,7 +1669,7 @@ static int32_t tmqWriteRawDataImpl(TAOS* taos, void* data, int32_t dataLen) {
   launchQueryImpl(pRequest, pQuery, true, NULL);
   code = pRequest->code;
 
- end:
+end:
   uDebug(LOG_ID_TAG " write raw data return, msg:%s", LOG_ID_VALUE, tstrerror(code));
   tDeleteMqDataRsp(&rspObj.rsp);
   tDecoderClear(&decoder);
@@ -1841,7 +1841,7 @@ static int32_t tmqWriteRawMetaDataImpl(TAOS* taos, void* data, int32_t dataLen)
   launchQueryImpl(pRequest, pQuery, true, NULL);
   code = pRequest->code;
 
- end:
+end:
   uDebug(LOG_ID_TAG " write raw metadata return, msg:%s", LOG_ID_VALUE, tstrerror(code));
   tDeleteSTaosxRsp(&rspObj.rsp);
   tDecoderClear(&decoder);
@@ -1984,7 +1984,7 @@ int32_t tmq_write_raw(TAOS* taos, tmq_raw_data raw) {
     return tmqWriteRawMetaDataImpl(taos, raw.raw, raw.raw_len);
   }
 
- end:
+end:
   terrno = TSDB_CODE_INVALID_PARA;
   return terrno;
 }
diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c
index 7a2ae90cda..86db9180d2 100644
--- a/source/common/src/tglobal.c
+++ b/source/common/src/tglobal.c
@@ -95,8 +95,8 @@ int32_t tsMonitorMaxLogs = 100;
 bool    tsMonitorComp = false;
 
 // audit
-bool tsEnableAudit = true;
-bool tsEnableAuditCreateTable = true;
+bool tsEnableAudit = true;
+bool tsEnableAuditCreateTable = true;
 
 // telem
 #ifdef TD_ENTERPRISE
@@ -507,8 +507,8 @@ static int32_t taosAddClientCfg(SConfig *pCfg) {
   tsNumOfTaskQueueThreads = tsNumOfCores / 2;
   tsNumOfTaskQueueThreads = TMAX(tsNumOfTaskQueueThreads, 4);
-  if (tsNumOfTaskQueueThreads >= 10) {
-    tsNumOfTaskQueueThreads = 10;
+  if (tsNumOfTaskQueueThreads >= 50) {
+    tsNumOfTaskQueueThreads = 50;
   }
   if (cfgAddInt32(pCfg, "numOfTaskQueueThreads", tsNumOfTaskQueueThreads, 4, 1024, CFG_SCOPE_CLIENT, CFG_DYN_NONE) != 0)
     return -1;
diff --git a/source/dnode/mnode/impl/src/mndStb.c b/source/dnode/mnode/impl/src/mndStb.c
index 1b21d4a017..82d0074fbd 100644
--- a/source/dnode/mnode/impl/src/mndStb.c
+++ b/source/dnode/mnode/impl/src/mndStb.c
@@ -882,7 +882,6 @@ static int32_t mndCreateStb(SMnode *pMnode, SRpcMsg *pReq, SMCreateStbReq *pCrea
   if (mndAcquireGlobalIdx(pMnode, fullIdxName, SDB_IDX, &idx) == 0 && idx.pIdx != NULL) {
     terrno = TSDB_CODE_MND_TAG_INDEX_ALREADY_EXIST;
     mndReleaseIdx(pMnode, idx.pIdx);
-
     goto _OVER;
   }
diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h
index ec69ae5ca7..ca9d22a987 100644
--- a/source/dnode/vnode/src/inc/tsdb.h
+++ b/source/dnode/vnode/src/inc/tsdb.h
@@ -831,7 +831,7 @@ struct SDiskDataBuilder {
   SBlkInfo bi;
 };
 
-typedef struct SLDataIter {
+struct SLDataIter {
   SRBTreeNode            node;
   SSttBlk               *pSttBlk;
   int64_t                cid;  // for debug purpose
@@ -845,7 +845,7 @@ typedef struct SLDataIter {
   SSttBlockLoadInfo     *pBlockLoadInfo;
   bool                   ignoreEarlierTs;
   struct SSttFileReader *pReader;
-} SLDataIter;
+};
 
 #define tMergeTreeGetRow(_t) (&((_t)->pIter->rInfo.row))
diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c
index 2717f1b78c..b928751abb 100644
--- a/source/dnode/vnode/src/tq/tq.c
+++ b/source/dnode/vnode/src/tq/tq.c
@@ -1105,10 +1105,6 @@ static void doStartFillhistoryStep2(SStreamTask* pTask, SStreamTask* pStreamTask
   }
 }
 
-static void ddxx() {
-
-}
-
 // this function should be executed by only one thread, so we set a sentinel to protect this function
 int32_t tqProcessTaskScanHistory(STQ* pTq, SRpcMsg* pMsg) {
   SStreamScanHistoryReq* pReq = (SStreamScanHistoryReq*)pMsg->pCont;
@@ -1155,7 +1151,9 @@ int32_t tqProcessTaskScanHistory(STQ* pTq, SRpcMsg* pMsg) {
     tqDebug("s-task:%s continue exec scan-history(step1), original step1 startTs:%" PRId64 ", already elapsed:%.2fs", id,
             pTask->execInfo.step1Start, pTask->execInfo.step1El);
   } else {
-    tqDebug("s-task:%s already in step2, no need to scan-history data, step2 starTs:%"PRId64, id, pTask->execInfo.step2Start);
+    tqDebug("s-task:%s already in step2, no need to scan-history data, step2 startTs:%" PRId64, id,
+            pTask->execInfo.step2Start);
+    atomic_store_32(&pTask->status.inScanHistorySentinel, 0);
     streamMetaReleaseTask(pMeta, pTask);
 
     return 0;
diff --git a/source/dnode/vnode/src/tq/tqStreamTask.c b/source/dnode/vnode/src/tq/tqStreamTask.c
index 7ec0490e3f..4c0491da86 100644
--- a/source/dnode/vnode/src/tq/tqStreamTask.c
+++ b/source/dnode/vnode/src/tq/tqStreamTask.c
@@ -155,7 +155,7 @@ int32_t tqRestartStreamTasks(STQ* pTq) {
   streamMetaInitBackend(pMeta);
   int64_t el = taosGetTimestampMs() - st;
 
-  tqInfo("vgId:%d close&reload state elapsed time:%.3fms", vgId, el/1000.);
+  tqInfo("vgId:%d close&reload state elapsed time:%.3fs", vgId, el/1000.);
 
   code = streamMetaLoadAllTasks(pTq->pStreamMeta);
   if (code != TSDB_CODE_SUCCESS) {
@@ -168,12 +168,15 @@ int32_t tqRestartStreamTasks(STQ* pTq) {
   if (vnodeIsRoleLeader(pTq->pVnode) && !tsDisableStream) {
     tqInfo("vgId:%d restart all stream tasks after all tasks being updated", vgId);
     tqResetStreamTaskStatus(pTq);
+
+    streamMetaWUnLock(pMeta);
     tqStartStreamTasks(pTq);
   } else {
+    streamMetaResetStartInfo(&pMeta->startInfo);
+    streamMetaWUnLock(pMeta);
     tqInfo("vgId:%d, follower node not start stream tasks", vgId);
   }
 
-  streamMetaWUnLock(pMeta);
   code = terrno;
   return code;
 }
diff --git a/source/dnode/vnode/src/tq/tqStreamTaskSnap.c b/source/dnode/vnode/src/tq/tqStreamTaskSnap.c
index e122cf19d3..f22ecc3daf 100644
--- a/source/dnode/vnode/src/tq/tqStreamTaskSnap.c
+++ b/source/dnode/vnode/src/tq/tqStreamTaskSnap.c
@@ -212,9 +212,7 @@ int32_t streamTaskSnapWriterClose(SStreamTaskWriter* pWriter, int8_t rollback) {
     taosMemoryFree(pWriter);
     goto _err;
   }
-
   taosWUnLockLatch(&pTq->pStreamMeta->lock);
-
   taosMemoryFree(pWriter);
   return code;
diff --git a/source/dnode/vnode/src/tsdb/tsdbRead2.c b/source/dnode/vnode/src/tsdb/tsdbRead2.c
index 4682c47bd1..6169014d9f 100644
--- a/source/dnode/vnode/src/tsdb/tsdbRead2.c
+++ b/source/dnode/vnode/src/tsdb/tsdbRead2.c
@@ -524,7 +524,7 @@ static int32_t doLoadFileBlock(STsdbReader* pReader, SArray* pIndexList, SBlockN
   int64_t st = taosGetTimestampUs();
 
   // clear info for the new file
-  cleanupInfoFoxNextFileset(pReader->status.pTableMap);
+  cleanupInfoForNextFileset(pReader->status.pTableMap);
   int32_t k = 0;
   int32_t numOfTables = tSimpleHashGetSize(pReader->status.pTableMap);
@@ -1444,7 +1444,9 @@ static bool nextRowFromSttBlocks(SLastBlockReader* pLastBlockReader, STableBlock
   }
 }
 
-static void doPinSttBlock(SLastBlockReader* pLastBlockReader) { tMergeTreePinSttBlock(&pLastBlockReader->mergeTree); }
+static void doPinSttBlock(SLastBlockReader* pLastBlockReader) {
+  tMergeTreePinSttBlock(&pLastBlockReader->mergeTree);
+}
 
 static void doUnpinSttBlock(SLastBlockReader* pLastBlockReader) {
   tMergeTreeUnpinSttBlock(&pLastBlockReader->mergeTree);
@@ -1454,7 +1456,6 @@ static bool tryCopyDistinctRowFromSttBlock(TSDBROW* fRow, SLastBlockReader* pLas
                                            STableBlockScanInfo* pScanInfo, int64_t ts, STsdbReader* pReader,
                                            bool* copied) {
   int32_t code = TSDB_CODE_SUCCESS;
-
   *copied = false;
 
   // avoid the fetch next row replace the referenced stt block in buffer
@@ -1540,12 +1541,6 @@ static int32_t doMergeBufAndFileRows(STsdbReader* pReader, STableBlockScanInfo*
     if (ps == NULL) {
       return terrno;
     }
-
-    int32_t code = tsdbRowMergerInit(pMerger, ps);
-    if (code != TSDB_CODE_SUCCESS) {
-      tsdbError("failed to init row merger, code:%s", tstrerror(code));
-      return code;
-    }
   }
 
   int64_t minKey = 0;
@@ -1688,7 +1683,7 @@ static int32_t doMergeFileBlockAndLastBlock(SLastBlockReader* pLastBlockReader,
   tsdbTrace("fRow ptr:%p, %d, uid:%" PRIu64 ", ts:%" PRId64 " %s", pRow->pBlockData, pRow->iRow, pLastBlockReader->uid,
             fRow.pBlockData->aTSKEY[fRow.iRow], pReader->idStr);
 
-  // only last block exists
+  // only stt block exists
   if ((!mergeBlockData) || (tsLastBlock != pBlockData->aTSKEY[pDumpInfo->rowIndex])) {
     code = tryCopyDistinctRowFromSttBlock(&fRow, pLastBlockReader, pBlockScanInfo, tsLastBlock, pReader, &copied);
     if (code) {
@@ -1767,12 +1762,6 @@ static int32_t mergeFileBlockAndLastBlock(STsdbReader* pReader, SLastBlockReader
     if (ps == NULL) {
      return terrno;
     }
-
-    int32_t code = tsdbRowMergerInit(pMerger, ps);
-    if (code != TSDB_CODE_SUCCESS) {
-      tsdbError("failed to init row merger, code:%s", tstrerror(code));
-      return code;
-    }
   }
 
   if (hasDataInFileBlock(pBlockData, pDumpInfo)) {
@@ -1874,12 +1863,6 @@ static int32_t doMergeMultiLevelRows(STsdbReader* pReader, STableBlockScanInfo*
     if (ps == NULL) {
       return terrno;
     }
-
-    code = tsdbRowMergerInit(pMerger, ps);
-    if (code != TSDB_CODE_SUCCESS) {
-      tsdbError("failed to init row merger, code:%s", tstrerror(code));
-      return code;
-    }
   }
 
   int64_t minKey = 0;
@@ -2202,12 +2185,6 @@ int32_t mergeRowsInFileBlocks(SBlockData* pBlockData, STableBlockScanInfo* pBloc
     if (ps == NULL) {
       return terrno;
     }
-
-    code = tsdbRowMergerInit(pMerger, ps);
-    if (code != TSDB_CODE_SUCCESS) {
-      tsdbError("failed to init row merger, code:%s", tstrerror(code));
-      return code;
-    }
   }
 
   if (copied) {
diff --git a/source/dnode/vnode/src/tsdb/tsdbReadUtil.c b/source/dnode/vnode/src/tsdb/tsdbReadUtil.c
index af7cae33fc..305399e0af 100644
--- a/source/dnode/vnode/src/tsdb/tsdbReadUtil.c
+++ b/source/dnode/vnode/src/tsdb/tsdbReadUtil.c
@@ -245,7 +245,7 @@ static void doCleanupInfoForNextFileset(STableBlockScanInfo* pScanInfo) {
   pScanInfo->sttKeyInfo.status = STT_FILE_READER_UNINIT;
 }
 
-void cleanupInfoFoxNextFileset(SSHashObj* pTableMap) {
+void cleanupInfoForNextFileset(SSHashObj* pTableMap) {
   STableBlockScanInfo** p = NULL;
 
   int32_t iter = 0;
diff --git a/source/dnode/vnode/src/tsdb/tsdbReadUtil.h b/source/dnode/vnode/src/tsdb/tsdbReadUtil.h
index 47585ea6e3..60e6e6960a 100644
--- a/source/dnode/vnode/src/tsdb/tsdbReadUtil.h
+++ b/source/dnode/vnode/src/tsdb/tsdbReadUtil.h
@@ -248,7 +248,7 @@ SSHashObj* createDataBlockScanInfo(STsdbReader* pTsdbReader, SBlockInfoBuf* pBuf
 void       clearBlockScanInfo(STableBlockScanInfo* p);
 void       destroyAllBlockScanInfo(SSHashObj* pTableMap);
 void       resetAllDataBlockScanInfo(SSHashObj* pTableMap, int64_t ts, int32_t step);
-void       cleanupInfoFoxNextFileset(SSHashObj* pTableMap);
+void       cleanupInfoForNextFileset(SSHashObj* pTableMap);
 int32_t    ensureBlockScanInfoBuf(SBlockInfoBuf* pBuf, int32_t numOfTables);
 void       clearBlockScanInfoBuf(SBlockInfoBuf* pBuf);
 void*      getPosInBlockInfoBuf(SBlockInfoBuf* pBuf, int32_t index);
diff --git a/source/dnode/vnode/src/vnd/vnodeSvr.c b/source/dnode/vnode/src/vnd/vnodeSvr.c
index c9dfec960f..33b4114009 100644
--- a/source/dnode/vnode/src/vnd/vnodeSvr.c
+++ b/source/dnode/vnode/src/vnd/vnodeSvr.c
@@ -21,6 +21,8 @@
 #include "cos.h"
 #include "vnode.h"
 #include "vnodeInt.h"
+#include "audit.h"
+#include "tstrbuild.h"
 
 static int32_t vnodeProcessCreateStbReq(SVnode *pVnode, int64_t ver, void *pReq, int32_t len, SRpcMsg *pRsp);
 static int32_t vnodeProcessAlterStbReq(SVnode *pVnode, int64_t ver, void *pReq, int32_t len, SRpcMsg *pRsp);
@@ -1002,8 +1004,8 @@ static int32_t vnodeProcessCreateTbReq(SVnode *pVnode, int64_t ver, void *pReq,
       taosMemoryFreeClear(*key);
     }
 
-    size_t len = 0;
-    char *keyJoined = taosStringBuilderGetResult(&sb, &len);
+    size_t len = 0;
+    char* keyJoined = taosStringBuilderGetResult(&sb, &len);
 
     if(pOriginRpc->info.conn.user != NULL && strlen(pOriginRpc->info.conn.user) > 0){
       auditRecord(pOriginRpc, clusterId, "createTable", name.dbname, "", keyJoined, len);
@@ -1017,7 +1019,7 @@ _exit:
     pCreateReq = req.pReqs + iReq;
     taosMemoryFree(pCreateReq->sql);
     taosMemoryFree(pCreateReq->comment);
-    taosArrayDestroy(pCreateReq->ctb.tagName); 
+    taosArrayDestroy(pCreateReq->ctb.tagName);
   }
   taosArrayDestroyEx(rsp.pArray, tFreeSVCreateTbRsp);
   taosArrayDestroy(tbUids);
@@ -1235,7 +1237,7 @@ static int32_t vnodeProcessDropTbReq(SVnode *pVnode, int64_t ver, void *pReq, in
     taosStringBuilderDestroy(&sb);
   }
 
-  
+
 _exit:
   taosArrayDestroy(tbUids);
   tdUidStoreFree(pStore);
diff --git a/source/libs/stream/src/streamDispatch.c b/source/libs/stream/src/streamDispatch.c
index 5665e7a917..6bb15dfd23 100644
--- a/source/libs/stream/src/streamDispatch.c
+++ b/source/libs/stream/src/streamDispatch.c
@@ -1106,8 +1106,8 @@ int32_t streamProcessDispatchRsp(SStreamTask* pTask, SStreamDispatchRsp* pRsp, i
       taosArrayPush(pTask->msgInfo.pRetryList, &pRsp->downstreamNodeId);
       taosThreadMutexUnlock(&pTask->lock);
 
-      stError("s-task:%s inputQ of downstream task:0x%x(vgId:%d) is full, wait for %dms and retry dispatch data", id,
-              pRsp->downstreamTaskId, pRsp->downstreamNodeId, DISPATCH_RETRY_INTERVAL_MS);
+      stWarn("s-task:%s inputQ of downstream task:0x%x(vgId:%d) is full, wait for %dms and retry dispatch", id,
+             pRsp->downstreamTaskId, pRsp->downstreamNodeId, DISPATCH_RETRY_INTERVAL_MS);
     } else if (pRsp->inputStatus == TASK_INPUT_STATUS__REFUSED) {
       stError("s-task:%s downstream task:0x%x(vgId:%d) refused the dispatch msg, treat it as success", id,
               pRsp->downstreamTaskId, pRsp->downstreamNodeId);
@@ -1147,8 +1147,8 @@ int32_t streamProcessDispatchRsp(SStreamTask* pTask, SStreamDispatchRsp* pRsp, i
     }
 
     int32_t ref = atomic_add_fetch_32(&pTask->status.timerActive, 1);
-    stDebug("s-task:%s failed to dispatch msg to downstream code:%s, add timer to retry in %dms, ref:%d",
-            pTask->id.idStr, tstrerror(terrno), DISPATCH_RETRY_INTERVAL_MS, ref);
+    stDebug("s-task:%s failed to dispatch msg to downstream, add into timer to retry in %dms, ref:%d",
+            pTask->id.idStr, DISPATCH_RETRY_INTERVAL_MS, ref);
 
     streamRetryDispatchData(pTask, DISPATCH_RETRY_INTERVAL_MS);
   } else {  // this message has been sent successfully, let's try next one.
diff --git a/source/libs/stream/src/streamStart.c b/source/libs/stream/src/streamStart.c
index 0b2bf6b4ba..97eb7b79a2 100644
--- a/source/libs/stream/src/streamStart.c
+++ b/source/libs/stream/src/streamStart.c
@@ -134,7 +134,7 @@ int32_t streamReExecScanHistoryFuture(SStreamTask* pTask, int32_t idleDuration)
   pTask->schedHistoryInfo.numOfTicks = numOfTicks;
 
   int32_t ref = atomic_add_fetch_32(&pTask->status.timerActive, 1);
-  stDebug("s-task:%s scan-history start in %.2fs, ref:%d", pTask->id.idStr, numOfTicks*0.1, ref);
+  stDebug("s-task:%s scan-history resumed in %.2fs, ref:%d", pTask->id.idStr, numOfTicks*0.1, ref);
 
   if (pTask->schedHistoryInfo.pTimer == NULL) {
     pTask->schedHistoryInfo.pTimer = taosTmrStart(doReExecScanhistory, SCANHISTORY_IDLE_TIME_SLICE, pTask, streamEnv.timer);
diff --git a/source/libs/stream/src/tstreamFileState.c b/source/libs/stream/src/tstreamFileState.c
index 8642a990a6..d31e4cbfcd 100644
--- a/source/libs/stream/src/tstreamFileState.c
+++ b/source/libs/stream/src/tstreamFileState.c
@@ -321,6 +321,10 @@ void popUsedBuffs(SStreamFileState* pFileState, SStreamSnapshot* pFlushList, uin
   while ((pNode = tdListNext(&iter)) != NULL && i < max) {
     SRowBuffPos* pPos = *(SRowBuffPos**)pNode->data;
     if (pPos->beUsed == used) {
+      if (used && !pPos->pRowBuff) {
+        ASSERT(pPos->needFree == true);
+        continue;
+      }
       tdListAppend(pFlushList, &pPos);
       pFileState->flushMark = TMAX(pFileState->flushMark, pFileState->getTs(pPos->pKey));
       pFileState->stateBuffRemoveByPosFn(pFileState, pPos);
diff --git a/source/libs/transport/src/thttp.c b/source/libs/transport/src/thttp.c
index c483d82027..65b0058cfe 100644
--- a/source/libs/transport/src/thttp.c
+++ b/source/libs/transport/src/thttp.c
@@ -293,6 +293,15 @@ int32_t httpSendQuit() {
 
 static int32_t taosSendHttpReportImpl(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
                                       EHttpCompFlag flag) {
+  if (server == NULL || uri == NULL) {
+    tError("http-report failed to report to invalid addr");
+    return -1;
+  }
+
+  if (pCont == NULL || contLen == 0) {
+    tError("http-report failed to report empty packet");
+    return -1;
+  }
   SHttpModule* load = taosAcquireRef(httpRefMgt, httpRef);
   if (load == NULL) {
     tError("http-report already released");
diff --git a/tests/script/tsim/stream/sliding.sim b/tests/script/tsim/stream/sliding.sim
index a92da7f472..cc4d14b6fb 100644
--- a/tests/script/tsim/stream/sliding.sim
+++ b/tests/script/tsim/stream/sliding.sim
@@ -23,6 +23,8 @@ sql create stream stream_t1 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 i
 sql create stream stream_t2 trigger at_once watermark 1d IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamtST2 as select _wstart, count(*) c1, sum(a) c3 , max(b) c4, min(c) c5 from st interval(10s) sliding (5s);
 sleep 1000
+sleep 1000
+
 sql insert into t1 values(1648791210000,1,2,3,1.0);
 sql insert into t1 values(1648791216000,2,2,3,1.1);
 sql insert into t1 values(1648791220000,3,2,3,2.1);
@@ -314,6 +316,8 @@ sql create stream streams11 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 i
 sql create stream streams12 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamt2 as select _wstart, count(*) c1, sum(a) c3 , max(b) c4, min(c) c5 from st interval(10s, 5s);
 sleep 1000
+sleep 1000
+
 sql insert into t1 values(1648791213000,1,2,3,1.0);
 sql insert into t1 values(1648791223001,2,2,3,1.1);
 sql insert into t1 values(1648791233002,3,2,3,2.1);
@@ -448,6 +452,8 @@ sql create stream streams21 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 i
 sql create stream streams22 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamt22 as select _wstart, count(*) c1, sum(a) c3 , max(b) c4, min(c) c5 from st interval(10s, 5s);
 sleep 1000
+sleep 1000
+
 sql insert into t1 values(1648791213000,1,1,1,1.0);
 sql insert into t1 values(1648791223001,2,2,2,1.1);
 sql insert into t1 values(1648791233002,3,3,3,2.1);
@@ -712,6 +718,8 @@ sql create table t2 using st tags(2,2,2);
 sql create stream streams4 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamt4 as select _wstart as ts, count(*),min(a) c1 from st interval(10s) sliding(5s);
 sleep 1000
+sleep 1000
+
 sql insert into t1 values(1648791213000,1,1,1,1.0);
 sql insert into t1 values(1648791243000,2,1,1,1.0);
diff --git a/tests/system-test/2-query/limit.py b/tests/system-test/2-query/limit.py
index fb5595a8be..961cff5087 100644
--- a/tests/system-test/2-query/limit.py
+++ b/tests/system-test/2-query/limit.py
@@ -338,11 +338,38 @@ class TDTestCase:
 
         tdLog.printNoPrefix("======== test case 1 end ...... ")
 
+    # check that `limit ... offset` over distinct(tbname) returns no rows when the offset exceeds the table count
+    def checkVGroups(self):
+
+        # db2: two vgroups
+        tdSql.execute("create database db2 vgroups 2;")
+        tdSql.execute("use db2;")
+        tdSql.execute("create table st(ts timestamp, age int) tags(area int);")
+        tdSql.execute("create table t1 using st tags(1);")
+        tdSql.query("select distinct(tbname) from st limit 1 offset 100;")
+        tdSql.checkRows(0)
+        tdLog.info("check db2 vgroups 2 limit 1 offset 100 successfully!")
+
+        # db1: one vgroup
+        tdSql.execute("create database db1 vgroups 1;")
+        tdSql.execute("use db1;")
+        tdSql.execute("create table st(ts timestamp, age int) tags(area int);")
+        tdSql.execute("create table t1 using st tags(1);")
+        tdSql.query("select distinct(tbname) from st limit 1 offset 100;")
+        tdSql.checkRows(0)
+        tdLog.info("check db1 vgroups 1 limit 1 offset 100 successfully!")
+
     def run(self):
         # tdSql.prepare()
         self.prepareTestEnv()
         self.tmqCase1()
 
+        # verify limit/offset behaves the same with one vgroup and with multiple vgroups
+        self.checkVGroups()
+
     def stop(self):
         tdSql.close()
         tdLog.success(f"{__file__} successfully executed")
diff --git a/tools/shell/inc/shellInt.h b/tools/shell/inc/shellInt.h
index 57415f8335..1c885b151c 100644
--- a/tools/shell/inc/shellInt.h
+++ b/tools/shell/inc/shellInt.h
@@ -80,6 +80,7 @@ typedef struct {
 #ifdef WEBSOCKET
   bool     restful;
   bool     cloud;
+  bool     local;
   char*    dsn;
   int32_t  timeout;
 #endif
diff --git a/tools/shell/src/shellArguments.c b/tools/shell/src/shellArguments.c
index 52cb524e3b..4817b23029 100644
--- a/tools/shell/src/shellArguments.c
+++ b/tools/shell/src/shellArguments.c
@@ -410,7 +410,7 @@ int32_t shellParseArgs(int32_t argc, char *argv[]) {
   shellInitArgs(argc, argv);
   shell.info.clientVersion =
       "Welcome to the %s Command Line Interface, Client Version:%s\r\n"
-      "Copyright (c) 2022 by %s, all rights reserved.\r\n\r\n";
+      "Copyright (c) 2023 by %s, all rights reserved.\r\n\r\n";
 #ifdef CUS_NAME
   strcpy(shell.info.cusName, CUS_NAME);
 #else
diff --git a/tools/shell/src/shellAuto.c b/tools/shell/src/shellAuto.c
index 60d6388faa..bd5329d810 100644
--- a/tools/shell/src/shellAuto.c
+++ b/tools/shell/src/shellAuto.c
@@ -83,6 +83,11 @@ SWords shellCommands[] = {
     {"alter local \"asynclog\" \"1\";", 0, 0, NULL},
     {"alter topic", 0, 0, NULL},
     {"alter user ;", 0, 0, NULL},
+#ifdef TD_ENTERPRISE
+    {"balance vgroup;", 0, 0, NULL},
+    {"balance vgroup leader ", 0, 0, NULL},
+#endif
+
     // 20
     {"create table using tags(", 0, 0, NULL},
     {"create database "
@@ -127,9 +132,12 @@ SWords shellCommands[] = {
     {"kill query ", 0, 0, NULL},
     {"kill transaction ", 0, 0, NULL},
 #ifdef TD_ENTERPRISE
-    {"merge vgroup ", 0, 0, NULL}, 
+    {"merge vgroup ", 0, 0, NULL},
 #endif
     {"pause stream ;", 0, 0, NULL},
+#ifdef TD_ENTERPRISE
+    {"redistribute vgroup dnode ;", 0, 0, NULL},
+#endif
     {"resume stream ;", 0, 0, NULL},
     {"reset query cache;", 0, 0, NULL},
     {"restore dnode ;", 0, 0, NULL},
@@ -187,7 +195,7 @@ SWords shellCommands[] = {
     {"show consumers;", 0, 0, NULL},
     {"show grants;", 0, 0, NULL},
 #ifdef TD_ENTERPRISE
-    {"split vgroup ", 0, 0, NULL}, 
+    {"split vgroup ", 0, 0, NULL},
 #endif
     {"insert into values(", 0, 0, NULL},
     {"insert into using tags(", 0, 0, NULL},
@@ -268,7 +276,9 @@ char* db_options[] = {"keep ",
                       "wal_retention_size ",
                       "wal_segment_size "};
 
-char* alter_db_options[] = {"cachemodel ", "replica ", "keep ", "cachesize ", "wal_fsync_period ", "wal_level "};
+char* alter_db_options[] = {"cachemodel ", "replica ", "keep ", "stt_trigger ",
+                            "wal_retention_period ", "wal_retention_size ", "cachesize ",
+                            "wal_fsync_period ", "buffer ", "pages " ,"wal_level "};
 
 char* data_types[] = {"timestamp", "int",
                       "int unsigned", "varchar(16)",
@@ -312,26 +322,27 @@ bool waitAutoFill = false;
 #define WT_VAR_TOPIC 5
 #define WT_VAR_STREAM 6
 #define WT_VAR_UDFNAME 7
+#define WT_VAR_VGROUPID 8
 
-#define WT_FROM_DB_MAX 7  // max get content from db
+#define WT_FROM_DB_MAX 8  // max get content from db
 #define WT_FROM_DB_CNT (WT_FROM_DB_MAX + 1)
 
-#define WT_VAR_ALLTABLE 8
-#define WT_VAR_FUNC 9
-#define WT_VAR_KEYWORD 10
-#define WT_VAR_TBACTION 11
-#define WT_VAR_DBOPTION 12
-#define WT_VAR_ALTER_DBOPTION 13
-#define WT_VAR_DATATYPE 14
-#define WT_VAR_KEYTAGS 15
-#define WT_VAR_ANYWORD 16
-#define WT_VAR_TBOPTION 17
-#define WT_VAR_USERACTION 18
-#define WT_VAR_KEYSELECT 19
-#define WT_VAR_SYSTABLE 20
-#define WT_VAR_LANGUAGE 21
+#define WT_VAR_ALLTABLE 9
+#define WT_VAR_FUNC 10
+#define WT_VAR_KEYWORD 11
+#define WT_VAR_TBACTION 12
+#define WT_VAR_DBOPTION 13
+#define WT_VAR_ALTER_DBOPTION 14
+#define WT_VAR_DATATYPE 15
+#define WT_VAR_KEYTAGS 16
+#define WT_VAR_ANYWORD 17
+#define WT_VAR_TBOPTION 18
+#define WT_VAR_USERACTION 19
+#define WT_VAR_KEYSELECT 20
+#define WT_VAR_SYSTABLE 21
+#define WT_VAR_LANGUAGE 22
 
-#define WT_VAR_CNT 22
+#define WT_VAR_CNT 23
 
 #define WT_TEXT 0xFF
@@ -345,11 +356,11 @@ TdThread* threads[WT_FROM_DB_CNT];
 // obtain var name with sql from server
 char varTypes[WT_VAR_CNT][64] = {
     "", "", "", "", "", "", "",
-    "", "", "", "", "", "", "",
+    "", "", "", "", "", "", "", "",
     "", "", "", "", "", "", "", "", ""};
 
 char varSqls[WT_FROM_DB_CNT][64] = {"show databases;", "show stables;", "show tables;", "show dnodes;",
-                                    "show users;", "show topics;", "show streams;", "show functions;"};
+                                    "show users;", "show topics;", "show streams;", "show functions;", "show vgroups;"};
 
 // var words current cursor, if user press any one key except tab, cursorVar can be reset to -1
 int cursorVar = -1;
@@ -520,7 +531,10 @@ void showHelp() {
   printf(
       "\n\n\
 ----- special commands on enterprise version ----- \n\
+    balance vgroup; \n\
+    balance vgroup leader  \n\
     compact database ; \n\
+    redistribute vgroup  dnode ;\n\
     split vgroup ;");
 #endif
@@ -675,9 +689,9 @@ bool shellAutoInit() {
   // generate varType
   GenerateVarType(WT_VAR_FUNC, functions, sizeof(functions) / sizeof(char*));
   GenerateVarType(WT_VAR_KEYWORD, keywords, sizeof(keywords) / sizeof(char*));
+  GenerateVarType(WT_VAR_TBACTION, tb_actions, sizeof(tb_actions) / sizeof(char*));
   GenerateVarType(WT_VAR_DBOPTION, db_options, sizeof(db_options) / sizeof(char*));
   GenerateVarType(WT_VAR_ALTER_DBOPTION, alter_db_options, sizeof(alter_db_options) / sizeof(char*));
-  GenerateVarType(WT_VAR_TBACTION, tb_actions, sizeof(tb_actions) / sizeof(char*));
   GenerateVarType(WT_VAR_DATATYPE, data_types, sizeof(data_types) / sizeof(char*));
   GenerateVarType(WT_VAR_KEYTAGS, key_tags, sizeof(key_tags) / sizeof(char*));
   GenerateVarType(WT_VAR_TBOPTION, tb_options, sizeof(tb_options) / sizeof(char*));
diff --git a/tools/shell/src/shellMain.c b/tools/shell/src/shellMain.c
index 795621dfdd..18f4ca21d1 100644
--- a/tools/shell/src/shellMain.c
+++ b/tools/shell/src/shellMain.c
@@ -47,6 +47,7 @@ int main(int argc, char *argv[]) {
 #ifdef WEBSOCKET
   shell.args.timeout = SHELL_WS_TIMEOUT;
   shell.args.cloud = true;
+  shell.args.local = false;
 #endif
 
 #if !defined(WINDOWS)
diff --git a/tools/shell/src/shellUtil.c b/tools/shell/src/shellUtil.c
index 93451c85a9..0350d934a7 100644
--- a/tools/shell/src/shellUtil.c
+++ b/tools/shell/src/shellUtil.c
@@ -130,11 +130,21 @@ void shellCheckConnectMode() {
   }
   if (shell.args.cloud) {
     shell.args.dsn = getenv("TDENGINE_CLOUD_DSN");
-    if (shell.args.dsn) {
+    if (shell.args.dsn && strlen(shell.args.dsn) > 4) {
       shell.args.cloud = true;
+      shell.args.local = false;
       shell.args.restful = false;
       return;
     }
+
+    shell.args.dsn = getenv("TDENGINE_DSN");
+    if (shell.args.dsn && strlen(shell.args.dsn) > 4) {
+      shell.args.cloud = true;
+      shell.args.local = true;
+      shell.args.restful = false;
+      return;
+    }
+
     if (shell.args.restful) {
       if (!shell.args.host) {
         shell.args.host = "localhost";
diff --git a/tools/shell/src/shellWebsocket.c b/tools/shell/src/shellWebsocket.c
index fceec37a64..bff2ef7592 100644
--- a/tools/shell/src/shellWebsocket.c
+++ b/tools/shell/src/shellWebsocket.c
@@ -59,7 +59,17 @@ int shell_conn_ws_server(bool first) {
     fprintf(stdout, "successfully connected to %s\n\n", shell.args.dsn);
   } else if (first && shell.args.cloud) {
-    fprintf(stdout, "successfully connected to cloud service\n");
+    if(shell.args.local) {
+      const char* host = strstr(shell.args.dsn, "@");
+      if(host) {
+        host += 1;
+      } else {
+        host = shell.args.dsn;
+      }
+      fprintf(stdout, "successfully connected to %s\n", host);
+    } else {
+      fprintf(stdout, "successfully connected to cloud service\n");
+    }
   }
   fflush(stdout);
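Based on the lookup order in shellCheckConnectMode() above, the shell honors TDENGINE_CLOUD_DSN first and then falls back to TDENGINE_DSN, printing the host portion after the `@` for a local DSN. A hypothetical session illustrating both paths is sketched below; the DSN formats are illustrative assumptions, not confirmed syntax.

```shell
# Cloud service: checked first
export TDENGINE_CLOUD_DSN="https://gw.cloud.taosdata.com?token=<token>"
taos

# Self-hosted WebSocket endpoint: used when TDENGINE_CLOUD_DSN is unset;
# on connect, the shell prints the host part after the '@'
export TDENGINE_DSN="ws://root:taosdata@localhost:6041"
taos
```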