diff --git a/Jenkinsfile2 b/Jenkinsfile2
index 249a8d1c9d..6fa3483099 100644
--- a/Jenkinsfile2
+++ b/Jenkinsfile2
@@ -78,7 +78,7 @@ def check_docs(){
    file_only_tdgpt_change_except = sh (
        script: '''
            cd ${WKC}
-            git --no-pager diff --name-only FETCH_HEAD `git merge-base FETCH_HEAD ${CHANGE_TARGET}`|grep -v "^docs/en/"|grep -v "^docs/zh/"|grep -v ".md$" | grep -v "forecastoperator.c\\|anomalywindowoperator.c" |grep -v "tsim/analytics" |grep -v "tdgpt_cases.task" || :
+            git --no-pager diff --name-only FETCH_HEAD `git merge-base FETCH_HEAD ${CHANGE_TARGET}`|grep -v "^docs/en/"|grep -v "^docs/zh/"|grep -v ".md$" | grep -v "forecastoperator.c\\|anomalywindowoperator.c\\|tanalytics.h\\|tanalytics.c" |grep -v "tsim/analytics" |grep -v "tdgpt_cases.task" || :
        ''',
        returnStdout: true
    ).trim()
diff --git a/README.md b/README.md
index 030be7bc3b..f827c38975 100644
--- a/README.md
+++ b/README.md
@@ -10,8 +10,6 @@
-[](https://cloud.drone.io/taosdata/TDengine)
-[](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
[](https://coveralls.io/github/taosdata/TDengine?branch=3.0)
[](https://bestpractices.coreinfrastructure.org/projects/4201)
diff --git a/docs/en/10-third-party/03-visual/01-grafana.md b/docs/en/10-third-party/03-visual/01-grafana.md
index ec857f7795..8a503b4195 100644
--- a/docs/en/10-third-party/03-visual/01-grafana.md
+++ b/docs/en/10-third-party/03-visual/01-grafana.md
@@ -22,15 +22,11 @@ import imgStep11 from '../../assets/grafana-11.png';
This document describes how to integrate the TDengine data source with the open-source data visualization system [Grafana](https://www.grafana.com/) to achieve data visualization and build a monitoring and alert system. With the TDengine plugin, you can easily display data from TDengine tables on Grafana dashboards without the need for complex development work.
-## Grafana Version Requirements
-
-TDengine currently supports Grafana version 7.5 and above. It is recommended to use the latest version. Please download and install the corresponding version of Grafana according to your system environment.
-
## Prerequisites
To add the TDengine data source to Grafana normally, the following preparations are needed.
-- Grafana service has been deployed and is running normally.
+- Grafana service has been deployed and is running normally. TDengine currently supports Grafana version 7.5 and above. It is recommended to use the latest version.
**Note**: Ensure that the account starting Grafana has write permissions to its installation directory, otherwise you may not be able to install plugins later.
- TDengine cluster has been deployed and is running normally.
- taosAdapter has been installed and is running normally. For details, please refer to the [taosAdapter user manual](../../../tdengine-reference/components/taosadapter/)
diff --git a/docs/en/10-third-party/05-bi/09-seeq.md b/docs/en/10-third-party/05-bi/09-seeq.md
index c8f7462a19..7fb7569461 100644
--- a/docs/en/10-third-party/05-bi/09-seeq.md
+++ b/docs/en/10-third-party/05-bi/09-seeq.md
@@ -13,13 +13,11 @@ Seeq is advanced analytics software for the manufacturing and Industrial Interne
Through the TDengine Java connector, Seeq can easily support querying time-series data provided by TDengine and offer data presentation, analysis, prediction, and other functions.
-## Seeq Installation Method
+## Prerequisites
-Download the relevant software from [Seeq's official website](https://www.seeq.com/customer-download), such as Seeq Server and Seeq Data Lab, etc. Seeq Data Lab needs to be installed on a different server from Seeq Server and interconnected through configuration. For detailed installation and configuration instructions, refer to the [Seeq Knowledge Base](https://support.seeq.com/kb/latest/cloud/).
+- Seeq has been installed. Download the relevant software, such as Seeq Server and Seeq Data Lab, from [Seeq's official website](https://www.seeq.com/customer-download). Seeq Data Lab must be installed on a different server from Seeq Server and connected to it through configuration. For detailed installation and configuration instructions, refer to the [Seeq Knowledge Base](https://support.seeq.com/kb/latest/cloud/).
-### TDengine Local Instance Installation Method
-
-Please refer to the [official documentation](../../../get-started).
+- A local TDengine instance has been installed. Please refer to the [official documentation](../../../get-started). If using TDengine Cloud, please go to https://cloud.taosdata.com to apply for an account and log in to learn how to access TDengine Cloud.
## Configuring Seeq to Access TDengine
diff --git a/docs/en/14-reference/01-components/05-taosx-agent.md b/docs/en/14-reference/01-components/05-taosx-agent.md
index 3e8b4f4d63..d7d86a64cb 100644
--- a/docs/en/14-reference/01-components/05-taosx-agent.md
+++ b/docs/en/14-reference/01-components/05-taosx-agent.md
@@ -14,6 +14,7 @@ The default configuration file for `Agent` is located at `/etc/taos/agent.toml`,
- `token`: Required, the Token generated when creating `Agent` in `Explorer`.
- `instanceId`: The instance ID of the current taosx-agent service. If multiple taosx-agent instances are started on the same machine, it is necessary to ensure that the instance IDs of each instance are unique.
- `compression`: Optional, can be configured as `true` or `false`, default is `false`. If set to `true`, it enables data compression in communication between `Agent` and `taosX`.
+- `in_memory_cache_capacity`: Optional, the maximum number of message batches that can be cached in memory; it can be configured as an integer greater than 0. The default value is 64.
- `log_level`: Optional, log level, default is `info`. Like `taosX`, it supports five levels: `error`, `warn`, `info`, `debug`, `trace`. Deprecated, please use `log.level` instead.
- `log_keep_days`: Optional, the number of days to keep logs, default is `30` days. Deprecated, please use `log.keepDays` instead.
- `log.path`: The directory where log files are stored.
@@ -45,6 +46,10 @@ As shown below:
#
#compression = true
+# In-memory cache capacity
+#
+#in_memory_cache_capacity = 64
+
# log configuration
[log]
# All log files are stored in this directory
diff --git a/docs/en/14-reference/09-error-code.md b/docs/en/14-reference/09-error-code.md
index 1d3ea3f9a1..2bbd8f9305 100644
--- a/docs/en/14-reference/09-error-code.md
+++ b/docs/en/14-reference/09-error-code.md
@@ -386,7 +386,7 @@ This document details the server error codes that may be encountered when using
| 0x8000260D | Tags number not matched | Mismatched number of tag columns | Check and correct the SQL statement |
| 0x8000260E | Invalid tag name | Invalid or non-existent tag name | Check and correct the SQL statement |
| 0x80002610 | Value is too long | Value length exceeds limit | Check and correct the SQL statement or API parameters |
-| 0x80002611 | Password can not be empty | Password is empty | Use a valid password |
+| 0x80002611 | Password too short or empty | Password is empty or shorter than 8 characters | Use a valid password |
| 0x80002612 | Port should be an integer that is less than 65535 and greater than 0 | Illegal port number | Check and correct the port number |
| 0x80002613 | Endpoint should be in the format of 'fqdn:port' | Incorrect address format | Check and correct the address information |
| 0x80002614 | This statement is no longer supported | Feature has been deprecated | Refer to the feature documentation |
diff --git a/docs/zh/10-third-party/03-visual/01-grafana.mdx b/docs/zh/10-third-party/03-visual/01-grafana.mdx
index d7406352c9..043cfcaa5c 100644
--- a/docs/zh/10-third-party/03-visual/01-grafana.mdx
+++ b/docs/zh/10-third-party/03-visual/01-grafana.mdx
@@ -10,15 +10,11 @@ import TabItem from "@theme/TabItem";
## 概述
本文档介绍如何将 TDengine 数据源与开源数据可视化系统 [Grafana](https://www.grafana.com/) 集成,以实现数据的可视化和监测报警系统的搭建。通过 TDengine 插件,您可以轻松地将 TDengine 数据表的数据展示在 Grafana 仪表盘上,且无需进行复杂的开发工作。
-## Grafana 版本要求
-当前 TDengine 支持 Grafana 7.5 及以上版本,建议使用最新版本。请根据您的系统环境下载并安装对应版本的 Grafana。
-
-
## 前置条件
要让 Grafana 能正常添加 TDengine 数据源,需要以下几方面的准备工作。
-- Grafana 服务已经部署并正常运行。
+- Grafana 服务已经部署并正常运行。当前 TDengine 支持 Grafana 7.5 及以上版本,建议使用最新版本。
**注意**:要确保启动 Grafana 的账号有其安装目录的写权限,否则可能后面无法安装插件。
- TDengine 集群已经部署并正常运行。
- taosAdapter 已经安装并正常运行。具体细节请参考 [taosAdapter 的使用手册](../../../reference/components/taosadapter)
diff --git a/docs/zh/10-third-party/05-bi/05-yhbi.md b/docs/zh/10-third-party/05-bi/05-yhbi.md
index b60b0495f0..70dda71051 100644
--- a/docs/zh/10-third-party/05-bi/05-yhbi.md
+++ b/docs/zh/10-third-party/05-bi/05-yhbi.md
@@ -10,13 +10,10 @@ toc_max_heading_level: 4
一旦数据源配置完成,永洪BI便能直接从TDengine中读取数据,并利用其强大的数据处理和分析功能,为用户提供丰富的数据展示、分析和预测能力。这意味着用户无须编写复杂的代码或进行烦琐的数据转换工作,即可轻松获取所需的业务洞察。
-## 安装永洪 BI
+## 前置条件
-确保永洪 BI 已经安装并运行(如果未安装,请到永洪科技官方下载页面下载)。
-
-## 安装JDBC驱动
-
-从 maven.org 下载 TDengine JDBC 连接器文件 “taos-jdbcdriver-3.2.7-dist.jar”,并安装在永洪 BI 的机器上。
+- 确保永洪 BI 已经安装并运行(如果未安装,请到永洪科技官方下载页面下载)。
+- 安装JDBC驱动。从 maven.org 下载 TDengine JDBC 连接器文件 “taos-jdbcdriver-3.4.0-dist.jar”,并安装在永洪 BI 的机器上。
## 配置JDBC数据源
diff --git a/docs/zh/10-third-party/05-bi/09-seeq.md b/docs/zh/10-third-party/05-bi/09-seeq.md
index 7e61cdcb11..e01deb7e84 100644
--- a/docs/zh/10-third-party/05-bi/09-seeq.md
+++ b/docs/zh/10-third-party/05-bi/09-seeq.md
@@ -8,16 +8,11 @@ Seeq 是制造业和工业互联网(IIOT)高级分析软件。Seeq 支持在
通过 TDengine Java connector, Seeq 可以轻松支持查询 TDengine 提供的时序数据,并提供数据展现、分析、预测等功能。
-## Seeq 安装方法
+## 前置条件
-从 [Seeq 官网](https://www.seeq.com/customer-download)下载相关软件,例如 Seeq Server 和 Seeq Data Lab 等。Seeq Data Lab 需要安装在和 Seeq Server 不同的服务器上,并通过配置和 Seeq Server 互联。详细安装配置指令参见[Seeq 知识库]( https://support.seeq.com/kb/latest/cloud/)。
+- Seeq 已经安装。从 [Seeq 官网](https://www.seeq.com/customer-download)下载相关软件,例如 Seeq Server 和 Seeq Data Lab 等。Seeq Data Lab 需要安装在和 Seeq Server 不同的服务器上,并通过配置和 Seeq Server 互联。详细安装配置指令参见[Seeq 知识库]( https://support.seeq.com/kb/latest/cloud/)。
-### TDengine 本地实例安装方法
-
-请参考[官网文档](../../../get-started)。
-
-### TDengine Cloud 访问方法
-如果使用 Seeq 连接 TDengine Cloud,请在 https://cloud.taosdata.com 申请帐号并登录查看如何访问 TDengine Cloud。
+- TDengine 本地实例已安装。 请参考[官网文档](../../../get-started)。 若使用 TDengine Cloud,请在 https://cloud.taosdata.com 申请帐号并登录查看如何访问 TDengine Cloud。
## 配置 Seeq 访问 TDengine
diff --git a/docs/zh/14-reference/01-components/05-taosx-agent.md b/docs/zh/14-reference/01-components/05-taosx-agent.md
index bf2e6f7e78..1f1276e834 100644
--- a/docs/zh/14-reference/01-components/05-taosx-agent.md
+++ b/docs/zh/14-reference/01-components/05-taosx-agent.md
@@ -12,7 +12,8 @@ sidebar_label: taosX-Agent
- `endpoint`: 必填,`taosX` 的 GRPC 服务地址。
- `token`: 必填,在 `Explorer` 上创建 `Agent` 时,产生的 Token。
- `instanceId`:当前 taosx-agent 服务的实例 ID,如果同一台机器上启动了多个 taosx-agent 实例,必须保证各个实例的实例 ID 互不相同。
-- `compression`: 非必填,可配置为 `ture` 或 `false`, 默认为 `false`。配置为`true`, 则开启 `Agent` 和 `taosX` 通信数据压缩。
+- `compression`: 非必填,可配置为 `true` 或 `false`, 默认为 `false`。配置为`true`, 则开启 `Agent` 和 `taosX` 通信数据压缩。
+- `in_memory_cache_capacity`: 非必填,表示可在内存中缓存的最大消息批次数,可配置为大于 0 的整数。默认为 `64`。
- `log_level`: 非必填,日志级别,默认为 `info`, 同 `taosX` 一样,支持 `error`,`warn`,`info`,`debug`,`trace` 五级。已弃用,请使用 `log.level` 代替。
- `log_keep_days`:非必填,日志保存天数,默认为 `30` 天。已弃用,请使用 `log.keepDays` 代替。
- `log.path`:日志文件存放的目录。
@@ -44,6 +45,10 @@ sidebar_label: taosX-Agent
#
#compression = true
+# In-memory cache capacity
+#
+#in_memory_cache_capacity = 64
+
# log configuration
[log]
# All log files are stored in this directory
diff --git a/docs/zh/14-reference/09-error-code.md b/docs/zh/14-reference/09-error-code.md
index 2bebe2406b..51453cef4c 100644
--- a/docs/zh/14-reference/09-error-code.md
+++ b/docs/zh/14-reference/09-error-code.md
@@ -403,7 +403,7 @@ description: TDengine 服务端的错误码列表和详细说明
| 0x8000260D | Tags number not matched | tag列个数不匹配 | 检查并修正SQL语句 |
| 0x8000260E | Invalid tag name | 无效或不存在的tag名 | 检查并修正SQL语句 |
| 0x80002610 | Value is too long | 值长度超出限制 | 检查并修正SQL语句或API参数 |
-| 0x80002611 | Password can not be empty | 密码为空 | 使用合法的密码 |
+| 0x80002611 | Password too short or empty | 密码为空或少于 8 个字符 | 使用合法的密码 |
| 0x80002612 | Port should be an integer that is less than 65535 and greater than 0 | 端口号非法 | 检查并修正端口号 |
| 0x80002613 | Endpoint should be in the format of 'fqdn:port' | 地址格式错误 | 检查并修正地址信息 |
| 0x80002614 | This statement is no longer supported | 功能已经废弃 | 参考功能文档说明 |
diff --git a/include/common/tanalytics.h b/include/common/tanalytics.h
index d0af84ecfb..6ebdb38fa6 100644
--- a/include/common/tanalytics.h
+++ b/include/common/tanalytics.h
@@ -28,8 +28,8 @@ extern "C" {
#define ANAL_FORECAST_DEFAULT_ROWS 10
#define ANAL_FORECAST_DEFAULT_CONF 95
#define ANAL_FORECAST_DEFAULT_WNCHECK 1
-#define ANAL_FORECAST_MAX_ROWS 10000
-#define ANAL_ANOMALY_WINDOW_MAX_ROWS 10000
+#define ANAL_FORECAST_MAX_ROWS 40000
+#define ANAL_ANOMALY_WINDOW_MAX_ROWS 40000
typedef struct {
EAnalAlgoType type;
diff --git a/include/libs/function/function.h b/include/libs/function/function.h
index 0ca1962b4e..126ed2c9b0 100644
--- a/include/libs/function/function.h
+++ b/include/libs/function/function.h
@@ -288,6 +288,7 @@ struct SScalarParam {
bool colAlloced;
SColumnInfoData *columnData;
SHashObj *pHashFilter;
+ SHashObj *pHashFilterOthers;
int32_t hashValueType;
void *param; // other parameter, such as meta handle from vnode, to extract table name/tag value
int32_t numOfRows;
diff --git a/include/libs/scalar/scalar.h b/include/libs/scalar/scalar.h
index fd936dd087..67fd954ad7 100644
--- a/include/libs/scalar/scalar.h
+++ b/include/libs/scalar/scalar.h
@@ -40,7 +40,7 @@ pDst need to freed in caller
int32_t scalarCalculate(SNode *pNode, SArray *pBlockList, SScalarParam *pDst);
int32_t scalarGetOperatorParamNum(EOperatorType type);
-int32_t scalarGenerateSetFromList(void **data, void *pNode, uint32_t type);
+int32_t scalarGenerateSetFromList(void **data, void *pNode, uint32_t type, int8_t processType);
int32_t vectorGetConvertType(int32_t type1, int32_t type2);
int32_t vectorConvertSingleColImpl(const SScalarParam *pIn, SScalarParam *pOut, int32_t *overflow, int32_t startIndex, int32_t numOfRows);
diff --git a/include/util/taoserror.h b/include/util/taoserror.h
index 64ef0b3829..e317fdd65a 100644
--- a/include/util/taoserror.h
+++ b/include/util/taoserror.h
@@ -801,7 +801,7 @@ int32_t taosGetErrSize();
#define TSDB_CODE_PAR_TAGS_NOT_MATCHED TAOS_DEF_ERROR_CODE(0, 0x260D)
#define TSDB_CODE_PAR_INVALID_TAG_NAME TAOS_DEF_ERROR_CODE(0, 0x260E)
#define TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG TAOS_DEF_ERROR_CODE(0, 0x2610)
-#define TSDB_CODE_PAR_PASSWD_EMPTY TAOS_DEF_ERROR_CODE(0, 0x2611)
+#define TSDB_CODE_PAR_PASSWD_TOO_SHORT_OR_EMPTY TAOS_DEF_ERROR_CODE(0, 0x2611)
#define TSDB_CODE_PAR_INVALID_PORT TAOS_DEF_ERROR_CODE(0, 0x2612)
#define TSDB_CODE_PAR_INVALID_ENDPOINT TAOS_DEF_ERROR_CODE(0, 0x2613)
#define TSDB_CODE_PAR_EXPRIE_STATEMENT TAOS_DEF_ERROR_CODE(0, 0x2614)
diff --git a/packaging/deb/makedeb.sh b/packaging/deb/makedeb.sh
index 906a227ad5..9d28b63a15 100755
--- a/packaging/deb/makedeb.sh
+++ b/packaging/deb/makedeb.sh
@@ -44,27 +44,27 @@ mkdir -p ${pkg_dir}${install_home_path}/include
#mkdir -p ${pkg_dir}${install_home_path}/init.d
mkdir -p ${pkg_dir}${install_home_path}/script
-# download taoskeeper and build
-if [ "$cpuType" = "x64" ] || [ "$cpuType" = "x86_64" ] || [ "$cpuType" = "amd64" ]; then
- arch=amd64
-elif [ "$cpuType" = "x32" ] || [ "$cpuType" = "i386" ] || [ "$cpuType" = "i686" ]; then
- arch=386
-elif [ "$cpuType" = "arm" ] || [ "$cpuType" = "aarch32" ]; then
- arch=arm
-elif [ "$cpuType" = "arm64" ] || [ "$cpuType" = "aarch64" ]; then
- arch=arm64
-else
- arch=$cpuType
-fi
+# # download taoskeeper and build
+# if [ "$cpuType" = "x64" ] || [ "$cpuType" = "x86_64" ] || [ "$cpuType" = "amd64" ]; then
+# arch=amd64
+# elif [ "$cpuType" = "x32" ] || [ "$cpuType" = "i386" ] || [ "$cpuType" = "i686" ]; then
+# arch=386
+# elif [ "$cpuType" = "arm" ] || [ "$cpuType" = "aarch32" ]; then
+# arch=arm
+# elif [ "$cpuType" = "arm64" ] || [ "$cpuType" = "aarch64" ]; then
+# arch=arm64
+# else
+# arch=$cpuType
+# fi
-echo "${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r ${arch} -e taoskeeper -t ver-${tdengine_ver}"
-echo "$top_dir=${top_dir}"
-taoskeeper_binary=`${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r $arch -e taoskeeper -t ver-${tdengine_ver}`
-echo "taoskeeper_binary: ${taoskeeper_binary}"
+# echo "${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r ${arch} -e taoskeeper -t ver-${tdengine_ver}"
+# echo "$top_dir=${top_dir}"
+# taoskeeper_binary=`${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r $arch -e taoskeeper -t ver-${tdengine_ver}`
+# echo "taoskeeper_binary: ${taoskeeper_binary}"
# copy config files
-cp $(dirname ${taoskeeper_binary})/config/taoskeeper.toml ${pkg_dir}${install_home_path}/cfg
-cp $(dirname ${taoskeeper_binary})/taoskeeper.service ${pkg_dir}${install_home_path}/cfg
+# cp $(dirname ${taoskeeper_binary})/config/taoskeeper.toml ${pkg_dir}${install_home_path}/cfg
+# cp $(dirname ${taoskeeper_binary})/taoskeeper.service ${pkg_dir}${install_home_path}/cfg
cp ${compile_dir}/../packaging/cfg/taos.cfg ${pkg_dir}${install_home_path}/cfg
cp ${compile_dir}/../packaging/cfg/taosd.service ${pkg_dir}${install_home_path}/cfg
@@ -75,7 +75,12 @@ fi
if [ -f "${compile_dir}/test/cfg/taosadapter.service" ]; then
cp ${compile_dir}/test/cfg/taosadapter.service ${pkg_dir}${install_home_path}/cfg || :
fi
-
+if [ -f "${compile_dir}/test/cfg/taoskeeper.toml" ]; then
+ cp ${compile_dir}/test/cfg/taoskeeper.toml ${pkg_dir}${install_home_path}/cfg || :
+fi
+if [ -f "${compile_dir}/test/cfg/taoskeeper.service" ]; then
+ cp ${compile_dir}/test/cfg/taoskeeper.service ${pkg_dir}${install_home_path}/cfg || :
+fi
if [ -f "${compile_dir}/../../../explorer/target/taos-explorer.service" ]; then
cp ${compile_dir}/../../../explorer/target/taos-explorer.service ${pkg_dir}${install_home_path}/cfg || :
fi
@@ -83,7 +88,7 @@ if [ -f "${compile_dir}/../../../explorer/server/example/explorer.toml" ]; then
cp ${compile_dir}/../../../explorer/server/example/explorer.toml ${pkg_dir}${install_home_path}/cfg || :
fi
-cp ${taoskeeper_binary} ${pkg_dir}${install_home_path}/bin
+# cp ${taoskeeper_binary} ${pkg_dir}${install_home_path}/bin
#cp ${compile_dir}/../packaging/deb/taosd ${pkg_dir}${install_home_path}/init.d
cp ${compile_dir}/../packaging/tools/post.sh ${pkg_dir}${install_home_path}/script
cp ${compile_dir}/../packaging/tools/preun.sh ${pkg_dir}${install_home_path}/script
@@ -104,6 +109,9 @@ cp ${compile_dir}/build/bin/taosdump ${pkg_dir}${install_home_path
if [ -f "${compile_dir}/build/bin/taosadapter" ]; then
cp ${compile_dir}/build/bin/taosadapter ${pkg_dir}${install_home_path}/bin ||:
fi
+if [ -f "${compile_dir}/build/bin/taoskeeper" ]; then
+ cp ${compile_dir}/build/bin/taoskeeper ${pkg_dir}${install_home_path}/bin ||:
+fi
if [ -f "${compile_dir}/../../../explorer/target/release/taos-explorer" ]; then
cp ${compile_dir}/../../../explorer/target/release/taos-explorer ${pkg_dir}${install_home_path}/bin ||:
@@ -185,7 +193,7 @@ else
exit 1
fi
-rm -rf ${pkg_dir}/build-taoskeeper
+# rm -rf ${pkg_dir}/build-taoskeeper
# make deb package
dpkg -b ${pkg_dir} $debname
echo "make deb package success!"
diff --git a/packaging/rpm/makerpm.sh b/packaging/rpm/makerpm.sh
index f895193b6b..091e056a79 100755
--- a/packaging/rpm/makerpm.sh
+++ b/packaging/rpm/makerpm.sh
@@ -56,24 +56,23 @@ fi
${csudo}mkdir -p ${pkg_dir}
cd ${pkg_dir}
-# download taoskeeper and build
-if [ "$cpuType" = "x64" ] || [ "$cpuType" = "x86_64" ] || [ "$cpuType" = "amd64" ]; then
- arch=amd64
-elif [ "$cpuType" = "x32" ] || [ "$cpuType" = "i386" ] || [ "$cpuType" = "i686" ]; then
- arch=386
-elif [ "$cpuType" = "arm" ] || [ "$cpuType" = "aarch32" ]; then
- arch=arm
-elif [ "$cpuType" = "arm64" ] || [ "$cpuType" = "aarch64" ]; then
- arch=arm64
-else
- arch=$cpuType
-fi
+# # download taoskeeper and build
+# if [ "$cpuType" = "x64" ] || [ "$cpuType" = "x86_64" ] || [ "$cpuType" = "amd64" ]; then
+# arch=amd64
+# elif [ "$cpuType" = "x32" ] || [ "$cpuType" = "i386" ] || [ "$cpuType" = "i686" ]; then
+# arch=386
+# elif [ "$cpuType" = "arm" ] || [ "$cpuType" = "aarch32" ]; then
+# arch=arm
+# elif [ "$cpuType" = "arm64" ] || [ "$cpuType" = "aarch64" ]; then
+# arch=arm64
+# else
+# arch=$cpuType
+# fi
-cd ${top_dir}
-echo "${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r ${arch} -e taoskeeper -t ver-${tdengine_ver}"
-taoskeeper_binary=`${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r $arch -e taoskeeper -t ver-${tdengine_ver}`
-echo "taoskeeper_binary: ${taoskeeper_binary}"
-cd ${package_dir}
+# cd ${top_dir}
+# echo "${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r ${arch} -e taoskeeper -t ver-${tdengine_ver}"
+# taoskeeper_binary=`${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r $arch -e taoskeeper -t ver-${tdengine_ver}`
+# echo "taoskeeper_binary: ${taoskeeper_binary}"
${csudo}mkdir -p BUILD BUILDROOT RPMS SOURCES SPECS SRPMS
@@ -106,4 +105,4 @@ mv ${output_dir}/TDengine-${tdengine_ver}.rpm ${output_dir}/${rpmname}
cd ..
${csudo}rm -rf ${pkg_dir}
-rm -rf ${top_dir}/build-taoskeeper
\ No newline at end of file
+# rm -rf ${top_dir}/build-taoskeeper
\ No newline at end of file
diff --git a/packaging/rpm/tdengine.spec b/packaging/rpm/tdengine.spec
index 3e23e29a40..c8a6270456 100644
--- a/packaging/rpm/tdengine.spec
+++ b/packaging/rpm/tdengine.spec
@@ -68,12 +68,12 @@ if [ -f %{_compiledir}/test/cfg/taosadapter.service ]; then
cp %{_compiledir}/test/cfg/taosadapter.service %{buildroot}%{homepath}/cfg
fi
-if [ -f %{_compiledir}/../build-taoskeeper/config/taoskeeper.toml ]; then
- cp %{_compiledir}/../build-taoskeeper/config/taoskeeper.toml %{buildroot}%{homepath}/cfg ||:
+if [ -f %{_compiledir}/test/cfg/taoskeeper.toml ]; then
+ cp %{_compiledir}/test/cfg/taoskeeper.toml %{buildroot}%{homepath}/cfg ||:
fi
-if [ -f %{_compiledir}/../build-taoskeeper/taoskeeper.service ]; then
- cp %{_compiledir}/../build-taoskeeper/taoskeeper.service %{buildroot}%{homepath}/cfg ||:
+if [ -f %{_compiledir}/test/cfg/taoskeeper.service ]; then
+ cp %{_compiledir}/test/cfg/taoskeeper.service %{buildroot}%{homepath}/cfg ||:
fi
if [ -f %{_compiledir}/../../../explorer/target/taos-explorer.service ]; then
@@ -104,8 +104,8 @@ if [ -f %{_compiledir}/../../../explorer/target/release/taos-explorer ]; then
cp %{_compiledir}/../../../explorer/target/release/taos-explorer %{buildroot}%{homepath}/bin
fi
-if [ -f %{_compiledir}/../build-taoskeeper/taoskeeper ]; then
- cp %{_compiledir}/../build-taoskeeper/taoskeeper %{buildroot}%{homepath}/bin
+if [ -f %{_compiledir}/build/bin//taoskeeper ]; then
+ cp %{_compiledir}/build/bin//taoskeeper %{buildroot}%{homepath}/bin
fi
if [ -f %{_compiledir}/build/bin/taosadapter ]; then
diff --git a/packaging/tools/makeclient.sh b/packaging/tools/makeclient.sh
index d67d436fa7..87f4f57fd3 100755
--- a/packaging/tools/makeclient.sh
+++ b/packaging/tools/makeclient.sh
@@ -282,5 +282,3 @@ else
rm -rf ${install_dir} ||:
# mv ../"$(basename ${pkg_name}).tar.gz" .
fi
-
-cd ${curr_dir}
diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c
index b311a9e06d..3683a0d454 100644
--- a/source/client/src/clientMain.c
+++ b/source/client/src/clientMain.c
@@ -2201,6 +2201,11 @@ int taos_stmt2_bind_param(TAOS_STMT2 *stmt, TAOS_STMT2_BINDV *bindv, int32_t col
if (code) {
goto out;
}
+ } else {
+ code = stmtSetTbTags2(stmt, NULL);
+ if (code) {
+ return code;
+ }
}
if (bindv->bind_cols && bindv->bind_cols[i]) {
diff --git a/source/client/src/clientStmt2.c b/source/client/src/clientStmt2.c
index 950444df52..acd118acc9 100644
--- a/source/client/src/clientStmt2.c
+++ b/source/client/src/clientStmt2.c
@@ -1012,10 +1012,10 @@ int stmtSetTbTags2(TAOS_STMT2* stmt, TAOS_STMT2_BIND* tags) {
}
SBoundColInfo* tags_info = (SBoundColInfo*)pStmt->bInfo.boundTags;
- if (tags_info->numOfBound <= 0 || tags_info->numOfCols <= 0) {
- tscWarn("no tags or cols bound in sql, will not bound tags");
- return TSDB_CODE_SUCCESS;
- }
+ // if (tags_info->numOfBound <= 0 || tags_info->numOfCols <= 0) {
+ // tscWarn("no tags or cols bound in sql, will not bound tags");
+ // return TSDB_CODE_SUCCESS;
+ // }
STableDataCxt** pDataBlock =
(STableDataCxt**)taosHashGet(pStmt->exec.pBlockHash, pStmt->bInfo.tbFName, strlen(pStmt->bInfo.tbFName));
diff --git a/source/dnode/mgmt/mgmt_dnode/inc/dmInt.h b/source/dnode/mgmt/mgmt_dnode/inc/dmInt.h
index 2108a097ee..bfe4cd165e 100644
--- a/source/dnode/mgmt/mgmt_dnode/inc/dmInt.h
+++ b/source/dnode/mgmt/mgmt_dnode/inc/dmInt.h
@@ -69,6 +69,7 @@ int32_t dmStartStatusThread(SDnodeMgmt *pMgmt);
int32_t dmStartConfigThread(SDnodeMgmt *pMgmt);
int32_t dmStartStatusInfoThread(SDnodeMgmt *pMgmt);
void dmStopStatusThread(SDnodeMgmt *pMgmt);
+void dmStopConfigThread(SDnodeMgmt *pMgmt);
void dmStopStatusInfoThread(SDnodeMgmt *pMgmt);
int32_t dmStartNotifyThread(SDnodeMgmt *pMgmt);
void dmStopNotifyThread(SDnodeMgmt *pMgmt);
diff --git a/source/dnode/mgmt/mgmt_dnode/src/dmInt.c b/source/dnode/mgmt/mgmt_dnode/src/dmInt.c
index b58c1a216d..b3b1df314a 100644
--- a/source/dnode/mgmt/mgmt_dnode/src/dmInt.c
+++ b/source/dnode/mgmt/mgmt_dnode/src/dmInt.c
@@ -52,6 +52,7 @@ static void dmStopMgmt(SDnodeMgmt *pMgmt) {
dmStopMonitorThread(pMgmt);
dmStopAuditThread(pMgmt);
dmStopStatusThread(pMgmt);
+ dmStopConfigThread(pMgmt);
dmStopStatusInfoThread(pMgmt);
#if defined(TD_ENTERPRISE)
dmStopNotifyThread(pMgmt);
diff --git a/source/dnode/mgmt/mgmt_dnode/src/dmWorker.c b/source/dnode/mgmt/mgmt_dnode/src/dmWorker.c
index 8f890f6805..ef4e76031d 100644
--- a/source/dnode/mgmt/mgmt_dnode/src/dmWorker.c
+++ b/source/dnode/mgmt/mgmt_dnode/src/dmWorker.c
@@ -343,7 +343,7 @@ int32_t dmStartConfigThread(SDnodeMgmt *pMgmt) {
int32_t code = 0;
TdThreadAttr thAttr;
(void)taosThreadAttrInit(&thAttr);
- (void)taosThreadAttrSetDetachState(&thAttr, PTHREAD_CREATE_DETACHED);
+ (void)taosThreadAttrSetDetachState(&thAttr, PTHREAD_CREATE_JOINABLE);
if (taosThreadCreate(&pMgmt->configThread, &thAttr, dmConfigThreadFp, pMgmt) != 0) {
code = TAOS_SYSTEM_ERROR(errno);
dError("failed to create config thread since %s", tstrerror(code));
@@ -378,6 +378,13 @@ void dmStopStatusThread(SDnodeMgmt *pMgmt) {
}
}
+void dmStopConfigThread(SDnodeMgmt *pMgmt) {
+ if (taosCheckPthreadValid(pMgmt->configThread)) {
+ (void)taosThreadJoin(pMgmt->configThread, NULL);
+ taosThreadClear(&pMgmt->configThread);
+ }
+}
+
void dmStopStatusInfoThread(SDnodeMgmt *pMgmt) {
if (taosCheckPthreadValid(pMgmt->statusInfoThread)) {
(void)taosThreadJoin(pMgmt->statusInfoThread, NULL);
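As context for the detached-to-joinable switch above, here is a minimal, hypothetical pthread sketch (not TDengine code; names like `configLoop` are made up) showing why a thread must be created joinable if it is to be waited on during shutdown, which is what the new `dmStopConfigThread()` does:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* A thread created joinable can be waited on during shutdown, so its
 * resources are released deterministically instead of racing with exit. */
static volatile bool running = true;

static void *configLoop(void *arg) {
  (void)arg;
  while (running) {
    /* poll for config changes ... */
    usleep(100 * 1000);
  }
  return NULL;
}

int main(void) {
  pthread_t th;
  pthread_attr_t attr;
  pthread_attr_init(&attr);
  pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
  if (pthread_create(&th, &attr, configLoop, NULL) != 0) return 1;
  pthread_attr_destroy(&attr);

  sleep(1);               /* simulated work */
  running = false;        /* signal shutdown */
  pthread_join(th, NULL); /* analogous to the join in dmStopConfigThread() */
  printf("config thread joined cleanly\n");
  return 0;
}
```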
diff --git a/source/dnode/mnode/impl/src/mndUser.c b/source/dnode/mnode/impl/src/mndUser.c
index e1518d3752..5b2a5fa8aa 100644
--- a/source/dnode/mnode/impl/src/mndUser.c
+++ b/source/dnode/mnode/impl/src/mndUser.c
@@ -1805,12 +1805,21 @@ _OVER:
TAOS_RETURN(code);
}
-static int32_t mndCheckPasswordFmt(const char *pwd) {
- int32_t len = strlen(pwd);
- if (len < TSDB_PASSWORD_MIN_LEN || len > TSDB_PASSWORD_MAX_LEN) {
+static int32_t mndCheckPasswordMinLen(const char *pwd, int32_t len) {
+ if (len < TSDB_PASSWORD_MIN_LEN) {
return -1;
}
+ return 0;
+}
+static int32_t mndCheckPasswordMaxLen(const char *pwd, int32_t len) {
+ if (len > TSDB_PASSWORD_MAX_LEN) {
+ return -1;
+ }
+ return 0;
+}
+
+static int32_t mndCheckPasswordFmt(const char *pwd, int32_t len) {
if (strcmp(pwd, "taosdata") == 0) {
return 0;
}
@@ -1875,14 +1884,17 @@ static int32_t mndProcessCreateUserReq(SRpcMsg *pReq) {
TAOS_CHECK_GOTO(TSDB_CODE_MND_INVALID_USER_FORMAT, &lino, _OVER);
}
- if (mndCheckPasswordFmt(createReq.pass) != 0) {
- TAOS_CHECK_GOTO(TSDB_CODE_MND_INVALID_PASS_FORMAT, &lino, _OVER);
- }
-
+ int32_t len = strlen(createReq.pass);
if (createReq.isImport != 1) {
- if (strlen(createReq.pass) >= TSDB_PASSWORD_LEN) {
+ if (mndCheckPasswordMinLen(createReq.pass, len) != 0) {
+ TAOS_CHECK_GOTO(TSDB_CODE_PAR_PASSWD_TOO_SHORT_OR_EMPTY, &lino, _OVER);
+ }
+ if (mndCheckPasswordMaxLen(createReq.pass, len) != 0) {
TAOS_CHECK_GOTO(TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG, &lino, _OVER);
}
+ if (mndCheckPasswordFmt(createReq.pass, len) != 0) {
+ TAOS_CHECK_GOTO(TSDB_CODE_MND_INVALID_PASS_FORMAT, &lino, _OVER);
+ }
}
code = mndAcquireUser(pMnode, createReq.user, &pUser);
@@ -2364,8 +2376,17 @@ static int32_t mndProcessAlterUserReq(SRpcMsg *pReq) {
TAOS_CHECK_GOTO(TSDB_CODE_MND_INVALID_USER_FORMAT, &lino, _OVER);
}
- if (TSDB_ALTER_USER_PASSWD == alterReq.alterType && mndCheckPasswordFmt(alterReq.pass) != 0) {
- TAOS_CHECK_GOTO(TSDB_CODE_MND_INVALID_PASS_FORMAT, &lino, _OVER);
+ if (TSDB_ALTER_USER_PASSWD == alterReq.alterType) {
+ int32_t len = strlen(alterReq.pass);
+ if (mndCheckPasswordMinLen(alterReq.pass, len) != 0) {
+ TAOS_CHECK_GOTO(TSDB_CODE_PAR_PASSWD_TOO_SHORT_OR_EMPTY, &lino, _OVER);
+ }
+ if (mndCheckPasswordMaxLen(alterReq.pass, len) != 0) {
+ TAOS_CHECK_GOTO(TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG, &lino, _OVER);
+ }
+ if (mndCheckPasswordFmt(alterReq.pass, len) != 0) {
+ TAOS_CHECK_GOTO(TSDB_CODE_MND_INVALID_PASS_FORMAT, &lino, _OVER);
+ }
}
TAOS_CHECK_GOTO(mndAcquireUser(pMnode, alterReq.user, &pUser), &lino, _OVER);
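For readers skimming the patch, a standalone sketch of the reordered password validation follows; the constants and messages below are simplified stand-ins for the real `TSDB_PASSWORD_*` macros and error codes, and the authoritative checks remain the `mndCheckPassword*` functions above:

```c
#include <stdio.h>
#include <string.h>

#define PASSWORD_MIN_LEN 8    /* hypothetical stand-in for TSDB_PASSWORD_MIN_LEN */
#define PASSWORD_MAX_LEN 255  /* hypothetical stand-in for TSDB_PASSWORD_MAX_LEN */

/* Mirrors the new ordering: reject too-short passwords first (0x80002611),
 * then too-long ones (0x80002610), and only then apply the format rules. */
static const char *checkPassword(const char *pwd) {
  size_t len = strlen(pwd);
  if (len < PASSWORD_MIN_LEN) return "Password too short or empty";
  if (len > PASSWORD_MAX_LEN) return "Name or password too long";
  /* character/format rules elided; mndCheckPasswordFmt() covers them server-side */
  return NULL;
}

int main(void) {
  const char *cases[] = {"", "short", "LongEnough1!"};
  for (int i = 0; i < 3; ++i) {
    const char *err = checkPassword(cases[i]);
    printf("%-14s -> %s\n", cases[i], err ? err : "ok");
  }
  return 0;
}
```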
diff --git a/source/libs/executor/src/forecastoperator.c b/source/libs/executor/src/forecastoperator.c
index a56b0dd214..2985e5e000 100644
--- a/source/libs/executor/src/forecastoperator.c
+++ b/source/libs/executor/src/forecastoperator.c
@@ -72,17 +72,20 @@ static FORCE_INLINE int32_t forecastEnsureBlockCapacity(SSDataBlock* pBlock, int
return TSDB_CODE_SUCCESS;
}
-static int32_t forecastCacheBlock(SForecastSupp* pSupp, SSDataBlock* pBlock) {
- if (pSupp->cachedRows > ANAL_FORECAST_MAX_ROWS) {
- return TSDB_CODE_ANA_ANODE_TOO_MANY_ROWS;
- }
-
- int32_t code = TSDB_CODE_SUCCESS;
- int32_t lino = 0;
+static int32_t forecastCacheBlock(SForecastSupp* pSupp, SSDataBlock* pBlock, const char* id) {
+ int32_t code = TSDB_CODE_SUCCESS;
+ int32_t lino = 0;
SAnalyticBuf* pBuf = &pSupp->analBuf;
- qDebug("block:%d, %p rows:%" PRId64, pSupp->numOfBlocks, pBlock, pBlock->info.rows);
+ if (pSupp->cachedRows > ANAL_FORECAST_MAX_ROWS) {
+ code = TSDB_CODE_ANA_ANODE_TOO_MANY_ROWS;
+ qError("%s rows:%" PRId64 " for forecast cache, error happens, code:%s, upper limit:%d", id, pSupp->cachedRows,
+ tstrerror(code), ANAL_FORECAST_MAX_ROWS);
+ return code;
+ }
+
pSupp->numOfBlocks++;
+ qDebug("%s block:%d, %p rows:%" PRId64, id, pSupp->numOfBlocks, pBlock, pBlock->info.rows);
for (int32_t j = 0; j < pBlock->info.rows; ++j) {
SColumnInfoData* pValCol = taosArrayGet(pBlock->pDataBlock, pSupp->inputValSlot);
@@ -98,10 +101,16 @@ static int32_t forecastCacheBlock(SForecastSupp* pSupp, SSDataBlock* pBlock) {
pSupp->numOfRows++;
code = taosAnalBufWriteColData(pBuf, 0, TSDB_DATA_TYPE_TIMESTAMP, &ts);
- if (TSDB_CODE_SUCCESS != code) return code;
+ if (TSDB_CODE_SUCCESS != code) {
+ qError("%s failed to write ts in buf, code:%s", id, tstrerror(code));
+ return code;
+ }
code = taosAnalBufWriteColData(pBuf, 1, valType, val);
- if (TSDB_CODE_SUCCESS != code) return code;
+ if (TSDB_CODE_SUCCESS != code) {
+ qError("%s failed to write val in buf, code:%s", id, tstrerror(code));
+ return code;
+ }
}
return 0;
@@ -394,7 +403,7 @@ static int32_t forecastNext(SOperatorInfo* pOperator, SSDataBlock** ppRes) {
pSupp->cachedRows += pBlock->info.rows;
qDebug("%s group:%" PRId64 ", blocks:%d, rows:%" PRId64 ", total rows:%" PRId64, pId, pSupp->groupId, numOfBlocks,
pBlock->info.rows, pSupp->cachedRows);
- code = forecastCacheBlock(pSupp, pBlock);
+ code = forecastCacheBlock(pSupp, pBlock, pId);
QUERY_CHECK_CODE(code, lino, _end);
} else {
qDebug("%s group:%" PRId64 ", read finish for new group coming, blocks:%d", pId, pSupp->groupId, numOfBlocks);
@@ -405,7 +414,7 @@ static int32_t forecastNext(SOperatorInfo* pOperator, SSDataBlock** ppRes) {
pSupp->cachedRows = pBlock->info.rows;
qDebug("%s group:%" PRId64 ", new group, rows:%" PRId64 ", total rows:%" PRId64, pId, pSupp->groupId,
pBlock->info.rows, pSupp->cachedRows);
- code = forecastCacheBlock(pSupp, pBlock);
+ code = forecastCacheBlock(pSupp, pBlock, pId);
QUERY_CHECK_CODE(code, lino, _end);
}
diff --git a/source/libs/index/src/indexFilter.c b/source/libs/index/src/indexFilter.c
index 02e5bd34a6..1d1bc66414 100644
--- a/source/libs/index/src/indexFilter.c
+++ b/source/libs/index/src/indexFilter.c
@@ -326,7 +326,7 @@ static int32_t sifInitParam(SNode *node, SIFParam *param, SIFCtx *ctx) {
indexError("invalid length for node:%p, length: %d", node, LIST_LENGTH(nl->pNodeList));
SIF_ERR_RET(TSDB_CODE_QRY_INVALID_INPUT);
}
- SIF_ERR_RET(scalarGenerateSetFromList((void **)&param->pFilter, node, nl->node.resType.type));
+ SIF_ERR_RET(scalarGenerateSetFromList((void **)&param->pFilter, node, nl->node.resType.type, 0));
if (taosHashPut(ctx->pRes, &node, POINTER_BYTES, param, sizeof(*param))) {
taosHashCleanup(param->pFilter);
param->pFilter = NULL;
diff --git a/source/libs/parser/src/parAstCreater.c b/source/libs/parser/src/parAstCreater.c
index a13472620b..fa656667af 100644
--- a/source/libs/parser/src/parAstCreater.c
+++ b/source/libs/parser/src/parAstCreater.c
@@ -116,7 +116,7 @@ static bool checkPassword(SAstCreateContext* pCxt, const SToken* pPasswordToken,
strncpy(pPassword, pPasswordToken->z, pPasswordToken->n);
(void)strdequote(pPassword);
if (strtrim(pPassword) <= 0) {
- pCxt->errCode = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_PASSWD_EMPTY);
+ pCxt->errCode = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_PASSWD_TOO_SHORT_OR_EMPTY);
} else if (invalidPassword(pPassword)) {
pCxt->errCode = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_PASSWD);
}
diff --git a/source/libs/parser/src/parInsertUtil.c b/source/libs/parser/src/parInsertUtil.c
index 502dbb57dd..ed1f498a32 100644
--- a/source/libs/parser/src/parInsertUtil.c
+++ b/source/libs/parser/src/parInsertUtil.c
@@ -101,10 +101,13 @@ int32_t insCreateSName(SName* pName, SToken* pTableName, int32_t acctId, const c
return buildInvalidOperationMsg(pMsgBuf, msg1);
}
} else { // get current DB name first, and then set it into path
- if (pTableName->n >= TSDB_TABLE_NAME_LEN) {
+ char tbname[TSDB_TABLE_FNAME_LEN] = {0};
+ strncpy(tbname, pTableName->z, pTableName->n);
+ int32_t tbLen = strdequote(tbname);
+ if (tbLen >= TSDB_TABLE_NAME_LEN) {
return buildInvalidOperationMsg(pMsgBuf, msg1);
}
- if (pTableName->n == 0) {
+ if (tbLen == 0) {
return generateSyntaxErrMsg(pMsgBuf, TSDB_CODE_PAR_INVALID_IDENTIFIER_NAME, "invalid table name");
}
diff --git a/source/libs/parser/src/parUtil.c b/source/libs/parser/src/parUtil.c
index 9706644324..0cda428487 100644
--- a/source/libs/parser/src/parUtil.c
+++ b/source/libs/parser/src/parUtil.c
@@ -57,8 +57,8 @@ static char* getSyntaxErrFormat(int32_t errCode) {
return "Invalid tag name: %s";
case TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG:
return "Name or password too long";
- case TSDB_CODE_PAR_PASSWD_EMPTY:
- return "Password can not be empty";
+ case TSDB_CODE_PAR_PASSWD_TOO_SHORT_OR_EMPTY:
+ return "Password too short or empty";
case TSDB_CODE_PAR_INVALID_PORT:
return "Port should be an integer that is less than 65535 and greater than 0";
case TSDB_CODE_PAR_INVALID_ENDPOINT:
diff --git a/source/libs/scalar/inc/sclInt.h b/source/libs/scalar/inc/sclInt.h
index b04e26ac5d..8caa3edf42 100644
--- a/source/libs/scalar/inc/sclInt.h
+++ b/source/libs/scalar/inc/sclInt.h
@@ -143,7 +143,7 @@ int32_t sclConvertToTsValueNode(int8_t precision, SValueNode* valueNode);
#define GET_PARAM_PRECISON(_c) ((_c)->columnData->info.precision)
void sclFreeParam(SScalarParam* param);
-int32_t doVectorCompare(SScalarParam* pLeft, SScalarParam* pRight, SScalarParam *pOut, int32_t startIndex, int32_t numOfRows,
+int32_t doVectorCompare(SScalarParam* pLeft, SScalarParam *pLeftVar, SScalarParam* pRight, SScalarParam *pOut, int32_t startIndex, int32_t numOfRows,
int32_t _ord, int32_t optr);
int32_t vectorCompareImpl(SScalarParam* pLeft, SScalarParam* pRight, SScalarParam *pOut, int32_t startIndex, int32_t numOfRows,
int32_t _ord, int32_t optr);
diff --git a/source/libs/scalar/src/filter.c b/source/libs/scalar/src/filter.c
index a149384163..b329bbbd44 100644
--- a/source/libs/scalar/src/filter.c
+++ b/source/libs/scalar/src/filter.c
@@ -1298,7 +1298,6 @@ int32_t fltAddGroupUnitFromNode(void *pContext, SFilterInfo *info, SNode *tree,
if (node->opType == OP_TYPE_IN && (!IS_VAR_DATA_TYPE(type))) {
SNodeListNode *listNode = (SNodeListNode *)node->pRight;
- SListCell *cell = listNode->pNodeList->pHead;
SScalarParam out = {.columnData = taosMemoryCalloc(1, sizeof(SColumnInfoData))};
if (out.columnData == NULL) {
@@ -1308,8 +1307,9 @@ int32_t fltAddGroupUnitFromNode(void *pContext, SFilterInfo *info, SNode *tree,
out.columnData->info.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes; // reserved space for simple_copy
int32_t overflowCount = 0;
- for (int32_t i = 0; i < listNode->pNodeList->length; ++i) {
- SValueNode *valueNode = (SValueNode *)cell->pNode;
+ SNode* nodeItem = NULL;
+ FOREACH(nodeItem, listNode->pNodeList) {
+ SValueNode *valueNode = (SValueNode *)nodeItem;
if (valueNode->node.resType.type != type) {
int32_t overflow = 0;
code = sclConvertValueToSclParam(valueNode, &out, &overflow);
@@ -1319,7 +1319,6 @@ int32_t fltAddGroupUnitFromNode(void *pContext, SFilterInfo *info, SNode *tree,
}
if (overflow) {
- cell = cell->pNext;
++overflowCount;
continue;
}
@@ -1358,8 +1357,6 @@ int32_t fltAddGroupUnitFromNode(void *pContext, SFilterInfo *info, SNode *tree,
code = terrno;
break;
}
-
- cell = cell->pNext;
}
if(overflowCount == listNode->pNodeList->length) {
ctx->ignore = true;
@@ -2228,7 +2225,7 @@ int32_t fltInitValFieldData(SFilterInfo *info) {
}
if (unit->compare.optr == OP_TYPE_IN) {
- FLT_ERR_RET(scalarGenerateSetFromList((void **)&fi->data, fi->desc, type));
+ FLT_ERR_RET(scalarGenerateSetFromList((void **)&fi->data, fi->desc, type, 0));
if (fi->data == NULL) {
fltError("failed to convert in param");
FLT_ERR_RET(TSDB_CODE_APP_ERROR);
@@ -4765,7 +4762,7 @@ EDealRes fltReviseRewriter(SNode **pNode, void *pContext) {
return DEAL_RES_CONTINUE;
}
- if (node->opType == OP_TYPE_NOT_IN || node->opType == OP_TYPE_NOT_LIKE || node->opType > OP_TYPE_IS_NOT_NULL ||
+ if (node->opType == OP_TYPE_NOT_LIKE || node->opType > OP_TYPE_IS_NOT_NULL ||
node->opType == OP_TYPE_NOT_EQUAL) {
stat->scalarMode = true;
return DEAL_RES_CONTINUE;
@@ -4839,7 +4836,7 @@ EDealRes fltReviseRewriter(SNode **pNode, void *pContext) {
}
}
- if (OP_TYPE_IN == node->opType && QUERY_NODE_NODE_LIST != nodeType(node->pRight)) {
+ if ((OP_TYPE_IN == node->opType || OP_TYPE_NOT_IN == node->opType) && QUERY_NODE_NODE_LIST != nodeType(node->pRight)) {
fltError("invalid IN operator node, rightType:%d", nodeType(node->pRight));
stat->code = TSDB_CODE_APP_ERROR;
return DEAL_RES_ERROR;
@@ -4847,25 +4844,37 @@ EDealRes fltReviseRewriter(SNode **pNode, void *pContext) {
SColumnNode *refNode = (SColumnNode *)node->pLeft;
SExprNode *exprNode = NULL;
- if (OP_TYPE_IN != node->opType) {
+ if (OP_TYPE_IN != node->opType && OP_TYPE_NOT_IN != node->opType) {
SValueNode *valueNode = (SValueNode *)node->pRight;
if (FILTER_GET_FLAG(stat->info->options, FLT_OPTION_TIMESTAMP) &&
TSDB_DATA_TYPE_UBIGINT == valueNode->node.resType.type && valueNode->datum.u <= INT64_MAX) {
valueNode->node.resType.type = TSDB_DATA_TYPE_BIGINT;
}
exprNode = &valueNode->node;
+ int32_t type = vectorGetConvertType(refNode->node.resType.type, exprNode->resType.type);
+ if (0 != type && type != refNode->node.resType.type) {
+ stat->scalarMode = true;
+ }
} else {
SNodeListNode *listNode = (SNodeListNode *)node->pRight;
- if (LIST_LENGTH(listNode->pNodeList) > 10) {
+ if (LIST_LENGTH(listNode->pNodeList) > 10 || OP_TYPE_NOT_IN == node->opType) {
stat->scalarMode = true;
- return DEAL_RES_CONTINUE;
}
+ int32_t type = refNode->node.resType.type;
exprNode = &listNode->node;
- }
- int32_t type = vectorGetConvertType(refNode->node.resType.type, exprNode->resType.type);
- if (0 != type && type != refNode->node.resType.type) {
- stat->scalarMode = true;
- return DEAL_RES_CONTINUE;
+ SNode* nodeItem = NULL;
+ FOREACH(nodeItem, listNode->pNodeList) {
+ SValueNode *valueNode = (SValueNode *)nodeItem;
+ int32_t tmp = vectorGetConvertType(type, valueNode->node.resType.type);
+ if (tmp != 0){
+ stat->scalarMode = true;
+ type = tmp;
+ }
+
+ }
+ if (IS_NUMERIC_TYPE(type)){
+ exprNode->resType.type = type;
+ }
}
}
@@ -5016,11 +5025,11 @@ int32_t fltSclBuildRangePoints(SFltSclOperator *oper, SArray *points) {
}
case OP_TYPE_IN: {
SNodeListNode *listNode = (SNodeListNode *)oper->valNode;
- SListCell *cell = listNode->pNodeList->pHead;
SFltSclDatum minDatum = {.kind = FLT_SCL_DATUM_KIND_INT64, .i = INT64_MAX, .type = oper->colNode->node.resType};
SFltSclDatum maxDatum = {.kind = FLT_SCL_DATUM_KIND_INT64, .i = INT64_MIN, .type = oper->colNode->node.resType};
- for (int32_t i = 0; i < listNode->pNodeList->length; ++i) {
- SValueNode *valueNode = (SValueNode *)cell->pNode;
+ SNode* nodeItem = NULL;
+ FOREACH(nodeItem, listNode->pNodeList) {
+ SValueNode *valueNode = (SValueNode *)nodeItem;
SFltSclDatum valDatum;
FLT_ERR_RET(fltSclBuildDatumFromValueNode(&valDatum, valueNode));
if(valueNode->node.resType.type == TSDB_DATA_TYPE_FLOAT || valueNode->node.resType.type == TSDB_DATA_TYPE_DOUBLE) {
@@ -5030,7 +5039,6 @@ int32_t fltSclBuildRangePoints(SFltSclOperator *oper, SArray *points) {
minDatum.i = TMIN(minDatum.i, valDatum.i);
maxDatum.i = TMAX(maxDatum.i, valDatum.i);
}
- cell = cell->pNext;
}
SFltSclPoint startPt = {.start = true, .excl = false, .val = minDatum};
SFltSclPoint endPt = {.start = false, .excl = false, .val = maxDatum};
diff --git a/source/libs/scalar/src/scalar.c b/source/libs/scalar/src/scalar.c
index 6bd08f9ed2..9bab697772 100644
--- a/source/libs/scalar/src/scalar.c
+++ b/source/libs/scalar/src/scalar.c
@@ -116,7 +116,8 @@ _return:
SCL_RET(code);
}
-int32_t scalarGenerateSetFromList(void **data, void *pNode, uint32_t type) {
+// processType = 0 means all types, 1 means number, 2 means var, 3 means float, 4 means var & integer
+int32_t scalarGenerateSetFromList(void **data, void *pNode, uint32_t type, int8_t processType) {
SHashObj *pObj = taosHashInit(256, taosGetDefaultHashFunction(type), true, false);
if (NULL == pObj) {
sclError("taosHashInit failed, size:%d", 256);
@@ -127,7 +128,6 @@ int32_t scalarGenerateSetFromList(void **data, void *pNode, uint32_t type) {
int32_t code = 0;
SNodeListNode *nodeList = (SNodeListNode *)pNode;
- SListCell *cell = nodeList->pNodeList->pHead;
SScalarParam out = {.columnData = taosMemoryCalloc(1, sizeof(SColumnInfoData))};
if (out.columnData == NULL) {
SCL_ERR_JRET(terrno);
@@ -135,8 +135,14 @@ int32_t scalarGenerateSetFromList(void **data, void *pNode, uint32_t type) {
int32_t len = 0;
void *buf = NULL;
- for (int32_t i = 0; i < nodeList->pNodeList->length; ++i) {
- SValueNode *valueNode = (SValueNode *)cell->pNode;
+ SNode* nodeItem = NULL;
+ FOREACH(nodeItem, nodeList->pNodeList) {
+ SValueNode *valueNode = (SValueNode *)nodeItem;
+ if ((IS_VAR_DATA_TYPE(valueNode->node.resType.type) && (processType == 1 || processType == 3)) ||
+ (IS_INTEGER_TYPE(valueNode->node.resType.type) && (processType == 2 || processType == 3)) ||
+ (IS_FLOAT_TYPE(valueNode->node.resType.type) && (processType == 2 || processType == 4))) {
+ continue;
+ }
if (valueNode->node.resType.type != type) {
out.columnData->info.type = type;
@@ -158,7 +164,6 @@ int32_t scalarGenerateSetFromList(void **data, void *pNode, uint32_t type) {
}
if (overflow) {
- cell = cell->pNext;
continue;
}
@@ -184,7 +189,6 @@ int32_t scalarGenerateSetFromList(void **data, void *pNode, uint32_t type) {
}
colInfoDataCleanup(out.columnData, out.numOfRows);
- cell = cell->pNext;
}
*data = pObj;
@@ -230,6 +234,11 @@ void sclFreeParam(SScalarParam *param) {
taosHashCleanup(param->pHashFilter);
param->pHashFilter = NULL;
}
+
+ if (param->pHashFilterOthers != NULL) {
+ taosHashCleanup(param->pHashFilterOthers);
+ param->pHashFilterOthers = NULL;
+ }
}
int32_t sclCopyValueNodeValue(SValueNode *pNode, void **res) {
@@ -369,17 +378,37 @@ int32_t sclInitParam(SNode *node, SScalarParam *param, SScalarCtx *ctx, int32_t
SCL_RET(TSDB_CODE_QRY_INVALID_INPUT);
}
- int32_t type = vectorGetConvertType(ctx->type.selfType, ctx->type.peerType);
- if (type == 0) {
- type = nodeList->node.resType.type;
+ int32_t type = ctx->type.selfType;
+ SNode* nodeItem = NULL;
+ FOREACH(nodeItem, nodeList->pNodeList) {
+ SValueNode *valueNode = (SValueNode *)nodeItem;
+ int32_t tmp = vectorGetConvertType(type, valueNode->node.resType.type);
+ if (tmp != 0){
+ type = tmp;
+ }
+
+ }
+ if (IS_NUMERIC_TYPE(type)){
+ ctx->type.peerType = type;
+ }
+ type = ctx->type.peerType;
+ if (IS_VAR_DATA_TYPE(ctx->type.selfType) && IS_NUMERIC_TYPE(type)){
+ SCL_ERR_RET(scalarGenerateSetFromList((void **)&param->pHashFilter, node, type, 1));
+ SCL_ERR_RET(scalarGenerateSetFromList((void **)&param->pHashFilterOthers, node, ctx->type.selfType, 2));
+ } else if (IS_INTEGER_TYPE(ctx->type.selfType) && IS_FLOAT_TYPE(type)){
+ SCL_ERR_RET(scalarGenerateSetFromList((void **)&param->pHashFilter, node, type, 3));
+ SCL_ERR_RET(scalarGenerateSetFromList((void **)&param->pHashFilterOthers, node, ctx->type.selfType, 4));
+ } else {
+ SCL_ERR_RET(scalarGenerateSetFromList((void **)&param->pHashFilter, node, type, 0));
}
- SCL_ERR_RET(scalarGenerateSetFromList((void **)&param->pHashFilter, node, type));
param->hashValueType = type;
param->colAlloced = true;
if (taosHashPut(ctx->pRes, &node, POINTER_BYTES, param, sizeof(*param))) {
taosHashCleanup(param->pHashFilter);
param->pHashFilter = NULL;
+ taosHashCleanup(param->pHashFilterOthers);
+ param->pHashFilterOthers = NULL;
sclError("taosHashPut nodeList failed, size:%d", (int32_t)sizeof(*param));
return terrno;
}
@@ -512,14 +541,15 @@ int32_t sclInitParamList(SScalarParam **pParams, SNodeList *pParamList, SScalarC
}
if (0 == *rowNum) {
- taosMemoryFreeClear(paramList);
+ sclFreeParamList(paramList, *paramNum);
+ paramList = NULL;
}
*pParams = paramList;
return TSDB_CODE_SUCCESS;
_return:
- taosMemoryFreeClear(paramList);
+ sclFreeParamList(paramList, *paramNum);
SCL_RET(code);
}
@@ -588,7 +618,6 @@ int32_t sclInitOperatorParams(SScalarParam **pParams, SOperatorNode *node, SScal
SCL_ERR_JRET(sclInitParam(node->pLeft, &paramList[0], ctx, rowNum));
setTzCharset(&paramList[0], node->tz, node->charsetCxt);
if (paramNum > 1) {
- TSWAP(ctx->type.selfType, ctx->type.peerType);
SCL_ERR_JRET(sclInitParam(node->pRight, &paramList[1], ctx, rowNum));
setTzCharset(&paramList[1], node->tz, node->charsetCxt);
}
diff --git a/source/libs/scalar/src/sclvector.c b/source/libs/scalar/src/sclvector.c
index 81ce23cb10..5b432535fd 100644
--- a/source/libs/scalar/src/sclvector.c
+++ b/source/libs/scalar/src/sclvector.c
@@ -1009,28 +1009,29 @@ int32_t vectorConvertSingleColImpl(const SScalarParam *pIn, SScalarParam *pOut,
}
int8_t gConvertTypes[TSDB_DATA_TYPE_MAX][TSDB_DATA_TYPE_MAX] = {
- /* NULL BOOL TINY SMAL INT BIG FLOA DOUB VARC TIME NCHA UTIN USMA UINT UBIG JSON VARB DECI BLOB MEDB GEOM*/
- /*NULL*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- /*BOOL*/ 0, 0, 2, 3, 4, 5, 6, 7, 5, 9, 7, 11, 12, 13, 14, 0, -1, 0, 0, 0, -1,
- /*TINY*/ 0, 0, 0, 3, 4, 5, 6, 7, 5, 9, 7, 3, 4, 5, 7, 0, -1, 0, 0, 0, -1,
- /*SMAL*/ 0, 0, 0, 0, 4, 5, 6, 7, 5, 9, 7, 3, 4, 5, 7, 0, -1, 0, 0, 0, -1,
- /*INT */ 0, 0, 0, 0, 0, 5, 6, 7, 5, 9, 7, 4, 4, 5, 7, 0, -1, 0, 0, 0, -1,
- /*BIGI*/ 0, 0, 0, 0, 0, 0, 6, 7, 5, 9, 7, 5, 5, 5, 7, 0, -1, 0, 0, 0, -1,
- /*FLOA*/ 0, 0, 0, 0, 0, 0, 0, 7, 7, 6, 7, 6, 6, 6, 6, 0, -1, 0, 0, 0, -1,
- /*DOUB*/ 0, 0, 0, 0, 0, 0, 0, 0, 7, 7, 7, 7, 7, 7, 7, 0, -1, 0, 0, 0, -1,
- /*VARC*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 8, 7, 7, 7, 7, 0, 16, 0, 0, 0, 20,
- /*TIME*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 7, 0, -1, 0, 0, 0, -1,
- /*NCHA*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 7, 7, 7, 0, 16, 0, 0, 0, -1,
- /*UTIN*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 13, 14, 0, -1, 0, 0, 0, -1,
- /*USMA*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 14, 0, -1, 0, 0, 0, -1,
- /*UINT*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14, 0, -1, 0, 0, 0, -1,
- /*UBIG*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
- /*JSON*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
- /*VARB*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1,-1, -1,
- /*DECI*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
- /*BLOB*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
- /*MEDB*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
- /*GEOM*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0};
+ /*NULL BOOL TINY SMAL INT BIG FLOA DOUB VARC TIME NCHA UTIN USMA UINT UBIG JSON VARB DECI BLOB MEDB GEOM*/
+ /*NULL*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ /*BOOL*/ 0, 0, 2, 3, 4, 5, 6, 7, 5, 9, 5, 11, 12, 13, 14, 0, -1, 0, 0, 0, -1,
+ /*TINY*/ 0, 0, 0, 3, 4, 5, 6, 7, 5, 9, 5, 3, 4, 5, 7, 0, -1, 0, 0, 0, -1,
+ /*SMAL*/ 0, 0, 0, 0, 4, 5, 6, 7, 5, 9, 5, 3, 4, 5, 7, 0, -1, 0, 0, 0, -1,
+ /*INT */ 0, 0, 0, 0, 0, 5, 6, 7, 5, 9, 5, 4, 4, 5, 7, 0, -1, 0, 0, 0, -1,
+ /*BIGI*/ 0, 0, 0, 0, 0, 0, 6, 7, 5, 9, 5, 5, 5, 5, 7, 0, -1, 0, 0, 0, -1,
+ /*FLOA*/ 0, 0, 0, 0, 0, 0, 0, 7, 6, 6, 6, 6, 6, 6, 6, 0, -1, 0, 0, 0, -1,
+ /*DOUB*/ 0, 0, 0, 0, 0, 0, 0, 0, 7, 7, 7, 7, 7, 7, 7, 0, -1, 0, 0, 0, -1,
+ /*VARC*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 8, 7, 7, 7, 7, 0, 16, 0, 0, 0, 20,
+ /*TIME*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 7, 0, -1, 0, 0, 0, -1,
+ /*NCHA*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 7, 7, 7, 0, 16, 0, 0, 0, -1,
+ /*UTIN*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 13, 14, 0, -1, 0, 0, 0, -1,
+ /*USMA*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 14, 0, -1, 0, 0, 0, -1,
+ /*UINT*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14, 0, -1, 0, 0, 0, -1,
+ /*UBIG*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
+ /*JSON*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
+ /*VARB*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, -1, -1,
+ /*DECI*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
+ /*BLOB*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
+ /*MEDB*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1,
+ /*GEOM*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0
+};
int8_t gDisplyTypes[TSDB_DATA_TYPE_MAX][TSDB_DATA_TYPE_MAX] = {
/*NULL BOOL TINY SMAL INT BIGI FLOA DOUB VARC TIM NCHA UTIN USMA UINT UBIG JSON VARB DECI BLOB MEDB GEOM*/
@@ -1071,6 +1072,9 @@ int32_t vectorGetConvertType(int32_t type1, int32_t type2) {
int32_t vectorConvertSingleCol(SScalarParam *input, SScalarParam *output, int32_t type, int32_t startIndex,
int32_t numOfRows) {
+ if (input->columnData == NULL && (input->pHashFilter != NULL || input->pHashFilterOthers != NULL)){
+ return TSDB_CODE_SUCCESS;
+ }
output->numOfRows = input->numOfRows;
SDataType t = {.type = type};
@@ -1101,36 +1105,18 @@ int32_t vectorConvertCols(SScalarParam *pLeft, SScalarParam *pRight, SScalarPara
int8_t type = 0;
int32_t code = 0;
- SScalarParam *param1 = NULL, *paramOut1 = NULL;
- SScalarParam *param2 = NULL, *paramOut2 = NULL;
+ SScalarParam *param1 = pLeft, *paramOut1 = pLeftOut;
+ SScalarParam *param2 = pRight, *paramOut2 = pRightOut;
// always convert least data
if (IS_VAR_DATA_TYPE(leftType) && IS_VAR_DATA_TYPE(rightType) && (pLeft->numOfRows != pRight->numOfRows) &&
leftType != TSDB_DATA_TYPE_JSON && rightType != TSDB_DATA_TYPE_JSON) {
- param1 = pLeft;
- param2 = pRight;
- paramOut1 = pLeftOut;
- paramOut2 = pRightOut;
-
if (pLeft->numOfRows > pRight->numOfRows) {
type = leftType;
} else {
type = rightType;
}
} else {
- // we only define half value in the convert-matrix, so make sure param1 always less equal than param2
- if (leftType < rightType) {
- param1 = pLeft;
- param2 = pRight;
- paramOut1 = pLeftOut;
- paramOut2 = pRightOut;
- } else {
- param1 = pRight;
- param2 = pLeft;
- paramOut1 = pRightOut;
- paramOut2 = pLeftOut;
- }
-
type = vectorGetConvertType(GET_PARAM_TYPE(param1), GET_PARAM_TYPE(param2));
if (0 == type) {
return TSDB_CODE_SUCCESS;
@@ -1986,13 +1972,14 @@ int32_t doVectorCompareImpl(SScalarParam *pLeft, SScalarParam *pRight, SScalarPa
return code;
}
-int32_t doVectorCompare(SScalarParam *pLeft, SScalarParam *pRight, SScalarParam *pOut, int32_t startIndex,
+int32_t doVectorCompare(SScalarParam *pLeft, SScalarParam *pLeftVar, SScalarParam *pRight, SScalarParam *pOut, int32_t startIndex,
int32_t numOfRows, int32_t _ord, int32_t optr) {
int32_t i = 0;
int32_t step = ((_ord) == TSDB_ORDER_ASC) ? 1 : -1;
int32_t lType = GET_PARAM_TYPE(pLeft);
int32_t rType = GET_PARAM_TYPE(pRight);
__compar_fn_t fp = NULL;
+ __compar_fn_t fpVar = NULL;
int32_t compRows = 0;
if (lType == rType) {
SCL_ERR_RET(filterGetCompFunc(&fp, lType, optr));
@@ -2000,6 +1987,9 @@ int32_t doVectorCompare(SScalarParam *pLeft, SScalarParam *pRight, SScalarParam
fp = filterGetCompFuncEx(lType, rType, optr);
}
+ if (pLeftVar != NULL) {
+ SCL_ERR_RET(filterGetCompFunc(&fpVar, GET_PARAM_TYPE(pLeftVar), optr));
+ }
if (startIndex < 0) {
i = ((_ord) == TSDB_ORDER_ASC) ? 0 : TMAX(pLeft->numOfRows, pRight->numOfRows) - 1;
pOut->numOfRows = TMAX(pLeft->numOfRows, pRight->numOfRows);
@@ -2019,6 +2009,18 @@ int32_t doVectorCompare(SScalarParam *pLeft, SScalarParam *pRight, SScalarParam
char *pLeftData = colDataGetData(pLeft->columnData, i);
bool res = filterDoCompare(fp, optr, pLeftData, pRight->pHashFilter);
+ if (pLeftVar != NULL && taosHashGetSize(pRight->pHashFilterOthers) > 0){
+ do{
+ if (optr == OP_TYPE_IN && res){
+ break;
+ }
+ if (optr == OP_TYPE_NOT_IN && !res){
+ break;
+ }
+ pLeftData = colDataGetData(pLeftVar->columnData, i);
+ res = filterDoCompare(fpVar, optr, pLeftData, pRight->pHashFilterOthers);
+ }while(0);
+ }
colDataSetInt8(pOut->columnData, i, (int8_t *)&res);
if (res) {
pOut->numOfQualified++;
@@ -2036,6 +2038,7 @@ int32_t vectorCompareImpl(SScalarParam *pLeft, SScalarParam *pRight, SScalarPara
SScalarParam pRightOut = {0};
SScalarParam *param1 = NULL;
SScalarParam *param2 = NULL;
+ SScalarParam *param3 = NULL;
int32_t code = TSDB_CODE_SUCCESS;
setTzCharset(&pLeftOut, pLeft->tz, pLeft->charsetCxt);
setTzCharset(&pRightOut, pLeft->tz, pLeft->charsetCxt);
@@ -2046,9 +2049,12 @@ int32_t vectorCompareImpl(SScalarParam *pLeft, SScalarParam *pRight, SScalarPara
SCL_ERR_JRET(vectorConvertCols(pLeft, pRight, &pLeftOut, &pRightOut, startIndex, numOfRows));
param1 = (pLeftOut.columnData != NULL) ? &pLeftOut : pLeft;
param2 = (pRightOut.columnData != NULL) ? &pRightOut : pRight;
+ if (pRight->pHashFilterOthers != NULL){
+ param3 = pLeft;
+ }
}
- SCL_ERR_JRET(doVectorCompare(param1, param2, pOut, startIndex, numOfRows, _ord, optr));
+ SCL_ERR_JRET(doVectorCompare(param1, param3, param2, pOut, startIndex, numOfRows, _ord, optr));
_return:
sclFreeParam(&pLeftOut);
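To make the two-filter fallback easier to follow, here is a toy, self-contained sketch of the IN / NOT IN semantics that `doVectorCompare()` now implements with `pHashFilter` and `pHashFilterOthers`; plain arrays stand in for the real hash filters, and none of these helper names exist in TDengine:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef enum { OP_IN, OP_NOT_IN } EOp;

/* Toy stand-ins for the two sets built by scalarGenerateSetFromList():
 * numSet holds the IN-list values converted to the numeric comparison type,
 * strSet holds the values kept in their original varchar form. */
static bool inNumSet(double v, const double *set, int n) {
  for (int i = 0; i < n; ++i)
    if (set[i] == v) return true;
  return false;
}
static bool inStrSet(const char *v, const char *const *set, int n) {
  for (int i = 0; i < n; ++i)
    if (strcmp(set[i], v) == 0) return true;
  return false;
}

/* Mirrors the fallback: a row passes IN if it is found in either set,
 * and passes NOT IN only if it is found in neither set. */
static bool evalInList(EOp op, double numVal, const char *strVal,
                       const double *numSet, int nNum,
                       const char *const *strSet, int nStr) {
  bool res = inNumSet(numVal, numSet, nNum);
  if (op == OP_NOT_IN) res = !res;
  if ((op == OP_IN && !res) || (op == OP_NOT_IN && res)) {
    bool inOther = inStrSet(strVal, strSet, nStr);
    res = (op == OP_IN) ? inOther : !inOther;
  }
  return res;
}

int main(void) {
  const double nums[] = {1.0, 2.5};
  const char *strs[] = {"abc"};
  printf("%d\n", evalInList(OP_IN, 3.0, "abc", nums, 2, strs, 1));     /* 1: matched via varchar set */
  printf("%d\n", evalInList(OP_NOT_IN, 3.0, "xyz", nums, 2, strs, 1)); /* 1: in neither set */
  return 0;
}
```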
diff --git a/source/libs/scalar/test/scalar/scalarTests.cpp b/source/libs/scalar/test/scalar/scalarTests.cpp
index 3eae06d9bb..865fb30814 100644
--- a/source/libs/scalar/test/scalar/scalarTests.cpp
+++ b/source/libs/scalar/test/scalar/scalarTests.cpp
@@ -2106,7 +2106,7 @@ TEST(columnTest, int_column_in_double_list) {
SNode *pLeft = NULL, *pRight = NULL, *listNode = NULL, *opNode = NULL;
int32_t leftv[5] = {1, 2, 3, 4, 5};
double rightv1 = 1.1, rightv2 = 2.2, rightv3 = 3.3;
- bool eRes[5] = {true, true, true, false, false};
+ bool eRes[5] = {false, false, false, false, false};
SSDataBlock *src = NULL;
int32_t rowNum = sizeof(leftv) / sizeof(leftv[0]);
int32_t code = TSDB_CODE_SUCCESS;
diff --git a/source/util/src/terror.c b/source/util/src/terror.c
index 195cb21618..d2d551a539 100644
--- a/source/util/src/terror.c
+++ b/source/util/src/terror.c
@@ -648,7 +648,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_PAR_NOT_SINGLE_GROUP, "Not a single-group g
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_TAGS_NOT_MATCHED, "Tags number not matched")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_TAG_NAME, "Invalid tag name")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG, "Name or password too long")
-TAOS_DEFINE_ERROR(TSDB_CODE_PAR_PASSWD_EMPTY, "Password can not be empty")
+TAOS_DEFINE_ERROR(TSDB_CODE_PAR_PASSWD_TOO_SHORT_OR_EMPTY, "Password too short or empty")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_PORT, "Port should be an integer that is less than 65535 and greater than 0")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_ENDPOINT, "Endpoint should be in the format of 'fqdn:port'")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_EXPRIE_STATEMENT, "This statement is no longer supported")
diff --git a/source/util/test/errorCodeTable.ini b/source/util/test/errorCodeTable.ini
index e837954a0b..f67c8ab834 100644
--- a/source/util/test/errorCodeTable.ini
+++ b/source/util/test/errorCodeTable.ini
@@ -463,7 +463,7 @@ TSDB_CODE_PAR_NOT_SINGLE_GROUP = 0x8000260C
TSDB_CODE_PAR_TAGS_NOT_MATCHED = 0x8000260D
TSDB_CODE_PAR_INVALID_TAG_NAME = 0x8000260E
TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG = 0x80002610
-TSDB_CODE_PAR_PASSWD_EMPTY = 0x80002611
+TSDB_CODE_PAR_PASSWD_TOO_SHORT_OR_EMPTY = 0x80002611
TSDB_CODE_PAR_INVALID_PORT = 0x80002612
TSDB_CODE_PAR_INVALID_ENDPOINT = 0x80002613
TSDB_CODE_PAR_EXPRIE_STATEMENT = 0x80002614
diff --git a/tests/parallel_test/cases.task b/tests/parallel_test/cases.task
index 1aef2195db..c9d28e0623 100644
--- a/tests/parallel_test/cases.task
+++ b/tests/parallel_test/cases.task
@@ -218,6 +218,8 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/slimit.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/slimit.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/slimit.py -Q 4
+,,n,system-test,./pytest.sh python3 ./test.py -f 2-query/ts-5761.py
+,,n,system-test,./pytest.sh python3 ./test.py -f 2-query/ts-5761-scalemode.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ts-5712.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ts-4233.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ts-4233.py -Q 2
diff --git a/tests/script/api/stmt2-performance.c b/tests/script/api/stmt2-performance.c
index aa8e5b9450..a539affaf1 100644
--- a/tests/script/api/stmt2-performance.c
+++ b/tests/script/api/stmt2-performance.c
@@ -5,9 +5,9 @@
#include