Merge branch '3.0' into test/jcy

jiacy-jcy 2022-06-09 17:56:59 +08:00
commit 8a90af3fbc
69 changed files with 981 additions and 485 deletions

View File

@ -3,6 +3,9 @@ sidebar_label: Grafana
title: Grafana
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process requires no code development, and the contents of TDengine data tables can be visualized on a dashboard. You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md).
## Prerequisites
@ -12,12 +15,42 @@ TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/
- The TDengine cluster is deployed and running properly
- taosAdapter is installed and running properly. For details, see the [taosAdapter manual](/reference/taosadapter)
Record the following information:
- The TDengine cluster REST API address, e.g. `http://tdengine.local:6041`.
- The TDengine cluster credentials (username and password).
## Installing Grafana
TDengine currently supports Grafana 7.0 and above. Download the installation package for your operating system from the Grafana official website and install it: <https://grafana.com/grafana/download>.
TDengine currently supports Grafana 7.5 and above. Download the installation package for your operating system from the Grafana official website and install it: <https://grafana.com/grafana/download>.
## Configuring Grafana
### Install the Grafana Plugin and Configure the Data Source
<Tabs defaultValue="script">
<TabItem value="script" label="Using Script">
Set the cluster information as environment variables; you can also use a `.env` file, see [dotenv](https://hexdocs.pm/dotenvy/dotenv-file-format.html):
```sh
export TDENGINE_API=http://tdengine.local:6041
# user + password
export TDENGINE_USER=user
export TDENGINE_PASSWORD=password
```
Run the installation script:
```sh
bash -c "$(curl -fsSL https://raw.githubusercontent.com/taosdata/grafanaplugin/master/install.sh)"
```
This script installs the Grafana plugin and configures the data source automatically. After installation, restart the Grafana service for the changes to take effect.
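On a host where Grafana runs as a systemd service (an assumption; adjust to your installation method), the restart typically looks like this:

```sh
# Assumes Grafana was installed as a systemd service, e.g. from the official .deb/.rpm package
sudo systemctl restart grafana-server
```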
</TabItem>
<TabItem value="manual" label="手动安装并配置">
使用 [`grafana-cli` 命令行工具](https://grafana.com/docs/grafana/latest/administration/cli/) 进行插件[安装](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation)。
```bash
@ -48,11 +81,7 @@ sudo unzip tdengine-datasource-$GF_VERSION.zip -d /var/lib/grafana/plugins/
GF_INSTALL_PLUGINS=tdengine-datasource
```
## Using Grafana
### Configuring the Data Source
Users can log in to the Grafana server directly at <http://localhost:3000> (username/password: admin/admin) and add a data source through `Configuration -> Data Sources` on the left, as shown in the following figure:
Afterwards, users can log in to the Grafana server directly at <http://localhost:3000> (username/password: admin/admin) and add a data source through `Configuration -> Data Sources` on the left, as shown in the following figure:
![TDengine Database Grafana plugin add data source](./add_datasource1.webp)
@ -72,6 +101,9 @@ GF_INSTALL_PLUGINS=tdengine-datasource
![TDengine Database Grafana plugin add data source](./add_datasource4.webp)
</TabItem>
</Tabs>
### Creating a Dashboard
Go back to the main interface to create a dashboard, then click Add Query to enter the panel query page:
@ -92,4 +124,11 @@ GF_INSTALL_PLUGINS=tdengine-datasource
### Importing Dashboards
In version 2.3.3.0 and above, you can import the TDinsight dashboard (Grafana Dashboard ID: [15167](https://grafana.com/grafana/dashboards/15167)) as a monitoring visualization tool for the TDengine cluster. See the [TDinsight User Manual](/reference/tdinsight/) for installation and usage instructions.
On the data source configuration page, you can import the TDinsight dashboard for this data source as a monitoring visualization tool for the TDengine cluster. The dashboard is published in Grafana as [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). See the [TDinsight User Manual](/reference/tdinsight/) for other installation methods and usage instructions.
Other dashboards that use TDengine as a data source can be [found here](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource). Here is a partial list:
- [15146](https://grafana.com/grafana/dashboards/15146): Monitor multiple TDengine clusters
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine alert demo
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf node metrics dashboard

View File

@ -3,6 +3,9 @@ sidebar_label: Grafana
title: Grafana
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process requires no code development, and the contents of TDengine data tables can be visualized on a dashboard.
You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md).
@ -14,12 +17,44 @@ In order for Grafana to add the TDengine data source successfully, the following
1. The TDengine cluster is deployed and functioning properly
2. taosAdapter is installed and running properly. Please refer to the taosAdapter manual for details.
Record these values:
- TDengine REST API url: `http://tdengine.local:6041`.
- TDengine cluster authorization, with user + password.
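A quick way to confirm both values is to query the REST endpoint with the recorded credentials; a minimal sketch, assuming the default `root`/`taosdata` account and the URL recorded above:

```sh
# Placeholder URL and credentials; replace them with the values recorded above
curl -u root:taosdata -d "show databases;" http://tdengine.local:6041/rest/sql
```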
## Installing Grafana
TDengine currently supports Grafana 7.0 and above. Download the installation package for your operating system from the Grafana official website and install it: <https://grafana.com/grafana/download>.
TDengine currently supports Grafana 7.5 and above. Download the installation package for your operating system from the Grafana official website and install it: <https://grafana.com/grafana/download>.
## Configuring Grafana
### Install Grafana Plugin and Configure Data Source
<Tabs defaultValue="script">
<TabItem value="script" label="Using Script">
Set the url and authorization environment variables by `export` or a [`.env`(dotenv) file](https://hexdocs.pm/dotenvy/dotenv-file-format.html):
```sh
export TDENGINE_API=http://tdengine.local:6041
# user + password
export TDENGINE_USER=user
export TDENGINE_PASSWORD=password
```
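Alternatively, the same variables can be kept in a `.env` file in the directory where you run the script; a minimal sketch of the dotenv format, with placeholder values:

```sh
# .env — placeholder host and credentials, replace with your own
TDENGINE_API=http://tdengine.local:6041
TDENGINE_USER=user
TDENGINE_PASSWORD=password
```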
Run `install.sh`:
```sh
bash -c "$(curl -fsSL https://raw.githubusercontent.com/taosdata/grafanaplugin/master/install.sh)"
```
With this script, the TDengine data source plugin is installed and the Grafana data source is created automatically through Grafana provisioning configurations.
Then restart the Grafana service and open Grafana in a web browser, usually at <http://localhost:3000>.
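Before opening the browser, you can confirm that Grafana is back up from the command line; a minimal check, assuming the default port 3000:

```sh
# Grafana's health endpoint returns a small JSON document (e.g. {"database": "ok", ...}) when the service is ready
curl -s http://localhost:3000/api/health
```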
</TabItem>
<TabItem value="manual" label="Install & Configure Manually">
Follow the [installation steps](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and use the [`grafana-cli` command-line tool](https://grafana.com/docs/grafana/latest/administration/cli/) to install the plugin.
```bash
@ -50,11 +85,7 @@ If Grafana is running in a Docker environment, the TDengine plugin can be automa
GF_INSTALL_PLUGINS=tdengine-datasource
```
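For example, when running the official Grafana image directly, the variable can be passed on the command line; a sketch with an assumed container name and port mapping:

```sh
# Assumed container name and port mapping; GF_INSTALL_PLUGINS makes Grafana download the plugin on startup
docker run -d --name grafana -p 3000:3000 \
  -e "GF_INSTALL_PLUGINS=tdengine-datasource" \
  grafana/grafana
```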
## Using Grafana
### Configuring Data Sources
Users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.
Now users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.
![TDengine Database TDinsight plugin add datasource 1](./grafana/add_datasource1.webp)
@ -74,6 +105,9 @@ Click `Save & Test` to test. You should see a success message if the test worked
![TDengine Database TDinsight plugin add database 4](./grafana/add_datasource4.webp)
</TabItem>
</Tabs>
### Create Dashboard
Go back to the main interface to create a dashboard and click Add Query to enter the panel query page:
@ -94,4 +128,11 @@ Follow the default prompt to query the average system memory usage for the speci
### Importing the Dashboard
In version 2.3.3.0 and above, you can import the TDinsight Dashboard (Grafana Dashboard ID: [15167](https://grafana.com/grafana/dashboards/15167)) as a monitoring visualization tool for TDengine clusters. You can find installation and usage instructions in the [TDinsight User Manual](/reference/tdinsight/).
You can install the TDinsight dashboard from the data source configuration page (for example, `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for the TDengine cluster. The dashboard is published in Grafana as [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). See the [TDinsight User Manual](/reference/tdinsight/) for details.
For more dashboards using the TDengine data source, [search here in Grafana](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource). Here is a partial list:
- [15146](https://grafana.com/grafana/dashboards/15146): Monitor multiple TDengine clusters.
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine alert demo.
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight.
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf node metrics dashboard using TDengine data source.

View File

@ -2510,6 +2510,7 @@ typedef struct {
int32_t tSerializeSTableIndexRsp(void* buf, int32_t bufLen, const STableIndexRsp* pRsp);
int32_t tDeserializeSTableIndexRsp(void* buf, int32_t bufLen, STableIndexRsp* pRsp);
void tFreeSTableIndexInfo(void *pInfo);
typedef struct {

View File

@ -67,6 +67,7 @@ typedef struct SCatalogReq {
SArray *pUdf; // element is udf name
SArray *pIndex; // element is index name
SArray *pUser; // element is SUserAuthInfo
SArray *pTableIndex; // element is SNAME
bool qNodeRequired; // valid qnode
bool forceUpdate;
} SCatalogReq;
@ -82,6 +83,7 @@ typedef struct SMetaData {
SArray *pDbInfo; // pRes = SDbInfo*
SArray *pTableMeta; // pRes = STableMeta*
SArray *pTableHash; // pRes = SVgroupInfo*
SArray *pTableIndex; // pRes = SArray<STableIndexInfo>*
SArray *pUdfList; // pRes = SFuncInfo*
SArray *pIndex; // pRes = SIndexInfo*
SArray *pUser; // pRes = bool*
@ -277,7 +279,7 @@ int32_t catalogGetDBCfg(SCatalog* pCtg, void *pRpc, const SEpSet* pMgmtEps, cons
int32_t catalogGetIndexMeta(SCatalog* pCtg, void *pRpc, const SEpSet* pMgmtEps, const char* indexName, SIndexInfo* pInfo);
int32_t catalogGetTableIndex(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, const char* tbFName, SArray** pRes);
int32_t catalogGetTableIndex(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, const SName* pTableName, SArray** pRes);
int32_t catalogGetUdfInfo(SCatalog* pCtg, void *pRpc, const SEpSet* pMgmtEps, const char* funcName, SFuncInfo* pInfo);

View File

@ -129,6 +129,8 @@ typedef enum EFunctionType {
FUNCTION_TYPE_SPREAD_MERGE,
FUNCTION_TYPE_HISTOGRAM_PARTIAL,
FUNCTION_TYPE_HISTOGRAM_MERGE,
FUNCTION_TYPE_HYPERLOGLOG_PARTIAL,
FUNCTION_TYPE_HYPERLOGLOG_MERGE,
// user defined funcion
FUNCTION_TYPE_UDF = 10000

View File

@ -136,14 +136,14 @@ int indexRebuild(SIndex* index, SIndexOpts* opt);
* @param index (output, index json object)
* @return error code
*/
int tIndexJsonOpen(SIndexJsonOpts* opts, const char* path, SIndexJson** index);
int indexJsonOpen(SIndexJsonOpts* opts, const char* path, SIndexJson** index);
/*
* close index
* @param index (input, index to be closed)
* @return void
*/
void tIndexJsonClose(SIndexJson* index);
void indexJsonClose(SIndexJson* index);
/*
* insert terms into index
@ -152,7 +152,7 @@ void tIndexJsonClose(SIndexJson* index);
* @param uid (input, uid of terms)
* @return error code
*/
int tIndexJsonPut(SIndexJson* index, SIndexJsonMultiTerm* terms, uint64_t uid);
int indexJsonPut(SIndexJson* index, SIndexJsonMultiTerm* terms, uint64_t uid);
/*
* search index
* @param index (input, index object)
@ -161,7 +161,7 @@ int tIndexJsonPut(SIndexJson* index, SIndexJsonMultiTerm* terms, uint64_t uid);
* @return error code
*/
int tIndexJsonSearch(SIndexJson* index, SIndexJsonMultiTermQuery* query, SArray* result);
int indexJsonSearch(SIndexJson* index, SIndexJsonMultiTermQuery* query, SArray* result);
/*
* @param
* @param

View File

@ -304,6 +304,7 @@ typedef struct SDownstreamSourceNode {
typedef struct SExchangePhysiNode {
SPhysiNode node;
int32_t srcGroupId; // group id of datasource suplans
bool singleChannel;
SNodeList* pSrcEndPoints; // element is SDownstreamSource, scheduler fill by calling qSetSuplanExecutionNode
} SExchangePhysiNode;

View File

@ -46,6 +46,7 @@ typedef struct SRpcHandleInfo {
int32_t noResp; // has response or not(default 0, 0: resp, 1: no resp);
int32_t persistHandle; // persist handle or not
SRpcConnInfo connInfo;
// app info
void *ahandle; // app handle set by client
void *wrapper; // wrapper handle

View File

@ -303,7 +303,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
uError("SML:0x%" PRIx64 " apply schema action. reset query cache. error: %s", info->id, taos_errstr(res2));
}
taos_free_result(res2);
taosMsleep(500);
taosMsleep(10);
}
break;
}
@ -327,7 +327,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
uError("SML:0x%" PRIx64 " apply schema action. reset query cache. error: %s", info->id, taos_errstr(res2));
}
taos_free_result(res2);
taosMsleep(500);
taosMsleep(10);
}
break;
}
@ -350,7 +350,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
uError("SML:0x%" PRIx64 " apply schema action. reset query cache. error: %s", info->id, taos_errstr(res2));
}
taos_free_result(res2);
taosMsleep(500);
taosMsleep(10);
}
break;
}
@ -373,7 +373,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
uError("SML:0x%" PRIx64 " apply schema action. reset query cache. error: %s", info->id, taos_errstr(res2));
}
taos_free_result(res2);
taosMsleep(500);
taosMsleep(10);
}
break;
}
@ -424,7 +424,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
uError("SML:0x%" PRIx64 " apply schema action. reset query cache. error: %s", info->id, taos_errstr(res2));
}
taos_free_result(res2);
taosMsleep(500);
taosMsleep(10);
}
break;
}
@ -461,18 +461,18 @@ static int32_t smlProcessSchemaAction(SSmlHandle* info, SSchema* schemaField, SH
static int32_t smlModifyDBSchemas(SSmlHandle* info) {
int32_t code = 0;
SEpSet ep = getEpSet_s(&info->taos->pAppInfo->mgmtEp);
SName pName = {TSDB_TABLE_NAME_T, info->taos->acctId, {0}, {0}};
strcpy(pName.dbname, info->pRequest->pDb);
SSmlSTableMeta** tableMetaSml = (SSmlSTableMeta**)taosHashIterate(info->superTables, NULL);
while (tableMetaSml) {
SSmlSTableMeta* sTableData = *tableMetaSml;
STableMeta *pTableMeta = NULL;
SEpSet ep = getEpSet_s(&info->taos->pAppInfo->mgmtEp);
size_t superTableLen = 0;
void *superTable = taosHashGetKey(tableMetaSml, &superTableLen);
SName pName = {TSDB_TABLE_NAME_T, info->taos->acctId, {0}, {0}};
strcpy(pName.dbname, info->pRequest->pDb);
memset(pName.tname, 0, TSDB_TABLE_NAME_LEN);
memcpy(pName.tname, superTable, superTableLen);
code = catalogGetSTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, &pTableMeta);
@ -487,7 +487,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle* info) {
code = smlApplySchemaAction(info, &schemaAction);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%"PRIx64" smlApplySchemaAction failed. can not create %s", info->id, schemaAction.createSTable.sTableName);
return code;
goto end;
}
info->cost.numOfCreateSTables++;
}else if (code == TSDB_CODE_SUCCESS) {
@ -502,7 +502,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle* info) {
code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->tags, &schemaAction, true);
if (code != TSDB_CODE_SUCCESS) {
taosHashCleanup(hashTmp);
return code;
goto end;
}
taosHashClear(hashTmp);
@ -512,29 +512,33 @@ static int32_t smlModifyDBSchemas(SSmlHandle* info) {
code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->cols, &schemaAction, false);
taosHashCleanup(hashTmp);
if (code != TSDB_CODE_SUCCESS) {
return code;
goto end;
}
code = catalogRefreshTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, -1);
code = catalogRefreshTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, 1);
if (code != TSDB_CODE_SUCCESS) {
return code;
goto end;
}
} else {
uError("SML:0x%"PRIx64" load table meta error: %s", info->id, tstrerror(code));
return code;
goto end;
}
if(pTableMeta) taosMemoryFree(pTableMeta);
code = catalogGetSTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, &pTableMeta);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%"PRIx64" catalogGetSTableMeta failed. super table name %s", info->id, (char*)superTable);
return code;
goto end;
}
sTableData->tableMeta = pTableMeta;
tableMetaSml = (SSmlSTableMeta**)taosHashIterate(info->superTables, tableMetaSml);
}
return 0;
end:
catalogRefreshTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, 1);
return code;
}
//=========================================================================
@ -1544,7 +1548,7 @@ static int32_t smlParseTSFromJSONObj(SSmlHandle *info, cJSON *root, int64_t *tsV
}
size_t typeLen = strlen(type->valuestring);
if (typeLen == 1 && type->valuestring[0] == 's') {
if (typeLen == 1 && (type->valuestring[0] == 's' || type->valuestring[0] == 'S')) {
//seconds
timeDouble = timeDouble * 1e9;
if(smlDoubleToInt64OverFlow(timeDouble)){
@ -1552,9 +1556,10 @@ static int32_t smlParseTSFromJSONObj(SSmlHandle *info, cJSON *root, int64_t *tsV
return TSDB_CODE_TSC_INVALID_TIME_STAMP;
}
*tsVal = timeDouble;
} else if (typeLen == 2 && type->valuestring[1] == 's') {
} else if (typeLen == 2 && (type->valuestring[1] == 's' || type->valuestring[1] == 'S')) {
switch (type->valuestring[0]) {
case 'm':
case 'M':
//milliseconds
timeDouble = timeDouble * 1e6;
if(smlDoubleToInt64OverFlow(timeDouble)){
@ -1564,6 +1569,7 @@ static int32_t smlParseTSFromJSONObj(SSmlHandle *info, cJSON *root, int64_t *tsV
*tsVal = timeDouble;
break;
case 'u':
case 'U':
//microseconds
timeDouble = timeDouble * 1e3;
if(smlDoubleToInt64OverFlow(timeDouble)){
@ -1573,6 +1579,7 @@ static int32_t smlParseTSFromJSONObj(SSmlHandle *info, cJSON *root, int64_t *tsV
*tsVal = timeDouble;
break;
case 'n':
case 'N':
//nanoseconds
*tsVal = timeDouble;
break;
@ -2285,6 +2292,8 @@ static int32_t smlParseLine(SSmlHandle *info, char* lines[], int numLines){
static int smlProcess(SSmlHandle *info, char* lines[], int numLines) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t retryNum = 0;
info->cost.parseTime = taosGetTimestampUs();
code = smlParseLine(info, lines, numLines);
@ -2298,7 +2307,12 @@ static int smlProcess(SSmlHandle *info, char* lines[], int numLines) {
info->cost.numOfCTables = taosHashGetSize(info->childTables);
info->cost.schemaTime = taosGetTimestampUs();
do{
code = smlModifyDBSchemas(info);
if (code == 0) break;
} while (retryNum++ < taosHashGetSize(info->superTables));
if (code != 0) {
uError("SML:0x%"PRIx64" smlModifyDBSchemas error : %s", info->id, tstrerror(code));
goto cleanup;
@ -2409,6 +2423,7 @@ TAOS_RES* taos_schemaless_insert(TAOS* taos, char* lines[], int numLines, int pr
info->pRequest->code = smlProcess(info, lines, numLines);
end:
info->taos->schemalessType = 0;
uDebug("result:%s", info->msgBuf.buf);
smlDestroyInfo(info);
return (TAOS_RES*)request;

View File

@ -1272,14 +1272,40 @@ TEST(testCase, sml_params_Test) {
};
TAOS_RES* res = taos_schemaless_insert(taos, (char**)sql, 1, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_MILLI_SECONDS);
ASSERT_EQ(taos_errno(res), TSDB_CODE_PAR_DB_NOT_SPECIFIED);
taos_free_result(pRes);
taos_free_result(res);
pRes = taos_query(taos, "use param");
taos_free_result(pRes);
taos_free_result(res);
res = taos_schemaless_insert(taos, (char**)sql, 1, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_MILLI_SECONDS);
ASSERT_EQ(taos_errno(res), TSDB_CODE_SML_INVALID_DB_CONF);
taos_free_result(res);
}
TEST(testCase, sml_16384_Test) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
TAOS_RES* pRes = taos_query(taos, "create database if not exists d16384 schemaless 1");
taos_free_result(pRes);
const char *sql[] = {
"qelhxo,id=pnnqhsa,t0=t,t1=127i8 c0=t,c1=127i8 1626006833639000000",
};
pRes = taos_query(taos, "use d16384");
taos_free_result(pRes);
TAOS_RES* res = taos_schemaless_insert(taos, (char**)sql, 1, TSDB_SML_LINE_PROTOCOL, 0);
ASSERT_EQ(taos_errno(res), 0);
taos_free_result(res);
const char *sql1[] = {
"qelhxo,id=pnnqhsa,t0=t,t1=127i8 c0=f,c1=127i8,c11=L\"ncharColValue\",c10=t 1626006833639000000",
};
TAOS_RES* res1 = taos_schemaless_insert(taos, (char**)sql1, 1, TSDB_SML_LINE_PROTOCOL, 0);
ASSERT_EQ(taos_errno(res1), 0);
taos_free_result(res1);
}
TEST(testCase, sml_oom_Test) {
@ -1303,3 +1329,29 @@ TEST(testCase, sml_oom_Test) {
ASSERT_EQ(taos_errno(res), 0);
taos_free_result(pRes);
}
TEST(testCase, sml_16368_Test) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
TAOS_RES* pRes = taos_query(taos, "create database if not exists d16368 schemaless 1");
taos_free_result(pRes);
pRes = taos_query(taos, "use d16368");
taos_free_result(pRes);
const char *sql[] = {
"[{\"metric\": \"st123456\", \"timestamp\": {\"value\": 1626006833639000, \"type\": \"us\"}, \"value\": 1, \"tags\": {\"t1\": 3, \"t2\": {\"value\": 4, \"type\": \"double\"}, \"t3\": {\"value\": \"t3\", \"type\": \"binary\"}}},\n"
"{\"metric\": \"st123456\", \"timestamp\": {\"value\": 1626006833739000, \"type\": \"us\"}, \"value\": 2, \"tags\": {\"t1\": {\"value\": 4, \"type\": \"double\"}, \"t3\": {\"value\": \"t4\", \"type\": \"binary\"}, \"t2\": {\"value\": 5, \"type\": \"double\"}, \"t4\": {\"value\": 5, \"type\": \"double\"}}},\n"
"{\"metric\": \"stb_name\", \"timestamp\": {\"value\": 1626006833639100, \"type\": \"us\"}, \"value\": 3, \"tags\": {\"t2\": {\"value\": 5, \"type\": \"double\"}, \"t3\": {\"value\": \"ste\", \"type\": \"nchar\"}}},\n"
"{\"metric\": \"stf567890\", \"timestamp\": {\"value\": 1626006833639200, \"type\": \"us\"}, \"value\": 4, \"tags\": {\"t1\": {\"value\": 4, \"type\": \"bigint\"}, \"t3\": {\"value\": \"t4\", \"type\": \"binary\"}, \"t2\": {\"value\": 5, \"type\": \"double\"}, \"t4\": {\"value\": 5, \"type\": \"double\"}}},\n"
"{\"metric\": \"st123456\", \"timestamp\": {\"value\": 1626006833639300, \"type\": \"us\"}, \"value\": {\"value\": 5, \"type\": \"double\"}, \"tags\": {\"t1\": {\"value\": 4, \"type\": \"double\"}, \"t2\": 5.0, \"t3\": {\"value\": \"t4\", \"type\": \"binary\"}}},\n"
"{\"metric\": \"stb_name\", \"timestamp\": {\"value\": 1626006833639400, \"type\": \"us\"}, \"value\": {\"value\": 6, \"type\": \"double\"}, \"tags\": {\"t2\": 5.0, \"t3\": {\"value\": \"ste2\", \"type\": \"nchar\"}}},\n"
"{\"metric\": \"stb_name\", \"timestamp\": {\"value\": 1626006834639400, \"type\": \"us\"}, \"value\": {\"value\": 7, \"type\": \"double\"}, \"tags\": {\"t2\": {\"value\": 5.0, \"type\": \"double\"}, \"t3\": {\"value\": \"ste2\", \"type\": \"nchar\"}}},\n"
"{\"metric\": \"st123456\", \"timestamp\": {\"value\": 1626006833839006, \"type\": \"us\"}, \"value\": {\"value\": 8, \"type\": \"double\"}, \"tags\": {\"t1\": {\"value\": 4, \"type\": \"double\"}, \"t3\": {\"value\": \"t4\", \"type\": \"binary\"}, \"t2\": {\"value\": 5, \"type\": \"double\"}, \"t4\": {\"value\": 5, \"type\": \"double\"}}},\n"
"{\"metric\": \"st123456\", \"timestamp\": {\"value\": 1626006833939007, \"type\": \"us\"}, \"value\": {\"value\": 9, \"type\": \"double\"}, \"tags\": {\"t1\": 4, \"t3\": {\"value\": \"t4\", \"type\": \"binary\"}, \"t2\": {\"value\": 5, \"type\": \"double\"}, \"t4\": {\"value\": 5, \"type\": \"double\"}}}]"
};
pRes = taos_schemaless_insert(taos, (char**)sql, sizeof(sql)/sizeof(sql[0]), TSDB_SML_JSON_PROTOCOL, TSDB_SML_TIMESTAMP_MICRO_SECONDS);
ASSERT_EQ(taos_errno(pRes), 0);
taos_free_result(pRes);
}

View File

@ -244,7 +244,7 @@ int32_t tTSRowNew(STSRowBuilder *pBuilder, SArray *pArray, STSchema *pTSchema, S
}
}
// ASSERT(flags); // only 1 column(ts)
ASSERT(flags);
// decide
uint32_t nData = 0;
@ -268,8 +268,8 @@ int32_t tTSRowNew(STSRowBuilder *pBuilder, SArray *pArray, STSchema *pTSchema, S
nDataT = BIT2_SIZE(pTSchema->numOfCols - 1) + pTSchema->flen + ntv;
break;
default:
break; // only ts column
// ASSERT(0);
break;
ASSERT(0);
}
uint8_t tflags = 0;
@ -374,7 +374,7 @@ int32_t tTSRowNew(STSRowBuilder *pBuilder, SArray *pArray, STSchema *pTSchema, S
ptv = pf + pTSchema->flen;
break;
default:
// ASSERT(0);
ASSERT(0);
break;
}
} else {
@ -421,12 +421,26 @@ int32_t tTSRowNew(STSRowBuilder *pBuilder, SArray *pArray, STSchema *pTSchema, S
_set_none:
if ((flags & 0xf0) == 0) {
setBitMap(pb, 0, iColumn - 1, flags);
if (flags & TSROW_HAS_VAL) { // set 0
if (IS_VAR_DATA_TYPE(pTColumn->type)) {
*(VarDataOffsetT *)(pf + pTColumn->offset) = 0;
} else {
tPutValue(pf + pTColumn->offset, &((SValue){0}), pTColumn->type);
}
}
}
continue;
_set_null:
if ((flags & 0xf0) == 0) {
setBitMap(pb, 1, iColumn - 1, flags);
if (flags & TSROW_HAS_VAL) { // set 0
if (IS_VAR_DATA_TYPE(pTColumn->type)) {
*(VarDataOffsetT *)(pf + pTColumn->offset) = 0;
} else {
tPutValue(pf + pTColumn->offset, &((SValue){0}), pTColumn->type);
}
}
} else {
SET_IDX(pidx, pTSKVRow->nCols, nkv, flags);
pTSKVRow->nCols++;
@ -497,7 +511,7 @@ void tTSRowGet(STSRow2 *pRow, STSchema *pTSchema, int32_t iCol, SColVal *pColVal
SValue value;
ASSERT(iCol < pTSchema->numOfCols);
// ASSERT(flags); // only 1 ts column
ASSERT(flags);
ASSERT(pRow->sver == pTSchema->version);
if (iCol == 0) {

View File

@ -2491,6 +2491,15 @@ int32_t tDeserializeSTableIndexRsp(void *buf, int32_t bufLen, STableIndexRsp *pR
return 0;
}
void tFreeSTableIndexInfo(void* info) {
if (NULL == info) {
return;
}
STableIndexInfo *pInfo = (STableIndexInfo*)info;
taosMemoryFree(pInfo->expr);
}
int32_t tSerializeSShowReq(void *buf, int32_t bufLen, SShowReq *pReq) {
SEncoder encoder = {0};

View File

@ -52,61 +52,61 @@ STSchema *genSTSchema(int16_t nCols) {
switch (i) {
case 0: {
pSchema[0].type = TSDB_DATA_TYPE_TIMESTAMP;
pSchema[0].bytes = TYPE_BYTES[pSchema[0].type];
pSchema[i].type = TSDB_DATA_TYPE_TIMESTAMP;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 1: {
pSchema[1].type = TSDB_DATA_TYPE_INT;
pSchema[1].bytes = TYPE_BYTES[pSchema[1].type];
pSchema[i].type = TSDB_DATA_TYPE_INT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
;
} break;
case 2: {
pSchema[2].type = TSDB_DATA_TYPE_BIGINT;
pSchema[2].bytes = TYPE_BYTES[pSchema[2].type];
pSchema[i].type = TSDB_DATA_TYPE_BIGINT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 3: {
pSchema[3].type = TSDB_DATA_TYPE_FLOAT;
pSchema[3].bytes = TYPE_BYTES[pSchema[3].type];
pSchema[i].type = TSDB_DATA_TYPE_FLOAT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 4: {
pSchema[4].type = TSDB_DATA_TYPE_DOUBLE;
pSchema[4].bytes = TYPE_BYTES[pSchema[4].type];
pSchema[i].type = TSDB_DATA_TYPE_DOUBLE;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 5: {
pSchema[5].type = TSDB_DATA_TYPE_BINARY;
pSchema[5].bytes = 12;
pSchema[i].type = TSDB_DATA_TYPE_BINARY;
pSchema[i].bytes = 12;
} break;
case 6: {
pSchema[6].type = TSDB_DATA_TYPE_NCHAR;
pSchema[6].bytes = 42;
pSchema[i].type = TSDB_DATA_TYPE_NCHAR;
pSchema[i].bytes = 42;
} break;
case 7: {
pSchema[7].type = TSDB_DATA_TYPE_TINYINT;
pSchema[7].bytes = TYPE_BYTES[pSchema[7].type];
pSchema[i].type = TSDB_DATA_TYPE_TINYINT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 8: {
pSchema[8].type = TSDB_DATA_TYPE_SMALLINT;
pSchema[8].bytes = TYPE_BYTES[pSchema[8].type];
pSchema[i].type = TSDB_DATA_TYPE_SMALLINT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 9: {
pSchema[9].type = TSDB_DATA_TYPE_BOOL;
pSchema[9].bytes = TYPE_BYTES[pSchema[9].type];
pSchema[i].type = TSDB_DATA_TYPE_BOOL;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 10: {
pSchema[10].type = TSDB_DATA_TYPE_UTINYINT;
pSchema[10].bytes = TYPE_BYTES[pSchema[10].type];
pSchema[i].type = TSDB_DATA_TYPE_UTINYINT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 11: {
pSchema[11].type = TSDB_DATA_TYPE_USMALLINT;
pSchema[11].bytes = TYPE_BYTES[pSchema[11].type];
pSchema[i].type = TSDB_DATA_TYPE_USMALLINT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 12: {
pSchema[12].type = TSDB_DATA_TYPE_UINT;
pSchema[12].bytes = TYPE_BYTES[pSchema[12].type];
pSchema[i].type = TSDB_DATA_TYPE_UINT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
case 13: {
pSchema[13].type = TSDB_DATA_TYPE_UBIGINT;
pSchema[13].bytes = TYPE_BYTES[pSchema[13].type];
pSchema[i].type = TSDB_DATA_TYPE_UBIGINT;
pSchema[i].bytes = TYPE_BYTES[pSchema[i].type];
} break;
default:
@ -146,9 +146,9 @@ static int32_t genTestData(const char **data, int16_t nCols, SArray **pArray) {
case 0:
sscanf(data[i], "%" PRIi64, &colVal.value.ts);
break;
case 1: {
case 1:
sscanf(data[i], "%" PRIi32, &colVal.value.i32);
} break;
break;
case 2:
sscanf(data[i], "%" PRIi64, &colVal.value.i64);
break;
@ -274,9 +274,6 @@ int32_t debugPrintSColVal(SColVal *cv, int8_t type) {
case TSDB_DATA_TYPE_MEDIUMBLOB:
printf("MedBLOB ");
break;
// case TSDB_DATA_TYPE_BINARY:
// printf("BINARY ");
// break;
case TSDB_DATA_TYPE_MAX:
printf("UNDEF ");
break;
@ -404,7 +401,8 @@ static void checkTSRow(const char **data, STSRow2 *row, STSchema *pTSchema) {
}
TEST(testCase, AllNormTest) {
int16_t nCols = 1;
int16_t nCols = 14;
STSRowBuilder rb = {0};
STSRow2 *row = nullptr;
SArray *pArray = taosArrayInit(nCols, sizeof(SColVal));
EXPECT_NE(pArray, nullptr);
@ -414,15 +412,16 @@ TEST(testCase, AllNormTest) {
// ts timestamp, c1 int, c2 bigint, c3 float, c4 double, c5 binary(10), c6 nchar(10), c7 tinyint, c8 smallint,
// c9 bool
char *data[10] = {"1653694220000", "10", "20", "10.1", "10.1", "binary10", "nchar10", "10", "10", "1"};
char *data[14] = {"1653694220000", "no", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "no", "no", "no", "no", "no"};
genTestData((const char **)&data, nCols, &pArray);
tTSRowNew(NULL, pArray, pTSchema, &row);
tTSRowNew(&rb, pArray, pTSchema, &row);
debugPrintTSRow(row, pTSchema, __func__, __LINE__);
checkTSRow((const char **)&data, row, pTSchema);
tsRowBuilderClear(&rb);
taosArrayDestroy(pArray);
taosMemoryFree(pTSchema);
}
@ -443,12 +442,12 @@ TEST(testCase, NoneTest) {
const char *data[nRows][nCols] = {
{"1653694220000", "no", "20", "10.1", "10.1", "binary10", "no", "10", "10", "nu", "10", "20", "30", "40"},
{"1653694220001", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no"},
{"1653694220002", "no", "no", "no", "no", "no", "nu", "no", "no", "no", "no", "no", "no", "nu"},
{"1653694220003", "nu", "no", "no", "no", "no", "nu", "no", "no", "no", "no", "no", "no", "no"},
{"1653694220002", "10", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no"},
{"1653694220003", "10", "10", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no"},
{"1653694220004", "no", "20", "no", "no", "no", "nchar10", "no", "no", "no", "no", "no", "no", "no"},
{"1653694220005", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu"},
{"1653694220006", "no", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu"},
{"1653694220007", "no", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "no"},
{"1653694220007", "no", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "no", "no", "no", "no", "no"},
{"1653694220008", "no", "nu", "nu", "nu", "binary10", "nu", "nu", "nu", "nu", "nu", "nu", "nu", "no"},
{"1653694220009", "no", "nu", "nu", "nu", "binary10", "nu", "nu", "10", "no", "nu", "nu", "nu", "100"},
{"1653694220010", "-1", "-1", "-1", "-1", "binary10", "nu", "-1", "0", "0", "0", "0", "0", "0"},
@ -465,13 +464,12 @@ TEST(testCase, NoneTest) {
{"1653694220019", "no", "9223372036854775807", "nu", "nu", "bin10", "nu", "nu", "10", "no", "254", "nu", "nu",
"no"}};
for (int r = 0; r < nRows; ++r) {
genTestData((const char **)&data[r], nCols, &pArray);
tTSRowNew(NULL, pArray, pTSchema, &row);
debugPrintTSRow(row, pTSchema, __func__, __LINE__); // debug print
checkTSRow((const char **)&data[r], row, pTSchema); // check
taosMemoryFreeClear(row);
tTSRowFree(row);
taosArrayClear(pArray);
}

View File

@ -22,17 +22,17 @@ static void dmSendRsp(SRpcMsg *pMsg);
static void dmBuildMnodeRedirectRsp(SDnode *pDnode, SRpcMsg *pMsg);
static inline int32_t dmBuildNodeMsg(SRpcMsg *pMsg, SRpcMsg *pRpc) {
SRpcConnInfo connInfo = {0};
if (IsReq(pRpc) && rpcGetConnInfo(pRpc->info.handle, &connInfo) != 0) {
terrno = TSDB_CODE_MND_NO_USER_FROM_CONN;
dError("failed to build msg since %s, app:%p handle:%p", terrstr(), pRpc->info.ahandle, pRpc->info.handle);
return -1;
}
SRpcConnInfo *pConnInfo = &(pRpc->info.connInfo);
// if (IsReq(pRpc)) {
// terrno = TSDB_CODE_MND_NO_USER_FROM_CONN;
// dError("failed to build msg since %s, app:%p handle:%p", terrstr(), pRpc->info.ahandle, pRpc->info.handle);
// return -1;
//}
memcpy(pMsg, pRpc, sizeof(SRpcMsg));
memcpy(pMsg->conn.user, connInfo.user, TSDB_USER_LEN);
pMsg->conn.clientIp = connInfo.clientIp;
pMsg->conn.clientPort = connInfo.clientPort;
memcpy(pMsg->conn.user, pConnInfo->user, TSDB_USER_LEN);
pMsg->conn.clientIp = pConnInfo->clientIp;
pMsg->conn.clientPort = pConnInfo->clientPort;
return 0;
}

View File

@ -99,7 +99,7 @@ static int metaSaveJsonVarToIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const
indexMultiTermAdd(terms, term);
}
}
tIndexJsonPut(pMeta->pTagIvtIdx, terms, tuid);
indexJsonPut(pMeta->pTagIvtIdx, terms, tuid);
indexMultiTermDestroy(terms);
#endif
return 0;

View File

@ -67,6 +67,7 @@ typedef enum {
CTG_TASK_GET_DB_INFO,
CTG_TASK_GET_TB_META,
CTG_TASK_GET_TB_HASH,
CTG_TASK_GET_TB_INDEX,
CTG_TASK_GET_INDEX,
CTG_TASK_GET_UDF,
CTG_TASK_GET_USER,
@ -93,6 +94,10 @@ typedef struct SCtgTbMetaCtx {
int32_t flag;
} SCtgTbMetaCtx;
typedef struct SCtgTbIndexCtx {
SName* pName;
} SCtgTbIndexCtx;
typedef struct SCtgDbVgCtx {
char dbFName[TSDB_DB_FNAME_LEN];
} SCtgDbVgCtx;
@ -189,6 +194,7 @@ typedef struct SCtgJob {
int32_t indexNum;
int32_t userNum;
int32_t dbInfoNum;
int32_t tbIndexNum;
} SCtgJob;
typedef struct SCtgMsgCtx {
@ -490,7 +496,7 @@ int32_t ctgGetDBVgInfoFromMnode(CTG_PARAMS, SBuildUseDBInput *input, SUseDbOutpu
int32_t ctgGetQnodeListFromMnode(CTG_PARAMS, SArray *out, SCtgTask* pTask);
int32_t ctgGetDBCfgFromMnode(CTG_PARAMS, const char *dbFName, SDbCfgInfo *out, SCtgTask* pTask);
int32_t ctgGetIndexInfoFromMnode(CTG_PARAMS, const char *indexName, SIndexInfo *out, SCtgTask* pTask);
int32_t ctgGetTbIndexFromMnode(CTG_PARAMS, const char *tbFName, SArray** out, SCtgTask* pTask);
int32_t ctgGetTbIndexFromMnode(CTG_PARAMS, SName* name, SArray** out, SCtgTask* pTask);
int32_t ctgGetUdfInfoFromMnode(CTG_PARAMS, const char *funcName, SFuncInfo *out, SCtgTask* pTask);
int32_t ctgGetUserDbAuthFromMnode(CTG_PARAMS, const char *user, SGetUserAuthRsp *out, SCtgTask* pTask);
int32_t ctgGetTbMetaFromMnodeImpl(CTG_PARAMS, char *dbFName, char* tbName, STableMetaOutput* out, SCtgTask* pTask);

View File

@ -1136,14 +1136,14 @@ int32_t catalogGetIndexMeta(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps
CTG_API_LEAVE(ctgGetIndexInfoFromMnode(CTG_PARAMS_LIST(), indexName, pInfo, NULL));
}
int32_t catalogGetTableIndex(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, const char* tbFName, SArray** pRes) {
int32_t catalogGetTableIndex(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, const SName* pTableName, SArray** pRes) {
CTG_API_ENTER();
if (NULL == pCtg || NULL == pTrans || NULL == pMgmtEps || NULL == tbFName || NULL == pRes) {
if (NULL == pCtg || NULL == pTrans || NULL == pMgmtEps || NULL == pTableName || NULL == pRes) {
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}
CTG_API_LEAVE(ctgGetTbIndexFromMnode(CTG_PARAMS_LIST(), tbFName, pRes, NULL));
CTG_API_LEAVE(ctgGetTbIndexFromMnode(CTG_PARAMS_LIST(), (SName*)pTableName, pRes, NULL));
}

View File

@ -44,7 +44,7 @@ int32_t ctgInitGetTbMetaTask(SCtgJob *pJob, int32_t taskIdx, SName *name) {
taosArrayPush(pJob->pTasks, &task);
qDebug("QID:0x%" PRIx64 " the %d task type %s initialized, tableName:%s", pJob->queryId, taskIdx, ctgTaskTypeStr(task.type), name->tname);
qDebug("QID:0x%" PRIx64 " the %d task type %s initialized, tbName:%s", pJob->queryId, taskIdx, ctgTaskTypeStr(task.type), name->tname);
return TSDB_CODE_SUCCESS;
}
@ -232,6 +232,35 @@ int32_t ctgInitGetUserTask(SCtgJob *pJob, int32_t taskIdx, SUserAuthInfo *user)
return TSDB_CODE_SUCCESS;
}
int32_t ctgInitGetTbIndexTask(SCtgJob *pJob, int32_t taskIdx, SName *name) {
SCtgTask task = {0};
task.type = CTG_TASK_GET_TB_INDEX;
task.taskId = taskIdx;
task.pJob = pJob;
task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgTbIndexCtx));
if (NULL == task.taskCtx) {
CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
}
SCtgTbIndexCtx* ctx = task.taskCtx;
ctx->pName = taosMemoryMalloc(sizeof(*name));
if (NULL == ctx->pName) {
taosMemoryFree(task.taskCtx);
CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
}
memcpy(ctx->pName, name, sizeof(*name));
taosArrayPush(pJob->pTasks, &task);
qDebug("QID:%" PRIx64 " the %d task type %s initialized, tbName:%s", pJob->queryId, taskIdx, ctgTaskTypeStr(task.type), name->tname);
return TSDB_CODE_SUCCESS;
}
int32_t ctgHandleForceUpdate(SCatalog* pCtg, SCtgJob *pJob, const SCatalogReq* pReq) {
int32_t dbNum = pJob->dbCfgNum + pJob->dbVgNum + pJob->dbInfoNum;
if (dbNum > 0) {
@ -329,8 +358,9 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
int32_t indexNum = (int32_t)taosArrayGetSize(pReq->pIndex);
int32_t userNum = (int32_t)taosArrayGetSize(pReq->pUser);
int32_t dbInfoNum = (int32_t)taosArrayGetSize(pReq->pDbInfo);
int32_t tbIndexNum = (int32_t)taosArrayGetSize(pReq->pTableIndex);
*taskNum = tbMetaNum + dbVgNum + udfNum + tbHashNum + qnodeNum + dbCfgNum + indexNum + userNum + dbInfoNum;
*taskNum = tbMetaNum + dbVgNum + udfNum + tbHashNum + qnodeNum + dbCfgNum + indexNum + userNum + dbInfoNum + tbIndexNum;
if (*taskNum <= 0) {
ctgDebug("Empty input for job, no need to retrieve meta, reqId:0x%" PRIx64, reqId);
return TSDB_CODE_SUCCESS;
@ -360,6 +390,7 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
pJob->indexNum = indexNum;
pJob->userNum = userNum;
pJob->dbInfoNum = dbInfoNum;
pJob->tbIndexNum = tbIndexNum;
pJob->pTasks = taosArrayInit(*taskNum, sizeof(SCtgTask));
@ -398,6 +429,11 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
CTG_ERR_JRET(ctgInitGetTbHashTask(pJob, taskIdx++, name));
}
for (int32_t i = 0; i < tbIndexNum; ++i) {
SName* name = taosArrayGet(pReq->pTableIndex, i);
CTG_ERR_JRET(ctgInitGetTbIndexTask(pJob, taskIdx++, name));
}
for (int32_t i = 0; i < indexNum; ++i) {
char* indexName = taosArrayGet(pReq->pIndex, i);
CTG_ERR_JRET(ctgInitGetIndexTask(pJob, taskIdx++, indexName));
@ -479,6 +515,21 @@ int32_t ctgDumpTbHashRes(SCtgTask* pTask) {
return TSDB_CODE_SUCCESS;
}
int32_t ctgDumpTbIndexRes(SCtgTask* pTask) {
SCtgJob* pJob = pTask->pJob;
if (NULL == pJob->jobRes.pTableIndex) {
pJob->jobRes.pTableIndex = taosArrayInit(pJob->tbIndexNum, sizeof(SMetaRes));
if (NULL == pJob->jobRes.pTableIndex) {
CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
}
}
SMetaRes res = {.code = pTask->code, .pRes = pTask->res};
taosArrayPush(pJob->jobRes.pTableIndex, &res);
return TSDB_CODE_SUCCESS;
}
int32_t ctgDumpIndexRes(SCtgTask* pTask) {
SCtgJob* pJob = pTask->pJob;
if (NULL == pJob->jobRes.pIndex) {
@ -817,6 +868,20 @@ _return:
CTG_RET(code);
}
int32_t ctgHandleGetTbIndexRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pMsg, int32_t rspCode) {
int32_t code = 0;
CTG_ERR_JRET(ctgProcessRspMsg(pTask->msgCtx.out, reqType, pMsg->pData, pMsg->len, rspCode, pTask->msgCtx.target));
TSWAP(pTask->res, pTask->msgCtx.out);
_return:
ctgHandleTaskEnd(pTask, code);
CTG_RET(code);
}
int32_t ctgHandleGetDbCfgRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pMsg, int32_t rspCode) {
int32_t code = 0;
CTG_ERR_JRET(ctgProcessRspMsg(pTask->msgCtx.out, reqType, pMsg->pData, pMsg->len, rspCode, pTask->msgCtx.target));
@ -1056,13 +1121,24 @@ _return:
CTG_RET(code);
}
int32_t ctgLaunchGetTbIndexTask(SCtgTask *pTask) {
int32_t code = 0;
SCatalog* pCtg = pTask->pJob->pCtg;
void *pTrans = pTask->pJob->pTrans;
const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
SCtgTbIndexCtx* pCtx = (SCtgTbIndexCtx*)pTask->taskCtx;
CTG_ERR_RET(ctgGetTbIndexFromMnode(CTG_PARAMS_LIST(), pCtx->pName, NULL, pTask));
return TSDB_CODE_SUCCESS;
}
int32_t ctgLaunchGetQnodeTask(SCtgTask *pTask) {
SCatalog* pCtg = pTask->pJob->pCtg;
void *pTrans = pTask->pJob->pTrans;
const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
CTG_ERR_RET(ctgGetQnodeListFromMnode(CTG_PARAMS_LIST(), NULL, pTask));
return TSDB_CODE_SUCCESS;
}
@ -1174,6 +1250,7 @@ SCtgAsyncFps gCtgAsyncFps[] = {
{ctgLaunchGetDbInfoTask, ctgHandleGetDbInfoRsp, ctgDumpDbInfoRes},
{ctgLaunchGetTbMetaTask, ctgHandleGetTbMetaRsp, ctgDumpTbMetaRes},
{ctgLaunchGetTbHashTask, ctgHandleGetTbHashRsp, ctgDumpTbHashRes},
{ctgLaunchGetTbIndexTask, ctgHandleGetTbIndexRsp, ctgDumpTbIndexRes},
{ctgLaunchGetIndexTask, ctgHandleGetIndexRsp, ctgDumpIndexRes},
{ctgLaunchGetUdfTask, ctgHandleGetUdfRsp, ctgDumpUdfRes},
{ctgLaunchGetUserTask, ctgHandleGetUserRsp, ctgDumpUserRes},

View File

@ -427,11 +427,13 @@ int32_t ctgGetIndexInfoFromMnode(CTG_PARAMS, const char *indexName, SIndexInfo *
return TSDB_CODE_SUCCESS;
}
int32_t ctgGetTbIndexFromMnode(CTG_PARAMS, const char *tbFName, SArray** out, SCtgTask* pTask) {
int32_t ctgGetTbIndexFromMnode(CTG_PARAMS, SName *name, SArray** out, SCtgTask* pTask) {
char *msg = NULL;
int32_t msgLen = 0;
int32_t reqType = TDMT_MND_GET_TABLE_INDEX;
void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
char tbFName[TSDB_TABLE_FNAME_LEN];
tNameExtractFullName(name, tbFName);
ctgDebug("try to get tb index from mnode, tbFName:%s", tbFName);

View File

@ -60,6 +60,9 @@ void ctgFreeSMetaData(SMetaData* pData) {
taosArrayDestroy(pData->pTableHash);
pData->pTableHash = NULL;
taosArrayDestroy(pData->pTableIndex);
pData->pTableIndex = NULL;
taosArrayDestroy(pData->pUdfList);
pData->pUdfList = NULL;
@ -248,6 +251,14 @@ void ctgFreeMsgCtx(SCtgMsgCtx* pCtx) {
taosMemoryFreeClear(pCtx->out);
break;
}
case TDMT_MND_GET_TABLE_INDEX: {
SArray** pOut = (SArray**)pCtx->out;
if (pOut) {
taosArrayDestroyEx(*pOut, tFreeSTableIndexInfo);
taosMemoryFreeClear(pCtx->out);
}
break;
}
case TDMT_MND_RETRIEVE_FUNC: {
SFuncInfo* pOut = (SFuncInfo*)pCtx->out;
taosMemoryFree(pOut->pCode);
@ -344,6 +355,13 @@ void ctgFreeTask(SCtgTask* pTask) {
taosMemoryFreeClear(pTask->res);
break;
}
case CTG_TASK_GET_TB_INDEX: {
SCtgTbIndexCtx* taskCtx = (SCtgTbIndexCtx*)pTask->taskCtx;
taosMemoryFreeClear(taskCtx->pName);
taosMemoryFreeClear(pTask->taskCtx);
taosArrayDestroyEx(pTask->res, tFreeSTableIndexInfo);
break;
}
case CTG_TASK_GET_INDEX: {
taosMemoryFreeClear(pTask->taskCtx);
taosMemoryFreeClear(pTask->res);

View File

@ -32,15 +32,17 @@ extern "C" {
#define EXPLAIN_PROJECTION_FORMAT "Projection"
#define EXPLAIN_JOIN_FORMAT "%s"
#define EXPLAIN_AGG_FORMAT "Aggragate"
#define EXPLAIN_INDEF_ROWS_FORMAT "Indefinite Rows Function"
#define EXPLAIN_EXCHANGE_FORMAT "Data Exchange %d:1"
#define EXPLAIN_SORT_FORMAT "Sort"
#define EXPLAIN_INTERVAL_FORMAT "Interval on Column %s"
#define EXPLAIN_FILL_FORMAT "Fill"
#define EXPLAIN_SESSION_FORMAT "Session"
#define EXPLAIN_STATE_WINDOW_FORMAT "StateWindow on Column %s"
#define EXPLAIN_PARITION_FORMAT "Partition on Column %s"
#define EXPLAIN_ORDER_FORMAT "Order: %s"
#define EXPLAIN_FILTER_FORMAT "Filter: "
#define EXPLAIN_FILL_FORMAT "Fill: %s"
#define EXPLAIN_FILL_VALUE_FORMAT "Fill Values: "
#define EXPLAIN_ON_CONDITIONS_FORMAT "Join Cond: "
#define EXPLAIN_TIMERANGE_FORMAT "Time Range: [%" PRId64 ", %" PRId64 "]"
#define EXPLAIN_OUTPUT_FORMAT "Output: "
@ -66,6 +68,8 @@ extern "C" {
#define EXPLAIN_WIDTH_FORMAT "width=%d"
#define EXPLAIN_FUNCTIONS_FORMAT "functions=%d"
#define EXPLAIN_EXECINFO_FORMAT "cost=%.3f..%.3f rows=%" PRIu64
#define EXPLAIN_MODE_FORMAT "mode=%s"
#define EXPLAIN_STRING_TYPE_FORMAT "%s"
typedef struct SExplainGroup {
int32_t nodeNum;

View File

@ -179,6 +179,21 @@ int32_t qExplainGenerateResChildren(SPhysiNode *pNode, SExplainGroup *group, SNo
pPhysiChildren = mergePhysiNode->node.pChildren;
break;
}
case QUERY_NODE_PHYSICAL_PLAN_INDEF_ROWS_FUNC: {
SIndefRowsFuncPhysiNode *indefPhysiNode = (SIndefRowsFuncPhysiNode *)pNode;
pPhysiChildren = indefPhysiNode->node.pChildren;
break;
}
case QUERY_NODE_PHYSICAL_PLAN_MERGE_INTERVAL: {
SMergeIntervalPhysiNode *intPhysiNode = (SMergeIntervalPhysiNode *)pNode;
pPhysiChildren = intPhysiNode->window.node.pChildren;
break;
}
case QUERY_NODE_PHYSICAL_PLAN_FILL: {
SFillPhysiNode *fillPhysiNode = (SFillPhysiNode *)pNode;
pPhysiChildren = fillPhysiNode->node.pChildren;
break;
}
default:
qError("not supported physical node type %d", pNode->type);
QRY_ERR_RET(TSDB_CODE_QRY_APP_ERROR);
@ -212,12 +227,15 @@ int32_t qExplainGenerateResNodeExecInfo(SArray **pExecInfo, SExplainGroup *group
SExplainRsp *rsp = NULL;
for (int32_t i = 0; i < group->nodeNum; ++i) {
rsp = taosArrayGet(group->nodeExecInfo, i);
/*
if (group->physiPlanExecIdx >= rsp->numOfPlans) {
qError("physiPlanIdx %d exceed plan num %d", group->physiPlanExecIdx, rsp->numOfPlans);
return TSDB_CODE_QRY_APP_ERROR;
}
taosArrayPush(*pExecInfo, rsp->subplanInfo + group->physiPlanExecIdx);
*/
taosArrayPush(*pExecInfo, rsp->subplanInfo);
}
++group->physiPlanExecIdx;
@ -599,6 +617,42 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
}
break;
}
case QUERY_NODE_PHYSICAL_PLAN_INDEF_ROWS_FUNC: {
SIndefRowsFuncPhysiNode *pIndefNode = (SIndefRowsFuncPhysiNode *)pNode;
EXPLAIN_ROW_NEW(level, EXPLAIN_INDEF_ROWS_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
if (pResNode->pExecInfo) {
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
if (pIndefNode->pVectorFuncs) {
EXPLAIN_ROW_APPEND(EXPLAIN_FUNCTIONS_FORMAT, pIndefNode->pVectorFuncs->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pIndefNode->node.pOutputDataBlockDesc->totalRowSize);
EXPLAIN_ROW_APPEND(EXPLAIN_RIGHT_PARENTHESIS_FORMAT);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
if (verbose) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_OUTPUT_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT,
nodesGetOutputNumFromSlotList(pIndefNode->node.pOutputDataBlockDesc->pSlots));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pIndefNode->node.pOutputDataBlockDesc->outputRowSize);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
if (pIndefNode->node.pConditions) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_FILTER_FORMAT);
QRY_ERR_RET(nodesNodeToSQL(pIndefNode->node.pConditions, tbuf + VARSTR_HEADER_SIZE,
TSDB_EXPLAIN_RESULT_ROW_SIZE, &tlen));
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
}
}
break;
}
case QUERY_NODE_PHYSICAL_PLAN_EXCHANGE: {
SExchangePhysiNode *pExchNode = (SExchangePhysiNode *)pNode;
SExplainGroup *group = taosHashGet(ctx->groupHash, &pExchNode->srcGroupId, sizeof(pExchNode->srcGroupId));
@ -607,7 +661,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
QRY_ERR_RET(TSDB_CODE_QRY_APP_ERROR);
}
EXPLAIN_ROW_NEW(level, EXPLAIN_EXCHANGE_FORMAT, group->nodeNum);
EXPLAIN_ROW_NEW(level, EXPLAIN_EXCHANGE_FORMAT, pExchNode->singleChannel ? 1 : group->nodeNum);
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
if (pResNode->pExecInfo) {
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
@ -750,6 +804,106 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
}
break;
}
case QUERY_NODE_PHYSICAL_PLAN_MERGE_INTERVAL: {
SMergeIntervalPhysiNode *pIntNode = (SMergeIntervalPhysiNode *)pNode;
EXPLAIN_ROW_NEW(level, EXPLAIN_INTERVAL_FORMAT, nodesGetNameFromColumnNode(pIntNode->window.pTspk));
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
if (pResNode->pExecInfo) {
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_FUNCTIONS_FORMAT, pIntNode->window.pFuncs->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pIntNode->window.node.pOutputDataBlockDesc->totalRowSize);
EXPLAIN_ROW_APPEND(EXPLAIN_RIGHT_PARENTHESIS_FORMAT);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
if (verbose) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_OUTPUT_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT,
nodesGetOutputNumFromSlotList(pIntNode->window.node.pOutputDataBlockDesc->pSlots));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pIntNode->window.node.pOutputDataBlockDesc->outputRowSize);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
uint8_t precision = getIntervalPrecision(pIntNode);
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_TIME_WINDOWS_FORMAT,
INVERAL_TIME_FROM_PRECISION_TO_UNIT(pIntNode->interval, pIntNode->intervalUnit, precision),
pIntNode->intervalUnit, pIntNode->offset, getPrecisionUnit(precision),
INVERAL_TIME_FROM_PRECISION_TO_UNIT(pIntNode->sliding, pIntNode->slidingUnit, precision),
pIntNode->slidingUnit);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
if (pIntNode->window.node.pConditions) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_FILTER_FORMAT);
QRY_ERR_RET(nodesNodeToSQL(pIntNode->window.node.pConditions, tbuf + VARSTR_HEADER_SIZE,
TSDB_EXPLAIN_RESULT_ROW_SIZE, &tlen));
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
}
}
break;
}
case QUERY_NODE_PHYSICAL_PLAN_FILL: {
SFillPhysiNode *pFillNode = (SFillPhysiNode *)pNode;
EXPLAIN_ROW_NEW(level, EXPLAIN_FILL_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
if (pResNode->pExecInfo) {
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_MODE_FORMAT, nodesGetFillModeString(pFillNode->mode));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pFillNode->node.pOutputDataBlockDesc->totalRowSize);
EXPLAIN_ROW_APPEND(EXPLAIN_RIGHT_PARENTHESIS_FORMAT);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
if (verbose) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_OUTPUT_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT,
nodesGetOutputNumFromSlotList(pFillNode->node.pOutputDataBlockDesc->pSlots));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pFillNode->node.pOutputDataBlockDesc->outputRowSize);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
if (pFillNode->pValues) {
SNodeListNode *pValues = (SNodeListNode*)pFillNode->pValues;
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_FILL_VALUE_FORMAT);
SNode* tNode = NULL;
int32_t i = 0;
FOREACH(tNode, pValues->pNodeList) {
if (i) {
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
SValueNode* tValue = (SValueNode*)tNode;
char *value = nodesGetStrValueFromNode(tValue);
EXPLAIN_ROW_APPEND(EXPLAIN_STRING_TYPE_FORMAT, value);
taosMemoryFree(value);
++i;
}
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
}
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_TIMERANGE_FORMAT, pFillNode->timeRange.skey,
pFillNode->timeRange.ekey);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
if (pFillNode->node.pConditions) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_FILTER_FORMAT);
QRY_ERR_RET(nodesNodeToSQL(pFillNode->node.pConditions, tbuf + VARSTR_HEADER_SIZE,
TSDB_EXPLAIN_RESULT_ROW_SIZE, &tlen));
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
}
}
break;
}
case QUERY_NODE_PHYSICAL_PLAN_MERGE_SESSION: {
SSessionWinodwPhysiNode *pSessNode = (SSessionWinodwPhysiNode *)pNode;
EXPLAIN_ROW_NEW(level, EXPLAIN_SESSION_FORMAT);

View File

@ -1756,6 +1756,7 @@ SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, STagScanPhysi
;
pInfo->readHandle = *pReadHandle;
pInfo->curPos = 0;
pInfo->pFilterNode = pPhyNode->node.pConditions;
pOperator->name = "TagScanOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_TAG_SCAN;
pOperator->blocking = false;

View File

@ -126,7 +126,10 @@ int32_t getHistogramInfoSize();
bool getHLLFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
int32_t hllFunction(SqlFunctionCtx* pCtx);
int32_t hllFunctionMerge(SqlFunctionCtx* pCtx);
int32_t hllFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t hllPartialFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t getHLLInfoSize();
bool getStateFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool stateFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);

View File

@ -313,6 +313,7 @@ static int32_t translateApercentileImpl(SFunctionNode* pFunc, char* pErrBuf, int
static int32_t translateApercentilePartial(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return translateApercentileImpl(pFunc, pErrBuf, len, true);
}
static int32_t translateApercentileMerge(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return translateApercentileImpl(pFunc, pErrBuf, len, false);
}
@ -401,6 +402,7 @@ static int32_t translateSpreadImpl(SFunctionNode* pFunc, char* pErrBuf, int32_t
static int32_t translateSpreadPartial(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return translateSpreadImpl(pFunc, pErrBuf, len, true);
}
static int32_t translateSpreadMerge(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return translateSpreadImpl(pFunc, pErrBuf, len, false);
}
@ -551,6 +553,7 @@ static int32_t translateHistogramImpl(SFunctionNode* pFunc, char* pErrBuf, int32
static int32_t translateHistogramPartial(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return translateHistogramImpl(pFunc, pErrBuf, len, true);
}
static int32_t translateHistogramMerge(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return translateHistogramImpl(pFunc, pErrBuf, len, false);
}
@ -564,6 +567,28 @@ static int32_t translateHLL(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return TSDB_CODE_SUCCESS;
}
static int32_t translateHLLImpl(SFunctionNode* pFunc, char* pErrBuf, int32_t len, bool isPartial) {
if (1 != LIST_LENGTH(pFunc->pParameterList)) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
if (isPartial) {
pFunc->node.resType = (SDataType){.bytes = getHLLInfoSize() + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY};
} else {
pFunc->node.resType = (SDataType){.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes, .type = TSDB_DATA_TYPE_BIGINT};
}
return TSDB_CODE_SUCCESS;
}
static int32_t translateHLLPartial(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return translateHLLImpl(pFunc, pErrBuf, len, true);
}
static int32_t translateHLLMerge(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return translateHLLImpl(pFunc, pErrBuf, len, false);
}
static bool validateStateOper(const SValueNode* pVal) {
if (TSDB_DATA_TYPE_BINARY != pVal->node.resType.type) {
return false;
@ -1478,6 +1503,28 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getHLLFuncEnv,
.initFunc = functionSetup,
.processFunc = hllFunction,
.finalizeFunc = hllFinalize,
.pPartialFunc = "_hyperloglog_partial",
.pMergeFunc = "_hyperloglog_merge"
},
{
.name = "_hyperloglog_partial",
.type = FUNCTION_TYPE_HYPERLOGLOG_PARTIAL,
.classification = FUNC_MGT_AGG_FUNC,
.translateFunc = translateHLLPartial,
.getEnvFunc = getHLLFuncEnv,
.initFunc = functionSetup,
.processFunc = hllFunction,
.finalizeFunc = hllPartialFinalize
},
{
.name = "_hyperloglog_merge",
.type = FUNCTION_TYPE_HYPERLOGLOG_MERGE,
.classification = FUNC_MGT_AGG_FUNC,
.translateFunc = translateHLLMerge,
.getEnvFunc = getHLLFuncEnv,
.initFunc = functionSetup,
.processFunc = hllFunctionMerge,
.finalizeFunc = hllFinalize
},
{

View File

@ -3411,7 +3411,6 @@ int32_t histogramFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
}
int32_t histogramPartialFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
SHistoFuncInfo* pInfo = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));
int32_t resultBytes = getHistogramInfoSize();
char *res = taosMemoryCalloc(resultBytes + VARSTR_HEADER_SIZE, sizeof(char));
@ -3428,6 +3427,10 @@ int32_t histogramPartialFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return 1;
}
int32_t getHLLInfoSize() {
return (int32_t)sizeof(SHLLInfo);
}
bool getHLLFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SHLLInfo);
return true;
@ -3553,6 +3556,27 @@ int32_t hllFunction(SqlFunctionCtx *pCtx) {
return TSDB_CODE_SUCCESS;
}
int32_t hllFunctionMerge(SqlFunctionCtx *pCtx) {
SInputColumnInfoData* pInput = &pCtx->input;
SColumnInfoData* pCol = pInput->pData[0];
ASSERT(pCol->info.type == TSDB_DATA_TYPE_BINARY);
SHLLInfo* pInfo = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));
int32_t start = pInput->startRowIndex;
char* data = colDataGetData(pCol, start);
SHLLInfo* pInputInfo = (SHLLInfo *)varDataVal(data);
for (int32_t k = 0; k < HLL_BUCKETS; ++k) {
if (pInfo->buckets[k] < pInputInfo->buckets[k]) {
pInfo->buckets[k] = pInputInfo->buckets[k];
}
}
SET_VAL(GET_RES_INFO(pCtx), 1, 1);
return TSDB_CODE_SUCCESS;
}
int32_t hllFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SResultRowEntryInfo *pInfo = GET_RES_INFO(pCtx);
@ -3565,6 +3589,24 @@ int32_t hllFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return functionFinalize(pCtx, pBlock);
}
int32_t hllPartialFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
SHLLInfo* pInfo = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));
int32_t resultBytes = getHLLInfoSize();
char *res = taosMemoryCalloc(resultBytes + VARSTR_HEADER_SIZE, sizeof(char));
memcpy(varDataVal(res), pInfo, resultBytes);
varDataSetLen(res, resultBytes);
int32_t slotId = pCtx->pExpr->base.resSchema.slotId;
SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);
colDataAppend(pCol, pBlock->info.rows, res, false);
taosMemoryFree(res);
return pResInfo->numOfRes;
}
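
hllPartialFinalize ships the intermediate state upward as a variable-length binary cell: the raw SHLLInfo bytes are copied behind a small length header (VARSTR_HEADER_SIZE), the length is recorded with varDataSetLen, and the buffer is appended to the result column. Below is a stand-alone sketch of that length-prefixed packing, assuming a plain 16-bit header and a made-up State struct purely for illustration.

```c
// Sketch of length-prefixed ("var data") packing of an aggregate's intermediate state.
// The 2-byte header and the State layout are assumptions for illustration.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
  uint64_t totalHits;
  uint8_t  registers[64];
} State;

/* Pack as [uint16 payload length][payload bytes]; the caller frees the result. */
static uint8_t *packState(const State *s, size_t *outLen) {
  uint16_t payload = (uint16_t)sizeof(State);
  uint8_t *buf = calloc(sizeof(uint16_t) + payload, 1);
  if (buf == NULL) return NULL;
  memcpy(buf, &payload, sizeof(uint16_t));     /* length header */
  memcpy(buf + sizeof(uint16_t), s, payload);  /* serialized state */
  *outLen = sizeof(uint16_t) + payload;
  return buf;
}

/* Unpack on the merge side: read the header, then copy the payload back out. */
static void unpackState(const uint8_t *buf, State *out) {
  uint16_t payload = 0;
  memcpy(&payload, buf, sizeof(uint16_t));
  memcpy(out, buf + sizeof(uint16_t), payload < sizeof(State) ? payload : sizeof(State));
}

int main(void) {
  State s = {.totalHits = 42};
  s.registers[0] = 7;

  size_t len = 0;
  uint8_t *blob = packState(&s, &len);
  State round = {0};
  unpackState(blob, &round);
  printf("len=%zu hits=%llu reg0=%u\n", len, (unsigned long long)round.totalHits, (unsigned)round.registers[0]);
  free(blob);
  return 0;
}
```

The declared width of the partial result type therefore has to cover the serialized state plus the header, which is why translateHLLImpl adds VARSTR_HEADER_SIZE to the state size.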
bool getStateFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SStateInfo);
return true;


@ -131,7 +131,7 @@ typedef struct TFileCacheKey {
char* colName;
int32_t nColName;
} ICacheKey;
int indexFlushCacheToTFile(SIndex* sIdx, void*, bool quit);
int idxFlushCacheToTFile(SIndex* sIdx, void*, bool quit);
int64_t indexAddRef(void* p);
int32_t indexRemoveRef(int64_t ref);


@ -455,7 +455,7 @@ static void idxDestroyFinalRslt(SArray* result) {
taosArrayDestroy(result);
}
int indexFlushCacheToTFile(SIndex* sIdx, void* cache, bool quit) {
int idxFlushCacheToTFile(SIndex* sIdx, void* cache, bool quit) {
if (sIdx == NULL) {
return -1;
}


@ -69,7 +69,7 @@ static int32_t (*cacheSearch[][QUERY_MAX])(void* cache, SIndexTerm* ct, SIdxTRsl
cacheSearchRange_JSON}};
static void doMergeWork(SSchedMsg* msg);
static bool indexCacheIteratorNext(Iterate* itera);
static bool idxCacheIteratorNext(Iterate* itera);
static int32_t cacheSearchTerm(void* cache, SIndexTerm* term, SIdxTRslt* tr, STermValueType* s) {
if (cache == NULL) {
@ -476,7 +476,7 @@ Iterate* indexCacheIteratorCreate(IndexCache* cache) {
iiter->val.val = taosArrayInit(1, sizeof(uint64_t));
iiter->val.colVal = NULL;
iiter->iter = tbl != NULL ? tSkipListCreateIter(tbl->mem) : NULL;
iiter->next = indexCacheIteratorNext;
iiter->next = idxCacheIteratorNext;
iiter->getValue = indexCacheIteratorGetValue;
taosThreadMutexUnlock(&cache->mtx);
@ -748,9 +748,9 @@ static void doMergeWork(SSchedMsg* msg) {
int quit = msg->thandle ? true : false;
taosMemoryFree(msg->thandle);
indexFlushCacheToTFile(sidx, pCache, quit);
idxFlushCacheToTFile(sidx, pCache, quit);
}
static bool indexCacheIteratorNext(Iterate* itera) {
static bool idxCacheIteratorNext(Iterate* itera) {
SSkipListIterator* iter = itera->iter;
if (iter == NULL) {
return false;


@ -355,7 +355,7 @@ static int32_t sifDoIndex(SIFParam *left, SIFParam *right, int8_t operType, SIFP
SIndexMultiTermQuery *mtm = indexMultiTermQueryCreate(MUST);
indexMultiTermQueryAdd(mtm, tm, qtype);
ret = tIndexJsonSearch(arg->ivtIdx, mtm, output->result);
ret = indexJsonSearch(arg->ivtIdx, mtm, output->result);
} else {
bool reverse;
Filter filterFunc = sifGetFilterFunc(qtype, &reverse);


@ -15,11 +15,11 @@
#include "index.h"
#include "indexInt.h"
int tIndexJsonOpen(SIndexJsonOpts *opts, const char *path, SIndexJson **index) {
int indexJsonOpen(SIndexJsonOpts *opts, const char *path, SIndexJson **index) {
// handle
return indexOpen(opts, path, index);
}
int tIndexJsonPut(SIndexJson *index, SIndexJsonMultiTerm *terms, uint64_t uid) {
int indexJsonPut(SIndexJson *index, SIndexJsonMultiTerm *terms, uint64_t uid) {
for (int i = 0; i < taosArrayGetSize(terms); i++) {
SIndexJsonTerm *p = taosArrayGetP(terms, i);
if (p->colType == TSDB_DATA_TYPE_BOOL) {
@ -36,7 +36,7 @@ int tIndexJsonPut(SIndexJson *index, SIndexJsonMultiTerm *terms, uint64_t uid) {
return indexPut(index, terms, uid);
}
int tIndexJsonSearch(SIndexJson *index, SIndexJsonMultiTermQuery *tq, SArray *result) {
int indexJsonSearch(SIndexJson *index, SIndexJsonMultiTermQuery *tq, SArray *result) {
SArray *terms = tq->query;
for (int i = 0; i < taosArrayGetSize(terms); i++) {
SIndexJsonTerm *p = taosArrayGetP(terms, i);
@ -54,7 +54,7 @@ int tIndexJsonSearch(SIndexJson *index, SIndexJsonMultiTermQuery *tq, SArray *re
return indexSearch(index, tq, result);
}
void tIndexJsonClose(SIndexJson *index) {
void indexJsonClose(SIndexJson *index) {
// handle close
return indexClose(index);
}


@ -56,11 +56,11 @@ class JsonEnv : public ::testing::Test {
initLog();
opts = indexOptsCreate();
int ret = tIndexJsonOpen(opts, dir.c_str(), &index);
int ret = indexJsonOpen(opts, dir.c_str(), &index);
assert(ret == 0);
}
virtual void TearDown() {
tIndexJsonClose(index);
indexJsonClose(index);
indexOptsDestroy(opts);
printf("destory\n");
taosMsleep(1000);
@ -75,7 +75,7 @@ static void WriteData(SIndexJson* index, const std::string& colName, int8_t dtyp
(const char*)data, dlen);
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, (int64_t)tableId);
indexJsonPut(index, terms, (int64_t)tableId);
indexMultiTermDestroy(terms);
}
@ -86,7 +86,7 @@ static void delData(SIndexJson* index, const std::string& colName, int8_t dtype,
(const char*)data, dlen);
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, (int64_t)tableId);
indexJsonPut(index, terms, (int64_t)tableId);
indexMultiTermDestroy(terms);
}
@ -99,7 +99,7 @@ static void Search(SIndexJson* index, const std::string& colNam, int8_t dtype, v
SArray* res = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, (EIndexQueryType)filterType);
tIndexJsonSearch(index, mq, res);
indexJsonSearch(index, mq, res);
indexMultiTermQueryDestroy(mq);
*result = res;
}
@ -112,7 +112,7 @@ TEST_F(JsonEnv, testWrite) {
colVal.c_str(), colVal.size());
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -125,7 +125,7 @@ TEST_F(JsonEnv, testWrite) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -138,7 +138,7 @@ TEST_F(JsonEnv, testWrite) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -152,7 +152,7 @@ TEST_F(JsonEnv, testWrite) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_TERM);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(100, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -167,7 +167,7 @@ TEST_F(JsonEnv, testWriteMillonData) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -182,7 +182,7 @@ TEST_F(JsonEnv, testWriteMillonData) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -196,7 +196,7 @@ TEST_F(JsonEnv, testWriteMillonData) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -210,7 +210,7 @@ TEST_F(JsonEnv, testWriteMillonData) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_TERM);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(10, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -225,7 +225,7 @@ TEST_F(JsonEnv, testWriteMillonData) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_GREATER_THAN);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(0, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -240,7 +240,7 @@ TEST_F(JsonEnv, testWriteMillonData) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_GREATER_EQUAL);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(10, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -258,7 +258,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -271,7 +271,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -284,7 +284,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -297,7 +297,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -310,7 +310,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_TERM);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(1000, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -324,7 +324,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_GREATER_THAN);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(0, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -339,7 +339,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_GREATER_EQUAL);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(1000, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -354,7 +354,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_LESS_THAN);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(0, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -369,7 +369,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_LESS_EQUAL);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(1000, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -385,7 +385,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -398,7 +398,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -412,7 +412,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_TERM);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(1000, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -426,7 +426,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_GREATER_THAN);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(0, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -441,7 +441,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_GREATER_EQUAL);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(1000, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -455,7 +455,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_GREATER_THAN);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(0, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -469,7 +469,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_LESS_EQUAL);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(1000, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -483,7 +483,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i);
indexJsonPut(index, terms, i);
indexMultiTermDestroy(terms);
}
}
@ -498,7 +498,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_LESS_THAN);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(0, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}
@ -511,7 +511,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SIndexMultiTerm* terms = indexMultiTermCreate();
indexMultiTermAdd(terms, term);
tIndexJsonPut(index, terms, i + 1000);
indexJsonPut(index, terms, i + 1000);
indexMultiTermDestroy(terms);
}
}
@ -526,7 +526,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {
SArray* result = taosArrayInit(1, sizeof(uint64_t));
indexMultiTermQueryAdd(mq, q, QUERY_GREATER_EQUAL);
tIndexJsonSearch(index, mq, result);
indexJsonSearch(index, mq, result);
EXPECT_EQ(2000, taosArrayGetSize(result));
indexMultiTermQueryDestroy(mq);
}


@ -1186,6 +1186,7 @@ static int32_t createExchangePhysiNodeByMerge(SMergePhysiNode* pMerge) {
return TSDB_CODE_OUT_OF_MEMORY;
}
pExchange->srcGroupId = pMerge->srcGroupId;
pExchange->singleChannel = true;
pExchange->node.pParent = (SPhysiNode*)pMerge;
pExchange->node.pOutputDataBlockDesc = nodesCloneNode(pMerge->node.pOutputDataBlockDesc);
if (NULL == pExchange->node.pOutputDataBlockDesc) {


@ -348,7 +348,7 @@ static FORCE_INLINE void varToNchar(char* buf, SScalarParam* pOut, int32_t rowIn
int32_t outputMaxLen = (inputLen + 1) * TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE;
char* t = taosMemoryCalloc(1, outputMaxLen);
/*int32_t resLen = */taosMbsToUcs4(varDataVal(buf), inputLen, (TdUcs4*) varDataVal(t), outputMaxLen, &len);
/*int32_t resLen = */taosMbsToUcs4(varDataVal(buf), inputLen, (TdUcs4*) varDataVal(t), outputMaxLen - VARSTR_HEADER_SIZE, &len);
varDataSetLen(t, len);
colDataAppend(pOut->columnData, rowIndex, t, false);
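
This one-line change fixes the capacity handed to taosMbsToUcs4: the destination pointer already starts past the var-string header (varDataVal(t)), so the writable space is outputMaxLen minus VARSTR_HEADER_SIZE, not the full allocation. The same accounting in plain C, under the assumption of a 2-byte header and an invented input string:

```c
// Capacity accounting when a payload is written after a fixed header: the writer
// starts past the header, so it gets the allocation minus the header, never the
// whole allocation. The 2-byte header and the input string are illustrative.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HEADER_SIZE 2 /* stand-in for a var-string length header */

int main(void) {
  const char *input = "hello, tdengine";
  size_t inputLen = strlen(input);

  /* Room for the header plus the payload and its terminator. */
  size_t total = HEADER_SIZE + inputLen + 1;
  char *buf = calloc(total, 1);
  if (buf == NULL) return 1;

  /* Correct capacity: total - HEADER_SIZE. Passing `total` here would let the
     conversion write past the end of the allocation by HEADER_SIZE bytes. */
  int written = snprintf(buf + HEADER_SIZE, total - HEADER_SIZE, "%s", input);

  uint16_t len = (uint16_t)written;
  memcpy(buf, &len, sizeof(uint16_t)); /* record the payload length in the header */

  printf("stored %u payload bytes after a %d-byte header\n", (unsigned)len, HEADER_SIZE);
  free(buf);
  return 0;
}
```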


@ -307,6 +307,13 @@ static void uvHandleReq(SSvrConn* pConn) {
if (pHead->noResp == 1) {
transMsg.info.refId = -1;
}
// set up conn info
SRpcConnInfo* pConnInfo = &(transMsg.info.connInfo);
pConnInfo->clientIp = (uint32_t)(pConn->addr.sin_addr.s_addr);
pConnInfo->clientPort = ntohs(pConn->addr.sin_port);
tstrncpy(pConnInfo->user, pConn->user, sizeof(pConnInfo->user));
transReleaseExHandle(refMgt, pConn->refId);
STrans* pTransInst = pConn->pTransInst;
@ -1153,23 +1160,6 @@ _return2:
rpcFreeCont(msg->pCont);
}
int transGetConnInfo(void* thandle, STransHandleInfo* pInfo) {
if (thandle == NULL) {
tTrace("invalid handle %p, failed to Get Conn info", thandle);
return -1;
}
SExHandle* ex = thandle;
SSvrConn* pConn = ex->handle;
if (pConn == NULL) {
tTrace("invalid handle %p, failed to Get Conn info", thandle);
return -1;
}
struct sockaddr_in addr = pConn->addr;
pInfo->clientIp = (uint32_t)(addr.sin_addr.s_addr);
pInfo->clientPort = ntohs(addr.sin_port);
tstrncpy(pInfo->user, pConn->user, sizeof(pInfo->user));
return 0;
}
int transGetConnInfo(void* thandle, STransHandleInfo* pConnInfo) { return -1; }
#endif
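
With this change the server captures the peer's identity at the moment a request is handled: uvHandleReq copies the client IP and port out of the connection's sockaddr_in into transMsg.info.connInfo (using ntohs for the port) along with the user name, so the information travels with the message and the old handle-based transGetConnInfo is reduced to a stub. A small POSIX sketch of that extraction step, with a hypothetical PeerInfo struct standing in for SRpcConnInfo and an invented address:

```c
// Sketch: copy the peer IP/port out of a sockaddr_in at request-handling time.
// PeerInfo is a hypothetical stand-in for SRpcConnInfo; the address is invented.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
  uint32_t clientIp;   /* IPv4 address, network byte order */
  uint16_t clientPort; /* port, host byte order */
  char     user[24];
} PeerInfo;

static void fillPeerInfo(PeerInfo *out, const struct sockaddr_in *addr, const char *user) {
  out->clientIp = (uint32_t)addr->sin_addr.s_addr;
  out->clientPort = ntohs(addr->sin_port);
  snprintf(out->user, sizeof(out->user), "%s", user);
}

int main(void) {
  struct sockaddr_in addr = {0};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(6030);
  inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);

  PeerInfo info = {0};
  fillPeerInfo(&info, &addr, "root");

  char ip[INET_ADDRSTRLEN] = {0};
  inet_ntop(AF_INET, &addr.sin_addr, ip, sizeof(ip));
  printf("client %s:%u user %s\n", ip, (unsigned)info.clientPort, info.user);
  return 0;
}
```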


@ -28,9 +28,9 @@
#ifdef WINDOWS
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
//#define TM_YEAR_BASE 1970 //origin
#define TM_YEAR_BASE 1900 // slguan
/*
@ -39,18 +39,18 @@
*/
#define ALT_E 0x01
#define ALT_O 0x02
#define LEGAL_ALT(x) { if (alt_format & ~(x)) return (0); }
#define LEGAL_ALT(x) \
{ \
if (alt_format & ~(x)) return (0); \
}
static int conv_num(const char **buf, int *dest, int llim, int ulim)
{
static int conv_num(const char **buf, int *dest, int llim, int ulim) {
int result = 0;
/* The limit also determines the number of valid digits. */
int rulim = ulim;
if (**buf < '0' || **buf > '9')
return (0);
if (**buf < '0' || **buf > '9') return (0);
do {
result *= 10;
@ -58,62 +58,18 @@ static int conv_num(const char **buf, int *dest, int llim, int ulim)
rulim /= 10;
} while ((result * 10 <= ulim) && rulim && **buf >= '0' && **buf <= '9');
if (result < llim || result > ulim)
return (0);
if (result < llim || result > ulim) return (0);
*dest = result;
return (1);
}
static const char *day[7] = {
"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday",
"Friday", "Saturday"
};
static const char *abday[7] = {
"Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"
};
static const char *mon[12] = {
"January", "February", "March", "April", "May", "June", "July",
"August", "September", "October", "November", "December"
};
static const char *abmon[12] = {
"Jan", "Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
};
static const char *am_pm[2] = {
"AM", "PM"
};
#define BILLION (1E9)
static BOOL g_first_time = 1;
static LARGE_INTEGER g_counts_per_sec;
int clock_gettime(int dummy, struct timespec *ct)
{
LARGE_INTEGER count;
if (g_first_time)
{
g_first_time = 0;
if (0 == QueryPerformanceFrequency(&g_counts_per_sec))
{
g_counts_per_sec.QuadPart = 0;
}
}
if ((NULL == ct) || (g_counts_per_sec.QuadPart <= 0) ||
(0 == QueryPerformanceCounter(&count)))
{
return -1;
}
ct->tv_sec = count.QuadPart / g_counts_per_sec.QuadPart;
ct->tv_nsec = ((count.QuadPart % g_counts_per_sec.QuadPart) * BILLION) / g_counts_per_sec.QuadPart;
return 0;
}
static const char *day[7] = {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"};
static const char *abday[7] = {"Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"};
static const char *mon[12] = {"January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December"};
static const char *abmon[12] = {"Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"};
static const char *am_pm[2] = {"AM", "PM"};
#else
#include <sys/time.h>
@ -134,22 +90,19 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
/* Eat up white-space. */
if (isspace(c)) {
while (isspace(*bp))
bp++;
while (isspace(*bp)) bp++;
fmt++;
continue;
}
if ((c = *fmt++) != '%')
goto literal;
if ((c = *fmt++) != '%') goto literal;
again: switch (c = *fmt++) {
again:
switch (c = *fmt++) {
case '%': /* "%%" is converted to "%". */
literal:
if (c != *bp++)
return (0);
if (c != *bp++) return (0);
break;
/*
@ -171,44 +124,37 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
*/
case 'c': /* Date and time, using the locale's format. */
LEGAL_ALT(ALT_E);
if (!(bp = taosStrpTime(bp, "%x %X", tm)))
return (0);
if (!(bp = taosStrpTime(bp, "%x %X", tm))) return (0);
break;
case 'D': /* The date as "%m/%d/%y". */
LEGAL_ALT(0);
if (!(bp = taosStrpTime(bp, "%m/%d/%y", tm)))
return (0);
if (!(bp = taosStrpTime(bp, "%m/%d/%y", tm))) return (0);
break;
case 'R': /* The time as "%H:%M". */
LEGAL_ALT(0);
if (!(bp = taosStrpTime(bp, "%H:%M", tm)))
return (0);
if (!(bp = taosStrpTime(bp, "%H:%M", tm))) return (0);
break;
case 'r': /* The time in 12-hour clock representation. */
LEGAL_ALT(0);
if (!(bp = taosStrpTime(bp, "%I:%M:%S %p", tm)))
return (0);
if (!(bp = taosStrpTime(bp, "%I:%M:%S %p", tm))) return (0);
break;
case 'T': /* The time as "%H:%M:%S". */
LEGAL_ALT(0);
if (!(bp = taosStrpTime(bp, "%H:%M:%S", tm)))
return (0);
if (!(bp = taosStrpTime(bp, "%H:%M:%S", tm))) return (0);
break;
case 'X': /* The time, using the locale's format. */
LEGAL_ALT(ALT_E);
if (!(bp = taosStrpTime(bp, "%H:%M:%S", tm)))
return (0);
if (!(bp = taosStrpTime(bp, "%H:%M:%S", tm))) return (0);
break;
case 'x': /* The date, using the locale's format. */
LEGAL_ALT(ALT_E);
if (!(bp = taosStrpTime(bp, "%m/%d/%y", tm)))
return (0);
if (!(bp = taosStrpTime(bp, "%m/%d/%y", tm))) return (0);
break;
/*
@ -220,18 +166,15 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
for (i = 0; i < 7; i++) {
/* Full name. */
len = strlen(day[i]);
if (strncmp(day[i], bp, len) == 0)
break;
if (strncmp(day[i], bp, len) == 0) break;
/* Abbreviated name. */
len = strlen(abday[i]);
if (strncmp(abday[i], bp, len) == 0)
break;
if (strncmp(abday[i], bp, len) == 0) break;
}
/* Nothing matched. */
if (i == 7)
return (0);
if (i == 7) return (0);
tm->tm_wday = i;
bp += len;
@ -244,18 +187,15 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
for (i = 0; i < 12; i++) {
/* Full name. */
len = strlen(mon[i]);
if (strncmp(mon[i], bp, len) == 0)
break;
if (strncmp(mon[i], bp, len) == 0) break;
/* Abbreviated name. */
len = strlen(abmon[i]);
if (strncmp(abmon[i], bp, len) == 0)
break;
if (strncmp(abmon[i], bp, len) == 0) break;
}
/* Nothing matched. */
if (i == 12)
return (0);
if (i == 12) return (0);
tm->tm_mon = i;
bp += len;
@ -263,13 +203,11 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
case 'C': /* The century number. */
LEGAL_ALT(ALT_E);
if (!(conv_num(&bp, &i, 0, 99)))
return (0);
if (!(conv_num(&bp, &i, 0, 99))) return (0);
if (split_year) {
tm->tm_year = (tm->tm_year % 100) + (i * 100);
}
else {
} else {
tm->tm_year = i * 100;
split_year = 1;
}
@ -278,8 +216,7 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
case 'd': /* The day of month. */
case 'e':
LEGAL_ALT(ALT_O);
if (!(conv_num(&bp, &tm->tm_mday, 1, 31)))
return (0);
if (!(conv_num(&bp, &tm->tm_mday, 1, 31))) return (0);
break;
case 'k': /* The hour (24-hour clock representation). */
@ -287,8 +224,7 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
/* FALLTHROUGH */
case 'H':
LEGAL_ALT(ALT_O);
if (!(conv_num(&bp, &tm->tm_hour, 0, 23)))
return (0);
if (!(conv_num(&bp, &tm->tm_hour, 0, 23))) return (0);
break;
case 'l': /* The hour (12-hour clock representation). */
@ -296,29 +232,24 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
/* FALLTHROUGH */
case 'I':
LEGAL_ALT(ALT_O);
if (!(conv_num(&bp, &tm->tm_hour, 1, 12)))
return (0);
if (tm->tm_hour == 12)
tm->tm_hour = 0;
if (!(conv_num(&bp, &tm->tm_hour, 1, 12))) return (0);
if (tm->tm_hour == 12) tm->tm_hour = 0;
break;
case 'j': /* The day of year. */
LEGAL_ALT(0);
if (!(conv_num(&bp, &i, 1, 366)))
return (0);
if (!(conv_num(&bp, &i, 1, 366))) return (0);
tm->tm_yday = i - 1;
break;
case 'M': /* The minute. */
LEGAL_ALT(ALT_O);
if (!(conv_num(&bp, &tm->tm_min, 0, 59)))
return (0);
if (!(conv_num(&bp, &tm->tm_min, 0, 59))) return (0);
break;
case 'm': /* The month. */
LEGAL_ALT(ALT_O);
if (!(conv_num(&bp, &i, 1, 12)))
return (0);
if (!(conv_num(&bp, &i, 1, 12))) return (0);
tm->tm_mon = i - 1;
break;
@ -326,16 +257,14 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
LEGAL_ALT(0);
/* AM? */
if (strcmp(am_pm[0], bp) == 0) {
if (tm->tm_hour > 11)
return (0);
if (tm->tm_hour > 11) return (0);
bp += strlen(am_pm[0]);
break;
}
/* PM? */
else if (strcmp(am_pm[1], bp) == 0) {
if (tm->tm_hour > 11)
return (0);
if (tm->tm_hour > 11) return (0);
tm->tm_hour += 12;
bp += strlen(am_pm[1]);
@ -347,8 +276,7 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
case 'S': /* The seconds. */
LEGAL_ALT(ALT_O);
if (!(conv_num(&bp, &tm->tm_sec, 0, 61)))
return (0);
if (!(conv_num(&bp, &tm->tm_sec, 0, 61))) return (0);
break;
case 'U': /* The week of year, beginning on sunday. */
@ -360,28 +288,24 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
* point to calculate a real value, so just check the
* range for now.
*/
if (!(conv_num(&bp, &i, 0, 53)))
return (0);
if (!(conv_num(&bp, &i, 0, 53))) return (0);
break;
case 'w': /* The day of week, beginning on sunday. */
LEGAL_ALT(ALT_O);
if (!(conv_num(&bp, &tm->tm_wday, 0, 6)))
return (0);
if (!(conv_num(&bp, &tm->tm_wday, 0, 6))) return (0);
break;
case 'Y': /* The year. */
LEGAL_ALT(ALT_E);
if (!(conv_num(&bp, &i, 0, 9999)))
return (0);
if (!(conv_num(&bp, &i, 0, 9999))) return (0);
tm->tm_year = i - TM_YEAR_BASE;
break;
case 'y': /* The year within 100 years of the epoch. */
LEGAL_ALT(ALT_E | ALT_O);
if (!(conv_num(&bp, &i, 0, 99)))
return (0);
if (!(conv_num(&bp, &i, 0, 99))) return (0);
if (split_year) {
tm->tm_year = ((tm->tm_year / 100) * 100) + i;
@ -400,16 +324,12 @@ char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm) {
case 'n': /* Any kind of white-space. */
case 't':
LEGAL_ALT(0);
while (isspace(*bp))
bp++;
while (isspace(*bp)) bp++;
break;
default: /* Unknown/unsupported conversion. */
return (0);
}
}
/* LINTED functional specification */
@ -435,13 +355,9 @@ FORCE_INLINE int32_t taosGetTimeOfDay(struct timeval *tv) {
#endif
}
time_t taosTime(time_t *t) {
return time(t);
}
time_t taosTime(time_t *t) { return time(t); }
time_t taosMktime(struct tm *timep) {
return mktime(timep);
}
time_t taosMktime(struct tm *timep) { return mktime(timep); }
struct tm *taosLocalTime(const time_t *timep, struct tm *result) {
if (result == NULL) {
@ -456,5 +372,36 @@ struct tm *taosLocalTime(const time_t *timep, struct tm *result) {
}
int32_t taosGetTimestampSec() { return (int32_t)time(NULL); }
int32_t taosClockGetTime(int clock_id, struct timespec *pTS) {
#ifdef WINDOWS
LARGE_INTEGER t;
FILETIME f;
static FILETIME ff;
static SYSTEMTIME ss;
static LARGE_INTEGER offset;
int32_t taosClockGetTime(int clock_id, struct timespec *pTS) { return clock_gettime(clock_id, pTS); }
ss.wYear = 1970;
ss.wMonth = 1;
ss.wDay = 1;
ss.wHour = 0;
ss.wMinute = 0;
ss.wSecond = 0;
ss.wMilliseconds = 0;
SystemTimeToFileTime(&ss, &ff);
offset.QuadPart = ff.dwHighDateTime;
offset.QuadPart <<= 32;
offset.QuadPart |= ff.dwLowDateTime;
GetSystemTimeAsFileTime(&f);
t.QuadPart = f.dwHighDateTime;
t.QuadPart <<= 32;
t.QuadPart |= f.dwLowDateTime;
t.QuadPart -= offset.QuadPart;
pTS->tv_sec = t.QuadPart / 10000000;
pTS->tv_nsec = (t.QuadPart % 10000000)*100;
return (0);
#else
return clock_gettime(clock_id, pTS);
#endif
}
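
The Windows branch of taosClockGetTime reads the wall clock with GetSystemTimeAsFileTime; a FILETIME counts 100-nanosecond ticks, so the code subtracts the tick count of 1970-01-01 (obtained via SystemTimeToFileTime) and then splits the remainder into whole seconds (divide by 10,000,000) and nanoseconds (remainder times 100). The arithmetic in isolation, as a portable sketch with an invented tick value:

```c
// Sketch: convert a count of 100-nanosecond ticks since the Unix epoch into a
// struct timespec, mirroring the divide/modulo step in taosClockGetTime.
// The tick value below is invented for the example.
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define TICKS_PER_SECOND 10000000LL /* 100 ns ticks per second */

static void ticksToTimespec(int64_t ticksSinceEpoch, struct timespec *ts) {
  ts->tv_sec = (time_t)(ticksSinceEpoch / TICKS_PER_SECOND);
  ts->tv_nsec = (long)((ticksSinceEpoch % TICKS_PER_SECOND) * 100);
}

int main(void) {
  /* 1,700,000,000 seconds plus 1,234,567 ticks (123,456,700 ns) past the epoch. */
  int64_t ticks = 1700000000LL * TICKS_PER_SECOND + 1234567LL;

  struct timespec ts;
  ticksToTimespec(ticks, &ts);
  printf("sec=%lld nsec=%ld\n", (long long)ts.tv_sec, ts.tv_nsec); /* sec=1700000000 nsec=123456700 */
  return 0;
}
```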


@ -574,6 +574,9 @@ class TDDnodes:
def stopAll(self):
tdLog.info("stop all dnodes")
if (not self.dnodes[0].remoteIP == ""):
self.dnodes[0].remoteExec(self.dnodes[0].cfgDict, "for i in range(len(tdDnodes.dnodes)):\n tdDnodes.dnodes[i].running=1\ntdDnodes.stopAll()")
return
for i in range(len(self.dnodes)):
self.dnodes[i].stop()


@ -23,7 +23,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -37,7 +37,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -9,8 +9,9 @@ from util.sql import *
from util.cases import *
from util.dnodes import *
import subprocess
# import win32gui
# import threading
if (platform.system().lower() == 'windows'):
import win32gui
import threading
class TDTestCase:
@ -535,17 +536,18 @@ class TDTestCase:
return udf1_sqls ,udf2_sqls
# def checkRunTimeError(self):
# while 1:
# time.sleep(1)
# hwnd = win32gui.FindWindow(None, "Microsoft Visual C++ Runtime Library")
# if hwnd:
# os.system("TASKKILL /F /IM udfd.exe")
def checkRunTimeError(self):
if (platform.system().lower() == 'windows' and tdDnodes.dnodes[0].remoteIP == ""):
while 1:
time.sleep(1)
hwnd = win32gui.FindWindow(None, "Microsoft Visual C++ Runtime Library")
if hwnd:
os.system("TASKKILL /F /IM udfd.exe")
def unexpected_create(self):
# if (platform.system().lower() == 'windows' and tdDnodes.dnodes[0].remoteIP == ""):
# checkErrorThread = threading.Thread(target=self.checkRunTimeError,daemon=True)
# checkErrorThread.start()
if (platform.system().lower() == 'windows' and tdDnodes.dnodes[0].remoteIP == ""):
checkErrorThread = threading.Thread(target=self.checkRunTimeError,daemon=True)
checkErrorThread.start()
tdLog.info(" create function with out bufsize ")
tdSql.query("drop function udf1 ")


@ -3,6 +3,7 @@ import taos
import time
import inspect
import traceback
import socket
from dataclasses import dataclass
from util.log import *
@ -102,7 +103,7 @@ class TDconnect:
def taos_connect(
host = "127.0.0.1",
host = socket.gethostname(),
port = 6030,
user = "root",
passwd = "taosdata",


@ -54,7 +54,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]


@ -52,7 +52,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]


@ -49,7 +49,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]


@ -49,7 +49,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]


@ -49,7 +49,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]


@ -400,7 +400,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -3,7 +3,10 @@ from util.log import *
from util.sql import *
from util.cases import *
import platform
import os
if platform.system().lower() == 'windows':
import tzlocal
class TDTestCase:
@ -15,8 +18,12 @@ class TDTestCase:
def run(self): # sourcery skip: extract-duplicate-method
tdSql.prepare()
# get system timezone
time_zone_arr = os.popen('timedatectl | grep zone').read(
).strip().split(':')
if platform.system().lower() == 'windows':
time_zone_1 = tzlocal.get_localzone_name()
time_zone_2 = time.strftime('(UTC, %z)')
time_zone = time_zone_1 + " " + time_zone_2
else:
time_zone_arr = os.popen('timedatectl | grep zone').read().strip().split(':')
if len(time_zone_arr) > 1:
time_zone = time_zone_arr[1].lstrip()
else:


@ -13,6 +13,12 @@ from util.dnodes import *
class TDTestCase:
hostname = socket.gethostname()
if (platform.system().lower() == 'windows' and not tdDnodes.dnodes[0].remoteIP == ""):
try:
config = eval(tdDnodes.dnodes[0].remoteIP)
hostname = config["host"]
except Exception:
hostname = tdDnodes.dnodes[0].remoteIP
#rpcDebugFlagVal = '143'
#clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
#clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
@ -34,7 +40,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
@ -192,6 +198,9 @@ class TDTestCase:
shellCmd = 'nohup ' + buildPath + '/build/bin/tmq_sim -c ' + cfgPath
shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, parameterDict["dbName"], showMsg, showRow, cdbName)
if (platform.system().lower() == 'windows'):
shellCmd += "> nul 2>&1 &"
else:
shellCmd += "> /dev/null 2>&1 &"
tdLog.info(shellCmd)
os.system(shellCmd)
@ -306,6 +315,9 @@ class TDTestCase:
shellCmd = 'nohup ' + buildPath + '/build/bin/tmq_sim -c ' + cfgPath
shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, parameterDict["dbName"], showMsg, showRow, cdbName)
if (platform.system().lower() == 'windows'):
shellCmd += "> nul 2>&1 &"
else:
shellCmd += "> /dev/null 2>&1 &"
tdLog.info(shellCmd)
os.system(shellCmd)
@ -438,6 +450,9 @@ class TDTestCase:
shellCmd = 'nohup ' + buildPath + '/build/bin/tmq_sim -c ' + cfgPath
shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, parameterDict["dbName"], showMsg, showRow, cdbName)
if (platform.system().lower() == 'windows'):
shellCmd += "> nul 2>&1 &"
else:
shellCmd += "> /dev/null 2>&1 &"
tdLog.info(shellCmd)
os.system(shellCmd)


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -34,7 +34,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -34,7 +34,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -34,7 +34,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -34,7 +34,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -33,7 +33,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -34,7 +34,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -34,7 +34,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -41,7 +41,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]


@ -3,7 +3,7 @@ python3 .\test.py -f 0-others\taosShell.py
python3 .\test.py -f 0-others\taosShellError.py
python3 .\test.py -f 0-others\taosShellNetChk.py
python3 .\test.py -f 0-others\telemetry.py
python3 .\test.py -f 0-others\taosdMonitor.py
@REM python3 .\test.py -f 0-others\taosdMonitor.py
python3 .\test.py -f 0-others\udfTest.py
python3 .\test.py -f 0-others\udf_create.py
python3 .\test.py -f 0-others\udf_restart_taosd.py


@ -20,6 +20,7 @@ import time
import base64
import json
import platform
import socket
from distutils.log import warn as printf
from fabric2 import Connection
sys.path.append("../pytest")
@ -149,7 +150,7 @@ if __name__ == "__main__":
tdLog.info('stop All dnodes')
if masterIp == "":
host = '127.0.0.1'
host = socket.gethostname()
else:
try:
config = eval(masterIp)
@ -170,8 +171,8 @@ if __name__ == "__main__":
try:
if key_word in open(fileName, encoding='UTF-8').read():
is_test_framework = 1
except:
pass
except Exception as r:
print(r)
updateCfgDictStr = ''
if is_test_framework:
moduleName = fileName.replace(".py", "").replace(os.sep, ".")
@ -181,8 +182,8 @@ if __name__ == "__main__":
if ((json.dumps(updateCfgDict) == '{}') and (ucase.updatecfgDict is not None)):
updateCfgDict = ucase.updatecfgDict
updateCfgDictStr = "-d %s"%base64.b64encode(json.dumps(updateCfgDict).encode()).decode()
except :
pass
except Exception as r:
print(r)
else:
pass
tdDnodes.deploy(1,updateCfgDict)