[td-4385] merge develop.
commit be8853ffee

@@ -24,7 +24,7 @@ Installing TDengine is very simple: from download to successful installation, it takes only a few seconds.
## <a class="anchor" id="start"></a>Easy to Launch

After a successful installation, you can start the TDengine service process with the `systemctl` command.

```bash
$ systemctl start taosd

@@ -35,21 +35,22 @@ $ systemctl start taosd
$ systemctl status taosd
```

If the TDengine service is working properly, you can access and experience TDengine through its command-line program `taos`.

**Note:**

- The systemctl command requires _root_ privileges to run; if you are not the _root_ user, please prefix the command with sudo.
- To gather usage feedback and improve the product, TDengine collects basic usage information. You can turn this off by setting the configuration parameter telemetryReporting in the system configuration file taos.cfg to 0.
- TDengine uses the FQDN (usually the hostname) as a node's ID. To keep it running properly, configure the hostname on the server that runs taosd, and configure the DNS service or the hosts file on the machines that run client applications, so that the FQDN can be resolved.
- The `systemctl stop taosd` command does not stop the TDengine service immediately; it waits until the necessary flush-to-disk work has completed, which can take quite a while when the data volume is large.

* TDengine can be installed on Linux systems that use [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process/service management. Use the `which systemctl` command to check whether the `systemd` package is present on the system:

```bash
$ which systemctl
```

If the system does not support systemd, the TDengine service can also be started by manually running /usr/local/taos/bin/taosd.

## <a class="anchor" id="console"></a>TDengine Command Line Interface
@@ -129,7 +129,7 @@ taosd -C
- blocks: the number of cache-sized memory blocks in each VNODE (TSDB). The memory used by one VNODE is therefore roughly (cache * blocks). Unit: blocks. Default: 4. (Can be changed with alter database)
- replica: the number of replicas, valid range 1-3. Unit: count. Default: 1. (Can be changed with alter database)
- precision: the timestamp precision flag; ms means millisecond and us means microsecond. Default: ms.
- cacheLast: whether to cache the last_row of sub-tables in memory; 0: off, 1: on. Default: 0. (Can be changed with alter database) (Supported since version 2.0.11)
- cacheLast: whether to cache the most recent data of sub-tables in memory; 0: off; 1: cache the last row of each sub-table; 2: cache the latest non-NULL value of each column of each sub-table; 3: enable both the last-row and last-column caches. Default: 0. (Can be changed with alter database) (Supported since version 2.0.11)

In a single application scenario, data with several different characteristics may coexist. The best design is to put tables with the same data characteristics into the same database, so one application may use multiple databases, each configured with different storage parameters, which keeps the system performing optimally. TDengine lets an application specify the storage parameters above when a database is created; if specified, they override the corresponding system configuration parameters. For example, consider the following SQL:
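The example statement itself lies outside this hunk. As a minimal sketch, assuming a hypothetical database named demo and illustrative values for the parameters listed above, it could look like this:

```mysql
-- per-database storage settings that override the matching taos.cfg defaults
CREATE DATABASE demo REPLICA 1 BLOCKS 6 PRECISION 'us' CACHELAST 1;
```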
@@ -26,7 +26,7 @@
| TSDB_CODE_COM_OUT_OF_MEMORY | 0 | 0x0102 | "Out of memory" | -2147483390 |
| TSDB_CODE_COM_INVALID_CFG_MSG | 0 | 0x0103 | "Invalid config message" | -2147483389 |
| TSDB_CODE_COM_FILE_CORRUPTED | 0 | 0x0104 | "Data file corrupted" | -2147483388 |
| TSDB_CODE_TSC_INVALID_SQL | 0 | 0x0200 | "Invalid SQL statement" | -2147483136 |
| TSDB_CODE_TSC_INVALID_OPERATION | 0 | 0x0200 | "Invalid SQL statement" | -2147483136 |
| TSDB_CODE_TSC_INVALID_QHANDLE | 0 | 0x0201 | "Invalid qhandle" | -2147483135 |
| TSDB_CODE_TSC_INVALID_TIME_STAMP | 0 | 0x0202 | "Invalid combination of client/service time" | -2147483134 |
| TSDB_CODE_TSC_INVALID_VALUE | 0 | 0x0203 | "Invalid value in client" | -2147483133 |
@@ -76,7 +76,7 @@ The default timestamp precision of TDengine is millisecond, but it can be changed by modifying the configuration parameter enableM

4) The maximum length of a single SQL statement is 65480 characters;

5) There are more storage-related database configuration parameters; see the [server-side configuration](https://www.taosdata.com/cn/documentation/taos-sql#management) section.
5) There are more storage-related database configuration parameters; see the [server-side configuration](https://www.taosdata.com/cn/documentation/administrator#config) section.

- **Show current system parameters**

@@ -126,7 +126,7 @@ The default timestamp precision of TDengine is millisecond, but it can be changed by modifying the configuration parameter enableM
```mysql
ALTER DATABASE db_name CACHELAST 0;
```
The CACHELAST parameter controls whether the last_row of data sub-tables is cached in memory. The default value is 0 and the valid range is [0, 1], where 0 means disabled and 1 means enabled. (Supported since version 2.0.11; a server restart is required for changes to take effect.)
The CACHELAST parameter controls whether the last_row of data sub-tables is cached in memory. The default value is 0 and the valid range is [0, 1], where 0 means disabled and 1 means enabled. (Supported since version 2.0.11.0. Since version 2.1.1.0, changes to this parameter take effect without restarting the server.)

**Tips**: After changing any of the parameters above, you can run show databases to confirm whether the change took effect.
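As a minimal illustration of that tip (db_name is a placeholder, and the exact column layout of the show databases output may vary by version):

```mysql
ALTER DATABASE db_name CACHELAST 1;
-- the corresponding column of the output should now reflect the new value
SHOW DATABASES;
```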
@@ -400,6 +400,11 @@ The default timestamp precision of TDengine is millisecond, but it can be changed by modifying the configuration parameter enableM
tb2_name (tb2_field1_name, ...) [USING stb2_name TAGS (tag_value2, ...)] VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Using automatic table creation, this inserts multiple records by column into tables tb1_name and tb2_name at the same time.
Since version 2.0.20.5, the column names of a sub-table no longer have to follow the sub-table name; they may instead be placed between TAGS and VALUES, for example:
```mysql
INSERT INTO tb1_name [USING stb1_name TAGS (tag_value1, ...)] (tb1_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Note: although both forms work, they cannot be mixed within a single SQL statement, otherwise a syntax error is reported.
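For illustration only, assuming a hypothetical super table meters with tags (location, groupId) and a child table d1001 with columns (ts, current), the two equivalent forms might look like this:

```mysql
-- original form: the column list follows the child-table name
INSERT INTO d1001 (ts, current) USING meters TAGS ('Beijing', 2) VALUES (now, 10.2);

-- form allowed since 2.0.20.5: the column list sits between TAGS and VALUES
INSERT INTO d1001 USING meters TAGS ('Beijing', 2) (ts, current) VALUES (now, 10.2);
```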
**Writing historical records**: you can use either the IMPORT or the INSERT command; the syntax and behavior of IMPORT are exactly the same as those of INSERT.
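Since the sentence above states that IMPORT mirrors INSERT exactly, a minimal sketch (the table d1001, its columns, and the values are placeholders) would be:

```mysql
-- back-fill one historical record with an explicit past timestamp
IMPORT INTO d1001 VALUES ('2021-05-01 08:00:00.000', 10.2);
```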
@@ -114,6 +114,25 @@ mkdir -p ${install_dir}/examples
examples_dir="${top_dir}/tests/examples"
cp -r ${examples_dir}/c ${install_dir}/examples
if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
if [ -d ${examples_dir}/JDBC/connectionPools/target ]; then
rm -rf ${examples_dir}/JDBC/connectionPools/target
fi
if [ -d ${examples_dir}/JDBC/JDBCDemo/target ]; then
rm -rf ${examples_dir}/JDBC/JDBCDemo/target
fi
if [ -d ${examples_dir}/JDBC/mybatisplus-demo/target ]; then
rm -rf ${examples_dir}/JDBC/mybatisplus-demo/target
fi
if [ -d ${examples_dir}/JDBC/springbootdemo/target ]; then
rm -rf ${examples_dir}/JDBC/springbootdemo/target
fi
if [ -d ${examples_dir}/JDBC/SpringJdbcTemplate/target ]; then
rm -rf ${examples_dir}/JDBC/SpringJdbcTemplate/target
fi
if [ -d ${examples_dir}/JDBC/taosdemo/target ]; then
rm -rf ${examples_dir}/JDBC/taosdemo/target
fi

cp -r ${examples_dir}/JDBC ${install_dir}/examples
cp -r ${examples_dir}/matlab ${install_dir}/examples
cp -r ${examples_dir}/python ${install_dir}/examples
@@ -1,6 +1,5 @@
name: tdengine
base: core18

version: '2.1.1.0'
icon: snap/gui/t-dengine.svg
summary: an open-source big data platform designed and optimized for IoT.
@@ -39,39 +39,29 @@ typedef struct SLocalDataSource {
} SLocalDataSource;

typedef struct SLocalMerger {
  SLocalDataSource ** pLocalDataSrc;
  SLocalDataSource **pLocalDataSrc;
  int32_t numOfBuffer;
  int32_t numOfCompleted;
  int32_t numOfVnode;
  SLoserTreeInfo * pLoserTree;
  tFilePage * pResultBuf;
  int32_t nResultBufSize;
  tFilePage * pTempBuffer;
  struct SQLFunctionCtx *pCtx;
  int32_t rowSize; // size of each intermediate result.
  tOrderDescriptor * pDesc;
  SColumnModel * resColModel;
  SColumnModel* finalModel;
  tExtMemBuffer ** pExtMemBuffer; // disk-based buffer
  bool orderPrjOnSTable; // projection query on stable
  SLoserTreeInfo *pLoserTree;
  int32_t rowSize; // size of each intermediate result.
  tOrderDescriptor *pDesc;
  tExtMemBuffer **pExtMemBuffer; // disk-based buffer
  char *buf; // temp buffer
} SLocalMerger;

typedef struct SRetrieveSupport {
  tExtMemBuffer ** pExtMemBuffer; // for build loser tree
  tOrderDescriptor *pOrderDescriptor;
  SColumnModel* pFinalColModel; // colModel for final result
  SColumnModel* pFFColModel;
  int32_t subqueryIndex; // index of current vnode in vnode list
  SSqlObj * pParentSql;
  tFilePage * localBuffer; // temp buffer, there is a buffer for each vnode to
  uint32_t numOfRetry; // record the number of retry times
} SRetrieveSupport;

int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOrderDescriptor **pDesc,
                                 SColumnModel **pFinalModel, SColumnModel** pFFModel, uint32_t nBufferSize);
int32_t tscLocalReducerEnvCreate(SQueryInfo* pQueryInfo, tExtMemBuffer ***pMemBuffer, int32_t numOfSub, tOrderDescriptor **pDesc, uint32_t nBufferSize, int64_t id);

void tscLocalReducerEnvDestroy(tExtMemBuffer **pMemBuffer, tOrderDescriptor *pDesc, SColumnModel *pFinalModel, SColumnModel* pFFModel,
                               int32_t numOfVnodes);
void tscLocalReducerEnvDestroy(tExtMemBuffer **pMemBuffer, tOrderDescriptor *pDesc, int32_t numOfVnodes);

int32_t saveToBuffer(tExtMemBuffer *pMemoryBuf, tOrderDescriptor *pDesc, tFilePage *pPage, void *data,
                     int32_t numOfRows, int32_t orderType);

@@ -81,12 +71,10 @@ int32_t tscFlushTmpBuffer(tExtMemBuffer *pMemoryBuf, tOrderDescriptor *pDesc, tF
/*
 * create local reducer to launch the second-stage reduce process at client site
 */
void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrderDescriptor *pDesc,
                          SColumnModel *finalModel, SColumnModel *pFFModel, SSqlObj* pSql);
int32_t tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrderDescriptor *pDesc,
                             SQueryInfo *pQueryInfo, SLocalMerger **pMerger, int64_t id);

void tscDestroyLocalMerger(SSqlObj *pSql);

//int32_t tscDoLocalMerge(SSqlObj *pSql);
void tscDestroyLocalMerger(SLocalMerger* pLocalMerger);

#ifdef __cplusplus
}
@ -20,6 +20,7 @@
|
|||
extern "C" {
|
||||
#endif
|
||||
|
||||
#include "tsched.h"
|
||||
#include "exception.h"
|
||||
#include "os.h"
|
||||
#include "qExtbuffer.h"
|
||||
|
@ -36,6 +37,9 @@ extern "C" {
|
|||
#define UTIL_TABLE_IS_NORMAL_TABLE(metaInfo)\
|
||||
(!(UTIL_TABLE_IS_SUPER_TABLE(metaInfo) || UTIL_TABLE_IS_CHILD_TABLE(metaInfo)))
|
||||
|
||||
#define UTIL_TABLE_IS_TMP_TABLE(metaInfo) \
|
||||
(((metaInfo)->pTableMeta != NULL) && ((metaInfo)->pTableMeta->tableType == TSDB_TEMP_TABLE))
|
||||
|
||||
#pragma pack(push,1)
|
||||
// this struct is transfered as binary, padding two bytes to avoid
|
||||
// an 'uid' whose low bytes is 0xff being recoginized as NULL,
|
||||
|
@ -59,7 +63,7 @@ typedef struct SJoinSupporter {
|
|||
SArray* exprList;
|
||||
SFieldInfo fieldsInfo;
|
||||
STagCond tagCond;
|
||||
SSqlGroupbyExpr groupInfo; // group by info
|
||||
SGroupbyExpr groupInfo; // group by info
|
||||
struct STSBuf* pTSBuf; // the TSBuf struct that holds the compressed timestamp array
|
||||
FILE* f; // temporary file in order to create TSBuf
|
||||
char path[PATH_MAX]; // temporary file path, todo dynamic allocate memory
|
||||
|
@ -90,24 +94,14 @@ typedef struct SVgroupTableInfo {
|
|||
SArray *itemList; // SArray<STableIdInfo>
|
||||
} SVgroupTableInfo;
|
||||
|
||||
static FORCE_INLINE SQueryInfo* tscGetQueryInfo(SSqlCmd* pCmd, int32_t subClauseIndex) {
|
||||
assert(pCmd != NULL && subClauseIndex >= 0);
|
||||
if (pCmd->pQueryInfo == NULL || subClauseIndex >= pCmd->numOfClause) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
return pCmd->pQueryInfo[subClauseIndex];
|
||||
}
|
||||
|
||||
int32_t converToStr(char *str, int type, void *buf, int32_t bufSize, int32_t *len);
|
||||
|
||||
SQueryInfo* tscGetActiveQueryInfo(SSqlCmd* pCmd);
|
||||
|
||||
int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOffset, SName* name, STableMeta* pTableMeta, STableDataBlocks** dataBlocks);
|
||||
void tscDestroyDataBlock(STableDataBlocks* pDataBlock, bool removeMeta);
|
||||
void tscSortRemoveDataBlockDupRows(STableDataBlocks* dataBuf);
|
||||
|
||||
void tscDestroyBoundColumnInfo(SParsedDataColInfo* pColInfo);
|
||||
void doRetrieveSubqueryData(SSchedMsg *pMsg);
|
||||
|
||||
SParamInfo* tscAddParamToDataBlock(STableDataBlocks* pDataBlock, char type, uint8_t timePrec, int16_t bytes,
|
||||
uint32_t offset);
|
||||
|
@ -129,7 +123,7 @@ int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, i
|
|||
*/
|
||||
bool tscIsPointInterpQuery(SQueryInfo* pQueryInfo);
|
||||
bool tscIsTWAQuery(SQueryInfo* pQueryInfo);
|
||||
bool tscIsSecondStageQuery(SQueryInfo* pQueryInfo);
|
||||
bool tsIsArithmeticQueryOnAggResult(SQueryInfo* pQueryInfo);
|
||||
bool tscGroupbyColumn(SQueryInfo* pQueryInfo);
|
||||
bool tscIsTopBotQuery(SQueryInfo* pQueryInfo);
|
||||
bool hasTagValOutput(SQueryInfo* pQueryInfo);
|
||||
|
@ -138,13 +132,14 @@ bool isStabledev(SQueryInfo* pQueryInfo);
|
|||
bool isTsCompQuery(SQueryInfo* pQueryInfo);
|
||||
bool isSimpleAggregate(SQueryInfo* pQueryInfo);
|
||||
bool isBlockDistQuery(SQueryInfo* pQueryInfo);
|
||||
int32_t tscGetTopbotQueryParam(SQueryInfo* pQueryInfo);
|
||||
bool isSimpleAggregateRv(SQueryInfo* pQueryInfo);
|
||||
|
||||
bool tscNonOrderedProjectionQueryOnSTable(SQueryInfo *pQueryInfo, int32_t tableIndex);
|
||||
bool tscOrderedProjectionQueryOnSTable(SQueryInfo* pQueryInfo, int32_t tableIndex);
|
||||
bool tscIsProjectionQueryOnSTable(SQueryInfo* pQueryInfo, int32_t tableIndex);
|
||||
|
||||
bool tscIsProjectionQuery(SQueryInfo* pQueryInfo);
|
||||
bool tscHasColumnFilter(SQueryInfo* pQueryInfo);
|
||||
|
||||
bool tscIsTwoStageSTableQuery(SQueryInfo* pQueryInfo, int32_t tableIndex);
|
||||
bool tscQueryTags(SQueryInfo* pQueryInfo);
|
||||
|
@ -152,9 +147,9 @@ bool tscMultiRoundQuery(SQueryInfo* pQueryInfo, int32_t tableIndex);
|
|||
bool tscQueryBlockInfo(SQueryInfo* pQueryInfo);
|
||||
|
||||
SExprInfo* tscAddFuncInSelectClause(SQueryInfo* pQueryInfo, int32_t outputColIndex, int16_t functionId,
|
||||
SColumnIndex* pIndex, SSchema* pColSchema, int16_t colType);
|
||||
SColumnIndex* pIndex, SSchema* pColSchema, int16_t colType, int16_t colId);
|
||||
|
||||
int32_t tscSetTableFullName(STableMetaInfo* pTableMetaInfo, SStrToken* pzTableName, SSqlObj* pSql);
|
||||
int32_t tscSetTableFullName(SName* pName, SStrToken* pzTableName, SSqlObj* pSql);
|
||||
void tscClearInterpInfo(SQueryInfo* pQueryInfo);
|
||||
|
||||
bool tscIsInsertData(char* sqlstr);
|
||||
|
@ -173,36 +168,49 @@ void tscFieldInfoUpdateOffset(SQueryInfo* pQueryInfo);
|
|||
|
||||
int16_t tscFieldInfoGetOffset(SQueryInfo* pQueryInfo, int32_t index);
|
||||
void tscFieldInfoClear(SFieldInfo* pFieldInfo);
|
||||
void tscFieldInfoCopy(SFieldInfo* pFieldInfo, const SFieldInfo* pSrc, const SArray* pExprList);
|
||||
|
||||
static FORCE_INLINE int32_t tscNumOfFields(SQueryInfo* pQueryInfo) { return pQueryInfo->fieldsInfo.numOfOutput; }
|
||||
|
||||
int32_t tscFieldInfoCompare(const SFieldInfo* pFieldInfo1, const SFieldInfo* pFieldInfo2, int32_t *diffSize);
|
||||
int32_t tscFieldInfoSetSize(const SFieldInfo* pFieldInfo1, const SFieldInfo* pFieldInfo2);
|
||||
void tscInsertPrimaryTsSourceColumn(SQueryInfo* pQueryInfo, uint64_t uid);
|
||||
|
||||
int32_t tscFieldInfoSetSize(const SFieldInfo* pFieldInfo1, const SFieldInfo* pFieldInfo2);
|
||||
void addExprParams(SSqlExpr* pExpr, char* argument, int32_t type, int32_t bytes);
|
||||
|
||||
int32_t tscGetResRowLength(SArray* pExprList);
|
||||
|
||||
SExprInfo* tscSqlExprInsert(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, SColumnIndex* pColIndex, int16_t type,
|
||||
SExprInfo* tscExprInsert(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, SColumnIndex* pColIndex, int16_t type,
|
||||
int16_t size, int16_t resColId, int16_t interSize, bool isTagCol);
|
||||
|
||||
SExprInfo* tscSqlExprAppend(SQueryInfo* pQueryInfo, int16_t functionId, SColumnIndex* pColIndex, int16_t type,
|
||||
SExprInfo* tscExprCreate(SQueryInfo* pQueryInfo, int16_t functionId, SColumnIndex* pColIndex, int16_t type,
|
||||
int16_t size, int16_t resColId, int16_t interSize, int32_t colType);
|
||||
|
||||
void tscExprAddParams(SSqlExpr* pExpr, char* argument, int32_t type, int32_t bytes);
|
||||
|
||||
SExprInfo* tscExprAppend(SQueryInfo* pQueryInfo, int16_t functionId, SColumnIndex* pColIndex, int16_t type,
|
||||
int16_t size, int16_t resColId, int16_t interSize, bool isTagCol);
|
||||
|
||||
SExprInfo* tscSqlExprUpdate(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, int16_t srcColumnIndex, int16_t type,
|
||||
SExprInfo* tscExprUpdate(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, int16_t srcColumnIndex, int16_t type,
|
||||
int16_t size);
|
||||
size_t tscSqlExprNumOfExprs(SQueryInfo* pQueryInfo);
|
||||
void tscInsertPrimaryTsSourceColumn(SQueryInfo* pQueryInfo, uint64_t uid);
|
||||
|
||||
SExprInfo* tscSqlExprGet(SQueryInfo* pQueryInfo, int32_t index);
|
||||
int32_t tscSqlExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deepcopy);
|
||||
void tscSqlExprAssign(SExprInfo* dst, const SExprInfo* src);
|
||||
void tscSqlExprInfoDestroy(SArray* pExprInfo);
|
||||
size_t tscNumOfExprs(SQueryInfo* pQueryInfo);
|
||||
SExprInfo *tscExprGet(SQueryInfo* pQueryInfo, int32_t index);
|
||||
int32_t tscExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deepcopy);
|
||||
int32_t tscExprCopyAll(SArray* dst, const SArray* src, bool deepcopy);
|
||||
void tscExprAssign(SExprInfo* dst, const SExprInfo* src);
|
||||
void tscExprDestroy(SArray* pExprInfo);
|
||||
|
||||
int32_t createProjectionExpr(SQueryInfo* pQueryInfo, STableMetaInfo* pTableMetaInfo, SExprInfo*** pExpr, int32_t* num);
|
||||
|
||||
void clearAllTableMetaInfo(SQueryInfo* pQueryInfo, bool removeMeta);
|
||||
|
||||
SColumn* tscColumnClone(const SColumn* src);
|
||||
bool tscColumnExists(SArray* pColumnList, int32_t columnIndex, uint64_t uid);
|
||||
SColumn* tscColumnListInsert(SArray* pColumnList, int32_t columnIndex, uint64_t uid, SSchema* pSchema);
|
||||
void tscColumnListDestroy(SArray* pColList);
|
||||
void tscColumnListCopy(SArray* dst, const SArray* src, uint64_t tableUid);
|
||||
void tscColumnListCopyAll(SArray* dst, const SArray* src);
|
||||
|
||||
void convertQueryResult(SSqlRes* pRes, SQueryInfo* pQueryInfo);
|
||||
|
||||
|
@ -224,14 +232,14 @@ void tscGetSrcColumnInfo(SSrcColumnInfo* pColInfo, SQueryInfo* pQueryInfo);
|
|||
|
||||
bool tscShouldBeFreed(SSqlObj* pSql);
|
||||
|
||||
STableMetaInfo* tscGetTableMetaInfoFromCmd(SSqlCmd *pCmd, int32_t subClauseIndex, int32_t tableIndex);
|
||||
STableMetaInfo* tscGetTableMetaInfoFromCmd(SSqlCmd *pCmd, int32_t tableIndex);
|
||||
STableMetaInfo* tscGetMetaInfo(SQueryInfo *pQueryInfo, int32_t tableIndex);
|
||||
|
||||
void tscInitQueryInfo(SQueryInfo* pQueryInfo);
|
||||
void tscClearSubqueryInfo(SSqlCmd* pCmd);
|
||||
int32_t tscAddQueryInfo(SSqlCmd *pCmd);
|
||||
SQueryInfo *tscGetQueryInfo(SSqlCmd* pCmd, int32_t subClauseIndex);
|
||||
SQueryInfo *tscGetQueryInfoS(SSqlCmd *pCmd, int32_t subClauseIndex);
|
||||
SQueryInfo *tscGetQueryInfo(SSqlCmd* pCmd);
|
||||
SQueryInfo *tscGetQueryInfoS(SSqlCmd *pCmd);
|
||||
|
||||
void tscClearTableMetaInfo(STableMetaInfo* pTableMetaInfo);
|
||||
|
||||
|
@ -245,12 +253,11 @@ SArray* tscVgroupTableInfoDup(SArray* pVgroupTables);
|
|||
void tscRemoveVgroupTableGroup(SArray* pVgroupTable, int32_t index);
|
||||
void tscVgroupTableCopy(SVgroupTableInfo* info, SVgroupTableInfo* pInfo);
|
||||
|
||||
int tscGetSTableVgroupInfo(SSqlObj* pSql, int32_t clauseIndex);
|
||||
int tscGetSTableVgroupInfo(SSqlObj* pSql, SQueryInfo* pQueryInfo);
|
||||
int tscGetTableMeta(SSqlObj* pSql, STableMetaInfo* pTableMetaInfo);
|
||||
int tscGetTableMetaEx(SSqlObj* pSql, STableMetaInfo* pTableMetaInfo, bool createIfNotExists);
|
||||
|
||||
void tscResetForNextRetrieve(SSqlRes* pRes);
|
||||
void tscDoQuery(SSqlObj* pSql);
|
||||
void executeQuery(SSqlObj* pSql, SQueryInfo* pQueryInfo);
|
||||
void doExecuteQuery(SSqlObj* pSql, SQueryInfo* pQueryInfo);
|
||||
|
||||
|
@ -281,7 +288,7 @@ void registerSqlObj(SSqlObj* pSql);
|
|||
SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t fp, void* param, int32_t cmd, SSqlObj* pPrevSql);
|
||||
void addGroupInfoForSubquery(SSqlObj* pParentObj, SSqlObj* pSql, int32_t subClauseIndex, int32_t tableIndex);
|
||||
|
||||
void doAddGroupColumnForSubquery(SQueryInfo* pQueryInfo, int32_t tagIndex);
|
||||
void doAddGroupColumnForSubquery(SQueryInfo* pQueryInfo, int32_t tagIndex, SSqlCmd* pCmd);
|
||||
|
||||
int16_t tscGetJoinTagColIdByUid(STagCond* pTagCond, uint64_t uid);
|
||||
int16_t tscGetTagColIndexById(STableMeta* pTableMeta, int16_t colId);
|
||||
|
@ -297,6 +304,11 @@ void tscTryQueryNextVnode(SSqlObj *pSql, __async_cb_func_t fp);
|
|||
void tscAsyncQuerySingleRowForNextVnode(void *param, TAOS_RES *tres, int numOfRows);
|
||||
void tscTryQueryNextClause(SSqlObj* pSql, __async_cb_func_t fp);
|
||||
int tscSetMgmtEpSetFromCfg(const char *first, const char *second, SRpcCorEpSet *corEpSet);
|
||||
int32_t getMultiTableMetaFromMnode(SSqlObj *pSql, SArray* pNameList, SArray* pVgroupNameList, __async_cb_func_t fp);
|
||||
|
||||
int tscTransferTableNameList(SSqlObj *pSql, const char *pNameList, int32_t length, SArray* pNameArray);
|
||||
|
||||
bool subAndCheckDone(SSqlObj *pSql, SSqlObj *pParentSql, int idx);
|
||||
|
||||
bool tscSetSqlOwner(SSqlObj* pSql);
|
||||
void tscClearSqlOwner(SSqlObj* pSql);
|
||||
|
@ -311,10 +323,12 @@ CChildTableMeta* tscCreateChildMeta(STableMeta* pTableMeta);
|
|||
uint32_t tscGetTableMetaMaxSize();
|
||||
int32_t tscCreateTableMetaFromSTableMeta(STableMeta* pChild, const char* name, void* buf);
|
||||
STableMeta* tscTableMetaDup(STableMeta* pTableMeta);
|
||||
SVgroupsInfo* tscVgroupsInfoDup(SVgroupsInfo* pVgroupsInfo);
|
||||
|
||||
int32_t tscCreateQueryFromQueryInfo(SQueryInfo* pQueryInfo, SQueryAttr* pQueryAttr, void* addr);
|
||||
|
||||
void tsCreateSQLFunctionCtx(SQueryInfo* pQueryInfo, SQLFunctionCtx* pCtx, SSchema* pSchema);
|
||||
void* createQueryInfoFromQueryNode(SQueryInfo* pQueryInfo, SExprInfo* pExprs, STableGroupInfo* pTableGroupInfo, SOperatorInfo* pOperator, char* sql, void* addr, int32_t stage);
|
||||
void* createQInfoFromQueryNode(SQueryInfo* pQueryInfo, SExprInfo* pExprs, STableGroupInfo* pTableGroupInfo, SOperatorInfo* pOperator, char* sql, void* addr, int32_t stage);
|
||||
|
||||
void* malloc_throw(size_t size);
|
||||
void* calloc_throw(size_t nmemb, size_t size);
|
||||
|
|
|
@ -42,12 +42,6 @@ extern "C" {
|
|||
struct SSqlInfo;
|
||||
struct SLocalMerger;
|
||||
|
||||
// data source from sql string or from file
|
||||
enum {
|
||||
DATA_FROM_SQL_STRING = 1,
|
||||
DATA_FROM_DATA_FILE = 2,
|
||||
};
|
||||
|
||||
typedef void (*__async_cb_func_t)(void *param, TAOS_RES *tres, int32_t numOfRows);
|
||||
|
||||
typedef struct STableComInfo {
|
||||
|
@ -85,10 +79,10 @@ typedef struct STableMeta {
|
|||
} STableMeta;
|
||||
|
||||
typedef struct STableMetaInfo {
|
||||
STableMeta *pTableMeta; // table meta, cached in client side and acquired by name
|
||||
STableMeta *pTableMeta; // table meta, cached in client side and acquired by name
|
||||
uint32_t tableMetaSize;
|
||||
SVgroupsInfo *vgroupList;
|
||||
SArray *pVgroupTables; // SArray<SVgroupTableInfo>
|
||||
SVgroupsInfo *vgroupList;
|
||||
SArray *pVgroupTables; // SArray<SVgroupTableInfo>
|
||||
|
||||
/*
|
||||
* 1. keep the vgroup index during the multi-vnode super table projection query
|
||||
|
@ -137,8 +131,8 @@ typedef struct SJoinNode {
|
|||
} SJoinNode;
|
||||
|
||||
typedef struct SJoinInfo {
|
||||
bool hasJoin;
|
||||
SJoinNode* joinTables[TSDB_MAX_JOIN_TABLE_NUM];
|
||||
bool hasJoin;
|
||||
SJoinNode *joinTables[TSDB_MAX_JOIN_TABLE_NUM];
|
||||
} SJoinInfo;
|
||||
|
||||
typedef struct STagCond {
|
||||
|
@ -205,10 +199,11 @@ typedef struct SQueryInfo {
|
|||
SInterval interval; // tumble time window
|
||||
SSessionWindow sessionWindow; // session time window
|
||||
|
||||
SSqlGroupbyExpr groupbyExpr; // groupby tags info
|
||||
SGroupbyExpr groupbyExpr; // groupby tags info
|
||||
SArray * colList; // SArray<SColumn*>
|
||||
SFieldInfo fieldsInfo;
|
||||
SArray * exprList; // SArray<SExprInfo*>
|
||||
SArray * exprList1; // final exprlist in case of arithmetic expression exists
|
||||
SLimitVal limit;
|
||||
SLimitVal slimit;
|
||||
STagCond tagCond;
|
||||
|
@ -232,30 +227,50 @@ typedef struct SQueryInfo {
|
|||
int32_t bufLen;
|
||||
char* buf;
|
||||
SQInfo* pQInfo; // global merge operator
|
||||
SArray* pDSOperator; // data source operator
|
||||
SArray* pPhyOperator; // physical query execution plan
|
||||
SQueryAttr* pQueryAttr; // query object
|
||||
|
||||
struct SQueryInfo *sibling; // sibling
|
||||
SArray *pUpstream; // SArray<struct SQueryInfo>
|
||||
struct SQueryInfo *pDownstream;
|
||||
int32_t havingFieldNum;
|
||||
bool stableQuery;
|
||||
bool groupbyColumn;
|
||||
bool simpleAgg;
|
||||
bool arithmeticOnAgg;
|
||||
bool projectionQuery;
|
||||
bool hasFilter;
|
||||
bool onlyTagQuery;
|
||||
} SQueryInfo;
|
||||
|
||||
typedef struct {
|
||||
STableMeta *pTableMeta;
|
||||
SVgroupsInfo *pVgroupInfo;
|
||||
} STableMetaVgroupInfo;
|
||||
|
||||
typedef struct SInsertStatementParam {
|
||||
SName **pTableNameList; // all involved tableMeta list of current insert sql statement.
|
||||
int32_t numOfTables; // number of tables in table name list
|
||||
SHashObj *pTableBlockHashList; // data block for each table
|
||||
SArray *pDataBlocks; // SArray<STableDataBlocks*>. Merged submit block for each vgroup
|
||||
int8_t schemaAttached; // denote if submit block is built with table schema or not
|
||||
STagData tagData; // NOTE: pTagData->data is used as a variant length array
|
||||
|
||||
char msg[512]; // error message
|
||||
char *sql; // current sql statement position
|
||||
uint32_t insertType; // insert data from [file|sql statement| bound statement]
|
||||
} SInsertStatementParam;
|
||||
|
||||
// TODO extract sql parser supporter
|
||||
typedef struct {
|
||||
int command;
|
||||
uint8_t msgType;
|
||||
SInsertStatementParam insertParam;
|
||||
char reserve1[3]; // fix bus error on arm32
|
||||
bool autoCreated; // create table if it is not existed during retrieve table meta in mnode
|
||||
|
||||
union {
|
||||
int32_t count;
|
||||
int32_t numOfTablesInSubmit;
|
||||
};
|
||||
|
||||
uint32_t insertType; // TODO remove it
|
||||
char * curSql; // current sql, resume position of sql after parsing paused
|
||||
int8_t parseFinished;
|
||||
char reserve2[3]; // fix bus error on arm32
|
||||
|
||||
int16_t numOfCols;
|
||||
|
@ -264,25 +279,13 @@ typedef struct {
|
|||
char * payload;
|
||||
int32_t payloadLen;
|
||||
|
||||
SQueryInfo **pQueryInfo;
|
||||
int32_t numOfClause;
|
||||
int32_t clauseIndex; // index of multiple subclause query
|
||||
SQueryInfo *active; // current active query info
|
||||
|
||||
int32_t batchSize; // for parameter ('?') binding and batch processing
|
||||
SHashObj *pTableMetaMap; // local buffer to keep the queried table meta, before validating the AST
|
||||
SQueryInfo *pQueryInfo;
|
||||
SQueryInfo *active; // current active query info
|
||||
int32_t batchSize; // for parameter ('?') binding and batch processing
|
||||
int32_t numOfParams;
|
||||
|
||||
int8_t dataSourceType; // load data from file or not
|
||||
char reserve4[3]; // fix bus error on arm32
|
||||
int8_t submitSchema; // submit block is built with table schema
|
||||
char reserve5[3]; // fix bus error on arm32
|
||||
STagData tagData; // NOTE: pTagData->data is used as a variant length array
|
||||
|
||||
SName **pTableNameList; // all involved tableMeta list of current insert sql statement.
|
||||
int32_t numOfTables;
|
||||
|
||||
SHashObj *pTableBlockHashList; // data block for each table
|
||||
SArray *pDataBlocks; // SArray<STableDataBlocks*>. Merged submit block for each vgroup
|
||||
int32_t resColumnId;
|
||||
} SSqlCmd;
|
||||
|
||||
typedef struct SResRec {
|
||||
|
@ -443,7 +446,7 @@ int32_t tscCreateResPointerInfo(SSqlRes *pRes, SQueryInfo *pQueryInfo);
|
|||
void tscSetResRawPtr(SSqlRes* pRes, SQueryInfo* pQueryInfo);
|
||||
void tscSetResRawPtrRv(SSqlRes* pRes, SQueryInfo* pQueryInfo, SSDataBlock* pBlock);
|
||||
|
||||
void handleDownstreamOperator(SSqlRes* pRes, SQueryInfo* pQueryInfo);
|
||||
void handleDownstreamOperator(SSqlObj** pSqlList, int32_t numOfUpstream, SQueryInfo* px, SSqlRes* pOutput);
|
||||
void destroyTableNameList(SSqlCmd* pCmd);
|
||||
|
||||
void tscResetSqlCmd(SSqlCmd *pCmd, bool removeMeta);
|
||||
|
@ -489,7 +492,7 @@ char *tscGetErrorMsgPayload(SSqlCmd *pCmd);
|
|||
int32_t tscInvalidSQLErrMsg(char *msg, const char *additionalInfo, const char *sql);
|
||||
int32_t tscSQLSyntaxErrMsg(char* msg, const char* additionalInfo, const char* sql);
|
||||
|
||||
int32_t tscToSQLCmd(SSqlObj *pSql, struct SSqlInfo *pInfo);
|
||||
int32_t tscValidateSqlInfo(SSqlObj *pSql, struct SSqlInfo *pInfo);
|
||||
|
||||
extern int32_t sentinel;
|
||||
extern SHashObj *tscVgroupMap;
|
||||
|
@ -505,7 +508,7 @@ extern int tscNumOfObj; // number of existed sqlObj in current process.
|
|||
extern int (*tscBuildMsg[TSDB_SQL_MAX])(SSqlObj *pSql, SSqlInfo *pInfo);
|
||||
|
||||
void tscBuildVgroupTableInfo(SSqlObj* pSql, STableMetaInfo* pTableMetaInfo, SArray* tables);
|
||||
int16_t getNewResColId(SQueryInfo* pQueryInfo);
|
||||
int16_t getNewResColId(SSqlCmd* pCmd);
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
|
|
|
@ -59,6 +59,7 @@ void doAsyncQuery(STscObj* pObj, SSqlObj* pSql, __async_cb_func_t fp, void* para
|
|||
|
||||
tscDebugL("0x%"PRIx64" SQL: %s", pSql->self, pSql->sqlstr);
|
||||
pCmd->curSql = pSql->sqlstr;
|
||||
pCmd->resColumnId = TSDB_RES_COL_ID;
|
||||
|
||||
int32_t code = tsParseSql(pSql, true);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) return;
|
||||
|
@ -69,7 +70,7 @@ void doAsyncQuery(STscObj* pObj, SSqlObj* pSql, __async_cb_func_t fp, void* para
|
|||
return;
|
||||
}
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd, pCmd->clauseIndex);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
executeQuery(pSql, pQueryInfo);
|
||||
}
|
||||
|
||||
|
@ -127,7 +128,8 @@ static void tscAsyncFetchRowsProxy(void *param, TAOS_RES *tres, int numOfRows) {
|
|||
* all available virtual node has been checked already, now we need to check
|
||||
* for the next subclause queries
|
||||
*/
|
||||
if (pCmd->clauseIndex < pCmd->numOfClause - 1) {
|
||||
if (pCmd->active->sibling != NULL) {
|
||||
pCmd->active = pCmd->active->sibling;
|
||||
tscTryQueryNextClause(pSql, tscAsyncQueryRowsForNextVnode);
|
||||
return;
|
||||
}
|
||||
|
@ -220,6 +222,17 @@ void taos_fetch_rows_a(TAOS_RES *tres, __async_cb_func_t fp, void *param) {
|
|||
tscResetForNextRetrieve(pRes);
|
||||
|
||||
// handle the sub queries of join query
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
if (pQueryInfo->pUpstream != NULL && taosArrayGetSize(pQueryInfo->pUpstream) > 0) {
|
||||
SSchedMsg schedMsg = {0};
|
||||
schedMsg.fp = doRetrieveSubqueryData;
|
||||
schedMsg.ahandle = (void *)pSql;
|
||||
schedMsg.thandle = (void *)1;
|
||||
schedMsg.msg = 0;
|
||||
taosScheduleTask(tscQhandle, &schedMsg);
|
||||
return;
|
||||
}
|
||||
|
||||
if (pCmd->command == TSDB_SQL_TABLE_JOIN_RETRIEVE) {
|
||||
tscFetchDatablockForSubquery(pSql);
|
||||
} else if (pRes->completed) {
|
||||
|
@ -231,7 +244,8 @@ void taos_fetch_rows_a(TAOS_RES *tres, __async_cb_func_t fp, void *param) {
|
|||
* all available virtual nodes in current clause has been checked already, now try the
|
||||
* next one in the following union subclause
|
||||
*/
|
||||
if (pCmd->clauseIndex < pCmd->numOfClause - 1) {
|
||||
if (pCmd->active->sibling != NULL) {
|
||||
pCmd->active = pCmd->active->sibling; // todo refactor
|
||||
tscTryQueryNextClause(pSql, tscAsyncQueryRowsForNextVnode);
|
||||
return;
|
||||
}
|
||||
|
@ -255,7 +269,7 @@ void taos_fetch_rows_a(TAOS_RES *tres, __async_cb_func_t fp, void *param) {
|
|||
pCmd->command = (pCmd->command > TSDB_SQL_MGMT) ? TSDB_SQL_RETRIEVE : TSDB_SQL_FETCH;
|
||||
}
|
||||
|
||||
SQueryInfo* pQueryInfo1 = tscGetActiveQueryInfo(&pSql->cmd);
|
||||
SQueryInfo* pQueryInfo1 = tscGetQueryInfo(&pSql->cmd);
|
||||
tscBuildAndSendRequest(pSql, pQueryInfo1);
|
||||
}
|
||||
}
|
||||
|
@ -317,26 +331,38 @@ static int32_t updateMetaBeforeRetryQuery(SSqlObj* pSql, STableMetaInfo* pTableM
|
|||
// update the pExpr info, colList info, number of table columns
|
||||
// TODO Re-parse this sql and issue the corresponding subquery as an alternative for this case.
|
||||
if (pSql->retryReason == TSDB_CODE_TDB_INVALID_TABLE_ID) {
|
||||
int32_t numOfExprs = (int32_t) tscSqlExprNumOfExprs(pQueryInfo);
|
||||
int32_t numOfExprs = (int32_t) tscNumOfExprs(pQueryInfo);
|
||||
int32_t numOfCols = tscGetNumOfColumns(pTableMetaInfo->pTableMeta);
|
||||
int32_t numOfTags = tscGetNumOfTags(pTableMetaInfo->pTableMeta);
|
||||
|
||||
SSchema *pSchema = tscGetTableSchema(pTableMetaInfo->pTableMeta);
|
||||
SSchema *pTagSchema = tscGetTableTagSchema(pTableMetaInfo->pTableMeta);
|
||||
|
||||
for (int32_t i = 0; i < numOfExprs; ++i) {
|
||||
SSqlExpr *pExpr = &(tscSqlExprGet(pQueryInfo, i)->base);
|
||||
SSqlExpr *pExpr = &(tscExprGet(pQueryInfo, i)->base);
|
||||
|
||||
// update the table uid
|
||||
pExpr->uid = pTableMetaInfo->pTableMeta->id.uid;
|
||||
|
||||
if (pExpr->colInfo.colIndex >= 0) {
|
||||
int32_t index = pExpr->colInfo.colIndex;
|
||||
|
||||
if ((TSDB_COL_IS_NORMAL_COL(pExpr->colInfo.flag) && index >= numOfCols) ||
|
||||
(TSDB_COL_IS_TAG(pExpr->colInfo.flag) && (index < numOfCols || index >= (numOfCols + numOfTags)))) {
|
||||
(TSDB_COL_IS_TAG(pExpr->colInfo.flag) && (index < 0 || index >= numOfTags))) {
|
||||
return pSql->retryReason;
|
||||
}
|
||||
|
||||
if ((pSchema[pExpr->colInfo.colIndex].colId != pExpr->colInfo.colId) &&
|
||||
strcasecmp(pExpr->colInfo.name, pSchema[pExpr->colInfo.colIndex].name) != 0) {
|
||||
return pSql->retryReason;
|
||||
if (TSDB_COL_IS_TAG(pExpr->colInfo.flag)) {
|
||||
if ((pTagSchema[pExpr->colInfo.colIndex].colId != pExpr->colInfo.colId) &&
|
||||
strcasecmp(pExpr->colInfo.name, pTagSchema[pExpr->colInfo.colIndex].name) != 0) {
|
||||
return pSql->retryReason;
|
||||
}
|
||||
} else if (TSDB_COL_IS_NORMAL_COL(pExpr->colInfo.flag)) {
|
||||
if ((pSchema[pExpr->colInfo.colIndex].colId != pExpr->colInfo.colId) &&
|
||||
strcasecmp(pExpr->colInfo.name, pSchema[pExpr->colInfo.colIndex].name) != 0) {
|
||||
return pSql->retryReason;
|
||||
}
|
||||
} else { // do nothing for udc
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -374,12 +400,12 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
|
|||
|
||||
tscDebug("0x%"PRIx64" get %s successfully", pSql->self, msg);
|
||||
if (pSql->pStream == NULL) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd, pCmd->clauseIndex);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
|
||||
// check if it is a sub-query of super table query first, if true, enter another routine
|
||||
if (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, (TSDB_QUERY_TYPE_STABLE_SUBQUERY|TSDB_QUERY_TYPE_SUBQUERY|TSDB_QUERY_TYPE_TAG_FILTER_QUERY))) {
|
||||
tscDebug("0x%"PRIx64" update local table meta, continue to process sql and send the corresponding query", pSql->self);
|
||||
|
||||
if (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, (TSDB_QUERY_TYPE_STABLE_SUBQUERY | TSDB_QUERY_TYPE_SUBQUERY |
|
||||
TSDB_QUERY_TYPE_TAG_FILTER_QUERY))) {
|
||||
tscDebug("0x%" PRIx64 " update cached table-meta, continue to process sql and send the corresponding query", pSql->self);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
|
||||
code = tscGetTableMeta(pSql, pTableMetaInfo);
|
||||
|
@ -401,42 +427,8 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
|
|||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
return;
|
||||
} else { // continue to process normal async query
|
||||
if (pCmd->parseFinished) {
|
||||
tscDebug("0x%"PRIx64" update local table meta, continue to process sql and send corresponding query", pSql->self);
|
||||
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
|
||||
code = tscGetTableMeta(pSql, pTableMetaInfo);
|
||||
|
||||
assert(code == TSDB_CODE_TSC_ACTION_IN_PROGRESS || code == TSDB_CODE_SUCCESS);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
return;
|
||||
}
|
||||
|
||||
assert(pCmd->command != TSDB_SQL_INSERT);
|
||||
|
||||
if (pCmd->command == TSDB_SQL_SELECT) {
|
||||
tscDebug("0x%"PRIx64" redo parse sql string and proceed", pSql->self);
|
||||
pCmd->parseFinished = false;
|
||||
tscResetSqlCmd(pCmd, true);
|
||||
|
||||
code = tsParseSql(pSql, true);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
return;
|
||||
} else if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
tscBuildAndSendRequest(pSql, NULL);
|
||||
} else { // in all other cases, simple retry
|
||||
tscBuildAndSendRequest(pSql, NULL);
|
||||
}
|
||||
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
return;
|
||||
} else {
|
||||
tscDebug("0x%"PRIx64" continue parse sql after get table meta", pSql->self);
|
||||
if (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_INSERT)) {
|
||||
tscDebug("0x%" PRIx64 " continue parse sql after get table-meta", pSql->self);
|
||||
|
||||
code = tsParseSql(pSql, false);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
|
@ -446,8 +438,8 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
|
|||
goto _error;
|
||||
}
|
||||
|
||||
if (pCmd->insertType == TSDB_QUERY_TYPE_STMT_INSERT) {
|
||||
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
|
||||
if (TSDB_QUERY_HAS_TYPE(pCmd->insertParam.insertType, TSDB_QUERY_TYPE_STMT_INSERT)) {
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
code = tscGetTableMeta(pSql, pTableMetaInfo);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
|
@ -457,59 +449,52 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
|
|||
}
|
||||
|
||||
(*pSql->fp)(pSql->param, pSql, code);
|
||||
} else if (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_INSERT)) {
|
||||
if (pCmd->dataSourceType == DATA_FROM_DATA_FILE) {
|
||||
} else {
|
||||
if (TSDB_QUERY_HAS_TYPE(pCmd->insertParam.insertType, TSDB_QUERY_TYPE_FILE_INSERT)) {
|
||||
tscImportDataFromFile(pSql);
|
||||
} else {
|
||||
tscHandleMultivnodeInsert(pSql);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if (pSql->retryReason != TSDB_CODE_SUCCESS) {
|
||||
tscDebug("0x%" PRIx64 " update cached table-meta, re-validate sql statement and send query again",
|
||||
pSql->self);
|
||||
tscResetSqlCmd(pCmd, false);
|
||||
pSql->retryReason = TSDB_CODE_SUCCESS;
|
||||
} else {
|
||||
SQueryInfo* pQueryInfo1 = tscGetQueryInfo(pCmd, pCmd->clauseIndex);
|
||||
executeQuery(pSql, pQueryInfo1);
|
||||
tscDebug("0x%" PRIx64 " cached table-meta, continue validate sql statement and send query", pSql->self);
|
||||
}
|
||||
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
return;
|
||||
code = tsParseSql(pSql, true);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
return;
|
||||
} else if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
SQueryInfo *pQueryInfo1 = tscGetQueryInfo(pCmd);
|
||||
executeQuery(pSql, pQueryInfo1);
|
||||
}
|
||||
}
|
||||
|
||||
} else { // stream computing
|
||||
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
|
||||
|
||||
code = tscGetTableMeta(pSql, pTableMetaInfo);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
return;
|
||||
} else if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
} else { // stream computing
|
||||
tscDebug("0x%"PRIx64" stream:%p meta is updated, start new query, command:%d", pSql->self, pSql->pStream, pCmd->command);
|
||||
|
||||
if (UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
|
||||
code = tscGetSTableVgroupInfo(pSql, pCmd->clauseIndex);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
return;
|
||||
} else if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
}
|
||||
|
||||
tscDebug("0x%"PRIx64" stream:%p meta is updated, start new query, command:%d", pSql->self, pSql->pStream, pSql->cmd.command);
|
||||
if (!pSql->cmd.parseFinished) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
if (tscNumOfExprs(pQueryInfo) == 0) {
|
||||
tsParseSql(pSql, false);
|
||||
}
|
||||
|
||||
(*pSql->fp)(pSql->param, pSql, code);
|
||||
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
|
||||
return;
|
||||
}
|
||||
|
||||
// tscDoQuery(pSql);
|
||||
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
|
||||
return;
|
||||
|
||||
_error:
|
||||
|
|
|
@ -53,7 +53,7 @@ static int32_t tscSetValueToResObj(SSqlObj *pSql, int32_t rowLen) {
|
|||
SSqlRes *pRes = &pSql->res;
|
||||
|
||||
// one column for each row
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
STableMeta * pMeta = pTableMetaInfo->pTableMeta;
|
||||
|
@ -154,14 +154,14 @@ static int32_t tscBuildTableSchemaResultFields(SSqlObj *pSql, int32_t numOfCols,
|
|||
|
||||
pSql->cmd.numOfCols = numOfCols;
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
pQueryInfo->order.order = TSDB_ORDER_ASC;
|
||||
|
||||
TAOS_FIELD f = {.type = TSDB_DATA_TYPE_BINARY, .bytes = (TSDB_COL_NAME_LEN - 1) + VARSTR_HEADER_SIZE};
|
||||
tstrncpy(f.name, "Field", sizeof(f.name));
|
||||
|
||||
SInternalField* pInfo = tscFieldInfoAppend(&pQueryInfo->fieldsInfo, &f);
|
||||
pInfo->pExpr = tscSqlExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY,
|
||||
pInfo->pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY,
|
||||
(TSDB_COL_NAME_LEN - 1) + VARSTR_HEADER_SIZE, -1000, (TSDB_COL_NAME_LEN - 1), false);
|
||||
|
||||
rowLen += ((TSDB_COL_NAME_LEN - 1) + VARSTR_HEADER_SIZE);
|
||||
|
@ -171,7 +171,7 @@ static int32_t tscBuildTableSchemaResultFields(SSqlObj *pSql, int32_t numOfCols,
|
|||
tstrncpy(f.name, "Type", sizeof(f.name));
|
||||
|
||||
pInfo = tscFieldInfoAppend(&pQueryInfo->fieldsInfo, &f);
|
||||
pInfo->pExpr = tscSqlExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY, (int16_t)(typeColLength + VARSTR_HEADER_SIZE),
|
||||
pInfo->pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY, (int16_t)(typeColLength + VARSTR_HEADER_SIZE),
|
||||
-1000, typeColLength, false);
|
||||
|
||||
rowLen += typeColLength + VARSTR_HEADER_SIZE;
|
||||
|
@ -181,7 +181,7 @@ static int32_t tscBuildTableSchemaResultFields(SSqlObj *pSql, int32_t numOfCols,
|
|||
tstrncpy(f.name, "Length", sizeof(f.name));
|
||||
|
||||
pInfo = tscFieldInfoAppend(&pQueryInfo->fieldsInfo, &f);
|
||||
pInfo->pExpr = tscSqlExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_INT, sizeof(int32_t),
|
||||
pInfo->pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_INT, sizeof(int32_t),
|
||||
-1000, sizeof(int32_t), false);
|
||||
|
||||
rowLen += sizeof(int32_t);
|
||||
|
@ -191,7 +191,7 @@ static int32_t tscBuildTableSchemaResultFields(SSqlObj *pSql, int32_t numOfCols,
|
|||
tstrncpy(f.name, "Note", sizeof(f.name));
|
||||
|
||||
pInfo = tscFieldInfoAppend(&pQueryInfo->fieldsInfo, &f);
|
||||
pInfo->pExpr = tscSqlExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY, (int16_t)(noteColLength + VARSTR_HEADER_SIZE),
|
||||
pInfo->pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY, (int16_t)(noteColLength + VARSTR_HEADER_SIZE),
|
||||
-1000, noteColLength, false);
|
||||
|
||||
rowLen += noteColLength + VARSTR_HEADER_SIZE;
|
||||
|
@ -199,7 +199,7 @@ static int32_t tscBuildTableSchemaResultFields(SSqlObj *pSql, int32_t numOfCols,
|
|||
}
|
||||
|
||||
static int32_t tscProcessDescribeTable(SSqlObj *pSql) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
assert(tscGetMetaInfo(pQueryInfo, 0)->pTableMeta != NULL);
|
||||
|
||||
|
@ -390,7 +390,7 @@ static int32_t tscSCreateBuildResultFields(SSqlObj *pSql, BuildType type, const
|
|||
SColumnIndex index = {0};
|
||||
pSql->cmd.numOfCols = 2;
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
pQueryInfo->order.order = TSDB_ORDER_ASC;
|
||||
|
||||
TAOS_FIELD f;
|
||||
|
@ -405,7 +405,7 @@ static int32_t tscSCreateBuildResultFields(SSqlObj *pSql, BuildType type, const
|
|||
}
|
||||
|
||||
SInternalField* pInfo = tscFieldInfoAppend(&pQueryInfo->fieldsInfo, &f);
|
||||
pInfo->pExpr = tscSqlExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY, f.bytes, -1000, f.bytes - VARSTR_HEADER_SIZE, false);
|
||||
pInfo->pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY, f.bytes, -1000, f.bytes - VARSTR_HEADER_SIZE, false);
|
||||
|
||||
rowLen += f.bytes;
|
||||
|
||||
|
@ -418,7 +418,7 @@ static int32_t tscSCreateBuildResultFields(SSqlObj *pSql, BuildType type, const
|
|||
}
|
||||
|
||||
pInfo = tscFieldInfoAppend(&pQueryInfo->fieldsInfo, &f);
|
||||
pInfo->pExpr = tscSqlExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY,
|
||||
pInfo->pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &index, TSDB_DATA_TYPE_BINARY,
|
||||
(int16_t)(ddlLen + VARSTR_HEADER_SIZE), -1000, ddlLen, false);
|
||||
|
||||
rowLen += ddlLen + VARSTR_HEADER_SIZE;
|
||||
|
@ -428,7 +428,7 @@ static int32_t tscSCreateBuildResultFields(SSqlObj *pSql, BuildType type, const
|
|||
static int32_t tscSCreateSetValueToResObj(SSqlObj *pSql, int32_t rowLen, const char *tableName, const char *ddl) {
|
||||
SSqlRes *pRes = &pSql->res;
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
int32_t numOfRows = 1;
|
||||
if (strlen(ddl) == 0) {
|
||||
|
||||
|
@ -445,7 +445,7 @@ static int32_t tscSCreateSetValueToResObj(SSqlObj *pSql, int32_t rowLen, const c
|
|||
return 0;
|
||||
}
|
||||
static int32_t tscSCreateBuildResult(SSqlObj *pSql, BuildType type, const char *str, const char *result) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
int32_t rowLen = tscSCreateBuildResultFields(pSql, type, result);
|
||||
|
||||
tscFieldInfoUpdateOffset(pQueryInfo);
|
||||
|
@ -532,7 +532,7 @@ static int32_t tscGetTableTagColumnName(SSqlObj *pSql, char **result) {
|
|||
}
|
||||
buf[0] = 0;
|
||||
|
||||
STableMeta *pMeta = tscGetTableMetaInfoFromCmd(&pSql->cmd, 0, 0)->pTableMeta;
|
||||
STableMeta *pMeta = tscGetTableMetaInfoFromCmd(&pSql->cmd, 0)->pTableMeta;
|
||||
if (pMeta->tableType == TSDB_SUPER_TABLE || pMeta->tableType == TSDB_NORMAL_TABLE ||
|
||||
pMeta->tableType == TSDB_STREAM_TABLE) {
|
||||
free(buf);
|
||||
|
@ -553,7 +553,7 @@ static int32_t tscGetTableTagColumnName(SSqlObj *pSql, char **result) {
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
static int32_t tscRebuildDDLForSubTable(SSqlObj *pSql, const char *tableName, char *ddl) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
STableMeta * pMeta = pTableMetaInfo->pTableMeta;
|
||||
|
@ -607,7 +607,7 @@ static int32_t tscRebuildDDLForSubTable(SSqlObj *pSql, const char *tableName, ch
|
|||
}
|
||||
|
||||
static int32_t tscRebuildDDLForNormalTable(SSqlObj *pSql, const char *tableName, char *ddl) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
STableMeta * pMeta = pTableMetaInfo->pTableMeta;
|
||||
|
||||
|
@ -634,7 +634,7 @@ static int32_t tscRebuildDDLForNormalTable(SSqlObj *pSql, const char *tableName,
|
|||
}
|
||||
static int32_t tscRebuildDDLForSuperTable(SSqlObj *pSql, const char *tableName, char *ddl) {
|
||||
char *result = ddl;
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
STableMeta * pMeta = pTableMetaInfo->pTableMeta;
|
||||
|
||||
|
@ -675,7 +675,7 @@ static int32_t tscRebuildDDLForSuperTable(SSqlObj *pSql, const char *tableName,
|
|||
}
|
||||
|
||||
static int32_t tscProcessShowCreateTable(SSqlObj *pSql) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
assert(pTableMetaInfo->pTableMeta != NULL);
|
||||
|
||||
|
@ -704,7 +704,7 @@ static int32_t tscProcessShowCreateTable(SSqlObj *pSql) {
|
|||
}
|
||||
|
||||
static int32_t tscProcessShowCreateDatabase(SSqlObj *pSql) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
|
||||
|
@ -730,7 +730,7 @@ static int32_t tscProcessShowCreateDatabase(SSqlObj *pSql) {
|
|||
return TSDB_CODE_TSC_ACTION_IN_PROGRESS;
|
||||
}
|
||||
static int32_t tscProcessCurrentUser(SSqlObj *pSql) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
SSqlExpr* pExpr = taosArrayGetP(pQueryInfo->exprList, 0);
|
||||
pExpr->resBytes = TSDB_USER_LEN + TSDB_DATA_TYPE_BINARY;
|
||||
|
@ -757,7 +757,7 @@ static int32_t tscProcessCurrentDB(SSqlObj *pSql) {
|
|||
extractDBName(pSql->pTscObj->db, db);
|
||||
pthread_mutex_unlock(&pSql->pTscObj->mutex);
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, pSql->cmd.clauseIndex);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
SSqlExpr* pExpr = taosArrayGetP(pQueryInfo->exprList, 0);
|
||||
pExpr->resType = TSDB_DATA_TYPE_BINARY;
|
||||
|
@ -784,7 +784,7 @@ static int32_t tscProcessCurrentDB(SSqlObj *pSql) {
|
|||
|
||||
static int32_t tscProcessServerVer(SSqlObj *pSql) {
|
||||
const char* v = pSql->pTscObj->sversion;
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, pSql->cmd.clauseIndex);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
SSqlExpr* pExpr = taosArrayGetP(pQueryInfo->exprList, 0);
|
||||
pExpr->resType = TSDB_DATA_TYPE_BINARY;
|
||||
|
@ -807,7 +807,7 @@ static int32_t tscProcessServerVer(SSqlObj *pSql) {
|
|||
}
|
||||
|
||||
static int32_t tscProcessClientVer(SSqlObj *pSql) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
SSqlExpr* pExpr = taosArrayGetP(pQueryInfo->exprList, 0);
|
||||
pExpr->resType = TSDB_DATA_TYPE_BINARY;
|
||||
|
@ -859,7 +859,7 @@ static int32_t tscProcessServStatus(SSqlObj *pSql) {
|
|||
return pSql->res.code;
|
||||
}
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
SSqlExpr* pExpr = taosArrayGetP(pQueryInfo->exprList, 0);
|
||||
|
||||
int32_t val = 1;
|
||||
|
@ -873,7 +873,7 @@ void tscSetLocalQueryResult(SSqlObj *pSql, const char *val, const char *columnNa
|
|||
|
||||
pCmd->numOfCols = 1;
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd, pCmd->clauseIndex);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
pQueryInfo->order.order = TSDB_ORDER_ASC;
|
||||
|
||||
tscFieldInfoClear(&pQueryInfo->fieldsInfo);
|
||||
|
@ -928,7 +928,7 @@ int tscProcessLocalCmd(SSqlObj *pSql) {
|
|||
} else if (pCmd->command == TSDB_SQL_SERV_STATUS) {
|
||||
pRes->code = tscProcessServStatus(pSql);
|
||||
} else {
|
||||
pRes->code = TSDB_CODE_TSC_INVALID_SQL;
|
||||
pRes->code = TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
tscError("0x%"PRIx64" not support command:%d", pSql->self, pCmd->command);
|
||||
}
|
||||
|
||||
|
|
|
@ -19,8 +19,6 @@
|
|||
#include "texpr.h"
|
||||
#include "tlosertree.h"
|
||||
#include "tscLog.h"
|
||||
#include "tscUtil.h"
|
||||
#include "tschemautil.h"
|
||||
#include "tsclient.h"
|
||||
#include "qUtil.h"
|
||||
|
||||
|
@ -59,77 +57,25 @@ int32_t treeComparator(const void *pLeft, const void *pRight, void *param) {
|
|||
}
|
||||
}
|
||||
|
||||
// todo merge with vnode side function
|
||||
void tsCreateSQLFunctionCtx(SQueryInfo* pQueryInfo, SQLFunctionCtx* pCtx, SSchema* pSchema) {
|
||||
size_t size = tscSqlExprNumOfExprs(pQueryInfo);
|
||||
|
||||
for (int32_t i = 0; i < size; ++i) {
|
||||
SExprInfo *pExpr = tscSqlExprGet(pQueryInfo, i);
|
||||
|
||||
pCtx[i].order = pQueryInfo->order.order;
|
||||
pCtx[i].functionId = pExpr->base.functionId;
|
||||
|
||||
pCtx[i].order = pQueryInfo->order.order;
|
||||
pCtx[i].functionId = pExpr->base.functionId;
|
||||
|
||||
// input data format comes from pModel
|
||||
pCtx[i].inputType = pSchema[i].type;
|
||||
pCtx[i].inputBytes = pSchema[i].bytes;
|
||||
|
||||
pCtx[i].outputBytes = pExpr->base.resBytes;
|
||||
pCtx[i].outputType = pExpr->base.resType;
|
||||
|
||||
// input buffer hold only one point data
|
||||
pCtx[i].size = 1;
|
||||
pCtx[i].hasNull = true;
|
||||
pCtx[i].currentStage = MERGE_STAGE;
|
||||
|
||||
// for top/bottom function, the output of timestamp is the first column
|
||||
int32_t functionId = pExpr->base.functionId;
|
||||
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF) {
|
||||
pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
|
||||
pCtx[i].param[2].i64 = pQueryInfo->order.order;
|
||||
pCtx[i].param[2].nType = TSDB_DATA_TYPE_BIGINT;
|
||||
pCtx[i].param[1].i64 = pQueryInfo->order.orderColId;
|
||||
pCtx[i].param[0].i64 = pExpr->base.param[0].i64; // top/bot parameter
|
||||
} else if (functionId == TSDB_FUNC_APERCT) {
|
||||
pCtx[i].param[0].i64 = pExpr->base.param[0].i64;
|
||||
pCtx[i].param[0].nType = pExpr->base.param[0].nType;
|
||||
} else if (functionId == TSDB_FUNC_BLKINFO) {
|
||||
pCtx[i].param[0].i64 = pExpr->base.param[0].i64;
|
||||
pCtx[i].param[0].nType = pExpr->base.param[0].nType;
|
||||
pCtx[i].numOfParams = 1;
|
||||
}
|
||||
|
||||
pCtx[i].interBufBytes = pExpr->base.interBytes;
|
||||
pCtx[i].stableQuery = true;
|
||||
}
|
||||
}
|
||||
|
||||
void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrderDescriptor *pDesc,
|
||||
SColumnModel *finalmodel, SColumnModel *pFFModel, SSqlObj *pSql) {
|
||||
SSqlCmd* pCmd = &pSql->cmd;
|
||||
SSqlRes* pRes = &pSql->res;
|
||||
|
||||
int32_t tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrderDescriptor *pDesc,
|
||||
SQueryInfo* pQueryInfo, SLocalMerger **pMerger, int64_t id) {
|
||||
if (pMemBuffer == NULL) {
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, pFFModel, numOfBuffer);
|
||||
tscError("pMemBuffer:%p is NULL", pMemBuffer);
|
||||
pRes->code = TSDB_CODE_TSC_APP_ERROR;
|
||||
return;
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, numOfBuffer);
|
||||
tscError("0x%"PRIx64" %p pMemBuffer is NULL", id, pMemBuffer);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
}
|
||||
|
||||
if (pDesc->pColumnModel == NULL) {
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, pFFModel, numOfBuffer);
|
||||
tscError("0x%"PRIx64" no local buffer or intermediate result format model", pSql->self);
|
||||
pRes->code = TSDB_CODE_TSC_APP_ERROR;
|
||||
return;
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, numOfBuffer);
|
||||
tscError("0x%"PRIx64" no local buffer or intermediate result format model", id);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
}
|
||||
|
||||
int32_t numOfFlush = 0;
|
||||
for (int32_t i = 0; i < numOfBuffer; ++i) {
|
||||
int32_t len = pMemBuffer[i]->fileMeta.flushoutData.nLength;
|
||||
if (len == 0) {
|
||||
tscDebug("0x%"PRIx64" no data retrieved from orderOfVnode:%d", pSql->self, i + 1);
|
||||
tscDebug("0x%"PRIx64" no data retrieved from orderOfVnode:%d", id, i + 1);
|
||||
continue;
|
||||
}
|
||||
|
||||
|
@ -137,41 +83,36 @@ void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrde
|
|||
}
|
||||
|
||||
if (numOfFlush == 0 || numOfBuffer == 0) {
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, pFFModel, numOfBuffer);
|
||||
pCmd->command = TSDB_SQL_RETRIEVE_EMPTY_RESULT; // no result, set the result empty
|
||||
tscDebug("0x%"PRIx64" retrieved no data", pSql->self);
|
||||
return;
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, numOfBuffer);
|
||||
tscDebug("0x%"PRIx64" no data to retrieve", id);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
if (pDesc->pColumnModel->capacity >= pMemBuffer[0]->pageSize) {
|
||||
tscError("0x%"PRIx64" Invalid value of buffer capacity %d and page size %d ", pSql->self, pDesc->pColumnModel->capacity,
|
||||
tscError("0x%"PRIx64" Invalid value of buffer capacity %d and page size %d ", id, pDesc->pColumnModel->capacity,
|
||||
pMemBuffer[0]->pageSize);
|
||||
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, pFFModel, numOfBuffer);
|
||||
pRes->code = TSDB_CODE_TSC_APP_ERROR;
|
||||
return;
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, numOfBuffer);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
}
|
||||
|
||||
size_t size = sizeof(SLocalMerger) + POINTER_BYTES * numOfFlush;
|
||||
*pMerger = (SLocalMerger *) calloc(1, sizeof(SLocalMerger));
|
||||
if ((*pMerger) == NULL) {
|
||||
tscError("0x%"PRIx64" failed to create local merge structure, out of memory", id);
|
||||
|
||||
SLocalMerger *pMerger = (SLocalMerger *) calloc(1, size);
|
||||
if (pMerger == NULL) {
|
||||
tscError("0x%"PRIx64" failed to create local merge structure, out of memory", pSql->self);
|
||||
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, pFFModel, numOfBuffer);
|
||||
pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
return;
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, numOfBuffer);
|
||||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
pMerger->pExtMemBuffer = pMemBuffer;
|
||||
pMerger->pLocalDataSrc = (SLocalDataSource **)&pMerger[1];
|
||||
assert(pMerger->pLocalDataSrc != NULL);
|
||||
(*pMerger)->pExtMemBuffer = pMemBuffer;
|
||||
(*pMerger)->pLocalDataSrc = calloc(numOfFlush, POINTER_BYTES);
|
||||
assert((*pMerger)->pLocalDataSrc != NULL);
|
||||
|
||||
pMerger->numOfBuffer = numOfFlush;
|
||||
pMerger->numOfVnode = numOfBuffer;
|
||||
(*pMerger)->numOfBuffer = numOfFlush;
|
||||
(*pMerger)->numOfVnode = numOfBuffer;
|
||||
|
||||
pMerger->pDesc = pDesc;
|
||||
tscDebug("0x%"PRIx64" the number of merged leaves is: %d", pSql->self, pMerger->numOfBuffer);
|
||||
(*pMerger)->pDesc = pDesc;
|
||||
tscDebug("0x%"PRIx64" the number of merged leaves is: %d", id, (*pMerger)->numOfBuffer);
|
||||
|
||||
int32_t idx = 0;
|
||||
for (int32_t i = 0; i < numOfBuffer; ++i) {
|
||||
|
@@ -180,13 +121,12 @@ void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrde
|
|||
for (int32_t j = 0; j < numOfFlushoutInFile; ++j) {
|
||||
SLocalDataSource *ds = (SLocalDataSource *)malloc(sizeof(SLocalDataSource) + pMemBuffer[0]->pageSize);
|
||||
if (ds == NULL) {
|
||||
tscError("0x%"PRIx64" failed to create merge structure", pSql->self);
|
||||
pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
tscError("0x%"PRIx64" failed to create merge structure", id);
|
||||
tfree(pMerger);
|
||||
return;
|
||||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
pMerger->pLocalDataSrc[idx] = ds;
|
||||
(*pMerger)->pLocalDataSrc[idx] = ds;
|
||||
|
||||
ds->pMemBuffer = pMemBuffer[i];
|
||||
ds->flushoutIdx = j;
|
||||
|
@@ -194,12 +134,12 @@ void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrde
|
|||
ds->pageId = 0;
|
||||
ds->rowIdx = 0;
|
||||
|
||||
tscDebug("0x%"PRIx64" load data from disk into memory, orderOfVnode:%d, total:%d", pSql->self, i + 1, idx + 1);
|
||||
tscDebug("0x%"PRIx64" load data from disk into memory, orderOfVnode:%d, total:%d", id, i + 1, idx + 1);
|
||||
tExtMemBufferLoadData(pMemBuffer[i], &(ds->filePage), j, 0);
|
||||
#ifdef _DEBUG_VIEW
|
||||
printf("load data page into mem for build loser tree: %" PRIu64 " rows\n", ds->filePage.num);
|
||||
SSrcColumnInfo colInfo[256] = {0};
|
||||
SQueryInfo * pQueryInfo = tscGetQueryInfo(pCmd, pCmd->clauseIndex);
|
||||
SQueryInfo * pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
|
||||
tscGetSrcColumnInfo(colInfo, pQueryInfo);
|
||||
|
||||
|
@@ -208,7 +148,7 @@ void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrde
|
|||
#endif
|
||||
|
||||
if (ds->filePage.num == 0) { // no data in this flush, the index does not increase
|
||||
tscDebug("0x%"PRIx64" flush data is empty, ignore %d flush record", pSql->self, idx);
|
||||
tscDebug("0x%"PRIx64" flush data is empty, ignore %d flush record", id, idx);
|
||||
tfree(ds);
|
||||
continue;
|
||||
}
|
||||
|
@@ -219,115 +159,54 @@ void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrde
|
|||
|
||||
// no data actually, no need to merge result.
|
||||
if (idx == 0) {
|
||||
tfree(pMerger);
|
||||
return;
|
||||
tscDebug("0x%"PRIx64" retrieved no data", id);
|
||||
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, numOfBuffer);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
pMerger->numOfBuffer = idx;
|
||||
(*pMerger)->numOfBuffer = idx;
|
||||
|
||||
SCompareParam *param = malloc(sizeof(SCompareParam));
|
||||
if (param == NULL) {
|
||||
tfree(pMerger);
|
||||
return;
|
||||
tfree((*pMerger));
|
||||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
param->pLocalData = pMerger->pLocalDataSrc;
|
||||
param->pDesc = pMerger->pDesc;
|
||||
param->num = pMerger->pLocalDataSrc[0]->pMemBuffer->numOfElemsPerPage;
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd, pCmd->clauseIndex);
|
||||
param->pLocalData = (*pMerger)->pLocalDataSrc;
|
||||
param->pDesc = (*pMerger)->pDesc;
|
||||
param->num = (*pMerger)->pLocalDataSrc[0]->pMemBuffer->numOfElemsPerPage;
|
||||
|
||||
param->groupOrderType = pQueryInfo->groupbyExpr.orderType;
|
||||
pMerger->orderPrjOnSTable = tscOrderedProjectionQueryOnSTable(pQueryInfo, 0);
|
||||
|
||||
pRes->code = tLoserTreeCreate(&pMerger->pLoserTree, pMerger->numOfBuffer, param, treeComparator);
|
||||
if (pMerger->pLoserTree == NULL || pRes->code != 0) {
|
||||
int32_t code = tLoserTreeCreate(&(*pMerger)->pLoserTree, (*pMerger)->numOfBuffer, param, treeComparator);
|
||||
if ((*pMerger)->pLoserTree == NULL || code != TSDB_CODE_SUCCESS) {
|
||||
tfree(param);
|
||||
tfree(pMerger);
|
||||
return;
|
||||
tfree((*pMerger));
|
||||
return code;
|
||||
}
|
||||
|
||||
// the input data format follows the old format, but output in a new format.
|
||||
// so, all the input must be parsed as old format
|
||||
pMerger->pCtx = (SQLFunctionCtx *)calloc(tscSqlExprNumOfExprs(pQueryInfo), sizeof(SQLFunctionCtx));
|
||||
pMerger->rowSize = pMemBuffer[0]->nElemSize;
|
||||
(*pMerger)->rowSize = pMemBuffer[0]->nElemSize;
|
||||
|
||||
tscFieldInfoUpdateOffset(pQueryInfo);
|
||||
// todo fixed row size is larger than the minimum page size;
|
||||
assert((*pMerger)->rowSize <= pMemBuffer[0]->pageSize);
|
||||
|
||||
if (pMerger->rowSize > pMemBuffer[0]->pageSize) {
|
||||
assert(false); // todo fixed row size is larger than the minimum page size;
|
||||
}
|
||||
|
||||
// used to keep the latest input row
|
||||
pMerger->pTempBuffer = (tFilePage *)calloc(1, pMerger->rowSize + sizeof(tFilePage));
|
||||
|
||||
pMerger->nResultBufSize = pMemBuffer[0]->pageSize * 16;
|
||||
pMerger->pResultBuf = (tFilePage *)calloc(1, pMerger->nResultBufSize + sizeof(tFilePage));
|
||||
|
||||
pMerger->resColModel = finalmodel;
|
||||
pMerger->resColModel->capacity = pMerger->nResultBufSize;
|
||||
pMerger->finalModel = pFFModel;
|
||||
|
||||
if (finalmodel->rowSize > 0) {
|
||||
pMerger->resColModel->capacity /= finalmodel->rowSize;
|
||||
}
|
||||
|
||||
assert(finalmodel->rowSize > 0 && finalmodel->rowSize <= pMerger->rowSize);
|
||||
|
||||
if (pMerger->pTempBuffer == NULL || pMerger->pLoserTree == NULL) {
|
||||
tfree(pMerger->pTempBuffer);
|
||||
tfree(pMerger->pLoserTree);
|
||||
if ((*pMerger)->pLoserTree == NULL) {
|
||||
tfree((*pMerger)->pLoserTree);
|
||||
tfree(param);
|
||||
tfree(pMerger);
|
||||
pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
return;
|
||||
tfree((*pMerger));
|
||||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
pMerger->pTempBuffer->num = 0;
|
||||
tscCreateResPointerInfo(pRes, pQueryInfo);
|
||||
|
||||
SSchema* pschema = calloc(pDesc->pColumnModel->numOfCols, sizeof(SSchema));
|
||||
for(int32_t i = 0; i < pDesc->pColumnModel->numOfCols; ++i) {
|
||||
pschema[i] = pDesc->pColumnModel->pFields[i].field;
|
||||
}
|
||||
|
||||
tsCreateSQLFunctionCtx(pQueryInfo, pMerger->pCtx, pschema);
|
||||
// setCtxInputOutputBuffer(pQueryInfo, pMerger->pCtx, pMerger, pDesc);
|
||||
|
||||
tfree(pschema);
|
||||
|
||||
int32_t maxBufSize = 0;
|
||||
for (int32_t k = 0; k < tscSqlExprNumOfExprs(pQueryInfo); ++k) {
|
||||
SExprInfo *pExpr = tscSqlExprGet(pQueryInfo, k);
|
||||
if (maxBufSize < pExpr->base.resBytes && pExpr->base.functionId == TSDB_FUNC_TAG) {
|
||||
maxBufSize = pExpr->base.resBytes;
|
||||
}
|
||||
}
|
||||
|
||||
// we change the capacity of schema to denote that there is only one row in temp buffer
|
||||
pMerger->pDesc->pColumnModel->capacity = 1;
|
||||
|
||||
// restore the limitation value at the last stage
|
||||
if (tscOrderedProjectionQueryOnSTable(pQueryInfo, 0)) {
|
||||
pQueryInfo->limit.limit = pQueryInfo->clauseLimit;
|
||||
pQueryInfo->limit.offset = pQueryInfo->prjOffset;
|
||||
}
|
||||
|
||||
pRes->pLocalMerger = pMerger;
|
||||
pRes->numOfGroups = 0;
|
||||
// we change the capacity of schema to denote that there is only one row in temp buffer
|
||||
(*pMerger)->pDesc->pColumnModel->capacity = 1;
|
||||
|
||||
// STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
|
||||
// STableComInfo tinfo = tscGetTableInfo(pTableMetaInfo->pTableMeta);
|
||||
|
||||
// TSKEY stime = (pQueryInfo->order.order == TSDB_ORDER_ASC)? pQueryInfo->window.skey : pQueryInfo->window.ekey;
|
||||
// int64_t revisedSTime = taosTimeTruncate(stime, &pQueryInfo->interval, tinfo.precision);
|
||||
|
||||
// if (pQueryInfo->fillType != TSDB_FILL_NONE) {
|
||||
// SFillColInfo* pFillCol = createFillColInfo(pQueryInfo);
|
||||
// pMerger->pFillInfo =
|
||||
// taosCreateFillInfo(pQueryInfo->order.order, revisedSTime, pQueryInfo->groupbyExpr.numOfGroupCols, 4096,
|
||||
// (int32_t)pQueryInfo->fieldsInfo.numOfOutput, pQueryInfo->interval.sliding,
|
||||
// pQueryInfo->interval.slidingUnit, tinfo.precision, pQueryInfo->fillType, pFillCol, pSql);
|
||||
// }
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
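// [editor's note] Illustrative caller sketch, not part of this commit: the refactored
// tscCreateLocalMerger() reports failures through its return code and hands the merger back
// via an out-parameter instead of writing into pSql->res. Variable names are hypothetical.
SLocalMerger *pMerger = NULL;
int32_t code = tscCreateLocalMerger(pMemBuffer, numOfBuffer, pDesc, pQueryInfo, &pMerger, pSql->self);
if (code != TSDB_CODE_SUCCESS) {
  return code;  // several failure paths above already release the buffers via tscLocalReducerEnvDestroy()
}
// ... run the merge ...
tscDestroyLocalMerger(pMerger);  // the destroy entry point now takes the merger directly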
static int32_t tscFlushTmpBufferImpl(tExtMemBuffer *pMemoryBuf, tOrderDescriptor *pDesc, tFilePage *pPage,
|
||||
|
@@ -418,44 +297,32 @@ int32_t saveToBuffer(tExtMemBuffer *pMemoryBuf, tOrderDescriptor *pDesc, tFilePa
|
|||
return 0;
|
||||
}
|
||||
|
||||
void tscDestroyLocalMerger(SSqlObj *pSql) {
|
||||
if (pSql == NULL) {
|
||||
void tscDestroyLocalMerger(SLocalMerger* pLocalMerger) {
|
||||
if (pLocalMerger == NULL) {
|
||||
return;
|
||||
}
|
||||
|
||||
SSqlRes *pRes = &(pSql->res);
|
||||
if (pRes->pLocalMerger == NULL) {
|
||||
return;
|
||||
for (int32_t i = 0; i < pLocalMerger->numOfBuffer; ++i) {
|
||||
tfree(pLocalMerger->pLocalDataSrc[i]);
|
||||
}
|
||||
|
||||
// there is no more result, so we release all allocated resource
|
||||
SLocalMerger *pLocalMerge = (SLocalMerger *)atomic_exchange_ptr(&pRes->pLocalMerger, NULL);
|
||||
tfree(pLocalMerge->pResultBuf);
|
||||
tfree(pLocalMerge->pCtx);
|
||||
pLocalMerger->numOfBuffer = 0;
|
||||
tscLocalReducerEnvDestroy(pLocalMerger->pExtMemBuffer, pLocalMerger->pDesc, pLocalMerger->numOfVnode);
|
||||
|
||||
if (pLocalMerge->pLoserTree) {
|
||||
tfree(pLocalMerge->pLoserTree->param);
|
||||
tfree(pLocalMerge->pLoserTree);
|
||||
pLocalMerger->numOfCompleted = 0;
|
||||
|
||||
if (pLocalMerger->pLoserTree) {
|
||||
tfree(pLocalMerger->pLoserTree->param);
|
||||
tfree(pLocalMerger->pLoserTree);
|
||||
}
|
||||
|
||||
tscLocalReducerEnvDestroy(pLocalMerge->pExtMemBuffer, pLocalMerge->pDesc, pLocalMerge->resColModel,
|
||||
pLocalMerge->finalModel, pLocalMerge->numOfVnode);
|
||||
for (int32_t i = 0; i < pLocalMerge->numOfBuffer; ++i) {
|
||||
tfree(pLocalMerge->pLocalDataSrc[i]);
|
||||
}
|
||||
|
||||
pLocalMerge->numOfBuffer = 0;
|
||||
pLocalMerge->numOfCompleted = 0;
|
||||
tfree(pLocalMerge->pTempBuffer);
|
||||
|
||||
free(pLocalMerge);
|
||||
|
||||
tscDebug("0x%"PRIx64" free local reducer finished", pSql->self);
|
||||
tfree(pLocalMerger->buf);
|
||||
tfree(pLocalMerger->pLocalDataSrc);
|
||||
free(pLocalMerger);
|
||||
}
|
||||
|
||||
static int32_t createOrderDescriptor(tOrderDescriptor **pOrderDesc, SSqlCmd *pCmd, SColumnModel *pModel) {
|
||||
int32_t numOfGroupByCols = 0;
|
||||
SQueryInfo *pQueryInfo = tscGetActiveQueryInfo(pCmd);
|
||||
static int32_t createOrderDescriptor(tOrderDescriptor **pOrderDesc, SQueryInfo* pQueryInfo, SColumnModel *pModel) {
|
||||
int32_t numOfGroupByCols = 0;
|
||||
|
||||
if (pQueryInfo->groupbyExpr.numOfGroupCols > 0) {
|
||||
numOfGroupByCols = pQueryInfo->groupbyExpr.numOfGroupCols;
|
||||
|
@@ -474,13 +341,13 @@ static int32_t createOrderDescriptor(tOrderDescriptor **pOrderDesc, SSqlCmd *pCm
|
|||
if (numOfGroupByCols > 0) {
|
||||
|
||||
if (pQueryInfo->groupbyExpr.numOfGroupCols > 0) {
|
||||
int32_t numOfInternalOutput = (int32_t) tscSqlExprNumOfExprs(pQueryInfo);
|
||||
int32_t numOfInternalOutput = (int32_t) tscNumOfExprs(pQueryInfo);
|
||||
|
||||
// the last "pQueryInfo->groupbyExpr.numOfGroupCols" columns are order-by columns
|
||||
for (int32_t i = 0; i < pQueryInfo->groupbyExpr.numOfGroupCols; ++i) {
|
||||
SColIndex* pColIndex = taosArrayGet(pQueryInfo->groupbyExpr.columnInfo, i);
|
||||
for(int32_t j = 0; j < numOfInternalOutput; ++j) {
|
||||
SExprInfo* pExprInfo = tscSqlExprGet(pQueryInfo, j);
|
||||
SExprInfo* pExprInfo = tscExprGet(pQueryInfo, j);
|
||||
|
||||
int32_t functionId = pExprInfo->base.functionId;
|
||||
if (pColIndex->colId == pExprInfo->base.colInfo.colId && (functionId == TSDB_FUNC_PRJ || functionId == TSDB_FUNC_TAG)) {
|
||||
|
@@ -502,9 +369,9 @@ static int32_t createOrderDescriptor(tOrderDescriptor **pOrderDesc, SSqlCmd *pCm
|
|||
if (pQueryInfo->interval.interval != 0) {
|
||||
orderColIndexList[0] = PRIMARYKEY_TIMESTAMP_COL_INDEX;
|
||||
} else {
|
||||
size_t size = tscSqlExprNumOfExprs(pQueryInfo);
|
||||
size_t size = tscNumOfExprs(pQueryInfo);
|
||||
for (int32_t i = 0; i < size; ++i) {
|
||||
SExprInfo *pExpr = tscSqlExprGet(pQueryInfo, i);
|
||||
SExprInfo *pExpr = tscExprGet(pQueryInfo, i);
|
||||
if (pExpr->base.functionId == TSDB_FUNC_PRJ && pExpr->base.colInfo.colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
|
||||
orderColIndexList[0] = i;
|
||||
}
|
||||
|
@@ -525,37 +392,30 @@ static int32_t createOrderDescriptor(tOrderDescriptor **pOrderDesc, SSqlCmd *pCm
|
|||
}
|
||||
}
|
||||
|
||||
int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOrderDescriptor **pOrderDesc,
|
||||
SColumnModel **pFinalModel, SColumnModel** pFFModel, uint32_t nBufferSizes) {
|
||||
SSqlCmd *pCmd = &pSql->cmd;
|
||||
SSqlRes *pRes = &pSql->res;
|
||||
|
||||
SSchema * pSchema = NULL;
|
||||
int32_t tscLocalReducerEnvCreate(SQueryInfo *pQueryInfo, tExtMemBuffer ***pMemBuffer, int32_t numOfSub,
|
||||
tOrderDescriptor **pOrderDesc, uint32_t nBufferSizes, int64_t id) {
|
||||
SSchema *pSchema = NULL;
|
||||
SColumnModel *pModel = NULL;
|
||||
*pFinalModel = NULL;
|
||||
|
||||
SQueryInfo * pQueryInfo = tscGetActiveQueryInfo(pCmd);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
|
||||
(*pMemBuffer) = (tExtMemBuffer **)malloc(POINTER_BYTES * pSql->subState.numOfSub);
|
||||
(*pMemBuffer) = (tExtMemBuffer **)malloc(POINTER_BYTES * numOfSub);
|
||||
if (*pMemBuffer == NULL) {
|
||||
tscError("0x%"PRIx64" failed to allocate memory", pSql->self);
|
||||
pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
return pRes->code;
|
||||
tscError("0x%"PRIx64" failed to allocate memory", id);
|
||||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
size_t size = tscSqlExprNumOfExprs(pQueryInfo);
|
||||
size_t size = tscNumOfExprs(pQueryInfo);
|
||||
|
||||
pSchema = (SSchema *)calloc(1, sizeof(SSchema) * size);
|
||||
if (pSchema == NULL) {
|
||||
tscError("0x%"PRIx64" failed to allocate memory", pSql->self);
|
||||
pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
return pRes->code;
|
||||
tscError("0x%"PRIx64" failed to allocate memory", id);
|
||||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
int32_t rlen = 0;
|
||||
for (int32_t i = 0; i < size; ++i) {
|
||||
SExprInfo *pExpr = tscSqlExprGet(pQueryInfo, i);
|
||||
SExprInfo *pExpr = tscExprGet(pQueryInfo, i);
|
||||
|
||||
pSchema[i].bytes = pExpr->base.resBytes;
|
||||
pSchema[i].type = (int8_t)pExpr->base.resType;
|
||||
|
@@ -570,6 +430,7 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
|
|||
}
|
||||
|
||||
pModel = createColumnModel(pSchema, (int32_t)size, capacity);
|
||||
tfree(pSchema);
|
||||
|
||||
int32_t pg = DEFAULT_PAGE_SIZE;
|
||||
int32_t overhead = sizeof(tFilePage);
|
||||
|
@@ -577,95 +438,26 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
|
|||
pg *= 2;
|
||||
}
|
||||
|
||||
size_t numOfSubs = pSql->subState.numOfSub;
|
||||
assert(numOfSubs <= pTableMetaInfo->vgroupList->numOfVgroups);
|
||||
for (int32_t i = 0; i < numOfSubs; ++i) {
|
||||
assert(numOfSub <= pTableMetaInfo->vgroupList->numOfVgroups);
|
||||
for (int32_t i = 0; i < numOfSub; ++i) {
|
||||
(*pMemBuffer)[i] = createExtMemBuffer(nBufferSizes, rlen, pg, pModel);
|
||||
(*pMemBuffer)[i]->flushModel = MULTIPLE_APPEND_MODEL;
|
||||
}
|
||||
|
||||
if (createOrderDescriptor(pOrderDesc, pCmd, pModel) != TSDB_CODE_SUCCESS) {
|
||||
pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
tfree(pSchema);
|
||||
return pRes->code;
|
||||
if (createOrderDescriptor(pOrderDesc, pQueryInfo, pModel) != TSDB_CODE_SUCCESS) {
|
||||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
// final result depends on the fields number
|
||||
memset(pSchema, 0, sizeof(SSchema) * size);
|
||||
|
||||
for (int32_t i = 0; i < size; ++i) {
|
||||
SExprInfo *pExpr = tscSqlExprGet(pQueryInfo, i);
|
||||
|
||||
SSchema p1 = {0};
|
||||
if (pExpr->base.colInfo.colIndex == TSDB_TBNAME_COLUMN_INDEX) {
|
||||
p1 = *tGetTbnameColumnSchema();
|
||||
} else if (TSDB_COL_IS_UD_COL(pExpr->base.colInfo.flag)) {
|
||||
p1.bytes = pExpr->base.resBytes;
|
||||
p1.type = (uint8_t) pExpr->base.resType;
|
||||
tstrncpy(p1.name, pExpr->base.aliasName, tListLen(p1.name));
|
||||
} else {
|
||||
p1 = *tscGetTableColumnSchema(pTableMetaInfo->pTableMeta, pExpr->base.colInfo.colIndex);
|
||||
}
|
||||
|
||||
int32_t inter = 0;
|
||||
int16_t type = -1;
|
||||
int16_t bytes = 0;
|
||||
|
||||
// the final result size and type in the same as query on single table.
|
||||
// so here, set the flag to be false;
|
||||
int32_t functionId = pExpr->base.functionId;
|
||||
if (functionId >= TSDB_FUNC_TS && functionId <= TSDB_FUNC_DIFF) {
|
||||
type = pModel->pFields[i].field.type;
|
||||
bytes = pModel->pFields[i].field.bytes;
|
||||
} else {
|
||||
if (functionId == TSDB_FUNC_FIRST_DST) {
|
||||
functionId = TSDB_FUNC_FIRST;
|
||||
} else if (functionId == TSDB_FUNC_LAST_DST) {
|
||||
functionId = TSDB_FUNC_LAST;
|
||||
} else if (functionId == TSDB_FUNC_STDDEV_DST) {
|
||||
functionId = TSDB_FUNC_STDDEV;
|
||||
}
|
||||
|
||||
int32_t ret = getResultDataInfo(p1.type, p1.bytes, functionId, 0, &type, &bytes, &inter, 0, false);
|
||||
assert(ret == TSDB_CODE_SUCCESS);
|
||||
}
|
||||
|
||||
pSchema[i].type = (uint8_t)type;
|
||||
pSchema[i].bytes = bytes;
|
||||
strcpy(pSchema[i].name, pModel->pFields[i].field.name);
|
||||
}
|
||||
|
||||
*pFinalModel = createColumnModel(pSchema, (int32_t)size, capacity);
|
||||
|
||||
memset(pSchema, 0, sizeof(SSchema) * size);
|
||||
size = tscNumOfFields(pQueryInfo);
|
||||
|
||||
for(int32_t i = 0; i < size; ++i) {
|
||||
SInternalField* pField = tscFieldInfoGetInternalField(&pQueryInfo->fieldsInfo, i);
|
||||
pSchema[i].bytes = pField->field.bytes;
|
||||
pSchema[i].type = pField->field.type;
|
||||
tstrncpy(pSchema[i].name, pField->field.name, tListLen(pSchema[i].name));
|
||||
}
|
||||
|
||||
*pFFModel = createColumnModel(pSchema, (int32_t) size, capacity);
|
||||
|
||||
tfree(pSchema);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param pMemBuffer
|
||||
* @param pDesc
|
||||
* @param pFinalModel
|
||||
* @param numOfVnodes
|
||||
*/
|
||||
void tscLocalReducerEnvDestroy(tExtMemBuffer **pMemBuffer, tOrderDescriptor *pDesc, SColumnModel *pFinalModel, SColumnModel *pFFModel,
|
||||
int32_t numOfVnodes) {
|
||||
destroyColumnModel(pFinalModel);
|
||||
destroyColumnModel(pFFModel);
|
||||
|
||||
void tscLocalReducerEnvDestroy(tExtMemBuffer **pMemBuffer, tOrderDescriptor *pDesc, int32_t numOfVnodes) {
|
||||
tOrderDescDestroy(pDesc);
|
||||
|
||||
for (int32_t i = 0; i < numOfVnodes; ++i) {
|
||||
pMemBuffer[i] = destoryExtMemBuffer(pMemBuffer[i]);
|
||||
}
|
||||
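// [editor's note] Hedged pairing sketch for the new environment helpers (not in this commit);
// pQueryInfo, numOfSub, nBufferSizes and id are hypothetical caller-side values.
tExtMemBuffer  **pMemBuffer = NULL;
tOrderDescriptor *pDesc     = NULL;
int32_t code = tscLocalReducerEnvCreate(pQueryInfo, &pMemBuffer, numOfSub, &pDesc, nBufferSizes, id);
if (code == TSDB_CODE_SUCCESS) {
  // ... flush sub-query results into pMemBuffer, then build and run the local merger ...
  tscLocalReducerEnvDestroy(pMemBuffer, pDesc, numOfSub);  // no more final/field column models to pass
}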
|
@@ -877,10 +669,12 @@ static bool isAllSourcesCompleted(SLocalMerger *pLocalMerge) {
|
|||
return (pLocalMerge->numOfBuffer == pLocalMerge->numOfCompleted);
|
||||
}
|
||||
|
||||
void tscInitResObjForLocalQuery(SSqlObj *pObj, int32_t numOfRes, int32_t rowLen) {
|
||||
SSqlRes *pRes = &pObj->res;
|
||||
void tscInitResObjForLocalQuery(SSqlObj *pSql, int32_t numOfRes, int32_t rowLen) {
|
||||
SSqlRes *pRes = &pSql->res;
|
||||
if (pRes->pLocalMerger != NULL) {
|
||||
tscDestroyLocalMerger(pObj);
|
||||
tscDestroyLocalMerger(pRes->pLocalMerger);
|
||||
pRes->pLocalMerger = NULL;
|
||||
tscDebug("0x%"PRIx64" free local reducer finished", pSql->self);
|
||||
}
|
||||
|
||||
pRes->qId = 1; // hack to pass the safety check in fetch_row function
|
||||
|
@@ -891,14 +685,12 @@ void tscInitResObjForLocalQuery(SSqlObj *pObj, int32_t numOfRes, int32_t rowLen)
|
|||
pRes->pLocalMerger = (SLocalMerger *)calloc(1, sizeof(SLocalMerger));
|
||||
|
||||
/*
|
||||
* we need one additional byte space
|
||||
* the sprintf function needs one additional space to put '\0' at the end of string
|
||||
* One more byte space is required, since the sprintf function needs one additional space to put '\0' at
|
||||
* the end of string
|
||||
*/
|
||||
size_t allocSize = numOfRes * rowLen + sizeof(tFilePage) + 1;
|
||||
pRes->pLocalMerger->pResultBuf = (tFilePage *)calloc(1, allocSize);
|
||||
|
||||
pRes->pLocalMerger->pResultBuf->num = numOfRes;
|
||||
pRes->data = pRes->pLocalMerger->pResultBuf->data;
|
||||
size_t size = numOfRes * rowLen + 1;
|
||||
pRes->pLocalMerger->buf = calloc(1, size);
|
||||
pRes->data = pRes->pLocalMerger->buf;
|
||||
}
|
||||
|
||||
int32_t doArithmeticCalculate(SQueryInfo* pQueryInfo, tFilePage* pOutput, int32_t rowSize, int32_t finalRowSize) {
|
||||
|
@@ -910,12 +702,12 @@ int32_t doArithmeticCalculate(SQueryInfo* pQueryInfo, tFilePage* pOutput, int32_
|
|||
|
||||
// todo refactor
|
||||
arithSup.offset = 0;
|
||||
arithSup.numOfCols = (int32_t) tscSqlExprNumOfExprs(pQueryInfo);
|
||||
arithSup.numOfCols = (int32_t) tscNumOfExprs(pQueryInfo);
|
||||
arithSup.exprList = pQueryInfo->exprList;
|
||||
arithSup.data = calloc(arithSup.numOfCols, POINTER_BYTES);
|
||||
|
||||
for(int32_t k = 0; k < arithSup.numOfCols; ++k) {
|
||||
SExprInfo* pExpr = tscSqlExprGet(pQueryInfo, k);
|
||||
SExprInfo* pExpr = tscExprGet(pQueryInfo, k);
|
||||
arithSup.data[k] = (pOutput->data + pOutput->num* pExpr->base.offset);
|
||||
}
|
||||
|
||||
|
@@ -944,8 +736,8 @@ int32_t doArithmeticCalculate(SQueryInfo* pQueryInfo, tFilePage* pOutput, int32_
|
|||
return offset;
|
||||
}
|
||||
|
||||
#define COLMODEL_GET_VAL(data, schema, allrow, rowId, colId) \
|
||||
(data + (schema)->pFields[colId].offset * (allrow) + (rowId) * (schema)->pFields[colId].field.bytes)
|
||||
#define COLMODEL_GET_VAL(data, schema, rowId, colId) \
|
||||
(data + (schema)->pFields[colId].offset * ((schema)->capacity) + (rowId) * (schema)->pFields[colId].field.bytes)
|
||||
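// [editor's note] Illustrative before/after of a call site: the explicit row-count argument is
// gone because the new macro reads the capacity from the column model itself.
//   old: src = COLMODEL_GET_VAL(buf, pModel, pModel->capacity, rowIndex, colIndex);
//   new: src = COLMODEL_GET_VAL(buf, pModel, rowIndex, colIndex);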
|
||||
static void appendOneRowToDataBlock(SSDataBlock *pBlock, char *buf, SColumnModel *pModel, int32_t rowIndex,
|
||||
int32_t maxRows) {
|
||||
|
@@ -953,7 +745,7 @@ static void appendOneRowToDataBlock(SSDataBlock *pBlock, char *buf, SColumnModel
|
|||
SColumnInfoData* pColInfo = taosArrayGet(pBlock->pDataBlock, i);
|
||||
char* p = pColInfo->pData + pBlock->info.rows * pColInfo->info.bytes;
|
||||
|
||||
char *src = COLMODEL_GET_VAL(buf, pModel, maxRows, rowIndex, i);
|
||||
char *src = COLMODEL_GET_VAL(buf, pModel, rowIndex, i);
|
||||
memmove(p, src, pColInfo->info.bytes);
|
||||
}
|
||||
|
||||
|
@ -970,8 +762,6 @@ SSDataBlock* doMultiwayMergeSort(void* param, bool* newgroup) {
|
|||
|
||||
SLocalMerger *pMerger = pInfo->pMerge;
|
||||
SLoserTreeInfo *pTree = pMerger->pLoserTree;
|
||||
SColumnModel *pModel = pMerger->pDesc->pColumnModel;
|
||||
tFilePage *tmpBuffer = pMerger->pTempBuffer;
|
||||
|
||||
pInfo->binfo.pRes->info.rows = 0;
|
||||
|
||||
|
@@ -984,7 +774,7 @@ SSDataBlock* doMultiwayMergeSort(void* param, bool* newgroup) {
|
|||
printf("chosen data in pTree[0] = %d\n", pTree->pNode[0].index);
|
||||
#endif
|
||||
|
||||
assert((pTree->pNode[0].index < pMerger->numOfBuffer) && (pTree->pNode[0].index >= 0) && tmpBuffer->num == 0);
|
||||
assert((pTree->pNode[0].index < pMerger->numOfBuffer) && (pTree->pNode[0].index >= 0));
|
||||
|
||||
// chosen from loser tree
|
||||
SLocalDataSource *pOneDataSrc = pMerger->pLocalDataSrc[pTree->pNode[0].index];
|
||||
|
@@ -997,11 +787,10 @@ SSDataBlock* doMultiwayMergeSort(void* param, bool* newgroup) {
|
|||
SColIndex * pIndex = taosArrayGet(pInfo->orderColumnList, i);
|
||||
SColumnInfoData *pColInfo = taosArrayGet(pInfo->binfo.pRes->pDataBlock, pIndex->colIndex);
|
||||
|
||||
char *newRow =
|
||||
COLMODEL_GET_VAL(pOneDataSrc->filePage.data, pModel, pOneDataSrc->pMemBuffer->pColumnModel->capacity,
|
||||
pOneDataSrc->rowIdx, pIndex->colIndex);
|
||||
char *newRow = COLMODEL_GET_VAL(pOneDataSrc->filePage.data, pOneDataSrc->pMemBuffer->pColumnModel,
|
||||
pOneDataSrc->rowIdx, pIndex->colIndex);
|
||||
|
||||
char * data = pInfo->prevRow[i];
|
||||
char *data = pInfo->prevRow[i];
|
||||
int32_t ret = columnValueAscendingComparator(data, newRow, pColInfo->info.type, pColInfo->info.bytes);
|
||||
if (ret == 0) {
|
||||
continue;
|
||||
|
@@ -1020,9 +809,8 @@ SSDataBlock* doMultiwayMergeSort(void* param, bool* newgroup) {
|
|||
SColIndex * pIndex = taosArrayGet(pInfo->orderColumnList, i);
|
||||
SColumnInfoData *pColInfo = taosArrayGet(pInfo->binfo.pRes->pDataBlock, pIndex->colIndex);
|
||||
|
||||
char *curCol =
|
||||
COLMODEL_GET_VAL(pOneDataSrc->filePage.data, pModel, pOneDataSrc->pMemBuffer->pColumnModel->capacity,
|
||||
pOneDataSrc->rowIdx, pIndex->colIndex);
|
||||
char *curCol = COLMODEL_GET_VAL(pOneDataSrc->filePage.data, pOneDataSrc->pMemBuffer->pColumnModel,
|
||||
pOneDataSrc->rowIdx, pIndex->colIndex);
|
||||
memcpy(pInfo->prevRow[i], curCol, pColInfo->info.bytes);
|
||||
}
|
||||
|
||||
|
@@ -1033,7 +821,8 @@ SSDataBlock* doMultiwayMergeSort(void* param, bool* newgroup) {
|
|||
return pInfo->binfo.pRes;
|
||||
}
|
||||
|
||||
appendOneRowToDataBlock(pInfo->binfo.pRes, pOneDataSrc->filePage.data, pModel, pOneDataSrc->rowIdx, pOneDataSrc->pMemBuffer->pColumnModel->capacity);
|
||||
appendOneRowToDataBlock(pInfo->binfo.pRes, pOneDataSrc->filePage.data, pOneDataSrc->pMemBuffer->pColumnModel,
|
||||
pOneDataSrc->rowIdx, pOneDataSrc->pMemBuffer->pColumnModel->capacity);
|
||||
|
||||
#if defined(_DEBUG_VIEW)
|
||||
printf("chosen row:\t");
|
||||
|
@@ -1082,7 +871,7 @@ SSDataBlock* doGlobalAggregate(void* param, bool* newgroup) {
|
|||
}
|
||||
|
||||
SMultiwayMergeInfo *pAggInfo = pOperator->info;
|
||||
SOperatorInfo *upstream = pOperator->upstream;
|
||||
SOperatorInfo *upstream = pOperator->upstream[0];
|
||||
|
||||
*newgroup = false;
|
||||
bool handleData = false;
|
||||
|
@@ -1166,7 +955,6 @@ SSDataBlock* doGlobalAggregate(void* param, bool* newgroup) {
|
|||
if (pInfoData->info.type == TSDB_DATA_TYPE_TIMESTAMP && pRes->info.rows > 0) {
|
||||
STimeWindow* w = &pRes->info.window;
|
||||
|
||||
// TODO in case of desc order, swap it
|
||||
w->skey = *(int64_t*)pInfoData->pData;
|
||||
w->ekey = *(int64_t*)(((char*)pInfoData->pData) + TSDB_KEYSIZE * (pRes->info.rows - 1));
|
||||
|
||||
|
@@ -1186,7 +974,7 @@ static SSDataBlock* skipGroupBlock(SOperatorInfo* pOperator, bool* newgroup) {
|
|||
|
||||
SSDataBlock* pBlock = NULL;
|
||||
if (pInfo->currentGroupOffset == 0) {
|
||||
pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (pBlock == NULL) {
|
||||
setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
|
@@ -1194,7 +982,7 @@ static SSDataBlock* skipGroupBlock(SOperatorInfo* pOperator, bool* newgroup) {
|
|||
|
||||
if (*newgroup == false && pInfo->limit.limit > 0 && pInfo->rowsTotal >= pInfo->limit.limit) {
|
||||
while ((*newgroup) == false) { // ignore the remain blocks
|
||||
pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (pBlock == NULL) {
|
||||
setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
|
@@ -1206,7 +994,7 @@ static SSDataBlock* skipGroupBlock(SOperatorInfo* pOperator, bool* newgroup) {
|
|||
return pBlock;
|
||||
}
|
||||
|
||||
pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (pBlock == NULL) {
|
||||
setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
|
@@ -1220,7 +1008,7 @@ static SSDataBlock* skipGroupBlock(SOperatorInfo* pOperator, bool* newgroup) {
|
|||
}
|
||||
|
||||
while ((*newgroup) == false) {
|
||||
pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (pBlock == NULL) {
|
||||
setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
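// [editor's note] Illustrative sketch, not part of this commit: SOperatorInfo now keeps an
// array of upstream operators, so every pull in this file goes through upstream[0] instead of
// the old single pointer. Variable names are hypothetical.
SOperatorInfo *pUpstream = pOperator->upstream[0];
SSDataBlock   *pBlock    = pUpstream->exec(pUpstream, newgroup);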
|
|
|
@@ -107,7 +107,7 @@ int tsParseTime(SStrToken *pToken, int64_t *time, char **next, char *error, int1
|
|||
}
|
||||
|
||||
if (parseAbsoluteDuration(valueToken.z, valueToken.n, &interval) != TSDB_CODE_SUCCESS) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
if (timePrec == TSDB_TIME_PRECISION_MILLI) {
|
||||
|
@@ -441,7 +441,7 @@ int tsParseOneRow(char **str, STableDataBlocks *pDataBlocks, SSqlCmd *pCmd, int1
|
|||
*str += index;
|
||||
|
||||
if (sToken.type == TK_QUESTION) {
|
||||
if (pCmd->insertType != TSDB_QUERY_TYPE_STMT_INSERT) {
|
||||
if (pCmd->insertParam.insertType != TSDB_QUERY_TYPE_STMT_INSERT) {
|
||||
return tscSQLSyntaxErrMsg(pCmd->payload, "? only allowed in binding insertion", *str);
|
||||
}
|
||||
|
||||
|
@@ -647,7 +647,7 @@ static int32_t tsSetBlockInfo(SSubmitBlk *pBlocks, const STableMeta *pTableMeta,
|
|||
pBlocks->sversion = pTableMeta->sversion;
|
||||
|
||||
if (pBlocks->numOfRows + numOfRows >= INT16_MAX) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
} else {
|
||||
pBlocks->numOfRows += numOfRows;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
@@ -708,7 +708,7 @@ static int32_t doParseInsertStatement(SSqlCmd* pCmd, char **str, STableDataBlock
|
|||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
code = TSDB_CODE_TSC_INVALID_SQL;
|
||||
code = TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
char tmpTokenBuf[16*1024] = {0}; // used for deleting Escape character: \\, \', \"
|
||||
|
||||
int32_t numOfRows = 0;
|
||||
|
@@ -747,12 +747,10 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
|
|||
const int32_t STABLE_INDEX = 1;
|
||||
|
||||
SSqlCmd * pCmd = &pSql->cmd;
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd, 0);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
|
||||
char *sql = *sqlstr;
|
||||
|
||||
pSql->cmd.autoCreated = false;
|
||||
|
||||
// get the token of specified table
|
||||
index = 0;
|
||||
tableToken = tStrGetToken(sql, &index, false);
|
||||
|
@@ -786,7 +784,7 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
|
|||
}
|
||||
|
||||
if (numOfColList == 0 && (*boundColumn) != NULL) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, TABLE_INDEX);
|
||||
|
@@ -802,7 +800,7 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
|
|||
}
|
||||
|
||||
STableMetaInfo *pSTableMetaInfo = tscGetMetaInfo(pQueryInfo, STABLE_INDEX);
|
||||
code = tscSetTableFullName(pSTableMetaInfo, &sToken, pSql);
|
||||
code = tscSetTableFullName(&pSTableMetaInfo->name, &sToken, pSql);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
return code;
|
||||
}
|
||||
|
@@ -879,7 +877,7 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
|
|||
if (TK_ILLEGAL == sToken.type) {
|
||||
tdDestroyKVRowBuilder(&kvRowBuilder);
|
||||
tscDestroyBoundColumnInfo(&spd);
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
if (sToken.n == 0 || sToken.type == TK_RP) {
|
||||
|
@@ -961,7 +959,7 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
|
|||
}
|
||||
|
||||
if (numOfColsAfterTags == 0 && (*boundColumn) != NULL) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
sToken = tStrGetToken(sql, &index, false);
|
||||
|
@@ -973,13 +971,13 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
|
|||
return tscInvalidSQLErrMsg(pCmd->payload, "invalid table name", *sqlstr);
|
||||
}
|
||||
|
||||
int32_t ret = tscSetTableFullName(pTableMetaInfo, &tableToken, pSql);
|
||||
int32_t ret = tscSetTableFullName(&pTableMetaInfo->name, &tableToken, pSql);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (sql == NULL) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
code = tscGetTableMetaEx(pSql, pTableMetaInfo, true);
|
||||
|
@@ -991,7 +989,7 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
|
|||
sql = sToken.z;
|
||||
|
||||
if (sql == NULL) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
code = tscGetTableMetaEx(pSql, pTableMetaInfo, false);
|
||||
|
@@ -1015,12 +1013,17 @@ int validateTableName(char *tblName, int len, SStrToken* psTblToken) {
|
|||
return tscValidateName(psTblToken);
|
||||
}
|
||||
|
||||
static int32_t validateDataSource(SSqlCmd *pCmd, int8_t type, const char *sql) {
|
||||
if (pCmd->dataSourceType != 0 && pCmd->dataSourceType != type) {
|
||||
return tscInvalidSQLErrMsg(pCmd->payload, "keyword VALUES and FILE are not allowed to mix up", sql);
|
||||
static int32_t validateDataSource(SSqlCmd *pCmd, int32_t type, const char *sql) {
|
||||
uint32_t *insertType = &pCmd->insertParam.insertType;
|
||||
if (*insertType == TSDB_QUERY_TYPE_STMT_INSERT && type == TSDB_QUERY_TYPE_INSERT) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
pCmd->dataSourceType = type;
|
||||
if ((*insertType) != 0 && (*insertType) != type) {
|
||||
return tscInvalidSQLErrMsg(pCmd->payload, "keyword VALUES and FILE are not allowed to mixed up", sql);
|
||||
}
|
||||
|
||||
*insertType = type;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
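// [editor's note] Hedged usage sketch (as inside tsParseInsertSql(), not new code in this
// commit): callers now pass a TSDB_QUERY_TYPE_* flag instead of the old DATA_FROM_* constant,
// and the accepted flag is recorded in pCmd->insertParam.insertType.
if (validateDataSource(pCmd, TSDB_QUERY_TYPE_FILE_INSERT, sToken.z) != TSDB_CODE_SUCCESS) {
  goto _clean;  // mixing VALUES and FILE data sources is still rejected
}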
|
||||
|
@@ -1090,7 +1093,6 @@ static int32_t parseBoundColumns(SSqlCmd* pCmd, SParsedDataColInfo* pColInfo, SS
|
|||
|
||||
_clean:
|
||||
pCmd->curSql = NULL;
|
||||
pCmd->parseFinished = 1;
|
||||
return code;
|
||||
}
|
||||
|
||||
|
@@ -1106,7 +1108,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
int32_t totalNum = 0;
|
||||
int32_t code = TSDB_CODE_SUCCESS;
|
||||
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd, 0);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
assert(pQueryInfo != NULL);
|
||||
|
||||
STableMetaInfo *pTableMetaInfo = (pQueryInfo->numOfTables == 0)? tscAddEmptyMetaInfo(pQueryInfo):tscGetMetaInfo(pQueryInfo, 0);
|
||||
|
@@ -1120,9 +1122,9 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
return code;
|
||||
}
|
||||
|
||||
if (NULL == pCmd->pTableBlockHashList) {
|
||||
pCmd->pTableBlockHashList = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
if (NULL == pCmd->pTableBlockHashList) {
|
||||
if (NULL == pCmd->insertParam.pTableBlockHashList) {
|
||||
pCmd->insertParam.pTableBlockHashList = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
if (NULL == pCmd->insertParam.pTableBlockHashList) {
|
||||
code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
goto _clean;
|
||||
}
|
||||
|
@@ -1130,7 +1132,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
str = pCmd->curSql;
|
||||
}
|
||||
|
||||
tscDebug("0x%"PRIx64" create data block list hashList:%p", pSql->self, pCmd->pTableBlockHashList);
|
||||
tscDebug("0x%"PRIx64" create data block list hashList:%p", pSql->self, pCmd->insertParam.pTableBlockHashList);
|
||||
|
||||
while (1) {
|
||||
int32_t index = 0;
|
||||
|
@@ -1142,7 +1144,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
* if the data is from the data file, no data has been generated yet. So, there no data to
|
||||
* merge or submit, save the file path and parse the file in other routines.
|
||||
*/
|
||||
if (pCmd->dataSourceType == DATA_FROM_DATA_FILE) {
|
||||
if (TSDB_QUERY_HAS_TYPE(pCmd->insertParam.insertType, TSDB_QUERY_TYPE_FILE_INSERT)) {
|
||||
goto _clean;
|
||||
}
|
||||
|
||||
|
@@ -1151,7 +1153,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
* Otherwise, create the first submit block and submit to virtual node.
|
||||
*/
|
||||
if (totalNum == 0) {
|
||||
code = TSDB_CODE_TSC_INVALID_SQL;
|
||||
code = TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
goto _clean;
|
||||
} else {
|
||||
break;
|
||||
|
@@ -1168,7 +1170,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
goto _clean;
|
||||
}
|
||||
|
||||
if ((code = tscSetTableFullName(pTableMetaInfo, &sTblToken, pSql)) != TSDB_CODE_SUCCESS) {
|
||||
if ((code = tscSetTableFullName(&pTableMetaInfo->name, &sTblToken, pSql)) != TSDB_CODE_SUCCESS) {
|
||||
goto _clean;
|
||||
}
|
||||
|
||||
|
@@ -1203,7 +1205,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
|
||||
STableComInfo tinfo = tscGetTableInfo(pTableMetaInfo->pTableMeta);
|
||||
if (sToken.type == TK_FILE) {
|
||||
if (validateDataSource(pCmd, DATA_FROM_DATA_FILE, sToken.z) != TSDB_CODE_SUCCESS) {
|
||||
if (validateDataSource(pCmd, TSDB_QUERY_TYPE_FILE_INSERT, sToken.z) != TSDB_CODE_SUCCESS) {
|
||||
goto _clean;
|
||||
}
|
||||
|
||||
|
@@ -1236,12 +1238,12 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
if (bindedColumns == NULL) {
|
||||
STableMeta *pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
|
||||
if (validateDataSource(pCmd, DATA_FROM_SQL_STRING, sToken.z) != TSDB_CODE_SUCCESS) {
|
||||
if (validateDataSource(pCmd, TSDB_QUERY_TYPE_INSERT, sToken.z) != TSDB_CODE_SUCCESS) {
|
||||
goto _clean;
|
||||
}
|
||||
|
||||
STableDataBlocks *dataBuf = NULL;
|
||||
int32_t ret = tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_DEFAULT_PAYLOAD_SIZE,
|
||||
int32_t ret = tscGetDataBlockFromList(pCmd->insertParam.pTableBlockHashList, pTableMeta->id.uid, TSDB_DEFAULT_PAYLOAD_SIZE,
|
||||
sizeof(SSubmitBlk), tinfo.rowSize, &pTableMetaInfo->name, pTableMeta,
|
||||
&dataBuf, NULL);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
|
@@ -1254,14 +1256,14 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
}
|
||||
} else { // bindedColumns != NULL
|
||||
// insert into tablename(col1, col2,..., coln) values(v1, v2,... vn);
|
||||
STableMeta *pTableMeta = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0)->pTableMeta;
|
||||
STableMeta *pTableMeta = tscGetTableMetaInfoFromCmd(pCmd, 0)->pTableMeta;
|
||||
|
||||
if (validateDataSource(pCmd, DATA_FROM_SQL_STRING, sToken.z) != TSDB_CODE_SUCCESS) {
|
||||
if (validateDataSource(pCmd, TSDB_QUERY_TYPE_INSERT, sToken.z) != TSDB_CODE_SUCCESS) {
|
||||
goto _clean;
|
||||
}
|
||||
|
||||
STableDataBlocks *dataBuf = NULL;
|
||||
int32_t ret = tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_DEFAULT_PAYLOAD_SIZE,
|
||||
int32_t ret = tscGetDataBlockFromList(pCmd->insertParam.pTableBlockHashList, pTableMeta->id.uid, TSDB_DEFAULT_PAYLOAD_SIZE,
|
||||
sizeof(SSubmitBlk), tinfo.rowSize, &pTableMetaInfo->name, pTableMeta,
|
||||
&dataBuf, NULL);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
|
@@ -1297,7 +1299,8 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
goto _clean;
|
||||
}
|
||||
|
||||
if ((pCmd->insertType != TSDB_QUERY_TYPE_STMT_INSERT) && taosHashGetSize(pCmd->pTableBlockHashList) > 0) { // merge according to vgId
|
||||
// merge according to vgId
|
||||
if (!TSDB_QUERY_HAS_TYPE(pCmd->insertParam.insertType, TSDB_QUERY_TYPE_STMT_INSERT) && taosHashGetSize(pCmd->insertParam.pTableBlockHashList) > 0) {
|
||||
if ((code = tscMergeTableDataBlocks(pSql, true)) != TSDB_CODE_SUCCESS) {
|
||||
goto _clean;
|
||||
}
|
||||
|
@@ -1308,7 +1311,6 @@ int tsParseInsertSql(SSqlObj *pSql) {
|
|||
|
||||
_clean:
|
||||
pCmd->curSql = NULL;
|
||||
pCmd->parseFinished = 1;
|
||||
return code;
|
||||
}
|
||||
|
||||
|
@@ -1326,9 +1328,8 @@ int tsInsertInitialCheck(SSqlObj *pSql) {
|
|||
pCmd->count = 0;
|
||||
pCmd->command = TSDB_SQL_INSERT;
|
||||
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfoS(pCmd, pCmd->clauseIndex);
|
||||
|
||||
TSDB_QUERY_SET_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_INSERT | pCmd->insertType);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfoS(pCmd);
|
||||
TSDB_QUERY_SET_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_INSERT);
|
||||
|
||||
sToken = tStrGetToken(pSql->sqlstr, &index, false);
|
||||
if (sToken.type != TK_INTO) {
|
||||
|
@@ -1343,11 +1344,11 @@ int tsParseSql(SSqlObj *pSql, bool initial) {
|
|||
int32_t ret = TSDB_CODE_SUCCESS;
|
||||
SSqlCmd* pCmd = &pSql->cmd;
|
||||
|
||||
if ((!pCmd->parseFinished) && (!initial)) {
|
||||
if (!initial) {
|
||||
tscDebug("0x%"PRIx64" resume to parse sql: %s", pSql->self, pCmd->curSql);
|
||||
}
|
||||
|
||||
ret = tscAllocPayload(&pSql->cmd, TSDB_DEFAULT_PAYLOAD_SIZE);
|
||||
ret = tscAllocPayload(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE);
|
||||
if (TSDB_CODE_SUCCESS != ret) {
|
||||
return ret;
|
||||
}
|
||||
|
@@ -1357,31 +1358,32 @@ int tsParseSql(SSqlObj *pSql, bool initial) {
|
|||
return ret;
|
||||
}
|
||||
|
||||
// make a backup as tsParseInsertSql may modify the string
|
||||
char* sqlstr = strdup(pSql->sqlstr);
|
||||
ret = tsParseInsertSql(pSql);
|
||||
if ((sqlstr == NULL) || (pSql->parseRetry >= 1) ||
|
||||
(ret != TSDB_CODE_TSC_SQL_SYNTAX_ERROR && ret != TSDB_CODE_TSC_INVALID_SQL)) {
|
||||
free(sqlstr);
|
||||
} else {
|
||||
assert(ret == TSDB_CODE_SUCCESS || ret == TSDB_CODE_TSC_ACTION_IN_PROGRESS || ret == TSDB_CODE_TSC_SQL_SYNTAX_ERROR || ret == TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
if (pSql->parseRetry < 1 && (ret == TSDB_CODE_TSC_SQL_SYNTAX_ERROR || ret == TSDB_CODE_TSC_INVALID_OPERATION)) {
|
||||
tscDebug("0x%"PRIx64 " parse insert sql statement failed, code:%s, clear meta cache and retry ", pSql->self, tstrerror(ret));
|
||||
|
||||
tscResetSqlCmd(pCmd, true);
|
||||
free(pSql->sqlstr);
|
||||
pSql->sqlstr = sqlstr;
|
||||
pSql->parseRetry++;
|
||||
|
||||
if ((ret = tsInsertInitialCheck(pSql)) == TSDB_CODE_SUCCESS) {
|
||||
ret = tsParseInsertSql(pSql);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
SSqlInfo SQLInfo = qSqlParse(pSql->sqlstr);
|
||||
ret = tscToSQLCmd(pSql, &SQLInfo);
|
||||
if (ret == TSDB_CODE_TSC_INVALID_SQL && pSql->parseRetry == 0 && SQLInfo.type == TSDB_SQL_NULL) {
|
||||
SSqlInfo sqlInfo = qSqlParse(pSql->sqlstr);
|
||||
ret = tscValidateSqlInfo(pSql, &sqlInfo);
|
||||
if (ret == TSDB_CODE_TSC_INVALID_OPERATION && pSql->parseRetry < 1 && sqlInfo.type == TSDB_SQL_SELECT) {
|
||||
tscDebug("0x%"PRIx64 " parse query sql statement failed, code:%s, clear meta cache and retry ", pSql->self, tstrerror(ret));
|
||||
|
||||
tscResetSqlCmd(pCmd, true);
|
||||
pSql->parseRetry++;
|
||||
ret = tscToSQLCmd(pSql, &SQLInfo);
|
||||
|
||||
ret = tscValidateSqlInfo(pSql, &sqlInfo);
|
||||
}
|
||||
|
||||
SqlInfoDestroy(&SQLInfo);
|
||||
SqlInfoDestroy(&sqlInfo);
|
||||
}
|
||||
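// [editor's note] Condensed sketch of the retry-once pattern used by both branches above
// (illustrative only, not part of this commit):
if ((ret == TSDB_CODE_TSC_SQL_SYNTAX_ERROR || ret == TSDB_CODE_TSC_INVALID_OPERATION) && pSql->parseRetry < 1) {
  tscResetSqlCmd(pCmd, true);  // drop the cached table meta so the retry fetches it again
  pSql->parseRetry++;
  // insert path: tsInsertInitialCheck() then tsParseInsertSql(); query path: tscValidateSqlInfo()
}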
|
||||
/*
|
||||
|
@@ -1398,8 +1400,7 @@ static int doPackSendDataBlock(SSqlObj *pSql, int32_t numOfRows, STableDataBlock
|
|||
SSqlCmd *pCmd = &pSql->cmd;
|
||||
pSql->res.numOfRows = 0;
|
||||
|
||||
assert(pCmd->numOfClause == 1);
|
||||
STableMeta *pTableMeta = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0)->pTableMeta;
|
||||
STableMeta *pTableMeta = tscGetTableMetaInfoFromCmd(pCmd, 0)->pTableMeta;
|
||||
|
||||
SSubmitBlk *pBlocks = (SSubmitBlk *)(pTableDataBlocks->pData);
|
||||
code = tsSetBlockInfo(pBlocks, pTableMeta, numOfRows);
|
||||
|
@@ -1411,7 +1412,7 @@ static int doPackSendDataBlock(SSqlObj *pSql, int32_t numOfRows, STableDataBlock
|
|||
return code;
|
||||
}
|
||||
|
||||
STableDataBlocks *pDataBlock = taosArrayGetP(pCmd->pDataBlocks, 0);
|
||||
STableDataBlocks *pDataBlock = taosArrayGetP(pCmd->insertParam.pDataBlocks, 0);
|
||||
if ((code = tscCopyDataBlockToPayload(pSql, pDataBlock)) != TSDB_CODE_SUCCESS) {
|
||||
return code;
|
||||
}
|
||||
|
@@ -1461,17 +1462,17 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int32_t numOfRow
|
|||
// accumulate the total submit records
|
||||
pParentSql->res.numOfRows += pSql->res.numOfRows;
|
||||
|
||||
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
STableMeta * pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
STableComInfo tinfo = tscGetTableInfo(pTableMeta);
|
||||
|
||||
destroyTableNameList(pCmd);
|
||||
|
||||
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
|
||||
pCmd->insertParam.pDataBlocks = tscDestroyBlockArrayList(pCmd->insertParam.pDataBlocks);
|
||||
|
||||
if (pCmd->pTableBlockHashList == NULL) {
|
||||
pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
if (pCmd->pTableBlockHashList == NULL) {
|
||||
if (pCmd->insertParam.pTableBlockHashList == NULL) {
|
||||
pCmd->insertParam.pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
if (pCmd->insertParam.pTableBlockHashList == NULL) {
|
||||
code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
goto _error;
|
||||
}
|
||||
|
@@ -1479,7 +1480,7 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int32_t numOfRow
|
|||
|
||||
STableDataBlocks *pTableDataBlock = NULL;
|
||||
int32_t ret =
|
||||
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
tscGetDataBlockFromList(pCmd->insertParam.pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
tinfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pTableDataBlock, NULL);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
pParentSql->res.code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
|
@@ -1561,8 +1562,8 @@ void tscImportDataFromFile(SSqlObj *pSql) {
|
|||
return;
|
||||
}
|
||||
|
||||
assert(pCmd->dataSourceType == DATA_FROM_DATA_FILE && strlen(pCmd->payload) != 0);
|
||||
pCmd->active = pCmd->pQueryInfo[0];
|
||||
assert(TSDB_QUERY_HAS_TYPE(pCmd->insertParam.insertType, TSDB_QUERY_TYPE_FILE_INSERT) && strlen(pCmd->payload) != 0);
|
||||
pCmd->active = pCmd->pQueryInfo;
|
||||
|
||||
SImportFileSupport *pSupporter = calloc(1, sizeof(SImportFileSupport));
|
||||
SSqlObj *pNew = createSubqueryObj(pSql, 0, parseFileSendDataBlock, pSupporter, TSDB_SQL_INSERT, NULL);
|
||||
|
|
|
@@ -312,7 +312,7 @@ static int fillColumnsNull(STableDataBlocks* pBlock, int32_t rowNum) {
|
|||
int32_t fillTablesColumnsNull(SSqlObj* pSql) {
|
||||
SSqlCmd* pCmd = &pSql->cmd;
|
||||
|
||||
STableDataBlocks** p = taosHashIterate(pCmd->pTableBlockHashList, NULL);
|
||||
STableDataBlocks** p = taosHashIterate(pCmd->insertParam.pTableBlockHashList, NULL);
|
||||
|
||||
STableDataBlocks* pOneTableBlock = *p;
|
||||
while(pOneTableBlock) {
|
||||
|
@@ -321,7 +321,7 @@ int32_t fillTablesColumnsNull(SSqlObj* pSql) {
|
|||
fillColumnsNull(pOneTableBlock, pBlocks->numOfRows);
|
||||
}
|
||||
|
||||
p = taosHashIterate(pCmd->pTableBlockHashList, p);
|
||||
p = taosHashIterate(pCmd->insertParam.pTableBlockHashList, p);
|
||||
if (p == NULL) {
|
||||
break;
|
||||
}
|
||||
|
@@ -844,12 +844,12 @@ static int insertStmtBindParam(STscStmt* stmt, TAOS_BIND* bind) {
|
|||
STableDataBlocks* pBlock = NULL;
|
||||
|
||||
if (pStmt->multiTbInsert) {
|
||||
if (pCmd->pTableBlockHashList == NULL) {
|
||||
if (pCmd->insertParam.pTableBlockHashList == NULL) {
|
||||
tscError("0x%"PRIx64" Table block hash list is empty", pStmt->pSql->self);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
}
|
||||
|
||||
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pCmd->pTableBlockHashList, (const char*)&pStmt->mtb.currentUid, sizeof(pStmt->mtb.currentUid));
|
||||
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pCmd->insertParam.pTableBlockHashList, (const char*)&pStmt->mtb.currentUid, sizeof(pStmt->mtb.currentUid));
|
||||
if (t1 == NULL) {
|
||||
tscError("0x%"PRIx64" no table data block in hash list, uid:%" PRId64 , pStmt->pSql->self, pStmt->mtb.currentUid);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
|
@@ -857,15 +857,15 @@ static int insertStmtBindParam(STscStmt* stmt, TAOS_BIND* bind) {
|
|||
|
||||
pBlock = *t1;
|
||||
} else {
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0, 0);
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
|
||||
STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
if (pCmd->pTableBlockHashList == NULL) {
|
||||
pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
if (pCmd->insertParam.pTableBlockHashList == NULL) {
|
||||
pCmd->insertParam.pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
}
|
||||
|
||||
int32_t ret =
|
||||
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
tscGetDataBlockFromList(pCmd->insertParam.pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
|
||||
if (ret != 0) {
|
||||
return ret;
|
||||
|
@@ -908,12 +908,12 @@ static int insertStmtBindParamBatch(STscStmt* stmt, TAOS_MULTI_BIND* bind, int c
|
|||
STableDataBlocks* pBlock = NULL;
|
||||
|
||||
if (pStmt->multiTbInsert) {
|
||||
if (pCmd->pTableBlockHashList == NULL) {
|
||||
if (pCmd->insertParam.pTableBlockHashList == NULL) {
|
||||
tscError("0x%"PRIx64" Table block hash list is empty", pStmt->pSql->self);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
}
|
||||
|
||||
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pCmd->pTableBlockHashList, (const char*)&pStmt->mtb.currentUid, sizeof(pStmt->mtb.currentUid));
|
||||
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pCmd->insertParam.pTableBlockHashList, (const char*)&pStmt->mtb.currentUid, sizeof(pStmt->mtb.currentUid));
|
||||
if (t1 == NULL) {
|
||||
tscError("0x%"PRIx64" no table data block in hash list, uid:%" PRId64 , pStmt->pSql->self, pStmt->mtb.currentUid);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
|
@@ -921,15 +921,15 @@ static int insertStmtBindParamBatch(STscStmt* stmt, TAOS_MULTI_BIND* bind, int c
|
|||
|
||||
pBlock = *t1;
|
||||
} else {
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0, 0);
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
|
||||
STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
if (pCmd->pTableBlockHashList == NULL) {
|
||||
pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
if (pCmd->insertParam.pTableBlockHashList == NULL) {
|
||||
pCmd->insertParam.pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
}
|
||||
|
||||
int32_t ret =
|
||||
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
tscGetDataBlockFromList(pCmd->insertParam.pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
|
||||
if (ret != 0) {
|
||||
return ret;
|
||||
|
@@ -995,12 +995,11 @@ static int insertStmtUpdateBatch(STscStmt* stmt) {
|
|||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
}
|
||||
|
||||
assert(pCmd->numOfClause == 1);
|
||||
if (taosHashGetSize(pCmd->pTableBlockHashList) == 0) {
|
||||
if (taosHashGetSize(pCmd->insertParam.pTableBlockHashList) == 0) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pCmd->pTableBlockHashList, (const char*)&stmt->mtb.currentUid, sizeof(stmt->mtb.currentUid));
|
||||
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pCmd->insertParam.pTableBlockHashList, (const char*)&stmt->mtb.currentUid, sizeof(stmt->mtb.currentUid));
|
||||
if (t1 == NULL) {
|
||||
tscError("0x%"PRIx64" no table data block in hash list, uid:%" PRId64 , pSql->self, stmt->mtb.currentUid);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
|
@@ -1036,9 +1035,9 @@ static int insertStmtReset(STscStmt* pStmt) {
|
|||
if (pCmd->batchSize > 2) {
|
||||
int32_t alloced = (pCmd->batchSize + 1) / 2;
|
||||
|
||||
size_t size = taosArrayGetSize(pCmd->pDataBlocks);
|
||||
size_t size = taosArrayGetSize(pCmd->insertParam.pDataBlocks);
|
||||
for (int32_t i = 0; i < size; ++i) {
|
||||
STableDataBlocks* pBlock = taosArrayGetP(pCmd->pDataBlocks, i);
|
||||
STableDataBlocks* pBlock = taosArrayGetP(pCmd->insertParam.pDataBlocks, i);
|
||||
|
||||
uint32_t totalDataSize = pBlock->size - sizeof(SSubmitBlk);
|
||||
pBlock->size = sizeof(SSubmitBlk) + totalDataSize / alloced;
|
||||
|
@@ -1049,7 +1048,7 @@ static int insertStmtReset(STscStmt* pStmt) {
|
|||
}
|
||||
pCmd->batchSize = 0;
|
||||
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
pTableMetaInfo->vgroupIndex = 0;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
@@ -1060,22 +1059,21 @@ static int insertStmtExecute(STscStmt* stmt) {
|
|||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
}
|
||||
|
||||
assert(pCmd->numOfClause == 1);
|
||||
if (taosHashGetSize(pCmd->pTableBlockHashList) == 0) {
|
||||
if (taosHashGetSize(pCmd->insertParam.pTableBlockHashList) == 0) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0, 0);
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
|
||||
STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
if (pCmd->pTableBlockHashList == NULL) {
|
||||
pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
if (pCmd->insertParam.pTableBlockHashList == NULL) {
|
||||
pCmd->insertParam.pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
}
|
||||
|
||||
STableDataBlocks* pBlock = NULL;
|
||||
|
||||
int32_t ret =
|
||||
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
tscGetDataBlockFromList(pCmd->insertParam.pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
|
||||
assert(ret == 0);
|
||||
pBlock->size = sizeof(SSubmitBlk) + pCmd->batchSize * pBlock->rowSize;
|
||||
|
@@ -1092,7 +1090,7 @@ static int insertStmtExecute(STscStmt* stmt) {
|
|||
return code;
|
||||
}
|
||||
|
||||
STableDataBlocks* pDataBlock = taosArrayGetP(pCmd->pDataBlocks, 0);
|
||||
STableDataBlocks* pDataBlock = taosArrayGetP(pCmd->insertParam.pDataBlocks, 0);
|
||||
code = tscCopyDataBlockToPayload(stmt->pSql, pDataBlock);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
return code;
|
||||
|
@@ -1110,15 +1108,15 @@ static int insertStmtExecute(STscStmt* stmt) {
|
|||
|
||||
// data block reset
|
||||
pCmd->batchSize = 0;
|
||||
for(int32_t i = 0; i < pCmd->numOfTables; ++i) {
|
||||
if (pCmd->pTableNameList && pCmd->pTableNameList[i]) {
|
||||
tfree(pCmd->pTableNameList[i]);
|
||||
for(int32_t i = 0; i < pCmd->insertParam.numOfTables; ++i) {
|
||||
if (pCmd->insertParam.pTableNameList && pCmd->insertParam.pTableNameList[i]) {
|
||||
tfree(pCmd->insertParam.pTableNameList[i]);
|
||||
}
|
||||
}
|
||||
|
||||
pCmd->numOfTables = 0;
|
||||
tfree(pCmd->pTableNameList);
|
||||
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
|
||||
pCmd->insertParam.numOfTables = 0;
|
||||
tfree(pCmd->insertParam.pTableNameList);
|
||||
pCmd->insertParam.pDataBlocks = tscDestroyBlockArrayList(pCmd->insertParam.pDataBlocks);
|
||||
|
||||
return pSql->res.code;
|
||||
}
|
||||
|
@ -1126,21 +1124,21 @@ static int insertStmtExecute(STscStmt* stmt) {
|
|||
static void insertBatchClean(STscStmt* pStmt) {
|
||||
SSqlCmd *pCmd = &pStmt->pSql->cmd;
|
||||
SSqlObj *pSql = pStmt->pSql;
|
||||
int32_t size = taosHashGetSize(pCmd->pTableBlockHashList);
|
||||
int32_t size = taosHashGetSize(pCmd->insertParam.pTableBlockHashList);
|
||||
|
||||
// data block reset
|
||||
pCmd->batchSize = 0;
|
||||
|
||||
for(int32_t i = 0; i < size; ++i) {
|
||||
if (pCmd->pTableNameList && pCmd->pTableNameList[i]) {
|
||||
tfree(pCmd->pTableNameList[i]);
|
||||
if (pCmd->insertParam.pTableNameList && pCmd->insertParam.pTableNameList[i]) {
|
||||
tfree(pCmd->insertParam.pTableNameList[i]);
|
||||
}
|
||||
}
|
||||
|
||||
tfree(pCmd->pTableNameList);
|
||||
tfree(pCmd->insertParam.pTableNameList);
|
||||
|
||||
/*
|
||||
STableDataBlocks** p = taosHashIterate(pCmd->pTableBlockHashList, NULL);
|
||||
STableDataBlocks** p = taosHashIterate(pCmd->insertParam.pTableBlockHashList, NULL);
|
||||
|
||||
STableDataBlocks* pOneTableBlock = *p;
|
||||
|
||||
|
@ -1151,7 +1149,7 @@ static void insertBatchClean(STscStmt* pStmt) {
|
|||
|
||||
pBlocks->numOfRows = 0;
|
||||
|
||||
p = taosHashIterate(pCmd->pTableBlockHashList, p);
|
||||
p = taosHashIterate(pCmd->insertParam.pTableBlockHashList, p);
|
||||
if (p == NULL) {
|
||||
break;
|
||||
}
|
||||
|
@ -1160,10 +1158,10 @@ static void insertBatchClean(STscStmt* pStmt) {
|
|||
}
|
||||
*/
|
||||
|
||||
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
|
||||
pCmd->numOfTables = 0;
|
||||
pCmd->insertParam.pDataBlocks = tscDestroyBlockArrayList(pCmd->insertParam.pDataBlocks);
|
||||
pCmd->insertParam.numOfTables = 0;
|
||||
|
||||
taosHashEmpty(pCmd->pTableBlockHashList);
|
||||
taosHashEmpty(pCmd->insertParam.pTableBlockHashList);
|
||||
tscFreeSqlResult(pSql);
|
||||
tscFreeSubobj(pSql);
|
||||
tfree(pSql->pSubs);
|
||||
|
@ -1180,7 +1178,7 @@ static int insertBatchStmtExecute(STscStmt* pStmt) {
|
|||
|
||||
pStmt->pSql->retry = pStmt->pSql->maxRetry + 1; //no retry
|
||||
|
||||
if (taosHashGetSize(pStmt->pSql->cmd.pTableBlockHashList) <= 0) { // merge according to vgId
|
||||
if (taosHashGetSize(pStmt->pSql->cmd.insertParam.pTableBlockHashList) <= 0) { // merge according to vgId
|
||||
tscError("0x%"PRIx64" no data block to insert", pStmt->pSql->self);
|
||||
return TSDB_CODE_TSC_APP_ERROR;
|
||||
}
|
||||
|
@ -1216,9 +1214,8 @@ int stmtParseInsertTbTags(SSqlObj* pSql, STscStmt* pStmt) {
|
|||
|
||||
int32_t index = 0;
|
||||
SStrToken sToken = tStrGetToken(pCmd->curSql, &index, false);
|
||||
|
||||
if (sToken.n == 0) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
if (sToken.n == 1 && sToken.type == TK_QUESTION) {
|
||||
|
@ -1241,23 +1238,23 @@ int stmtParseInsertTbTags(SSqlObj* pSql, STscStmt* pStmt) {
|
|||
}
|
||||
|
||||
if (sToken.n <= 0 || sToken.type != TK_USING) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
sToken = tStrGetToken(pCmd->curSql, &index, false);
|
||||
if (sToken.n <= 0 || ((sToken.type != TK_ID) && (sToken.type != TK_STRING))) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
pStmt->mtb.stbname = sToken;
|
||||
|
||||
sToken = tStrGetToken(pCmd->curSql, &index, false);
|
||||
if (sToken.n <= 0 || sToken.type != TK_TAGS) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
sToken = tStrGetToken(pCmd->curSql, &index, false);
|
||||
if (sToken.n <= 0 || sToken.type != TK_LP) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
pStmt->mtb.tags = taosArrayInit(4, sizeof(SStrToken));
|
||||
|
@ -1267,7 +1264,7 @@ int stmtParseInsertTbTags(SSqlObj* pSql, STscStmt* pStmt) {
|
|||
while (loopCont) {
|
||||
sToken = tStrGetToken(pCmd->curSql, &index, false);
|
||||
if (sToken.n <= 0) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
switch (sToken.type) {
|
||||
|
@ -1275,7 +1272,7 @@ int stmtParseInsertTbTags(SSqlObj* pSql, STscStmt* pStmt) {
|
|||
loopCont = 0;
|
||||
break;
|
||||
case TK_VALUES:
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
case TK_QUESTION:
|
||||
pStmt->mtb.tagSet = false; //continue
|
||||
default:
|
||||
|
@ -1285,12 +1282,12 @@ int stmtParseInsertTbTags(SSqlObj* pSql, STscStmt* pStmt) {
|
|||
}
|
||||
|
||||
if (taosArrayGetSize(pStmt->mtb.tags) <= 0) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
sToken = tStrGetToken(pCmd->curSql, &index, false);
|
||||
if (sToken.n <= 0 || sToken.type != TK_VALUES) {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
pStmt->mtb.values = sToken;
|
||||
|
@ -1404,6 +1401,7 @@ TAOS_STMT* taos_stmt_init(TAOS* taos) {
|
|||
pStmt->taos = pObj;
|
||||
|
||||
SSqlObj* pSql = calloc(1, sizeof(SSqlObj));
|
||||
|
||||
if (pSql == NULL) {
|
||||
free(pStmt);
|
||||
terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
|
@ -1446,7 +1444,7 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
|
|||
pSql->fp = waitForQueryRsp;
|
||||
pSql->fetchFp = waitForQueryRsp;
|
||||
|
||||
pCmd->insertType = TSDB_QUERY_TYPE_STMT_INSERT;
|
||||
pCmd->insertParam.insertType = TSDB_QUERY_TYPE_STMT_INSERT;
|
||||
|
||||
if (TSDB_CODE_SUCCESS != tscAllocPayload(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE)) {
|
||||
tscError("%p failed to malloc payload buffer", pSql);
|
||||
|
@ -1500,8 +1498,6 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
|
|||
return normalStmtPrepare(pStmt);
|
||||
}
|
||||
|
||||
|
||||
|
||||
int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags) {
|
||||
STscStmt* pStmt = (STscStmt*)stmt;
|
||||
SSqlObj* pSql = pStmt->pSql;
|
||||
|
@ -1544,7 +1540,7 @@ int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags
|
|||
SSubmitBlk* pBlk = (SSubmitBlk*) (*t1)->pData;
|
||||
pCmd->batchSize = pBlk->numOfRows;
|
||||
|
||||
taosHashPut(pCmd->pTableBlockHashList, (void *)&pStmt->mtb.currentUid, sizeof(pStmt->mtb.currentUid), (void*)t1, POINTER_BYTES);
|
||||
taosHashPut(pCmd->insertParam.pTableBlockHashList, (void *)&pStmt->mtb.currentUid, sizeof(pStmt->mtb.currentUid), (void*)t1, POINTER_BYTES);
|
||||
|
||||
tscDebug("0x%"PRIx64" table:%s is already prepared, uid:%" PRIu64, pSql->self, name, pStmt->mtb.currentUid);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
@ -1569,15 +1565,14 @@ int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags
|
|||
|
||||
tscDebug("0x%"PRIx64" SQL: %s", pSql->self, pSql->sqlstr);
|
||||
|
||||
pSql->cmd.parseFinished = 0;
|
||||
pSql->cmd.numOfParams = 0;
|
||||
pSql->cmd.batchSize = 0;
|
||||
|
||||
if (taosHashGetSize(pCmd->pTableBlockHashList) > 0) {
|
||||
SHashObj* hashList = pCmd->pTableBlockHashList;
|
||||
pCmd->pTableBlockHashList = NULL;
|
||||
if (taosHashGetSize(pCmd->insertParam.pTableBlockHashList) > 0) {
|
||||
SHashObj* hashList = pCmd->insertParam.pTableBlockHashList;
|
||||
pCmd->insertParam.pTableBlockHashList = NULL;
|
||||
tscResetSqlCmd(pCmd, true);
|
||||
pCmd->pTableBlockHashList = hashList;
|
||||
pCmd->insertParam.pTableBlockHashList = hashList;
|
||||
}
|
||||
|
||||
int32_t code = tsParseSql(pStmt->pSql, true);
|
||||
|
@ -1589,10 +1584,11 @@ int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags
|
|||
}
|
||||
|
||||
if (code == TSDB_CODE_SUCCESS) {
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0, 0);
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
|
||||
STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
STableDataBlocks* pBlock = NULL;
|
||||
code = tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
code = tscGetDataBlockFromList(pCmd->insertParam.pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
return code;
|
||||
|
@ -1605,7 +1601,6 @@ int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags
|
|||
pStmt->mtb.tbNum++;
|
||||
|
||||
taosHashPut(pStmt->mtb.pTableBlockHashList, (void *)&pStmt->mtb.currentUid, sizeof(pStmt->mtb.currentUid), (void*)&pBlock, POINTER_BYTES);
|
||||
|
||||
taosHashPut(pStmt->mtb.pTableHash, name, strlen(name), (char*) &pTableMeta->id.uid, sizeof(pTableMeta->id.uid));
|
||||
|
||||
tscDebug("0x%"PRIx64" table:%s is prepared, uid:%" PRIx64, pSql->self, name, pStmt->mtb.currentUid);
|
||||
|
@ -1636,8 +1631,8 @@ int taos_stmt_close(TAOS_STMT* stmt) {
|
|||
if (pStmt->multiTbInsert) {
|
||||
taosHashCleanup(pStmt->mtb.pTableHash);
|
||||
pStmt->mtb.pTableBlockHashList = tscDestroyBlockHashTable(pStmt->mtb.pTableBlockHashList, true);
|
||||
taosHashCleanup(pStmt->pSql->cmd.pTableBlockHashList);
|
||||
pStmt->pSql->cmd.pTableBlockHashList = NULL;
|
||||
taosHashCleanup(pStmt->pSql->cmd.insertParam.pTableBlockHashList);
|
||||
pStmt->pSql->cmd.insertParam.pTableBlockHashList = NULL;
|
||||
taosArrayDestroy(pStmt->mtb.tags);
|
||||
}
|
||||
}
|
||||
|
@ -1803,9 +1798,10 @@ int taos_stmt_execute(TAOS_STMT* stmt) {
|
|||
ret = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
} else {
|
||||
if (pStmt->pSql != NULL) {
|
||||
taos_free_result(pStmt->pSql);
|
||||
tscFreeSqlObj(pStmt->pSql);
|
||||
pStmt->pSql = NULL;
|
||||
}
|
||||
|
||||
pStmt->pSql = taos_query((TAOS*)pStmt->taos, sql);
|
||||
ret = taos_errno(pStmt->pSql);
|
||||
free(sql);
|
||||
|
@ -1875,16 +1871,16 @@ int taos_stmt_get_param(TAOS_STMT *stmt, int idx, int *type, int *bytes) {
|
|||
|
||||
if (pStmt->isInsert) {
|
||||
SSqlCmd* pCmd = &pStmt->pSql->cmd;
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0, 0);
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
if (pCmd->pTableBlockHashList == NULL) {
|
||||
pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
if (pCmd->insertParam.pTableBlockHashList == NULL) {
|
||||
pCmd->insertParam.pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
|
||||
}
|
||||
|
||||
STableDataBlocks* pBlock = NULL;
|
||||
|
||||
int32_t ret =
|
||||
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
tscGetDataBlockFromList(pCmd->insertParam.pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
|
||||
pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
|
||||
if (ret != 0) {
|
||||
// todo handle error
|
||||
|
|
File diff suppressed because it is too large
File diff suppressed because it is too large
|
@ -373,11 +373,15 @@ int taos_num_fields(TAOS_RES *res) {
|
|||
if (pSql == NULL || pSql->signature != pSql) return 0;
|
||||
|
||||
int32_t num = 0;
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
if (pQueryInfo == NULL) {
|
||||
return num;
|
||||
}
|
||||
|
||||
while(pQueryInfo->pDownstream != NULL) {
|
||||
pQueryInfo = pQueryInfo->pDownstream;
|
||||
}
|
||||
|
||||
size_t numOfCols = tscNumOfFields(pQueryInfo);
|
||||
for(int32_t i = 0; i < numOfCols; ++i) {
|
||||
SInternalField* pInfo = taosArrayGet(pQueryInfo->fieldsInfo.internalField, i);
|
||||
|
@ -408,7 +412,7 @@ TAOS_FIELD *taos_fetch_fields(TAOS_RES *res) {
|
|||
SSqlRes *pRes = &pSql->res;
|
||||
if (pSql == NULL || pSql->signature != pSql) return 0;
|
||||
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
if (pQueryInfo == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
|
@ -560,7 +564,7 @@ static bool tscKillQueryInDnode(SSqlObj* pSql) {
|
|||
return true;
|
||||
}
|
||||
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd, 0);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
|
||||
if ((pQueryInfo == NULL) || tscIsTwoStageSTableQuery(pQueryInfo, 0)) {
|
||||
return true;
|
||||
|
@ -614,7 +618,7 @@ int taos_errno(TAOS_RES *tres) {
|
|||
* why the sql is invalid
|
||||
*/
|
||||
static bool hasAdditionalErrorInfo(int32_t code, SSqlCmd *pCmd) {
|
||||
if (code != TSDB_CODE_TSC_INVALID_SQL
|
||||
if (code != TSDB_CODE_TSC_INVALID_OPERATION
|
||||
&& code != TSDB_CODE_TSC_SQL_SYNTAX_ERROR) {
|
||||
return false;
|
||||
}
|
||||
|
@ -673,7 +677,7 @@ char *taos_get_client_info() { return version; }
|
|||
static void tscKillSTableQuery(SSqlObj *pSql) {
|
||||
SSqlCmd* pCmd = &pSql->cmd;
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd, pCmd->clauseIndex);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
|
||||
if (!tscIsTwoStageSTableQuery(pQueryInfo, 0)) {
|
||||
return;
|
||||
|
@ -724,7 +728,7 @@ void taos_stop_query(TAOS_RES *res) {
|
|||
// set the error code for master pSqlObj firstly
|
||||
pSql->res.code = TSDB_CODE_TSC_QUERY_CANCELLED;
|
||||
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd, pCmd->clauseIndex);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
|
||||
if (tscIsTwoStageSTableQuery(pQueryInfo, 0)) {
|
||||
assert(pSql->rpcRid <= 0);
|
||||
|
@ -754,7 +758,7 @@ bool taos_is_null(TAOS_RES *res, int32_t row, int32_t col) {
|
|||
return true;
|
||||
}
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
if (pQueryInfo == NULL) {
|
||||
return true;
|
||||
}
|
||||
|
@ -829,9 +833,9 @@ int taos_print_row(char *str, TAOS_ROW row, TAOS_FIELD *fields, int num_fields)
|
|||
case TSDB_DATA_TYPE_NCHAR: {
|
||||
int32_t charLen = varDataLen((char*)row[i] - VARSTR_HEADER_SIZE);
|
||||
if (fields[i].type == TSDB_DATA_TYPE_BINARY) {
|
||||
assert(charLen <= fields[i].bytes);
|
||||
assert(charLen <= fields[i].bytes && charLen >= 0);
|
||||
} else {
|
||||
assert(charLen <= fields[i].bytes * TSDB_NCHAR_SIZE);
|
||||
assert(charLen <= fields[i].bytes * TSDB_NCHAR_SIZE && charLen >= 0);
|
||||
}
|
||||
|
||||
memcpy(str + len, row[i], charLen);
|
||||
|
@ -868,15 +872,11 @@ int taos_validate_sql(TAOS *taos, const char *sql) {
|
|||
|
||||
SSqlObj* pSql = calloc(1, sizeof(SSqlObj));
|
||||
|
||||
pSql->pTscObj = taos;
|
||||
pSql->pTscObj = taos;
|
||||
pSql->signature = pSql;
|
||||
|
||||
SSqlRes *pRes = &pSql->res;
|
||||
SSqlCmd *pCmd = &pSql->cmd;
|
||||
|
||||
pRes->numOfTotal = 0;
|
||||
pRes->numOfClauseTotal = 0;
|
||||
|
||||
pCmd->resColumnId = TSDB_RES_COL_ID;
|
||||
|
||||
tscDebug("0x%"PRIx64" Valid SQL: %s pObj:%p", pSql->self, sql, pObj);
|
||||
|
||||
|
@ -896,10 +896,10 @@ int taos_validate_sql(TAOS *taos, const char *sql) {
|
|||
|
||||
strtolower(pSql->sqlstr, sql);
|
||||
|
||||
pCmd->curSql = NULL;
|
||||
if (NULL != pCmd->pTableBlockHashList) {
|
||||
taosHashCleanup(pCmd->pTableBlockHashList);
|
||||
pCmd->pTableBlockHashList = NULL;
|
||||
// pCmd->curSql = NULL;
|
||||
if (NULL != pCmd->insertParam.pTableBlockHashList) {
|
||||
taosHashCleanup(pCmd->insertParam.pTableBlockHashList);
|
||||
pCmd->insertParam.pTableBlockHashList = NULL;
|
||||
}
|
||||
|
||||
pSql->fp = asyncCallback;
|
||||
|
@ -921,90 +921,19 @@ int taos_validate_sql(TAOS *taos, const char *sql) {
|
|||
return code;
|
||||
}
|
||||
|
||||
static int tscParseTblNameList(SSqlObj *pSql, const char *tblNameList, int32_t tblListLen) {
|
||||
// must before clean the sqlcmd object
|
||||
tscResetSqlCmd(&pSql->cmd, false);
|
||||
|
||||
SSqlCmd *pCmd = &pSql->cmd;
|
||||
|
||||
pCmd->command = TSDB_SQL_MULTI_META;
|
||||
pCmd->count = 0;
|
||||
|
||||
int code = TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
|
||||
char *str = (char *)tblNameList;
|
||||
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfoS(pCmd, pCmd->clauseIndex);
|
||||
if (pQueryInfo == NULL) {
|
||||
pSql->res.code = terrno;
|
||||
return terrno;
|
||||
void loadMultiTableMetaCallback(void *param, TAOS_RES *res, int code) {
|
||||
SSqlObj* pSql = (SSqlObj*)taosAcquireRef(tscObjRef, (int64_t)param);
|
||||
if (pSql == NULL) {
|
||||
return;
|
||||
}
|
||||
|
||||
STableMetaInfo *pTableMetaInfo = tscAddEmptyMetaInfo(pQueryInfo);
|
||||
taosReleaseRef(tscObjRef, pSql->self);
|
||||
pSql->res.code = code;
|
||||
tsem_post(&pSql->rspSem);
|
||||
}
|
||||
|
||||
if ((code = tscAllocPayload(pCmd, tblListLen + 16)) != TSDB_CODE_SUCCESS) {
|
||||
return code;
|
||||
}
|
||||
|
||||
char *nextStr;
|
||||
char tblName[TSDB_TABLE_FNAME_LEN];
|
||||
int payloadLen = 0;
|
||||
char *pMsg = pCmd->payload;
|
||||
while (1) {
|
||||
nextStr = strchr(str, ',');
|
||||
if (nextStr == NULL) {
|
||||
break;
|
||||
}
|
||||
|
||||
memcpy(tblName, str, nextStr - str);
|
||||
int32_t len = (int32_t)(nextStr - str);
|
||||
tblName[len] = '\0';
|
||||
|
||||
str = nextStr + 1;
|
||||
len = (int32_t)strtrim(tblName);
|
||||
|
||||
SStrToken sToken = {.n = len, .type = TK_ID, .z = tblName};
|
||||
tGetToken(tblName, &sToken.type);
|
||||
|
||||
// Check if the table name available or not
|
||||
if (tscValidateName(&sToken) != TSDB_CODE_SUCCESS) {
|
||||
code = TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
|
||||
sprintf(pCmd->payload, "table name is invalid");
|
||||
return code;
|
||||
}
|
||||
|
||||
if ((code = tscSetTableFullName(pTableMetaInfo, &sToken, pSql)) != TSDB_CODE_SUCCESS) {
|
||||
return code;
|
||||
}
|
||||
|
||||
if (++pCmd->count > TSDB_MULTI_TABLEMETA_MAX_NUM) {
|
||||
code = TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
|
||||
sprintf(pCmd->payload, "tables over the max number");
|
||||
return code;
|
||||
}
|
||||
|
||||
int32_t xlen = tNameLen(&pTableMetaInfo->name);
|
||||
if (payloadLen + xlen + 128 >= pCmd->allocSize) {
|
||||
char *pNewMem = realloc(pCmd->payload, pCmd->allocSize + tblListLen);
|
||||
if (pNewMem == NULL) {
|
||||
code = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
sprintf(pCmd->payload, "failed to allocate memory");
|
||||
return code;
|
||||
}
|
||||
|
||||
pCmd->payload = pNewMem;
|
||||
pCmd->allocSize = pCmd->allocSize + tblListLen;
|
||||
pMsg = pCmd->payload;
|
||||
}
|
||||
|
||||
char n[TSDB_TABLE_FNAME_LEN] = {0};
|
||||
tNameExtractFullName(&pTableMetaInfo->name, n);
|
||||
payloadLen += sprintf(pMsg + payloadLen, "%s,", n);
|
||||
}
|
||||
|
||||
*(pMsg + payloadLen) = '\0';
|
||||
pCmd->payloadLen = payloadLen + 1;
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
static void freeElem(void* p) {
|
||||
tfree(*(char**)p);
|
||||
}
|
||||
|
||||
int taos_load_table_info(TAOS *taos, const char *tableNameList) {
|
||||
|
@ -1020,38 +949,28 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {
|
|||
pSql->pTscObj = taos;
|
||||
pSql->signature = pSql;
|
||||
|
||||
SSqlRes *pRes = &pSql->res;
|
||||
pSql->fp = NULL; // todo set the correct callback function pointer
|
||||
pSql->cmd.pTableMetaMap = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, HASH_NO_LOCK);
|
||||
|
||||
pRes->code = 0;
|
||||
pRes->numOfTotal = 0; // the number of getting table meta from server
|
||||
pRes->numOfClauseTotal = 0;
|
||||
|
||||
assert(pSql->fp == NULL);
|
||||
tscDebug("0x%"PRIx64" tableNameList: %s pObj:%p", pSql->self, tableNameList, pObj);
|
||||
|
||||
int32_t tblListLen = (int32_t)strlen(tableNameList);
|
||||
if (tblListLen > MAX_TABLE_NAME_LENGTH) {
|
||||
tscError("0x%"PRIx64" tableNameList too long, length:%d, maximum allowed:%d", pSql->self, tblListLen, MAX_TABLE_NAME_LENGTH);
|
||||
int32_t length = (int32_t)strlen(tableNameList);
|
||||
if (length > MAX_TABLE_NAME_LENGTH) {
|
||||
tscError("0x%"PRIx64" tableNameList too long, length:%d, maximum allowed:%d", pSql->self, length, MAX_TABLE_NAME_LENGTH);
|
||||
tscFreeSqlObj(pSql);
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
char *str = calloc(1, tblListLen + 1);
|
||||
char *str = calloc(1, length + 1);
|
||||
if (str == NULL) {
|
||||
tscError("0x%"PRIx64" failed to malloc sql string buffer", pSql->self);
|
||||
tscError("0x%"PRIx64" failed to allocate sql string buffer", pSql->self);
|
||||
tscFreeSqlObj(pSql);
|
||||
return TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
strtolower(str, tableNameList);
|
||||
int32_t code = (uint8_t) tscParseTblNameList(pSql, str, tblListLen);
|
||||
SArray* plist = taosArrayInit(4, POINTER_BYTES);
|
||||
SArray* vgroupList = taosArrayInit(4, POINTER_BYTES);
|
||||
|
||||
/*
|
||||
* set the qhandle to 0 before return in order to erase the qhandle value assigned in the previous successful query.
|
||||
* If qhandle is NOT set 0, the function of taos_free_result() will send message to server by calling tscBuildAndSendRequest()
|
||||
* to free connection, which may cause segment fault, when the parse phrase is not even successfully executed.
|
||||
*/
|
||||
pRes->qId = 0;
|
||||
int32_t code = (uint8_t) tscTransferTableNameList(pSql, str, length, plist);
|
||||
free(str);
|
||||
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
|
@ -1059,12 +978,23 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {
|
|||
return code;
|
||||
}
|
||||
|
||||
tscDoQuery(pSql);
|
||||
registerSqlObj(pSql);
|
||||
tscDebug("0x%"PRIx64" load multiple table meta, tableNameList: %s pObj:%p", pSql->self, tableNameList, pObj);
|
||||
|
||||
tscDebug("0x%"PRIx64" load multi-table meta result:%d %s pObj:%p", pSql->self, pRes->code, taos_errstr(pSql), pObj);
|
||||
if ((code = pRes->code) != TSDB_CODE_SUCCESS) {
|
||||
tscFreeSqlObj(pSql);
|
||||
code = getMultiTableMetaFromMnode(pSql, plist, vgroupList, loadMultiTableMetaCallback);
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
code = TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
taosArrayDestroyEx(plist, freeElem);
|
||||
taosArrayDestroyEx(vgroupList, freeElem);
|
||||
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
tscFreeRegisteredSqlObj(pSql);
|
||||
return code;
|
||||
}
|
||||
|
||||
tsem_wait(&pSql->rspSem);
|
||||
tscFreeRegisteredSqlObj(pSql);
|
||||
return code;
|
||||
}
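
The rewritten `taos_load_table_info()` above hands the comma-separated name list to `getMultiTableMetaFromMnode()` and blocks on `rspSem` until the callback posts it. A minimal caller sketch (host, credentials and table names below are placeholders) could look like this:

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
  // Placeholder connection parameters; port 0 means the default port.
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (conn == NULL) {
    printf("failed to connect to TDengine\n");
    return 1;
  }

  // Warm the local table-meta cache for several tables in one round trip;
  // the argument is a comma-separated list of table names.
  int code = taos_load_table_info(conn, "test.meters_001,test.meters_002");
  if (code != 0) {
    printf("taos_load_table_info failed, code:0x%x\n", code);
  }

  taos_close(conn);
  return 0;
}
```
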
|
||||
|
|
|
@ -37,7 +37,7 @@ static int64_t getDelayValueAfterTimewindowClosed(SSqlStream* pStream, int64_t l
|
|||
|
||||
static bool isProjectStream(SQueryInfo* pQueryInfo) {
|
||||
for (int32_t i = 0; i < pQueryInfo->fieldsInfo.numOfOutput; ++i) {
|
||||
SExprInfo *pExpr = tscSqlExprGet(pQueryInfo, i);
|
||||
SExprInfo *pExpr = tscExprGet(pQueryInfo, i);
|
||||
if (pExpr->base.functionId != TSDB_FUNC_PRJ) {
|
||||
return false;
|
||||
}
|
||||
|
@ -89,12 +89,12 @@ static void doLaunchQuery(void* param, TAOS_RES* tres, int32_t code) {
|
|||
return;
|
||||
}
|
||||
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
|
||||
code = tscGetTableMeta(pSql, pTableMetaInfo);
|
||||
if (code == 0 && UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
|
||||
code = tscGetSTableVgroupInfo(pSql, 0);
|
||||
code = tscGetSTableVgroupInfo(pSql, pQueryInfo);
|
||||
}
|
||||
|
||||
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
|
||||
|
@ -138,7 +138,7 @@ static void tscProcessStreamTimer(void *handle, void *tmrId) {
|
|||
|
||||
pStream->numOfRes = 0; // reset the numOfRes.
|
||||
SSqlObj *pSql = pStream->pSql;
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
tscDebug("0x%"PRIx64" timer launch query", pSql->self);
|
||||
|
||||
if (pStream->isProject) {
|
||||
|
@ -197,7 +197,7 @@ static void tscProcessStreamQueryCallback(void *param, TAOS_RES *tres, int numOf
|
|||
tscError("0x%"PRIx64" stream:%p, query data failed, code:0x%08x, retry in %" PRId64 "ms", pStream->pSql->self,
|
||||
pStream, numOfRows, retryDelay);
|
||||
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pStream->pSql->cmd, 0, 0);
|
||||
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pStream->pSql->cmd, 0);
|
||||
|
||||
char name[TSDB_TABLE_FNAME_LEN] = {0};
|
||||
tNameExtractFullName(&pTableMetaInfo->name, name);
|
||||
|
@ -224,7 +224,7 @@ static void tscProcessStreamQueryCallback(void *param, TAOS_RES *tres, int numOf
|
|||
static void tscStreamFillTimeGap(SSqlStream* pStream, TSKEY ts) {
|
||||
#if 0
|
||||
SSqlObj * pSql = pStream->pSql;
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
if (pQueryInfo->fillType != TSDB_FILL_SET_VALUE && pQueryInfo->fillType != TSDB_FILL_NULL) {
|
||||
return;
|
||||
|
@ -273,7 +273,7 @@ static void tscProcessStreamRetrieveResult(void *param, TAOS_RES *res, int numOf
|
|||
return;
|
||||
}
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
STableMetaInfo *pTableMetaInfo = pQueryInfo->pTableMetaInfo[0];
|
||||
|
||||
if (numOfRows > 0) { // when reaching here the first execution of stream computing is successful.
|
||||
|
@ -444,7 +444,7 @@ static int32_t tscSetSlidingWindowInfo(SSqlObj *pSql, SSqlStream *pStream) {
|
|||
int64_t minIntervalTime =
|
||||
(pStream->precision == TSDB_TIME_PRECISION_MICRO) ? tsMinIntervalTime * 1000L : tsMinIntervalTime;
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
if (!pStream->isProject && pQueryInfo->interval.interval == 0) {
|
||||
sprintf(pSql->cmd.payload, "the interval value is 0");
|
||||
|
@ -494,7 +494,7 @@ static int32_t tscSetSlidingWindowInfo(SSqlObj *pSql, SSqlStream *pStream) {
|
|||
}
|
||||
|
||||
static int64_t tscGetStreamStartTimestamp(SSqlObj *pSql, SSqlStream *pStream, int64_t stime) {
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(&pSql->cmd);
|
||||
|
||||
if (pStream->isProject) {
|
||||
// no data in table, flush all data till now to destination meter, 10sec delay
|
||||
|
@ -556,7 +556,7 @@ static void tscCreateStream(void *param, TAOS_RES *res, int code) {
|
|||
return;
|
||||
}
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
|
||||
STableComInfo tinfo = tscGetTableInfo(pTableMetaInfo->pTableMeta);
|
||||
|
||||
|
@ -614,31 +614,32 @@ TAOS_STREAM *taos_open_stream(TAOS *taos, const char *sqlstr, void (*fp)(void *p
|
|||
return NULL;
|
||||
}
|
||||
|
||||
pStream->stime = stime;
|
||||
pStream->fp = fp;
|
||||
pStream->stime = stime;
|
||||
pStream->fp = fp;
|
||||
pStream->callback = callback;
|
||||
pStream->param = param;
|
||||
pStream->pSql = pSql;
|
||||
pSql->pStream = pStream;
|
||||
pSql->param = pStream;
|
||||
pSql->maxRetry = TSDB_MAX_REPLICA;
|
||||
pStream->param = param;
|
||||
pStream->pSql = pSql;
|
||||
|
||||
pSql->sqlstr = calloc(1, strlen(sqlstr) + 1);
|
||||
pSql->pStream = pStream;
|
||||
pSql->param = pStream;
|
||||
pSql->maxRetry = TSDB_MAX_REPLICA;
|
||||
pSql->sqlstr = calloc(1, strlen(sqlstr) + 1);
|
||||
if (pSql->sqlstr == NULL) {
|
||||
tscError("0x%"PRIx64" failed to malloc sql string buffer", pSql->self);
|
||||
tscFreeSqlObj(pSql);
|
||||
free(pStream);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
strtolower(pSql->sqlstr, sqlstr);
|
||||
pSql->fp = tscCreateStream;
|
||||
pSql->fetchFp = tscCreateStream;
|
||||
pSql->cmd.resColumnId = TSDB_RES_COL_ID;
|
||||
|
||||
tsem_init(&pSql->rspSem, 0, 0);
|
||||
registerSqlObj(pSql);
|
||||
|
||||
tscDebugL("0x%"PRIx64" SQL: %s", pSql->self, pSql->sqlstr);
|
||||
tsem_init(&pSql->rspSem, 0, 0);
|
||||
|
||||
pSql->fp = tscCreateStream;
|
||||
pSql->fetchFp = tscCreateStream;
|
||||
|
||||
int32_t code = tsParseSql(pSql, true);
|
||||
if (code == TSDB_CODE_SUCCESS) {
|
||||
|
|
|
@ -151,6 +151,7 @@ static SSub* tscCreateSubscription(STscObj* pObj, const char* topic, const char*
|
|||
strtolower(pSql->sqlstr, pSql->sqlstr);
|
||||
pRes->qId = 0;
|
||||
pRes->numOfRows = 1;
|
||||
pCmd->resColumnId = TSDB_RES_COL_ID;
|
||||
|
||||
code = tscAllocPayload(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
|
@ -173,7 +174,7 @@ static SSub* tscCreateSubscription(STscObj* pObj, const char* topic, const char*
|
|||
|
||||
if (pSql->cmd.command != TSDB_SQL_SELECT && pSql->cmd.command != TSDB_SQL_RETRIEVE_EMPTY_RESULT) {
|
||||
line = __LINE__;
|
||||
code = TSDB_CODE_TSC_INVALID_SQL;
|
||||
code = TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
goto fail;
|
||||
}
|
||||
|
||||
|
@ -215,7 +216,7 @@ static void tscProcessSubscriptionTimer(void *handle, void *tmrId) {
|
|||
taosTmrReset(tscProcessSubscriptionTimer, pSub->interval, pSub, tscTmr, &pSub->pTimer);
|
||||
}
|
||||
|
||||
|
||||
//TODO refactor: extract table list name not simply from the sql
|
||||
static SArray* getTableList( SSqlObj* pSql ) {
|
||||
const char* p = strstr( pSql->sqlstr, " from " );
|
||||
assert(p != NULL); // we are sure this is a 'select' statement
|
||||
|
@ -224,11 +225,11 @@ static SArray* getTableList( SSqlObj* pSql ) {
|
|||
|
||||
SSqlObj* pNew = taos_query(pSql->pTscObj, sql);
|
||||
if (pNew == NULL) {
|
||||
tscError("0x%"PRIx64"failed to retrieve table id: cannot create new sql object.", pSql->self);
|
||||
tscError("0x%"PRIx64" failed to retrieve table id: cannot create new sql object.", pSql->self);
|
||||
return NULL;
|
||||
|
||||
} else if (taos_errno(pNew) != TSDB_CODE_SUCCESS) {
|
||||
tscError("0x%"PRIx64"failed to retrieve table id,error: %s", pSql->self, tstrerror(taos_errno(pNew)));
|
||||
tscError("0x%"PRIx64" failed to retrieve table id,error: %s", pSql->self, tstrerror(taos_errno(pNew)));
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
@ -266,7 +267,7 @@ static int tscUpdateSubscription(STscObj* pObj, SSub* pSub) {
|
|||
|
||||
pSub->lastSyncTime = taosGetTimestampMs();
|
||||
|
||||
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
if (UTIL_TABLE_IS_NORMAL_TABLE(pTableMetaInfo)) {
|
||||
STableMeta * pTableMeta = pTableMetaInfo->pTableMeta;
|
||||
SSubscriptionProgress target = {.uid = pTableMeta->id.uid, .key = 0};
|
||||
|
@ -284,7 +285,7 @@ static int tscUpdateSubscription(STscObj* pObj, SSub* pSub) {
|
|||
}
|
||||
size_t numOfTables = taosArrayGetSize(tables);
|
||||
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd, 0);
|
||||
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
SArray* progress = taosArrayInit(numOfTables, sizeof(SSubscriptionProgress));
|
||||
for( size_t i = 0; i < numOfTables; i++ ) {
|
||||
STidTags* tt = taosArrayGet( tables, i );
|
||||
|
@ -304,7 +305,7 @@ static int tscUpdateSubscription(STscObj* pObj, SSub* pSub) {
|
|||
}
|
||||
taosArrayDestroy(tables);
|
||||
|
||||
TSDB_QUERY_SET_TYPE(tscGetQueryInfo(pCmd, 0)->type, TSDB_QUERY_TYPE_MULTITABLE_QUERY);
|
||||
TSDB_QUERY_SET_TYPE(tscGetQueryInfo(pCmd)->type, TSDB_QUERY_TYPE_MULTITABLE_QUERY);
|
||||
return 1;
|
||||
}
|
||||
|
||||
|
@ -503,8 +504,8 @@ TAOS_RES *taos_consume(TAOS_SUB *tsub) {
|
|||
SSqlObj *pSql = pSub->pSql;
|
||||
SSqlRes *pRes = &pSql->res;
|
||||
SSqlCmd *pCmd = &pSql->cmd;
|
||||
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd, 0);
|
||||
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0);
|
||||
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
|
||||
if (taosArrayGetSize(pSub->progress) > 0) { // fix crash in single table subscription
|
||||
|
||||
size_t size = taosArrayGetSize(pSub->progress);
|
||||
|
|
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -234,6 +234,7 @@ typedef struct SDataCol {
  int len; // column data length
  VarDataOffsetT *dataOff; // For binary and nchar data, the offset in the data column
  void * pData; // Actual data pointer
  TSKEY ts; // only used in last NULL column
} SDataCol;

static FORCE_INLINE void dataColReset(SDataCol *pDataCol) { pDataCol->len = 0; }

@@ -44,8 +44,8 @@ typedef struct SResPair {
// the structure for sql function in select clause
typedef struct SSqlExpr {
  char aliasName[TSDB_COL_NAME_LEN]; // as aliasName
  char token[TSDB_COL_NAME_LEN]; // original token
  SColIndex colInfo;

  uint64_t uid; // refactor use the pointer

  int16_t functionId; // function id in aAgg array

@@ -92,8 +92,6 @@ size_t tableIdPrefix(const char* name, char* prefix, int32_t len);

void extractTableNameFromToken(SStrToken *pToken, SStrToken* pTable);

//SSchema tGetTbnameColumnSchema();

SSchema tGetBlockDistColumnSchema();

SSchema tGetUserSpecifiedColumnSchema(tVariant* pVal, SStrToken* exprStr, const char* name);

@@ -2569,6 +2569,7 @@ _arithmetic_operator_fn_t getArithmeticOperatorFn(int32_t arithmeticOptr) {
    case TSDB_BINARY_OP_REMAINDER:
      return vectorRemainder;
    default:
      assert(0);
      return NULL;
  }
}

@@ -13,6 +13,7 @@
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

#include <texpr.h>
#include "os.h"

#include "texpr.h"

@@ -465,27 +466,29 @@ tExprNode* exprTreeFromTableName(const char* tbnameCond) {
  return expr;
}

tExprNode* exprdup(tExprNode* pTree) {
  if (pTree == NULL) {
tExprNode* exprdup(tExprNode* pNode) {
  if (pNode == NULL) {
    return NULL;
  }

  tExprNode* pNode = calloc(1, sizeof(tExprNode));
  if (pTree->nodeType == TSQL_NODE_EXPR) {
    tExprNode* pLeft = exprdup(pTree->_node.pLeft);
    tExprNode* pRight = exprdup(pTree->_node.pRight);
  tExprNode* pCloned = calloc(1, sizeof(tExprNode));
  if (pNode->nodeType == TSQL_NODE_EXPR) {
    tExprNode* pLeft = exprdup(pNode->_node.pLeft);
    tExprNode* pRight = exprdup(pNode->_node.pRight);

    pNode->nodeType = TSQL_NODE_EXPR;
    pNode->_node.pLeft = pLeft;
    pNode->_node.pRight = pRight;
  } else if (pTree->nodeType == TSQL_NODE_VALUE) {
    pNode->pVal = calloc(1, sizeof(tVariant));
    tVariantAssign(pNode->pVal, pTree->pVal);
  } else if (pTree->nodeType == TSQL_NODE_COL) {
    pNode->pSchema = calloc(1, sizeof(SSchema));
    *pNode->pSchema = *pTree->pSchema;
    pCloned->_node.pLeft = pLeft;
    pCloned->_node.pRight = pRight;
    pCloned->_node.optr = pNode->_node.optr;
    pCloned->_node.hasPK = pNode->_node.hasPK;
  } else if (pNode->nodeType == TSQL_NODE_VALUE) {
    pCloned->pVal = calloc(1, sizeof(tVariant));
    tVariantAssign(pCloned->pVal, pNode->pVal);
  } else if (pNode->nodeType == TSQL_NODE_COL) {
    pCloned->pSchema = calloc(1, sizeof(SSchema));
    *pCloned->pSchema = *pNode->pSchema;
  }

  return pNode;
  pCloned->nodeType = pNode->nodeType;
  return pCloned;
}

@@ -219,14 +219,20 @@ static int32_t dnodeInitStorage() {

  if (tsCompactMnodeWal == 1) {
    sprintf(tsMnodeTmpDir, "%s/mnode_tmp", tsDataDir);
    tfsRmdir(tsMnodeTmpDir);
    if (taosDirExist(tsMnodeTmpDir)) {
      dError("mnode_tmp dir already exist in %s,quit compact job", tsMnodeTmpDir);
      return -1;
    }
    if (dnodeCreateDir(tsMnodeTmpDir) < 0) {
      dError("failed to create dir: %s, reason: %s", tsMnodeTmpDir, strerror(errno));
      return -1;
    }

    sprintf(tsMnodeBakDir, "%s/mnode_bak", tsDataDir);
    //tfsRmdir(tsMnodeBakDir);
    if (taosDirExist(tsMnodeBakDir)) {
      dError("mnode_bak dir already exist in %s,quit compact job", tsMnodeBakDir);
      return -1;
    }
  }
  //TODO(dengyihao): no need to init here
  if (dnodeCreateDir(tsMnodeDir) < 0) {

@@ -33,6 +33,8 @@ extern "C" {
#endif

#define TSWINDOW_INITIALIZER ((STimeWindow) {INT64_MIN, INT64_MAX})
#define TSWINDOW_DESC_INITIALIZER ((STimeWindow) {INT64_MAX, INT64_MIN})

#define TSKEY_INITIAL_VAL INT64_MIN

// Bytes for each type.

@@ -298,7 +300,7 @@ do { \
#define TSDB_DEFAULT_DB_UPDATE_OPTION 0

#define TSDB_MIN_DB_CACHE_LAST_ROW 0
#define TSDB_MAX_DB_CACHE_LAST_ROW 1
#define TSDB_MAX_DB_CACHE_LAST_ROW 3
#define TSDB_DEFAULT_CACHE_LAST_ROW 0

#define TSDB_MIN_FSYNC_PERIOD 0

@@ -345,6 +347,7 @@ do { \
#define TSDB_QUERY_TYPE_TAG_FILTER_QUERY 0x400u
#define TSDB_QUERY_TYPE_INSERT 0x100u // insert type
#define TSDB_QUERY_TYPE_MULTITABLE_QUERY 0x200u
#define TSDB_QUERY_TYPE_FILE_INSERT 0x400u // insert data from file
#define TSDB_QUERY_TYPE_STMT_INSERT 0x800u // stmt insert type

#define TSDB_QUERY_HAS_TYPE(x, _type) (((x) & (_type)) != 0)

@@ -74,7 +74,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_REF_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x010A) //"Ref is not there")

//client
#define TSDB_CODE_TSC_INVALID_SQL TAOS_DEF_ERROR_CODE(0, 0x0200) //"Invalid SQL statement")
#define TSDB_CODE_TSC_INVALID_OPERATION TAOS_DEF_ERROR_CODE(0, 0x0200) //"Invalid Operation")
#define TSDB_CODE_TSC_INVALID_QHANDLE TAOS_DEF_ERROR_CODE(0, 0x0201) //"Invalid qhandle")
#define TSDB_CODE_TSC_INVALID_TIME_STAMP TAOS_DEF_ERROR_CODE(0, 0x0202) //"Invalid combination of client/service time")
#define TSDB_CODE_TSC_INVALID_VALUE TAOS_DEF_ERROR_CODE(0, 0x0203) //"Invalid value in client")
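
Only the symbol is renamed here; the module/code pair (0, 0x0200) is unchanged, so the numeric value surfaced through taos_errno() stays the same. Assuming TAOS_DEF_ERROR_CODE keeps its conventional shape of setting the sign bit on a 32-bit value (the authoritative definition sits at the top of taoserror.h, not in this hunk), the mapping can be checked with a few lines:

```c
#include <stdint.h>
#include <stdio.h>

// Assumed shape of the macro, for illustration only; see taoserror.h for the real one.
#define DEMO_DEF_ERROR_CODE(mod, code) ((int32_t)(0x80000000 | ((mod) << 16) | (code)))

int main(void) {
  int32_t v = DEMO_DEF_ERROR_CODE(0, 0x0200);
  // Prints -2147483136 (0x80000200): the same value the old
  // TSDB_CODE_TSC_INVALID_SQL symbol mapped to.
  printf("TSDB_CODE_TSC_INVALID_OPERATION = %d (0x%08x)\n", v, (uint32_t)v);
  return 0;
}
```
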
@@ -219,6 +219,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_VND_NO_WRITE_AUTH TAOS_DEF_ERROR_CODE(0, 0x0512) //"Database write operation denied")
#define TSDB_CODE_VND_IS_SYNCING TAOS_DEF_ERROR_CODE(0, 0x0513) //"Database is syncing")
#define TSDB_CODE_VND_INVALID_TSDB_STATE TAOS_DEF_ERROR_CODE(0, 0x0514) //"Invalid tsdb state")
#define TSDB_CODE_VND_IS_CLOSING TAOS_DEF_ERROR_CODE(0, 0x0515) //"Database is closing")

// tsdb
#define TSDB_CODE_TDB_INVALID_TABLE_ID TAOS_DEF_ERROR_CODE(0, 0x0600) //"Invalid table ID")

@@ -84,7 +84,7 @@ TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_DROP_TABLE, "drop-table" )
TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_ALTER_TABLE, "alter-table" )
TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_TABLE_META, "table-meta" )
TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_STABLE_VGROUP, "stable-vgroup" )
TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_TABLES_META, "tables-meta" )
TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_TABLES_META, "multiTable-meta" )
TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_ALTER_STREAM, "alter-stream" )
TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_SHOW, "show" )
TAOS_DEFINE_MESSAGE_TYPE( TSDB_MSG_TYPE_CM_RETRIEVE, "retrieve" )

@@ -294,6 +294,8 @@ typedef struct {

typedef struct {
  char name[TSDB_TABLE_FNAME_LEN];
  // if user specify DROP STABLE, this flag will be set. And an error will be returned if it is not a super table
  int8_t supertable;
  int8_t igNotExists;
} SCMDropTableMsg;


@@ -703,8 +705,9 @@ typedef struct {
} STableInfoMsg;

typedef struct {
  int32_t numOfVgroups;
  int32_t numOfTables;
  char tableIds[];
  char tableNames[];
} SMultiTableInfoMsg;
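
SMultiTableInfoMsg above drops the fixed-width tableIds[] block in favour of a single tableNames[] blob plus the two counters. On the mnode side (see mnodeProcessMultiTableMetaMsg further down) the blob is split on commas and the token count is checked against numOfTables + numOfVgroups. A rough, hypothetical sketch of how a client could pack such a request body; byte-order conversion and terminator details are left to the real builder, tscTransferTableNameList, which is not shown in this hunk:

```c
#include <stdint.h>
#include <stdio.h>

// Simplified stand-in for SMultiTableInfoMsg with a fixed buffer;
// the real message uses a flexible array member sized at send time.
typedef struct {
  int32_t numOfVgroups;
  int32_t numOfTables;
  char    tableNames[256];
} DemoMultiTableInfoMsg;

// Joins the plain table names first, then the super-table names that
// additionally need vgroup lists, separated by commas.
static void demoPackNames(DemoMultiTableInfoMsg *pMsg, const char **tables, int32_t numOfTables,
                          const char **stables, int32_t numOfVgroups) {
  pMsg->numOfTables  = numOfTables;
  pMsg->numOfVgroups = numOfVgroups;

  char *p = pMsg->tableNames;
  for (int32_t i = 0; i < numOfTables + numOfVgroups; ++i) {
    const char *name = (i < numOfTables) ? tables[i] : stables[i - numOfTables];
    p += sprintf(p, (i == 0) ? "%s" : ",%s", name);
  }
}

int main(void) {
  const char *tables[]  = {"test.d001", "test.d002"};
  const char *stables[] = {"test.meters"};

  DemoMultiTableInfoMsg msg = {0};
  demoPackNames(&msg, tables, 2, stables, 1);
  printf("numOfTables:%d numOfVgroups:%d names:%s\n", msg.numOfTables, msg.numOfVgroups, msg.tableNames);
  return 0;
}
```
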

typedef struct SSTableVgroupMsg {

@@ -753,8 +756,9 @@ typedef struct STableMetaMsg {

typedef struct SMultiTableMeta {
  int32_t numOfTables;
  int32_t numOfVgroup;
  int32_t contLen;
  char metas[];
  char meta[];
} SMultiTableMeta;

typedef struct {

@@ -69,9 +69,13 @@ typedef struct {
  int8_t precision;
  int8_t compression;
  int8_t update;
  int8_t cacheLastRow;
  int8_t cacheLastRow; // 0:no cache, 1: cache last row, 2: cache last NULL column 3: 1&2
} STsdbCfg;

#define CACHE_NO_LAST(c) ((c)->cacheLastRow == 0)
#define CACHE_LAST_ROW(c) (((c)->cacheLastRow & 1) > 0)
#define CACHE_LAST_NULL_COLUMN(c) (((c)->cacheLastRow & 2) > 0)
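
cacheLastRow is now a small bitmask rather than an on/off switch: bit 0 keeps the last row of each table in memory, bit 1 keeps the last non-NULL value of every column, and 3 enables both. A quick sketch of how the three macros above resolve for the four configurable values; DemoCfg is a stand-in for STsdbCfg:

```c
#include <stdint.h>
#include <stdio.h>

// Minimal stand-in for STsdbCfg; only the field the macros inspect.
typedef struct { int8_t cacheLastRow; } DemoCfg;

#define CACHE_NO_LAST(c)          ((c)->cacheLastRow == 0)
#define CACHE_LAST_ROW(c)         (((c)->cacheLastRow & 1) > 0)
#define CACHE_LAST_NULL_COLUMN(c) (((c)->cacheLastRow & 2) > 0)

int main(void) {
  for (int8_t v = 0; v <= 3; ++v) {
    DemoCfg cfg = {.cacheLastRow = v};
    printf("cacheLast=%d -> none:%d lastRow:%d lastColumn:%d\n", v,
           CACHE_NO_LAST(&cfg), CACHE_LAST_ROW(&cfg), CACHE_LAST_NULL_COLUMN(&cfg));
  }
  return 0;
}
```
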

// --------- TSDB REPOSITORY USAGE STATISTICS
typedef struct {
  int64_t totalStorage; // total bytes occupie

@@ -261,6 +265,12 @@ TsdbQueryHandleT *tsdbQueryTables(STsdbRepo *tsdb, STsdbQueryCond *pCond, STable
TsdbQueryHandleT tsdbQueryLastRow(STsdbRepo *tsdb, STsdbQueryCond *pCond, STableGroupInfo *tableInfo, uint64_t qId,
                                  SMemRef *pRef);


TsdbQueryHandleT tsdbQueryCacheLast(STsdbRepo *tsdb, STsdbQueryCond *pCond, STableGroupInfo *groupList, uint64_t qId, SMemRef* pMemRef);

bool isTsdbCacheLastRow(TsdbQueryHandleT* pQueryHandle);


/**
 * get the queried table object list
 * @param pHandle

File diff suppressed because it is too large
@@ -719,13 +719,13 @@ static int32_t sdbProcessWrite(void *wparam, void *hparam, int32_t qtype, void *
  if (action == SDB_ACTION_INSERT) {
    return sdbPerformInsertAction(pHead, pTable);
  } else if (action == SDB_ACTION_DELETE) {
    if (qtype == TAOS_QTYPE_FWD) {
    //if (qtype == TAOS_QTYPE_FWD) {
      // Drop database/stable may take a long time and cause a timeout, so we confirm first then reput it into queue
      sdbWriteFwdToQueue(1, hparam, TAOS_QTYPE_QUERY, unused);
      return TSDB_CODE_SUCCESS;
    } else {
    // sdbWriteFwdToQueue(1, hparam, TAOS_QTYPE_QUERY, unused);
    // return TSDB_CODE_SUCCESS;
    //} else {
      return sdbPerformDeleteAction(pHead, pTable);
    }
    //}
  } else if (action == SDB_ACTION_UPDATE) {
    return sdbPerformUpdateAction(pHead, pTable);
  } else {

|
|
|
@ -966,6 +966,11 @@ static int32_t mnodeProcessDropTableMsg(SMnodeMsg *pMsg) {
|
|||
pMsg->rpcMsg.ahandle, pDrop->name, pSTable->uid, pSTable->numOfTables, taosHashGetSize(pSTable->vgHash));
|
||||
return mnodeProcessDropSuperTableMsg(pMsg);
|
||||
} else {
|
||||
// user specify the "DROP STABLE" sql statement, but it is actually a normal table, return error msg.
|
||||
if (pDrop->supertable) {
|
||||
return TSDB_CODE_MND_INVALID_TABLE_TYPE;
|
||||
}
|
||||
|
||||
SCTableObj *pCTable = (SCTableObj *)pMsg->pTable;
|
||||
mInfo("msg:%p, app:%p table:%s, start to drop ctable, vgId:%d tid:%d uid:%" PRIu64, pMsg, pMsg->rpcMsg.ahandle,
|
||||
pDrop->name, pCTable->vgId, pCTable->tid, pCTable->uid);
|
||||
|
@ -1189,8 +1194,8 @@ static int32_t mnodeFindSuperTableTagIndex(SSTableObj *pStable, const char *tagN
|
|||
|
||||
static int32_t mnodeAddSuperTableTagCb(SMnodeMsg *pMsg, int32_t code) {
|
||||
SSTableObj *pStable = (SSTableObj *)pMsg->pTable;
|
||||
mLInfo("msg:%p, app:%p stable %s, add tag result:%s", pMsg, pMsg->rpcMsg.ahandle, pStable->info.tableId,
|
||||
tstrerror(code));
|
||||
mLInfo("msg:%p, app:%p stable %s, add tag result:%s, numOfTags:%d", pMsg, pMsg->rpcMsg.ahandle, pStable->info.tableId,
|
||||
tstrerror(code), pStable->numOfTags);
|
||||
|
||||
return code;
|
||||
}
|
||||
|
@ -1674,12 +1679,9 @@ static int32_t mnodeSetSchemaFromSuperTable(SSchema *pSchema, SSTableObj *pTable
|
|||
return (pTable->numOfColumns + pTable->numOfTags) * sizeof(SSchema);
|
||||
}
|
||||
|
||||
static int32_t mnodeGetSuperTableMeta(SMnodeMsg *pMsg) {
|
||||
static int32_t mnodeDoGetSuperTableMeta(SMnodeMsg *pMsg, STableMetaMsg* pMeta) {
|
||||
SSTableObj *pTable = (SSTableObj *)pMsg->pTable;
|
||||
STableMetaMsg *pMeta = rpcMallocCont(sizeof(STableMetaMsg) + sizeof(SSchema) * (TSDB_MAX_TAGS + TSDB_MAX_COLUMNS + 16));
|
||||
if (pMeta == NULL) {
|
||||
return TSDB_CODE_MND_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
pMeta->uid = htobe64(pTable->uid);
|
||||
pMeta->sversion = htons(pTable->sversion);
|
||||
pMeta->tversion = htons(pTable->tversion);
|
||||
|
@ -1690,6 +1692,18 @@ static int32_t mnodeGetSuperTableMeta(SMnodeMsg *pMsg) {
|
|||
pMeta->contLen = sizeof(STableMetaMsg) + mnodeSetSchemaFromSuperTable(pMeta->schema, pTable);
|
||||
tstrncpy(pMeta->tableFname, pTable->info.tableId, sizeof(pMeta->tableFname));
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static int32_t mnodeGetSuperTableMeta(SMnodeMsg *pMsg) {
|
||||
SSTableObj *pTable = (SSTableObj *)pMsg->pTable;
|
||||
STableMetaMsg *pMeta = rpcMallocCont(sizeof(STableMetaMsg) + sizeof(SSchema) * (TSDB_MAX_TAGS + TSDB_MAX_COLUMNS + 16));
|
||||
if (pMeta == NULL) {
|
||||
return TSDB_CODE_MND_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
mnodeDoGetSuperTableMeta(pMsg, pMeta);
|
||||
|
||||
pMsg->rpcRsp.len = pMeta->contLen;
|
||||
pMeta->contLen = htons(pMeta->contLen);
|
||||
|
||||
|
@ -1700,11 +1714,7 @@ static int32_t mnodeGetSuperTableMeta(SMnodeMsg *pMsg) {
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static int32_t mnodeProcessSuperTableVgroupMsg(SMnodeMsg *pMsg) {
|
||||
SSTableVgroupMsg *pInfo = pMsg->rpcMsg.pCont;
|
||||
int32_t numOfTable = htonl(pInfo->numOfTables);
|
||||
|
||||
// reserve space
|
||||
static int32_t calculateVgroupMsgLength(SSTableVgroupMsg* pInfo, int32_t numOfTable) {
|
||||
int32_t contLen = sizeof(SSTableVgroupRspMsg) + 32 * sizeof(SVgroupMsg) + sizeof(SVgroupsMsg);
|
||||
for (int32_t i = 0; i < numOfTable; ++i) {
|
||||
char *stableName = (char *)pInfo + sizeof(SSTableVgroupMsg) + (TSDB_TABLE_FNAME_LEN)*i;
|
||||
|
@ -1716,6 +1726,75 @@ static int32_t mnodeProcessSuperTableVgroupMsg(SMnodeMsg *pMsg) {
|
|||
mnodeDecTableRef(pTable);
|
||||
}
|
||||
|
||||
return contLen;
|
||||
}
|
||||
|
||||
static char* serializeVgroupInfo(SSTableObj *pTable, char* name, char* msg, SMnodeMsg* pMsgBody, void* handle) {
|
||||
SName sn = {0};
|
||||
tNameFromString(&sn, name, T_NAME_ACCT | T_NAME_DB | T_NAME_TABLE);
|
||||
const char* tableName = tNameGetTableName(&sn);
|
||||
|
||||
strncpy(msg, tableName, TSDB_TABLE_NAME_LEN);
|
||||
msg += TSDB_TABLE_NAME_LEN;
|
||||
|
||||
if (pTable->vgHash == NULL) {
|
||||
mDebug("msg:%p, app:%p stable:%s, no vgroup exist while get stable vgroup info", pMsgBody, handle, name);
|
||||
mnodeDecTableRef(pTable);
|
||||
|
||||
// even this super table has no corresponding table, still return
|
||||
SVgroupsMsg *pVgroupMsg = (SVgroupsMsg *)msg;
|
||||
pVgroupMsg->numOfVgroups = 0;
|
||||
|
||||
msg += sizeof(SVgroupsMsg);
|
||||
} else {
|
||||
SVgroupsMsg *pVgroupMsg = (SVgroupsMsg *)msg;
|
||||
mDebug("msg:%p, app:%p stable:%s, hash:%p sizeOfVgList:%d will be returned", pMsgBody, handle,
|
||||
pTable->info.tableId, pTable->vgHash, taosHashGetSize(pTable->vgHash));
|
||||
|
||||
int32_t *pVgId = taosHashIterate(pTable->vgHash, NULL);
|
||||
int32_t vgSize = 0;
|
||||
while (pVgId) {
|
||||
SVgObj *pVgroup = mnodeGetVgroup(*pVgId);
|
||||
pVgId = taosHashIterate(pTable->vgHash, pVgId);
|
||||
if (pVgroup == NULL) {
|
||||
continue;
|
||||
}
|
||||
|
||||
pVgroupMsg->vgroups[vgSize].vgId = htonl(pVgroup->vgId);
|
||||
pVgroupMsg->vgroups[vgSize].numOfEps = 0;
|
||||
|
||||
for (int32_t vn = 0; vn < pVgroup->numOfVnodes; ++vn) {
|
||||
SDnodeObj *pDnode = pVgroup->vnodeGid[vn].pDnode;
|
||||
if (pDnode == NULL) break;
|
||||
|
||||
tstrncpy(pVgroupMsg->vgroups[vgSize].epAddr[vn].fqdn, pDnode->dnodeFqdn, TSDB_FQDN_LEN);
|
||||
pVgroupMsg->vgroups[vgSize].epAddr[vn].port = htons(pDnode->dnodePort);
|
||||
|
||||
pVgroupMsg->vgroups[vgSize].numOfEps++;
|
||||
}
|
||||
|
||||
vgSize++;
|
||||
mnodeDecVgroupRef(pVgroup);
|
||||
}
|
||||
|
||||
taosHashCancelIterate(pTable->vgHash, pVgId);
|
||||
mnodeDecTableRef(pTable);
|
||||
|
||||
pVgroupMsg->numOfVgroups = htonl(vgSize);
|
||||
|
||||
// one table is done, try the next table
|
||||
msg += sizeof(SVgroupsMsg) + vgSize * sizeof(SVgroupMsg);
|
||||
}
|
||||
|
||||
return msg;
|
||||
}
|
||||
|
||||
static int32_t mnodeProcessSuperTableVgroupMsg(SMnodeMsg *pMsg) {
|
||||
SSTableVgroupMsg *pInfo = pMsg->rpcMsg.pCont;
|
||||
int32_t numOfTable = htonl(pInfo->numOfTables);
|
||||
|
||||
// calculate the required space.
|
||||
int32_t contLen = calculateVgroupMsgLength(pInfo, numOfTable);
|
||||
SSTableVgroupRspMsg *pRsp = rpcMallocCont(contLen);
|
||||
if (pRsp == NULL) {
|
||||
return TSDB_CODE_MND_OUT_OF_MEMORY;
|
||||
|
@ -1726,62 +1805,16 @@ static int32_t mnodeProcessSuperTableVgroupMsg(SMnodeMsg *pMsg) {
|
|||
|
||||
for (int32_t i = 0; i < numOfTable; ++i) {
|
||||
char *stableName = (char *)pInfo + sizeof(SSTableVgroupMsg) + (TSDB_TABLE_FNAME_LEN)*i;
|
||||
|
||||
SSTableObj *pTable = mnodeGetSuperTable(stableName);
|
||||
if (pTable == NULL) {
|
||||
mError("msg:%p, app:%p stable:%s, not exist while get stable vgroup info", pMsg, pMsg->rpcMsg.ahandle, stableName);
|
||||
mnodeDecTableRef(pTable);
|
||||
continue;
|
||||
}
|
||||
if (pTable->vgHash == NULL) {
|
||||
mDebug("msg:%p, app:%p stable:%s, no vgroup exist while get stable vgroup info", pMsg, pMsg->rpcMsg.ahandle,
|
||||
stableName);
|
||||
mnodeDecTableRef(pTable);
|
||||
|
||||
// even this super table has no corresponding table, still return
|
||||
pRsp->numOfTables++;
|
||||
|
||||
SVgroupsMsg *pVgroupMsg = (SVgroupsMsg *)msg;
|
||||
pVgroupMsg->numOfVgroups = 0;
|
||||
|
||||
msg += sizeof(SVgroupsMsg);
|
||||
} else {
|
||||
SVgroupsMsg *pVgroupMsg = (SVgroupsMsg *)msg;
|
||||
mDebug("msg:%p, app:%p stable:%s, hash:%p sizeOfVgList:%d will be returned", pMsg, pMsg->rpcMsg.ahandle,
|
||||
pTable->info.tableId, pTable->vgHash, taosHashGetSize(pTable->vgHash));
|
||||
|
||||
int32_t *pVgId = taosHashIterate(pTable->vgHash, NULL);
|
||||
int32_t vgSize = 0;
|
||||
while (pVgId) {
|
||||
SVgObj *pVgroup = mnodeGetVgroup(*pVgId);
|
||||
pVgId = taosHashIterate(pTable->vgHash, pVgId);
|
||||
if (pVgroup == NULL) continue;
|
||||
|
||||
pVgroupMsg->vgroups[vgSize].vgId = htonl(pVgroup->vgId);
|
||||
pVgroupMsg->vgroups[vgSize].numOfEps = 0;
|
||||
|
||||
for (int32_t vn = 0; vn < pVgroup->numOfVnodes; ++vn) {
|
||||
SDnodeObj *pDnode = pVgroup->vnodeGid[vn].pDnode;
|
||||
if (pDnode == NULL) break;
|
||||
|
||||
tstrncpy(pVgroupMsg->vgroups[vgSize].epAddr[vn].fqdn, pDnode->dnodeFqdn, TSDB_FQDN_LEN);
|
||||
pVgroupMsg->vgroups[vgSize].epAddr[vn].port = htons(pDnode->dnodePort);
|
||||
|
||||
pVgroupMsg->vgroups[vgSize].numOfEps++;
|
||||
}
|
||||
|
||||
vgSize++;
|
||||
mnodeDecVgroupRef(pVgroup);
|
||||
}
|
||||
|
||||
taosHashCancelIterate(pTable->vgHash, pVgId);
|
||||
mnodeDecTableRef(pTable);
|
||||
|
||||
pVgroupMsg->numOfVgroups = htonl(vgSize);
|
||||
|
||||
// one table is done, try the next table
|
||||
msg += sizeof(SVgroupsMsg) + vgSize * sizeof(SVgroupMsg);
|
||||
pRsp->numOfTables++;
|
||||
}
|
||||
msg = serializeVgroupInfo(pTable, stableName, msg, pMsg, pMsg->rpcMsg.ahandle);
|
||||
pRsp->numOfTables++;
|
||||
}
|
||||
|
||||
if (pRsp->numOfTables != numOfTable) {
|
||||
|
@ -2415,9 +2448,9 @@ static int32_t mnodeDoGetChildTableMeta(SMnodeMsg *pMsg, STableMetaMsg *pMeta) {
|
|||
pMeta->vgroup.numOfEps++;
|
||||
mnodeDecDnodeRef(pDnode);
|
||||
}
|
||||
pMeta->vgroup.vgId = htonl(pMsg->pVgroup->vgId);
|
||||
|
||||
mDebug("msg:%p, app:%p table:%s, uid:%" PRIu64 " table meta is retrieved, vgId:%d sid:%d", pMsg, pMsg->rpcMsg.ahandle,
|
||||
pMeta->vgroup.vgId = htonl(pMsg->pVgroup->vgId);
|
||||
mDebug("msg:%p, app:%p table:%s, uid:%" PRIu64 " table meta is retrieved, vgId:%d tid:%d", pMsg, pMsg->rpcMsg.ahandle,
|
||||
pTable->info.tableId, pTable->uid, pTable->vgId, pTable->tid);
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
@ -2811,56 +2844,128 @@ static void mnodeProcessAlterTableRsp(SRpcMsg *rpcMsg) {
|
|||
|
||||
static int32_t mnodeProcessMultiTableMetaMsg(SMnodeMsg *pMsg) {
|
||||
SMultiTableInfoMsg *pInfo = pMsg->rpcMsg.pCont;
|
||||
pInfo->numOfTables = htonl(pInfo->numOfTables);
|
||||
|
||||
int32_t totalMallocLen = 4 * 1024 * 1024; // first malloc 4 MB, subsequent reallocation as twice
|
||||
SMultiTableMeta *pMultiMeta = rpcMallocCont(totalMallocLen);
|
||||
pInfo->numOfTables = htonl(pInfo->numOfTables);
|
||||
pInfo->numOfVgroups = htonl(pInfo->numOfVgroups);
|
||||
|
||||
int32_t contLen = pMsg->rpcMsg.contLen - sizeof(SMultiTableInfoMsg);
|
||||
|
||||
int32_t num = 0;
int32_t code = TSDB_CODE_SUCCESS;
char* str = strndup(pInfo->tableNames, contLen);
char** nameList = strsplit(str, ",", &num);
SArray* pList = taosArrayInit(4, POINTER_BYTES);
SMultiTableMeta *pMultiMeta = NULL;

if (num != pInfo->numOfTables + pInfo->numOfVgroups) {
mError("msg:%p, app:%p, failed to get multi-tableMeta, msg inconsistent", pMsg, pMsg->rpcMsg.ahandle);
code = TSDB_CODE_MND_INVALID_TABLE_NAME;
goto _error;
}

// first malloc 80KB, each subsequent reallocation doubles the size
int32_t totalMallocLen = sizeof(STableMetaMsg) + sizeof(SSchema) * (TSDB_MAX_TAGS + TSDB_MAX_COLUMNS + 16);
pMultiMeta = rpcMallocCont(totalMallocLen);
if (pMultiMeta == NULL) {
return TSDB_CODE_MND_OUT_OF_MEMORY;
code = TSDB_CODE_MND_OUT_OF_MEMORY;
goto _error;
}

pMultiMeta->contLen = sizeof(SMultiTableMeta);
pMultiMeta->numOfTables = 0;

for (int32_t t = 0; t < pInfo->numOfTables; ++t) {
char * tableId = (char *)(pInfo->tableIds + t * TSDB_TABLE_FNAME_LEN);
SCTableObj *pTable = mnodeGetChildTable(tableId);
if (pTable == NULL) continue;
int32_t t = 0;
for (; t < pInfo->numOfTables; ++t) {
char *fullName = nameList[t];

if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableName(tableId);
if (pMsg->pDb == NULL || pMsg->pDb->status != TSDB_DB_STATUS_READY) {
mnodeDecTableRef(pTable);
continue;
pMsg->pVgroup = NULL;
pMsg->pTable = mnodeGetTable(fullName);
if (pMsg->pTable == NULL) {
mError("msg:%p, app:%p table:%s, failed to get table meta, table not exist", pMsg, pMsg->rpcMsg.ahandle, fullName);
code = TSDB_CODE_MND_INVALID_TABLE_NAME;
goto _error;
}

int availLen = totalMallocLen - pMultiMeta->contLen;
if (availLen <= sizeof(STableMetaMsg) + sizeof(SSchema) * (TSDB_MAX_TAGS + TSDB_MAX_COLUMNS + 16)) {
if (pMsg->pDb == NULL) {
pMsg->pDb = mnodeGetDbByTableName(fullName);
}

if (pMsg->pDb == NULL || pMsg->pDb->status != TSDB_DB_STATUS_READY) {
mnodeDecTableRef(pMsg->pTable);
code = TSDB_CODE_APP_NOT_READY;
goto _error;
}

int remain = totalMallocLen - pMultiMeta->contLen;
if (remain <= sizeof(STableMetaMsg) + sizeof(SSchema) * (TSDB_MAX_TAGS + TSDB_MAX_COLUMNS + 16)) {
totalMallocLen *= 2;
pMultiMeta = rpcReallocCont(pMultiMeta, totalMallocLen);
if (pMultiMeta == NULL) {
mnodeDecTableRef(pTable);
return TSDB_CODE_MND_OUT_OF_MEMORY;
} else {
t--;
mnodeDecTableRef(pTable);
continue;
mnodeDecTableRef(pMsg->pTable);
code = TSDB_CODE_MND_OUT_OF_MEMORY;
goto _error;
}
}

STableMetaMsg *pMeta = (STableMetaMsg *)(pMultiMeta->metas + pMultiMeta->contLen);
int32_t code = mnodeDoGetChildTableMeta(pMsg, pMeta);
STableMetaMsg *pMeta = (STableMetaMsg *)((char*) pMultiMeta + pMultiMeta->contLen);

if (pMsg->pTable->type == TSDB_SUPER_TABLE) {
code = mnodeDoGetSuperTableMeta(pMsg, pMeta);
taosArrayPush(pList, &fullName);  // keep the full name of each super table to retrieve its vgroup list later
} else {
code = mnodeDoGetChildTableMeta(pMsg, pMeta);
}

if (code == TSDB_CODE_SUCCESS) {
pMultiMeta->numOfTables ++;
pMultiMeta->contLen += pMeta->contLen;
}

mnodeDecTableRef(pTable);
mnodeDecTableRef(pMsg->pTable);
assert(((SCTableObj*)pMsg->pTable)->refCount >= 1);
pMsg->pTable = NULL;
}

char* msg = (char*) pMultiMeta + pMultiMeta->contLen;

// add the additional super table names that need the vgroup info
for(;t < num; ++t) {
taosArrayPush(pList, &nameList[t]);
}

// add the pVgroupList into the pList
int32_t numOfVgroupList = (int32_t) taosArrayGetSize(pList);
pMultiMeta->numOfVgroup = htonl(numOfVgroupList);

for(int32_t i = 0; i < numOfVgroupList; ++i) {
char* name = taosArrayGetP(pList, i);

SSTableObj *pTable = mnodeGetSuperTable(name);
if (pTable == NULL) {
mError("msg:%p, app:%p stable:%s, not exist while get stable vgroup info", pMsg, pMsg->rpcMsg.ahandle, name);
code = TSDB_CODE_MND_INVALID_TABLE_NAME;
goto _error;
}

msg = serializeVgroupInfo(pTable, name, msg, pMsg, pMsg->rpcMsg.ahandle);
}

pMultiMeta->contLen = (int32_t) (msg - (char*) pMultiMeta);

pMultiMeta->numOfTables = htonl(pMultiMeta->numOfTables);
pMsg->rpcRsp.rsp = pMultiMeta;
pMsg->rpcRsp.len = pMultiMeta->contLen;

return TSDB_CODE_SUCCESS;

_error:
tfree(str);
tfree(nameList);
rpcFreeCont(pMultiMeta);
taosArrayDestroy(pList);
pMsg->pTable = NULL;

return code;
}

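The buffer handling above follows a simple grow-by-doubling scheme: one initial allocation large enough for a maximal table meta, then `totalMallocLen *= 2` plus `rpcReallocCont` whenever the remaining space cannot hold another entry. A minimal standalone sketch of the same pattern, using plain `realloc` instead of the rpc allocator (the helper name and signature are hypothetical, for illustration only):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Append `len` bytes to a growable buffer, doubling its capacity whenever the
// remaining space is insufficient (mirrors the rpcReallocCont-based growth above).
// Assumes an initial non-zero capacity, like the 80KB first allocation.
static char* appendWithDoubling(char** pBuf, int32_t* cap, int32_t* used, const char* src, int32_t len) {
  int32_t newCap = *cap;
  while (newCap - *used < len) {
    newCap *= 2;
  }
  if (newCap != *cap) {
    char* tmp = realloc(*pBuf, newCap);
    if (tmp == NULL) return NULL;  // *pBuf is still valid and owned by the caller
    *pBuf = tmp;
    *cap  = newCap;
  }
  memcpy(*pBuf + *used, src, len);
  *used += len;
  return *pBuf;
}
```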
static int32_t mnodeGetShowTableMeta(STableMetaMsg *pMeta, SShowObj *pShow, void *pConn) {
|
||||
|
|
|
@ -121,7 +121,7 @@ static int32_t mnodeVgroupActionDelete(SSdbRow *pRow) {
SVgObj *pVgroup = pRow->pObj;

if (pVgroup->pDb == NULL) {
mError("vgId:%d, db:%s is not exist while insert into hash", pVgroup->vgId, pVgroup->dbName);
mError("vgId:%d, db:%s is not exist while delete from hash", pVgroup->vgId, pVgroup->dbName);
return TSDB_CODE_MND_VGROUP_NOT_EXIST;
}
|
||||
|
||||
|
|
|
@ -21,6 +21,7 @@ extern "C" {
|
|||
#endif
|
||||
|
||||
void taosRemoveDir(char *rootDir);
|
||||
bool taosDirExist(const char* dirname);
|
||||
int32_t taosMkDir(const char *pathname, mode_t mode);
|
||||
void taosRemoveOldLogFiles(char *rootDir, int32_t keepDays);
|
||||
int32_t taosRename(char *oldName, char *newName);
|
||||
|
|
|
@ -45,6 +45,10 @@ void taosRemoveDir(char *rootDir) {
|
|||
uInfo("dir:%s is removed", rootDir);
|
||||
}
|
||||
|
||||
bool taosDirExist(const char* dirname) {
|
||||
return access(dirname, F_OK) == 0;
|
||||
}
|
||||
|
||||
int taosMkDir(const char *path, mode_t mode) {
|
||||
int code = mkdir(path, 0755);
|
||||
if (code < 0 && errno == EEXIST) code = 0;
|
||||
|
|
|
@ -87,12 +87,12 @@ static int32_t (*parseLocaltimeFp[]) (char* timestr, int64_t* time, int32_t time

int32_t taosGetTimestampSec() { return (int32_t)time(NULL); }

int32_t taosParseTime(char* timestr, int64_t* time, int32_t len, int32_t timePrec, int8_t daylight) {
int32_t taosParseTime(char* timestr, int64_t* time, int32_t len, int32_t timePrec, int8_t day_light) {
/* parse datetime string with tz */
if (strnchr(timestr, 'T', len, false) != NULL) {
return parseTimeWithTz(timestr, time, timePrec);
} else {
return (*parseLocaltimeFp[daylight])(timestr, time, timePrec);
return (*parseLocaltimeFp[day_light])(timestr, time, timePrec);
}
}
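For reference, a hedged usage sketch of the renamed parameter; `TSDB_TIME_PRECISION_MILLI` is assumed to be the millisecond-precision constant from taosdef.h, and `day_light = 0` is assumed to select the non-daylight-saving local-time parser:

```c
char    buf[] = "2020-01-01 00:00:00.000";
int64_t ts    = 0;
// Expected to return 0 on success, leaving the epoch value (in ms) in ts.
int32_t ret = taosParseTime(buf, &ts, (int32_t)strlen(buf), TSDB_TIME_PRECISION_MILLI, 0);
```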
|
||||
|
||||
|
|
|
@ -165,7 +165,7 @@ void httpSendTaosdInvalidSqlErrorResp(HttpContext *pContext, char *errMsg) {
}
}

httpSendErrorRespImp(pContext, httpCode, "Bad Request", TSDB_CODE_TSC_INVALID_SQL & 0XFFFF, temp);
httpSendErrorRespImp(pContext, httpCode, "Bad Request", TSDB_CODE_TSC_INVALID_OPERATION & 0XFFFF, temp);
}

void httpSendSuccResp(HttpContext *pContext, char *desc) {
|
||||
|
|
|
@ -263,7 +263,7 @@ void httpProcessSingleSqlCallBackImp(void *param, TAOS_RES *result, int32_t code
|
|||
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
SSqlObj *pObj = (SSqlObj *)result;
|
||||
if (code == TSDB_CODE_TSC_INVALID_SQL) {
|
||||
if (code == TSDB_CODE_TSC_INVALID_OPERATION) {
|
||||
terrno = code;
|
||||
httpError("context:%p, fd:%d, user:%s, query error, code:%s, sqlObj:%p, error:%s", pContext, pContext->fd,
|
||||
pContext->user, tstrerror(code), pObj, taos_errstr(pObj));
|
||||
|
|
|
@ -70,13 +70,13 @@ typedef struct SResultRowPool {
SArray* pData; // SArray<void*>
} SResultRowPool;

typedef struct SSqlGroupbyExpr {
typedef struct SGroupbyExpr {
int16_t tableIndex;
SArray* columnInfo; // SArray<SColIndex>, group by columns information
int16_t numOfGroupCols;
int16_t numOfGroupCols; // todo remove it
int16_t orderIndex; // order by column index
int16_t orderType; // order by type: asc/desc
} SSqlGroupbyExpr;
} SGroupbyExpr;

typedef struct SResultRow {
int32_t pageId; // pageId & rowId is the position of current result in disk-based output buffer
|
||||
|
@ -216,7 +216,7 @@ typedef struct SQueryAttr {
|
|||
int32_t intermediateResultRowSize; // intermediate result row size, in case of top-k query.
|
||||
int32_t maxTableColumnWidth;
|
||||
int32_t tagLen; // tag value length of current query
|
||||
SSqlGroupbyExpr* pGroupbyExpr;
|
||||
SGroupbyExpr* pGroupbyExpr;
|
||||
|
||||
SExprInfo* pExpr1;
|
||||
SExprInfo* pExpr2;
|
||||
|
@ -302,6 +302,7 @@ enum OPERATOR_TYPE_E {
OP_GlobalAggregate = 18, // global merge for the multi-way data sources.
OP_Filter = 19,
OP_Distinct = 20,
OP_Join = 21,
};

typedef struct SOperatorInfo {
@ -314,7 +315,8 @@ typedef struct SOperatorInfo {
SExprInfo *pExpr;
SQueryRuntimeEnv *pRuntimeEnv;

struct SOperatorInfo *upstream;
struct SOperatorInfo **upstream; // upstream pointer list
int32_t numOfUpstream; // number of upstream. The value is always ONE except for the join operator
__operator_fn_t exec;
__optr_cleanup_fn_t cleanup;
} SOperatorInfo;
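Since `upstream` is now an array of `numOfUpstream` pointers, a downstream operator pulls blocks from `upstream[i]` rather than from a single source. A rough sketch of draining every upstream of an operator `pOp` (illustrative only, `pOp` is hypothetical; the concrete operators in this changeset read `upstream[0]`, and only the join operator walks the whole list):

```c
// Pull one block from each upstream operator of pOp.
for (int32_t i = 0; i < pOp->numOfUpstream; ++i) {
  bool newgroup = false;
  SSDataBlock* pBlock = pOp->upstream[i]->exec(pOp->upstream[i], &newgroup);
  if (pBlock == NULL) {
    continue;  // this source is exhausted
  }
  // ... consume pBlock ...
}
```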
|
||||
|
@ -362,7 +364,7 @@ typedef struct SQueryParam {
|
|||
|
||||
SColIndex *pGroupColIndex;
|
||||
SColumnInfo *pTagColumnInfo;
|
||||
SSqlGroupbyExpr *pGroupbyExpr;
|
||||
SGroupbyExpr *pGroupbyExpr;
|
||||
int32_t tableScanOperator;
|
||||
SArray *pOperator;
|
||||
} SQueryParam;
|
||||
|
@ -494,6 +496,8 @@ typedef struct SMultiwayMergeInfo {
|
|||
bool groupMix;
|
||||
} SMultiwayMergeInfo;
|
||||
|
||||
void appendUpstream(SOperatorInfo* p, SOperatorInfo* pUpstream);
|
||||
|
||||
SOperatorInfo* createDataBlocksOptScanInfo(void* pTsdbQueryHandle, SQueryRuntimeEnv* pRuntimeEnv, int32_t repeatTime, int32_t reverseTime);
|
||||
SOperatorInfo* createTableScanOperator(void* pTsdbQueryHandle, SQueryRuntimeEnv* pRuntimeEnv, int32_t repeatTime);
|
||||
SOperatorInfo* createTableSeqScanOperator(void* pTsdbQueryHandle, SQueryRuntimeEnv* pRuntimeEnv);
|
||||
|
@ -514,12 +518,20 @@ SOperatorInfo* createMultiwaySortOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SEx
|
|||
int32_t numOfRows, void* merger, bool groupMix);
|
||||
SOperatorInfo* createGlobalAggregateOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorInfo* upstream, SExprInfo* pExpr, int32_t numOfOutput, void* param);
|
||||
SOperatorInfo* createSLimitOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorInfo* upstream, SExprInfo* pExpr, int32_t numOfOutput, void* merger);
|
||||
SOperatorInfo* createFilterOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorInfo* upstream, SExprInfo* pExpr, int32_t numOfOutput);
|
||||
SOperatorInfo* createFilterOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorInfo* upstream, SExprInfo* pExpr,
|
||||
int32_t numOfOutput, SColumnInfo* pCols, int32_t numOfFilter);
|
||||
|
||||
SOperatorInfo* createJoinOperatorInfo(SOperatorInfo** pUpstream, int32_t numOfUpstream, SSchema* pSchema, int32_t numOfOutput);
|
||||
|
||||
SSDataBlock* doGlobalAggregate(void* param, bool* newgroup);
|
||||
SSDataBlock* doMultiwayMergeSort(void* param, bool* newgroup);
|
||||
SSDataBlock* doSLimit(void* param, bool* newgroup);
|
||||
|
||||
int32_t doCreateFilterInfo(SColumnInfo* pCols, int32_t numOfCols, int32_t numOfFilterCols, SSingleColumnFilterInfo** pFilterInfo, uint64_t qId);
|
||||
void doSetFilterColumnInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, SSDataBlock* pBlock);
|
||||
bool doFilterDataBlock(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, int32_t numOfRows, int8_t* p);
|
||||
void doCompactSDataBlock(SSDataBlock* pBlock, int32_t numOfRows, int8_t* p);
|
||||
|
||||
SSDataBlock* createOutputBuf(SExprInfo* pExpr, int32_t numOfOutput, int32_t numOfRows);
|
||||
void* destroyOutputBuf(SSDataBlock* pBlock);
|
||||
|
||||
|
@ -536,13 +548,14 @@ int32_t createQueryFunc(SQueriedTableInfo* pTableInfo, int32_t numOfOutput, SExp
|
|||
int32_t createIndirectQueryFuncExprFromMsg(SQueryTableMsg *pQueryMsg, int32_t numOfOutput, SExprInfo **pExprInfo,
|
||||
SSqlExpr **pExpr, SExprInfo *prevExpr);
|
||||
|
||||
SSqlGroupbyExpr *createGroupbyExprFromMsg(SQueryTableMsg *pQueryMsg, SColIndex *pColIndex, int32_t *code);
|
||||
SQInfo *createQInfoImpl(SQueryTableMsg *pQueryMsg, SSqlGroupbyExpr *pGroupbyExpr, SExprInfo *pExprs,
|
||||
SGroupbyExpr *createGroupbyExprFromMsg(SQueryTableMsg *pQueryMsg, SColIndex *pColIndex, int32_t *code);
|
||||
SQInfo *createQInfoImpl(SQueryTableMsg *pQueryMsg, SGroupbyExpr *pGroupbyExpr, SExprInfo *pExprs,
|
||||
SExprInfo *pSecExprs, STableGroupInfo *pTableGroupInfo, SColumnInfo* pTagCols, int32_t vgId, char* sql, uint64_t *qId);
|
||||
|
||||
int32_t initQInfo(STsBufInfo* pTsBufInfo, void* tsdb, void* sourceOptr, SQInfo* pQInfo, SQueryParam* param, char* start,
|
||||
int32_t prevResultLen, void* merger);
|
||||
|
||||
int32_t createFilterInfo(SQueryAttr* pQueryAttr, uint64_t qId);
|
||||
void freeColumnFilterInfo(SColumnFilterInfo* pFilter, int32_t numOfFilters);
|
||||
|
||||
STableQueryInfo *createTableQueryInfo(SQueryAttr* pQueryAttr, void* pTable, bool groupbyColumn, STimeWindow win, void* buf);
|
||||
|
|
|
@ -62,7 +62,7 @@ typedef struct SFillInfo {
|
|||
|
||||
SFillColInfo* pFillCol; // column info for fill operations
|
||||
SFillTagColInfo* pTags; // tags value for filling gap
|
||||
void* handle; // for dubug purpose
|
||||
void* handle; // for debug purpose
|
||||
} SFillInfo;
|
||||
|
||||
typedef struct SPoint {
|
||||
|
@ -82,8 +82,6 @@ void taosFillSetStartInfo(SFillInfo* pFillInfo, int32_t numOfRows, TSKEY endKey)
|
|||
|
||||
void taosFillSetInputDataBlock(SFillInfo* pFillInfo, const struct SSDataBlock* pInput);
|
||||
|
||||
void taosFillCopyInputDataFromOneFilePage(SFillInfo* pFillInfo, const tFilePage* pInput);
|
||||
|
||||
bool taosFillHasMoreResults(SFillInfo* pFillInfo);
|
||||
|
||||
int64_t getNumOfResultsAfterFillGap(SFillInfo* pFillInfo, int64_t ekey, int32_t maxNumOfRows);
|
||||
|
|
|
@ -16,7 +16,38 @@
|
|||
#ifndef TDENGINE_QPLAN_H
|
||||
#define TDENGINE_QPLAN_H
|
||||
|
||||
//TODO refactor
|
||||
struct SQueryInfo;
|
||||
|
||||
typedef struct SQueryNodeBasicInfo {
|
||||
int32_t type;
|
||||
char *name;
|
||||
} SQueryNodeBasicInfo;
|
||||
|
||||
typedef struct SQueryTableInfo {
|
||||
char *tableName;
|
||||
STableId id;
|
||||
} SQueryTableInfo;
|
||||
|
||||
typedef struct SQueryNode {
|
||||
SQueryNodeBasicInfo info;
|
||||
SQueryTableInfo tableInfo;
|
||||
SSchema *pSchema; // the schema of the input SSDatablock
|
||||
int32_t numOfCols; // number of input columns
|
||||
SExprInfo *pExpr; // the query functions or sql aggregations
|
||||
int32_t numOfOutput; // number of result columns, which is also the number of pExprs
|
||||
|
||||
void *pExtInfo; // additional information
|
||||
// previous operator to generated result for current node to process
|
||||
// in case of join, multiple prev nodes exist.
|
||||
SArray *pPrevNodes;// upstream nodes
|
||||
struct SQueryNode *nextNode;
|
||||
} SQueryNode;
|
||||
|
||||
SQueryNode* qCreateQueryPlan(struct SQueryInfo* pQueryInfo);
|
||||
void* qDestroyQueryPlan(SQueryNode* pQueryNode);
|
||||
|
||||
char* queryPlanToString(SQueryNode* pQueryNode);
|
||||
|
||||
SArray* createTableScanPlan(SQueryAttr* pQueryAttr);
|
||||
SArray* createExecOperatorPlan(SQueryAttr* pQueryAttr);
|
||||
SArray* createGlobalMergePlan(SQueryAttr* pQueryAttr);
|
||||
|
|
|
@ -107,14 +107,18 @@ typedef struct SSqlNode {
|
|||
struct tSqlExpr *pHaving; // having clause [optional]
|
||||
} SSqlNode;
|
||||
|
||||
typedef struct STableNamePair {
|
||||
SStrToken name;
|
||||
typedef struct SRelElementPair {
|
||||
union {
|
||||
SStrToken tableName;
|
||||
SArray *pSubquery;
|
||||
};
|
||||
|
||||
SStrToken aliasName;
|
||||
} STableNamePair;
|
||||
} SRelElementPair;
|
||||
|
||||
typedef struct SRelationInfo {
|
||||
int32_t type; // nested query|table name list
|
||||
SArray *list; // SArray<STableNamePair>|SArray<SSqlNode*>
|
||||
SArray *list; // SArray<SRelElementPair>
|
||||
} SRelationInfo;
|
||||
|
||||
typedef struct SCreatedTableInfo {
|
||||
|
@ -254,8 +258,9 @@ SArray *tVariantListInsert(SArray *pList, tVariant *pVar, uint8_t sortOrder, int
|
|||
SArray *tVariantListAppendToken(SArray *pList, SStrToken *pAliasToken, uint8_t sortOrder);
|
||||
|
||||
SRelationInfo *setTableNameList(SRelationInfo* pFromInfo, SStrToken *pName, SStrToken* pAlias);
|
||||
SRelationInfo *setSubquery(SRelationInfo* pFromInfo, SArray* pSqlNode);
|
||||
//SRelationInfo *setSubquery(SRelationInfo* pFromInfo, SRelElementPair* p);
|
||||
void *destroyRelationInfo(SRelationInfo* pFromInfo);
|
||||
SRelationInfo *addSubqueryElem(SRelationInfo* pRelationInfo, SArray* pSub, SStrToken* pAlias);
|
||||
|
||||
// sql expr leaf node
|
||||
tSqlExpr *tSqlExprCreateIdValue(SStrToken *pToken, int32_t optrType);
|
||||
|
|
|
@ -47,6 +47,9 @@ void clearResultRow(SQueryRuntimeEnv* pRuntimeEnv, SResultRow* pResultRow, in
|
|||
|
||||
SResultRowCellInfo* getResultCell(const SResultRow* pRow, int32_t index, int32_t* offset);
|
||||
|
||||
void* destroyQueryFuncExpr(SExprInfo* pExprInfo, int32_t numOfExpr);
|
||||
void* freeColumnInfo(SColumnInfo* pColumnInfo, int32_t numOfCols);
|
||||
|
||||
static FORCE_INLINE SResultRow *getResultRow(SResultRowInfo *pResultRowInfo, int32_t slot) {
|
||||
assert(pResultRowInfo != NULL && slot >= 0 && slot < pResultRowInfo->size);
|
||||
return pResultRowInfo->pResult[slot];
|
||||
|
|
|
@ -28,7 +28,7 @@
|
|||
#include <stdbool.h>
|
||||
#include "qSqlparser.h"
|
||||
#include "tcmdtype.h"
|
||||
#include "tstoken.h"
|
||||
#include "ttoken.h"
|
||||
#include "ttokendef.h"
|
||||
#include "tutil.h"
|
||||
#include "tvariant.h"
|
||||
|
@ -512,7 +512,13 @@ distinct(X) ::= . { X.n = 0;}
|
|||
%type from {SRelationInfo*}
|
||||
%destructor from {destroyRelationInfo($$);}
|
||||
from(A) ::= FROM tablelist(X). {A = X;}
|
||||
from(A) ::= FROM LP union(Y) RP. {A = setSubquery(NULL, Y);}
|
||||
from(A) ::= FROM sub(X). {A = X;}
|
||||
|
||||
%type sub {SRelationInfo*}
|
||||
%destructor sub {destroyRelationInfo($$);}
|
||||
sub(A) ::= LP union(Y) RP. {A = addSubqueryElem(NULL, Y, NULL);}
|
||||
sub(A) ::= LP union(Y) RP ids(Z). {A = addSubqueryElem(NULL, Y, &Z);}
|
||||
sub(A) ::= sub(X) COMMA LP union(Y) RP ids(Z).{A = addSubqueryElem(X, Y, &Z);}
|
||||
|
||||
%type tablelist {SRelationInfo*}
|
||||
%destructor tablelist {destroyRelationInfo($$);}
|
||||
|
|
|
@ -166,7 +166,7 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI
|
|||
int16_t *bytes, int32_t *interBytes, int16_t extLength, bool isSuperTable) {
|
||||
if (!isValidDataType(dataType)) {
|
||||
qError("Illegal data type %d or data type length %d", dataType, dataBytes);
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
if (functionId == TSDB_FUNC_TS || functionId == TSDB_FUNC_TS_DUMMY || functionId == TSDB_FUNC_TAG_DUMMY ||
|
||||
|
@ -353,7 +353,7 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI
|
|||
*interBytes = (*bytes);
|
||||
|
||||
} else {
|
||||
return TSDB_CODE_TSC_INVALID_SQL;
|
||||
return TSDB_CODE_TSC_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
@ -3700,7 +3700,7 @@ char *getArithColumnData(void *param, const char* name, int32_t colId) {
|
|||
}
|
||||
}
|
||||
|
||||
assert(index >= 0 /*&& colId >= 0*/);
|
||||
assert(index >= 0);
|
||||
return pSupport->data[index] + pSupport->offset * pSupport->colList[index].bytes;
|
||||
}
|
||||
|
||||
|
@ -4839,7 +4839,7 @@ static void mergeTableBlockDist(STableBlockDist* pDist, const STableBlockDist* p
|
|||
pDist->dataBlockInfos = taosArrayInit(4, sizeof(SFileBlockInfo));
|
||||
}
|
||||
|
||||
taosArrayPushBatch(pDist->dataBlockInfos, pSrc->dataBlockInfos->pData, (int32_t) taosArrayGetSize(pSrc->dataBlockInfos));
|
||||
taosArrayAddBatch(pDist->dataBlockInfos, pSrc->dataBlockInfos->pData, (int32_t) taosArrayGetSize(pSrc->dataBlockInfos));
|
||||
}
|
||||
|
||||
void block_func_merge(SQLFunctionCtx* pCtx) {
|
||||
|
|
|
@ -33,6 +33,8 @@
|
|||
#define SET_MASTER_SCAN_FLAG(runtime) ((runtime)->scanFlag = MASTER_SCAN)
|
||||
#define SET_REVERSE_SCAN_FLAG(runtime) ((runtime)->scanFlag = REVERSE_SCAN)
|
||||
|
||||
#define TSWINDOW_IS_EQUAL(t1, t2) (((t1).skey == (t2).skey) && ((t1).ekey == (t2).ekey))
|
||||
|
||||
#define SWITCH_ORDER(n) (((n) = ((n) == TSDB_ORDER_ASC) ? TSDB_ORDER_DESC : TSDB_ORDER_ASC))
|
||||
|
||||
#define SDATA_BLOCK_INITIALIZER (SDataBlockInfo) {{0}, 0}
|
||||
|
@ -169,6 +171,8 @@ static void setBlockStatisInfo(SQLFunctionCtx *pCtx, SSDataBlock* pSDataBlock, S
|
|||
static void destroyTableQueryInfoImpl(STableQueryInfo *pTableQueryInfo);
|
||||
static bool hasMainOutput(SQueryAttr *pQueryAttr);
|
||||
|
||||
static SColumnInfo* extractColumnFilterInfo(SExprInfo* pExpr, int32_t numOfOutput, int32_t* numOfFilterCols);
|
||||
|
||||
static int32_t setTimestampListJoinInfo(SQueryRuntimeEnv* pRuntimeEnv, tVariant* pTag, STableQueryInfo *pTableQueryInfo);
|
||||
static void releaseQueryBuf(size_t numOfTables);
|
||||
static int32_t binarySearchForKey(char *pValue, int num, TSKEY key, int order);
|
||||
|
@ -176,8 +180,6 @@ static STsdbQueryCond createTsdbQueryCond(SQueryAttr* pQueryAttr, STimeWindow* w
|
|||
static STableIdInfo createTableIdInfo(STableQueryInfo* pTableQueryInfo);
|
||||
|
||||
static void setTableScanFilterOperatorInfo(STableScanInfo* pTableScanInfo, SOperatorInfo* pDownstream);
|
||||
static int32_t doCreateFilterInfo(SColumnInfo* pCols, int32_t numOfCols, int32_t numOfFilterCols,
|
||||
SSingleColumnFilterInfo** pFilterInfo, uint64_t qId);
|
||||
static void* doDestroyFilterInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols);
|
||||
|
||||
static int32_t getNumOfScanTimes(SQueryAttr* pQueryAttr);
|
||||
|
@ -191,7 +193,7 @@ static void destroyOperatorInfo(SOperatorInfo* pOperator);
|
|||
|
||||
static int32_t doCopyToSDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SGroupResInfo* pGroupResInfo, int32_t orderType, SSDataBlock* pBlock);
|
||||
|
||||
static int32_t getGroupbyColumnIndex(SSqlGroupbyExpr *pGroupbyExpr, SSDataBlock* pDataBlock);
|
||||
static int32_t getGroupbyColumnIndex(SGroupbyExpr *pGroupbyExpr, SSDataBlock* pDataBlock);
|
||||
static int32_t setGroupResultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SGroupbyOperatorInfo *pInfo, int32_t numOfCols, char *pData, int16_t type, int16_t bytes, int32_t groupIndex);
|
||||
|
||||
static void initCtxOutputBuffer(SQLFunctionCtx* pCtx, int32_t size);
|
||||
|
@ -1420,7 +1422,7 @@ static int32_t setGroupResultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SGroupbyOp
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static int32_t getGroupbyColumnIndex(SSqlGroupbyExpr *pGroupbyExpr, SSDataBlock* pDataBlock) {
|
||||
static int32_t getGroupbyColumnIndex(SGroupbyExpr *pGroupbyExpr, SSDataBlock* pDataBlock) {
|
||||
for (int32_t k = 0; k < pGroupbyExpr->numOfGroupCols; ++k) {
|
||||
SColIndex* pColIndex = taosArrayGet(pGroupbyExpr->columnInfo, k);
|
||||
if (TSDB_COL_IS_TAG(pColIndex->flag)) {
|
||||
|
@ -1710,38 +1712,40 @@ static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int32_t numOf
|
|||
case OP_MultiTableTimeInterval: {
|
||||
pRuntimeEnv->proot =
|
||||
createMultiTableTimeIntervalOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream->info, pRuntimeEnv->proot);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
|
||||
break;
|
||||
}
|
||||
case OP_TimeWindow: {
|
||||
pRuntimeEnv->proot =
|
||||
createTimeIntervalOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream->info, pRuntimeEnv->proot);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
|
||||
break;
|
||||
}
|
||||
case OP_Groupby: {
|
||||
pRuntimeEnv->proot =
|
||||
createGroupbyOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream->info, pRuntimeEnv->proot);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
|
||||
break;
|
||||
}
|
||||
case OP_SessionWindow: {
|
||||
pRuntimeEnv->proot =
|
||||
createSWindowOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream->info, pRuntimeEnv->proot);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
|
||||
break;
|
||||
}
|
||||
case OP_MultiTableAggregate: {
|
||||
pRuntimeEnv->proot =
|
||||
createMultiTableAggOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream->info, pRuntimeEnv->proot);
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
|
||||
break;
|
||||
}
|
||||
case OP_Aggregate: {
|
||||
pRuntimeEnv->proot =
|
||||
createAggregateOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
|
||||
if (pRuntimeEnv->proot->upstream->operatorType != OP_DummyInput) {
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream->info, pRuntimeEnv->proot);
|
||||
|
||||
int32_t opType = pRuntimeEnv->proot->upstream[0]->operatorType;
|
||||
if (opType != OP_DummyInput && opType != OP_Join) {
|
||||
setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
@ -1750,7 +1754,7 @@ static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int32_t numOf
|
|||
SOperatorInfo* prev = pRuntimeEnv->proot;
|
||||
if (i == 0) {
|
||||
pRuntimeEnv->proot = createArithOperatorInfo(pRuntimeEnv, prev, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
|
||||
if (pRuntimeEnv->proot != NULL && pRuntimeEnv->proot->operatorType != OP_DummyInput) { // TODO refactor
|
||||
if (pRuntimeEnv->proot != NULL && prev->operatorType != OP_DummyInput && prev->operatorType != OP_Join) { // TODO refactor
|
||||
setTableScanFilterOperatorInfo(prev->info, pRuntimeEnv->proot);
|
||||
}
|
||||
} else {
|
||||
|
@ -1767,12 +1771,25 @@ static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int32_t numOf
|
|||
}
|
||||
|
||||
case OP_Filter: { // todo refactor
|
||||
assert(pQueryAttr->havingNum > 0);
|
||||
if (pQueryAttr->stableQuery) {
|
||||
pRuntimeEnv->proot = createFilterOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr3, pQueryAttr->numOfExpr3);
|
||||
} else {
|
||||
pRuntimeEnv->proot = createFilterOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
|
||||
}
|
||||
int32_t numOfFilterCols = 0;
|
||||
// if (pQueryAttr->numOfFilterCols > 0) {
|
||||
// pRuntimeEnv->proot = createFilterOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1,
|
||||
// pQueryAttr->numOfOutput, pQueryAttr->tableCols, pQueryAttr->numOfFilterCols);
|
||||
// } else {
|
||||
if (pQueryAttr->stableQuery) {
|
||||
SColumnInfo* pColInfo =
|
||||
extractColumnFilterInfo(pQueryAttr->pExpr3, pQueryAttr->numOfExpr3, &numOfFilterCols);
|
||||
pRuntimeEnv->proot = createFilterOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr3,
|
||||
pQueryAttr->numOfExpr3, pColInfo, numOfFilterCols);
|
||||
freeColumnInfo(pColInfo, pQueryAttr->numOfExpr3);
|
||||
} else {
|
||||
SColumnInfo* pColInfo =
|
||||
extractColumnFilterInfo(pQueryAttr->pExpr1, pQueryAttr->numOfOutput, &numOfFilterCols);
|
||||
pRuntimeEnv->proot = createFilterOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1,
|
||||
pQueryAttr->numOfOutput, pColInfo, numOfFilterCols);
|
||||
freeColumnInfo(pColInfo, pQueryAttr->numOfOutput);
|
||||
}
|
||||
// }
|
||||
break;
|
||||
}
|
||||
|
||||
|
@ -1979,6 +1996,37 @@ static bool isFirstLastRowQuery(SQueryAttr *pQueryAttr) {
|
|||
return false;
|
||||
}
|
||||
|
||||
static bool isCachedLastQuery(SQueryAttr *pQueryAttr) {
for (int32_t i = 0; i < pQueryAttr->numOfOutput; ++i) {
int32_t functionID = pQueryAttr->pExpr1[i].base.functionId;
if (functionID == TSDB_FUNC_LAST || functionID == TSDB_FUNC_LAST_DST) {
continue;
}

return false;
}

if (pQueryAttr->order.order != TSDB_ORDER_DESC || !TSWINDOW_IS_EQUAL(pQueryAttr->window, TSWINDOW_DESC_INITIALIZER)) {
return false;
}

if (pQueryAttr->groupbyColumn) {
return false;
}

if (pQueryAttr->interval.interval > 0) {
return false;
}

if (pQueryAttr->numOfFilterCols > 0 || pQueryAttr->havingNum > 0) {
return false;
}

return true;
}
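The checks above can be read as a single gate for the cached-last scan that setupQueryHandle selects further down; summarized as comments (a plain reading of the checks, not an exhaustive specification):

```c
// tsdbQueryCacheLast() is used only when:
//   - every output expression is TSDB_FUNC_LAST or TSDB_FUNC_LAST_DST,
//   - the scan is descending over the default (whole) time window,
//   - there is no group-by column, no interval, no column filter and no HAVING.
// Any other query shape falls back to the regular tsdb scan.
```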
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* The following 4 kinds of query are treated as the tags query
|
||||
* tagprj, tid_tag query, count(tbname), 'abc' (user defined constant value column) query
|
||||
|
@ -2084,7 +2132,7 @@ static int32_t updateBlockLoadStatus(SQueryAttr *pQuery, int32_t status) {
|
|||
if (hasFirstLastFunc && status == BLK_DATA_NO_NEEDED) {
|
||||
if(!hasOtherFunc) {
|
||||
return BLK_DATA_DISCARD;
|
||||
} else{
|
||||
} else {
|
||||
return BLK_DATA_ALL_NEEDED;
|
||||
}
|
||||
}
|
||||
|
@ -2359,6 +2407,105 @@ static int32_t doTSJoinFilter(SQueryRuntimeEnv *pRuntimeEnv, TSKEY key, bool asc
|
|||
return TS_JOIN_TS_EQUAL;
|
||||
}
|
||||
|
||||
bool doFilterDataBlock(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, int32_t numOfRows, int8_t* p) {
|
||||
bool all = true;
|
||||
|
||||
for (int32_t i = 0; i < numOfRows; ++i) {
|
||||
bool qualified = false;
|
||||
|
||||
for (int32_t k = 0; k < numOfFilterCols; ++k) {
|
||||
char* pElem = (char*)pFilterInfo[k].pData + pFilterInfo[k].info.bytes * i;
|
||||
|
||||
qualified = false;
|
||||
for (int32_t j = 0; j < pFilterInfo[k].numOfFilters; ++j) {
|
||||
SColumnFilterElem* pFilterElem = &pFilterInfo[k].pFilters[j];
|
||||
|
||||
bool isnull = isNull(pElem, pFilterInfo[k].info.type);
|
||||
if (isnull) {
|
||||
if (pFilterElem->fp == isNullOperator) {
|
||||
qualified = true;
|
||||
break;
|
||||
} else {
|
||||
continue;
|
||||
}
|
||||
} else {
|
||||
if (pFilterElem->fp == notNullOperator) {
|
||||
qualified = true;
|
||||
break;
|
||||
} else if (pFilterElem->fp == isNullOperator) {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
if (pFilterElem->fp(pFilterElem, pElem, pElem, pFilterInfo[k].info.type)) {
|
||||
qualified = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!qualified) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
p[i] = qualified ? 1 : 0;
|
||||
if (!qualified) {
|
||||
all = false;
|
||||
}
|
||||
}
|
||||
|
||||
return all;
|
||||
}
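In other words, a row survives only if every filtered column has at least one matching condition: AND across columns, OR across the conditions of one column. The same rule written as a standalone predicate (a sketch only; the NULL / NOT NULL special cases handled above are omitted):

```c
static bool rowQualifies(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, int32_t row) {
  for (int32_t k = 0; k < numOfFilterCols; ++k) {
    char* pElem = (char*)pFilterInfo[k].pData + pFilterInfo[k].info.bytes * row;

    bool anyMatch = false;
    for (int32_t j = 0; j < pFilterInfo[k].numOfFilters; ++j) {
      SColumnFilterElem* pFilterElem = &pFilterInfo[k].pFilters[j];
      if (pFilterElem->fp(pFilterElem, pElem, pElem, pFilterInfo[k].info.type)) {
        anyMatch = true;  // OR within this column's filter list
        break;
      }
    }

    if (!anyMatch) {
      return false;       // AND across filtered columns
    }
  }

  return true;
}
```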
|
||||
|
||||
void doCompactSDataBlock(SSDataBlock* pBlock, int32_t numOfRows, int8_t* p) {
|
||||
int32_t len = 0;
|
||||
int32_t start = 0;
|
||||
for (int32_t j = 0; j < numOfRows; ++j) {
|
||||
if (p[j] == 1) {
|
||||
len++;
|
||||
} else {
|
||||
if (len > 0) {
|
||||
int32_t cstart = j - len;
|
||||
for (int32_t i = 0; i < pBlock->info.numOfCols; ++i) {
|
||||
SColumnInfoData* pColumnInfoData = taosArrayGet(pBlock->pDataBlock, i);
|
||||
|
||||
int16_t bytes = pColumnInfoData->info.bytes;
|
||||
memmove(((char*)pColumnInfoData->pData) + start * bytes, pColumnInfoData->pData + cstart * bytes,
|
||||
len * bytes);
|
||||
}
|
||||
|
||||
start += len;
|
||||
len = 0;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (len > 0) {
|
||||
int32_t cstart = numOfRows - len;
|
||||
for (int32_t i = 0; i < pBlock->info.numOfCols; ++i) {
|
||||
SColumnInfoData* pColumnInfoData = taosArrayGet(pBlock->pDataBlock, i);
|
||||
|
||||
int16_t bytes = pColumnInfoData->info.bytes;
|
||||
memmove(pColumnInfoData->pData + start * bytes, pColumnInfoData->pData + cstart * bytes, len * bytes);
|
||||
}
|
||||
|
||||
start += len;
|
||||
len = 0;
|
||||
}
|
||||
|
||||
pBlock->info.rows = start;
|
||||
pBlock->pBlockStatis = NULL; // clean the block statistics info
|
||||
|
||||
if (start > 0) {
|
||||
SColumnInfoData* pColumnInfoData = taosArrayGet(pBlock->pDataBlock, 0);
|
||||
if (pColumnInfoData->info.type == TSDB_DATA_TYPE_TIMESTAMP &&
|
||||
pColumnInfoData->info.colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
|
||||
pBlock->info.window.skey = *(int64_t*)pColumnInfoData->pData;
|
||||
pBlock->info.window.ekey = *(int64_t*)(pColumnInfoData->pData + TSDB_KEYSIZE * (start - 1));
|
||||
}
|
||||
}
|
||||
}
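The compaction above is an in-place stable filter: it scans the selection vector `p`, and each run of kept rows is memmove'd forward to the current write offset `start`. A short hypothetical trace of the effect on one int32 column:

```c
// Illustrative trace only (5-row block, one column):
//   p      = {1, 0, 1, 1, 0}
//   before = {10, 11, 12, 13, 14}
//   run [0..0] stays at offset 0; run [2..3] is moved to offsets 1..2
//   after  = {10, 12, 13, 13, 14}, pBlock->info.rows == 3
// Only the first pBlock->info.rows entries are meaningful afterwards.
```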
|
||||
|
||||
void filterRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols,
|
||||
SSDataBlock* pBlock, bool ascQuery) {
|
||||
int32_t numOfRows = pBlock->info.rows;
|
||||
|
@ -2391,97 +2538,11 @@ void filterRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSingleColumnFilterInf
|
|||
// save the cursor status
|
||||
pRuntimeEnv->current->cur = tsBufGetCursor(pRuntimeEnv->pTsBuf);
|
||||
} else {
|
||||
for (int32_t i = 0; i < numOfRows; ++i) {
|
||||
bool qualified = false;
|
||||
|
||||
for (int32_t k = 0; k < numOfFilterCols; ++k) {
|
||||
char* pElem = (char*)pFilterInfo[k].pData + pFilterInfo[k].info.bytes * i;
|
||||
|
||||
qualified = false;
|
||||
for (int32_t j = 0; j < pFilterInfo[k].numOfFilters; ++j) {
|
||||
SColumnFilterElem* pFilterElem = &pFilterInfo[k].pFilters[j];
|
||||
|
||||
bool isnull = isNull(pElem, pFilterInfo[k].info.type);
|
||||
if (isnull) {
|
||||
if (pFilterElem->fp == isNullOperator) {
|
||||
qualified = true;
|
||||
break;
|
||||
} else {
|
||||
continue;
|
||||
}
|
||||
} else {
|
||||
if (pFilterElem->fp == notNullOperator) {
|
||||
qualified = true;
|
||||
break;
|
||||
} else if (pFilterElem->fp == isNullOperator) {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
if (pFilterElem->fp(pFilterElem, pElem, pElem, pFilterInfo[k].info.type)) {
|
||||
qualified = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!qualified) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
p[i] = qualified ? 1 : 0;
|
||||
if (!qualified) {
|
||||
all = false;
|
||||
}
|
||||
}
|
||||
all = doFilterDataBlock(pFilterInfo, numOfFilterCols, numOfRows, p);
|
||||
}
|
||||
|
||||
if (!all) {
|
||||
int32_t start = 0;
|
||||
int32_t len = 0;
|
||||
for (int32_t j = 0; j < numOfRows; ++j) {
|
||||
if (p[j] == 1) {
|
||||
len++;
|
||||
} else {
|
||||
if (len > 0) {
|
||||
int32_t cstart = j - len;
|
||||
for (int32_t i = 0; i < pBlock->info.numOfCols; ++i) {
|
||||
SColumnInfoData *pColumnInfoData = taosArrayGet(pBlock->pDataBlock, i);
|
||||
|
||||
int16_t bytes = pColumnInfoData->info.bytes;
|
||||
memmove(((char*)pColumnInfoData->pData) + start * bytes, pColumnInfoData->pData + cstart * bytes, len * bytes);
|
||||
}
|
||||
|
||||
start += len;
|
||||
len = 0;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (len > 0) {
|
||||
int32_t cstart = numOfRows - len;
|
||||
for (int32_t i = 0; i < pBlock->info.numOfCols; ++i) {
|
||||
SColumnInfoData *pColumnInfoData = taosArrayGet(pBlock->pDataBlock, i);
|
||||
|
||||
int16_t bytes = pColumnInfoData->info.bytes;
|
||||
memmove(pColumnInfoData->pData + start * bytes, pColumnInfoData->pData + cstart * bytes, len * bytes);
|
||||
}
|
||||
|
||||
start += len;
|
||||
len = 0;
|
||||
}
|
||||
|
||||
pBlock->info.rows = start;
|
||||
pBlock->pBlockStatis = NULL; // clean the block statistics info
|
||||
|
||||
if (start > 0) {
|
||||
SColumnInfoData* pColumnInfoData = taosArrayGet(pBlock->pDataBlock, 0);
|
||||
if (pColumnInfoData->info.type == TSDB_DATA_TYPE_TIMESTAMP &&
|
||||
pColumnInfoData->info.colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
|
||||
pBlock->info.window.skey = *(int64_t*)pColumnInfoData->pData;
|
||||
pBlock->info.window.ekey = *(int64_t*)(pColumnInfoData->pData + TSDB_KEYSIZE * (start - 1));
|
||||
}
|
||||
}
|
||||
doCompactSDataBlock(pBlock, numOfRows, p);
|
||||
}
|
||||
|
||||
tfree(p);
|
||||
|
@ -2509,7 +2570,7 @@ static uint32_t doFilterByBlockTimeWindow(STableScanInfo* pTableScanInfo, SSData
|
|||
return status;
|
||||
}
|
||||
|
||||
static void doSetFilterColumnInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, SSDataBlock* pBlock) {
|
||||
void doSetFilterColumnInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, SSDataBlock* pBlock) {
|
||||
if (numOfFilterCols > 0 && pFilterInfo[0].pData != NULL) {
|
||||
return;
|
||||
}
|
||||
|
@ -3914,6 +3975,15 @@ void queryCostStatis(SQInfo *pQInfo) {
|
|||
// return true;
|
||||
//}
|
||||
|
||||
void appendUpstream(SOperatorInfo* p, SOperatorInfo* pUpstream) {
if (p->upstream == NULL) {
assert(p->numOfUpstream == 0);
}

p->upstream = realloc(p->upstream, POINTER_BYTES * (p->numOfUpstream + 1));
p->upstream[p->numOfUpstream++] = pUpstream;
}
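A sketch of how an operator with several sources would be wired up through `appendUpstream` (hypothetical construction code; in this changeset the real multi-upstream case is the join operator created by `createJoinOperatorInfo`):

```c
// Attach every source operator to a freshly allocated operator node.
// Afterwards pJoin->numOfUpstream == numOfUpstream and pJoin->upstream[i]
// holds the i-th source.
SOperatorInfo* pJoin = calloc(1, sizeof(SOperatorInfo));
for (int32_t i = 0; i < numOfUpstream; ++i) {
  appendUpstream(pJoin, pUpstream[i]);
}
```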
|
||||
|
||||
static void doDestroyTableQueryInfo(STableGroupInfo* pTableqinfoGroupInfo);
|
||||
|
||||
static int32_t setupQueryHandle(void* tsdb, SQueryRuntimeEnv* pRuntimeEnv, int64_t qId, bool isSTableQuery) {
|
||||
|
@ -3963,6 +4033,8 @@ static int32_t setupQueryHandle(void* tsdb, SQueryRuntimeEnv* pRuntimeEnv, int64
|
|||
}
|
||||
}
|
||||
}
|
||||
} else if (isCachedLastQuery(pQueryAttr)) {
|
||||
pRuntimeEnv->pQueryHandle = tsdbQueryCacheLast(tsdb, &cond, &pQueryAttr->tableGroupInfo, qId, &pQueryAttr->memRef);
|
||||
} else if (pQueryAttr->pointInterpQuery) {
|
||||
pRuntimeEnv->pQueryHandle = tsdbQueryRowsInExternalWindow(tsdb, &cond, &pQueryAttr->tableGroupInfo, qId, &pQueryAttr->memRef);
|
||||
} else {
|
||||
|
@ -4143,12 +4215,12 @@ static void doCloseAllTimeWindow(SQueryRuntimeEnv* pRuntimeEnv) {
|
|||
}
|
||||
|
||||
static SSDataBlock* doTableScanImpl(void* param, bool* newgroup) {
|
||||
SOperatorInfo* pOperator = (SOperatorInfo*) param;
|
||||
SOperatorInfo *pOperator = (SOperatorInfo*) param;
|
||||
|
||||
STableScanInfo* pTableScanInfo = pOperator->info;
|
||||
SSDataBlock* pBlock = &pTableScanInfo->block;
|
||||
STableScanInfo *pTableScanInfo = pOperator->info;
|
||||
SSDataBlock *pBlock = &pTableScanInfo->block;
|
||||
SQueryRuntimeEnv *pRuntimeEnv = pOperator->pRuntimeEnv;
|
||||
SQueryAttr* pQueryAttr = pRuntimeEnv->pQueryAttr;
|
||||
SQueryAttr *pQueryAttr = pRuntimeEnv->pQueryAttr;
|
||||
STableGroupInfo *pTableGroupInfo = &pOperator->pRuntimeEnv->tableqinfoGroupInfo;
|
||||
|
||||
*newgroup = false;
|
||||
|
@ -4208,7 +4280,7 @@ static SSDataBlock* doTableScan(void* param, bool *newgroup) {
|
|||
}
|
||||
|
||||
if (++pTableScanInfo->current >= pTableScanInfo->times) {
|
||||
if (pTableScanInfo->reverseTimes <= 0) {
|
||||
if (pTableScanInfo->reverseTimes <= 0 || isTsdbCacheLastRow(pTableScanInfo->pQueryHandle)) {
|
||||
return NULL;
|
||||
} else {
|
||||
break;
|
||||
|
@ -4591,13 +4663,14 @@ SOperatorInfo* createGlobalAggregateOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv,
|
|||
pOperator->blockingOptr = true;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->pRuntimeEnv = pRuntimeEnv;
|
||||
|
||||
pOperator->exec = doGlobalAggregate;
|
||||
pOperator->cleanup = destroyGlobalAggOperatorInfo;
|
||||
appendUpstream(pOperator, upstream);
|
||||
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -4662,7 +4735,7 @@ static SSDataBlock* doAggregate(void* param, bool* newgroup) {
|
|||
SQueryAttr* pQueryAttr = pRuntimeEnv->pQueryAttr;
|
||||
int32_t order = pQueryAttr->order.order;
|
||||
|
||||
SOperatorInfo* upstream = pOperator->upstream;
|
||||
SOperatorInfo* upstream = pOperator->upstream[0];
|
||||
|
||||
while(1) {
|
||||
SSDataBlock* pBlock = upstream->exec(upstream, newgroup);
|
||||
|
@ -4717,7 +4790,7 @@ static SSDataBlock* doSTableAggregate(void* param, bool* newgroup) {
|
|||
SQueryAttr* pQueryAttr = pRuntimeEnv->pQueryAttr;
|
||||
int32_t order = pQueryAttr->order.order;
|
||||
|
||||
SOperatorInfo* upstream = pOperator->upstream;
|
||||
SOperatorInfo* upstream = pOperator->upstream[0];
|
||||
|
||||
while(1) {
|
||||
SSDataBlock* pBlock = upstream->exec(upstream, newgroup);
|
||||
|
@ -4801,7 +4874,7 @@ static SSDataBlock* doArithmeticOperation(void* param, bool* newgroup) {
|
|||
bool prevVal = *newgroup;
|
||||
|
||||
// The upstream exec may change the value of the newgroup, so use a local variable instead.
|
||||
SSDataBlock* pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
SSDataBlock* pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (pBlock == NULL) {
|
||||
assert(*newgroup == false);
|
||||
|
||||
|
@ -4835,7 +4908,7 @@ static SSDataBlock* doArithmeticOperation(void* param, bool* newgroup) {
|
|||
}
|
||||
|
||||
pRes->info.rows = getNumOfResult(pRuntimeEnv, pInfo->pCtx, pOperator->numOfOutput);
|
||||
if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) {
|
||||
if (pRes->info.rows >= 1000/*pRuntimeEnv->resultInfo.threshold*/) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
@ -4855,7 +4928,7 @@ static SSDataBlock* doLimit(void* param, bool* newgroup) {
|
|||
|
||||
SSDataBlock* pBlock = NULL;
|
||||
while (1) {
|
||||
pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (pBlock == NULL) {
|
||||
setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
|
@ -4895,27 +4968,6 @@ static SSDataBlock* doLimit(void* param, bool* newgroup) {
|
|||
return pBlock;
|
||||
}
|
||||
|
||||
|
||||
bool doFilterData(SColumnInfoData* p, int32_t rid, SColumnFilterElem *filterElem, __filter_func_t fp) {
|
||||
char* input = p->pData + p->info.bytes * rid;
|
||||
bool isnull = isNull(input, p->info.type);
|
||||
if (isnull) {
|
||||
return (fp == isNullOperator) ? true : false;
|
||||
} else {
|
||||
if (fp == notNullOperator) {
|
||||
return true;
|
||||
} else if (fp == isNullOperator) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
if (fp(filterElem, input, input, p->info.type)) {
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static SSDataBlock* doFilter(void* param, bool* newgroup) {
|
||||
SOperatorInfo *pOperator = (SOperatorInfo *)param;
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
|
@ -4926,7 +4978,7 @@ static SSDataBlock* doFilter(void* param, bool* newgroup) {
|
|||
SQueryRuntimeEnv* pRuntimeEnv = pOperator->pRuntimeEnv;
|
||||
|
||||
while (1) {
|
||||
SSDataBlock *pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
SSDataBlock *pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (pBlock == NULL) {
|
||||
break;
|
||||
}
|
||||
|
@ -4968,7 +5020,7 @@ static SSDataBlock* doIntervalAgg(void* param, bool* newgroup) {
|
|||
int32_t order = pQueryAttr->order.order;
|
||||
STimeWindow win = pQueryAttr->window;
|
||||
|
||||
SOperatorInfo* upstream = pOperator->upstream;
|
||||
SOperatorInfo* upstream = pOperator->upstream[0];
|
||||
|
||||
while(1) {
|
||||
SSDataBlock* pBlock = upstream->exec(upstream, newgroup);
|
||||
|
@ -5021,7 +5073,7 @@ static SSDataBlock* doSTableIntervalAgg(void* param, bool* newgroup) {
|
|||
SQueryAttr* pQueryAttr = pRuntimeEnv->pQueryAttr;
|
||||
int32_t order = pQueryAttr->order.order;
|
||||
|
||||
SOperatorInfo* upstream = pOperator->upstream;
|
||||
SOperatorInfo* upstream = pOperator->upstream[0];
|
||||
|
||||
while(1) {
|
||||
SSDataBlock* pBlock = upstream->exec(upstream, newgroup);
|
||||
|
@ -5076,7 +5128,7 @@ static SSDataBlock* doSessionWindowAgg(void* param, bool* newgroup) {
|
|||
int32_t order = pQueryAttr->order.order;
|
||||
STimeWindow win = pQueryAttr->window;
|
||||
|
||||
SOperatorInfo* upstream = pOperator->upstream;
|
||||
SOperatorInfo* upstream = pOperator->upstream[0];
|
||||
|
||||
while(1) {
|
||||
SSDataBlock* pBlock = upstream->exec(upstream, newgroup);
|
||||
|
@ -5127,7 +5179,7 @@ static SSDataBlock* hashGroupbyAggregate(void* param, bool* newgroup) {
|
|||
return pInfo->binfo.pRes;
|
||||
}
|
||||
|
||||
SOperatorInfo* upstream = pOperator->upstream;
|
||||
SOperatorInfo* upstream = pOperator->upstream[0];
|
||||
|
||||
while(1) {
|
||||
SSDataBlock* pBlock = upstream->exec(upstream, newgroup);
|
||||
|
@ -5196,7 +5248,7 @@ static SSDataBlock* doFill(void* param, bool* newgroup) {
|
|||
}
|
||||
|
||||
while(1) {
|
||||
SSDataBlock* pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
SSDataBlock* pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (*newgroup) {
|
||||
assert(pBlock != NULL);
|
||||
}
|
||||
|
@ -5272,7 +5324,15 @@ static void destroyOperatorInfo(SOperatorInfo* pOperator) {
|
|||
pOperator->cleanup(pOperator->info, pOperator->numOfOutput);
|
||||
}
|
||||
|
||||
destroyOperatorInfo(pOperator->upstream);
|
||||
if (pOperator->upstream != NULL) {
|
||||
for(int32_t i = 0; i < pOperator->numOfUpstream; ++i) {
|
||||
destroyOperatorInfo(pOperator->upstream[i]);
|
||||
}
|
||||
|
||||
tfree(pOperator->upstream);
|
||||
pOperator->numOfUpstream = 0;
|
||||
}
|
||||
|
||||
tfree(pOperator->info);
|
||||
tfree(pOperator);
|
||||
}
|
||||
|
@ -5297,13 +5357,14 @@ SOperatorInfo* createAggregateOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOpera
|
|||
pOperator->blockingOptr = true;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->pRuntimeEnv = pRuntimeEnv;
|
||||
|
||||
pOperator->exec = doAggregate;
|
||||
pOperator->cleanup = destroyBasicOperatorInfo;
|
||||
appendUpstream(pOperator, upstream);
|
||||
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -5370,13 +5431,13 @@ SOperatorInfo* createMultiTableAggOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SO
|
|||
pOperator->blockingOptr = true;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->pRuntimeEnv = pRuntimeEnv;
|
||||
|
||||
pOperator->exec = doSTableAggregate;
|
||||
pOperator->cleanup = destroyBasicOperatorInfo;
|
||||
appendUpstream(pOperator, upstream);
|
||||
|
||||
return pOperator;
|
||||
}
|
||||
|
@ -5400,63 +5461,62 @@ SOperatorInfo* createArithOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorI
|
|||
pOperator->blockingOptr = false;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->pRuntimeEnv = pRuntimeEnv;
|
||||
|
||||
pOperator->exec = doArithmeticOperation;
|
||||
pOperator->cleanup = destroyArithOperatorInfo;
|
||||
appendUpstream(pOperator, upstream);
|
||||
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
SColumnInfo* extractColumnFilterInfo(SExprInfo* pExpr, int32_t numOfOutput, int32_t* numOfFilterCols) {
|
||||
SColumnInfo* pCols = calloc(numOfOutput, sizeof(SColumnInfo));
|
||||
|
||||
int32_t numOfFilter = 0;
|
||||
for(int32_t i = 0; i < numOfOutput; ++i) {
|
||||
if (pExpr[i].base.flist.numOfFilters > 0) {
|
||||
numOfFilter += 1;
|
||||
}
|
||||
|
||||
pCols[i].type = pExpr[i].base.resType;
|
||||
pCols[i].bytes = pExpr[i].base.resBytes;
|
||||
pCols[i].colId = pExpr[i].base.resColId;
|
||||
|
||||
pCols[i].flist.numOfFilters = pExpr[i].base.flist.numOfFilters;
|
||||
pCols[i].flist.filterInfo = calloc(pCols[i].flist.numOfFilters, sizeof(SColumnFilterInfo));
|
||||
memcpy(pCols[i].flist.filterInfo, pExpr[i].base.flist.filterInfo, pCols[i].flist.numOfFilters * sizeof(SColumnFilterInfo));
|
||||
}
|
||||
|
||||
assert(numOfFilter > 0);
|
||||
|
||||
*numOfFilterCols = numOfFilter;
|
||||
return pCols;
|
||||
}
|
||||
|
||||
SOperatorInfo* createFilterOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorInfo* upstream, SExprInfo* pExpr,
|
||||
int32_t numOfOutput) {
|
||||
int32_t numOfOutput, SColumnInfo* pCols, int32_t numOfFilter) {
|
||||
SFilterOperatorInfo* pInfo = calloc(1, sizeof(SFilterOperatorInfo));
|
||||
|
||||
{
|
||||
SColumnInfo* pCols = calloc(numOfOutput, sizeof(SColumnInfo));
|
||||
|
||||
int32_t numOfFilter = 0;
|
||||
for(int32_t i = 0; i < numOfOutput; ++i) {
|
||||
if (pExpr[i].base.flist.numOfFilters > 0) {
|
||||
numOfFilter += 1;
|
||||
}
|
||||
|
||||
pCols[i].type = pExpr[i].base.resType;
|
||||
pCols[i].bytes = pExpr[i].base.resBytes;
|
||||
pCols[i].colId = pExpr[i].base.resColId;
|
||||
|
||||
pCols[i].flist.numOfFilters = pExpr[i].base.flist.numOfFilters;
|
||||
pCols[i].flist.filterInfo = calloc(pCols[i].flist.numOfFilters, sizeof(SColumnFilterInfo));
|
||||
memcpy(pCols[i].flist.filterInfo, pExpr[i].base.flist.filterInfo, pCols[i].flist.numOfFilters * sizeof(SColumnFilterInfo));
|
||||
}
|
||||
|
||||
assert(numOfFilter > 0);
|
||||
doCreateFilterInfo(pCols, numOfOutput, numOfFilter, &pInfo->pFilterInfo, 0);
|
||||
pInfo->numOfFilterCols = numOfFilter;
|
||||
|
||||
for(int32_t i = 0; i < numOfOutput; ++i) {
|
||||
tfree(pCols[i].flist.filterInfo);
|
||||
}
|
||||
|
||||
tfree(pCols);
|
||||
}
|
||||
assert(numOfFilter > 0 && pCols != NULL);
|
||||
doCreateFilterInfo(pCols, numOfOutput, numOfFilter, &pInfo->pFilterInfo, 0);
|
||||
pInfo->numOfFilterCols = numOfFilter;
|
||||
|
||||
SOperatorInfo* pOperator = calloc(1, sizeof(SOperatorInfo));
|
||||
|
||||
pOperator->name = "ConditionOperator";
|
||||
pOperator->name = "FilterOperator";
|
||||
pOperator->operatorType = OP_Filter;
|
||||
pOperator->blockingOptr = false;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->exec = doFilter;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pRuntimeEnv = pRuntimeEnv;
|
||||
pOperator->cleanup = destroyConditionOperatorInfo;
|
||||
appendUpstream(pOperator, upstream);
|
||||
|
||||
return pOperator;
|
||||
}
|
||||
|
@ -5471,10 +5531,10 @@ SOperatorInfo* createLimitOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorI
|
|||
pOperator->operatorType = OP_Limit;
|
||||
pOperator->blockingOptr = false;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->exec = doLimit;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pRuntimeEnv = pRuntimeEnv;
|
||||
appendUpstream(pOperator, upstream);
|
||||
|
||||
return pOperator;
|
||||
}
|
||||
|
@ -5492,7 +5552,6 @@ SOperatorInfo* createTimeIntervalOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOp
|
|||
pOperator->operatorType = OP_TimeWindow;
|
||||
pOperator->blockingOptr = true;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->info = pInfo;
|
||||
|
@ -5500,6 +5559,7 @@ SOperatorInfo* createTimeIntervalOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOp
|
|||
pOperator->exec = doIntervalAgg;
|
||||
pOperator->cleanup = destroyBasicOperatorInfo;
|
||||
|
||||
appendUpstream(pOperator, upstream);
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -5517,7 +5577,6 @@ SOperatorInfo* createSWindowOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperato
|
|||
pOperator->operatorType = OP_SessionWindow;
|
||||
pOperator->blockingOptr = true;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->info = pInfo;
|
||||
|
@ -5525,6 +5584,7 @@ SOperatorInfo* createSWindowOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperato
|
|||
pOperator->exec = doSessionWindowAgg;
|
||||
pOperator->cleanup = destroyBasicOperatorInfo;
|
||||
|
||||
appendUpstream(pOperator, upstream);
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -5540,7 +5600,6 @@ SOperatorInfo* createMultiTableTimeIntervalOperatorInfo(SQueryRuntimeEnv* pRunti
|
|||
pOperator->operatorType = OP_MultiTableTimeInterval;
|
||||
pOperator->blockingOptr = true;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->info = pInfo;
|
||||
|
@ -5549,6 +5608,7 @@ SOperatorInfo* createMultiTableTimeIntervalOperatorInfo(SQueryRuntimeEnv* pRunti
|
|||
pOperator->exec = doSTableIntervalAgg;
|
||||
pOperator->cleanup = destroyBasicOperatorInfo;
|
||||
|
||||
appendUpstream(pOperator, upstream);
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -5565,7 +5625,6 @@ SOperatorInfo* createGroupbyOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperato
|
|||
pOperator->blockingOptr = true;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->operatorType = OP_Groupby;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->info = pInfo;
|
||||
|
@ -5573,6 +5632,7 @@ SOperatorInfo* createGroupbyOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperato
|
|||
pOperator->exec = hashGroupbyAggregate;
|
||||
pOperator->cleanup = destroyGroupbyOperatorInfo;
|
||||
|
||||
appendUpstream(pOperator, upstream);
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -5602,8 +5662,6 @@ SOperatorInfo* createFillOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorIn
|
|||
pOperator->blockingOptr = false;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->operatorType = OP_Fill;
|
||||
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->pExpr = pExpr;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->info = pInfo;
|
||||
|
@ -5612,6 +5670,7 @@ SOperatorInfo* createFillOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorIn
|
|||
pOperator->exec = doFill;
|
||||
pOperator->cleanup = destroySFillOperatorInfo;
|
||||
|
||||
appendUpstream(pOperator, upstream);
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -5650,11 +5709,12 @@ SOperatorInfo* createSLimitOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperator
|
|||
pOperator->operatorType = OP_SLimit;
|
||||
pOperator->blockingOptr = false;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->exec = doSLimit;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pRuntimeEnv = pRuntimeEnv;
|
||||
pOperator->cleanup = destroySlimitOperatorInfo;
|
||||
|
||||
appendUpstream(pOperator, upstream);
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -5821,7 +5881,7 @@ static SSDataBlock* hashDistinct(void* param, bool* newgroup) {
|
|||
pRes->info.rows = 0;
|
||||
SSDataBlock* pBlock = NULL;
|
||||
while(1) {
|
||||
pBlock = pOperator->upstream->exec(pOperator->upstream, newgroup);
|
||||
pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
|
||||
if (pBlock == NULL) {
|
||||
setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
|
@ -5882,12 +5942,13 @@ SOperatorInfo* createDistinctOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperat
|
|||
pOperator->blockingOptr = false;
|
||||
pOperator->status = OP_IN_EXECUTING;
|
||||
pOperator->operatorType = OP_Distinct;
|
||||
pOperator->upstream = upstream;
|
||||
pOperator->numOfOutput = numOfOutput;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pRuntimeEnv = pRuntimeEnv;
|
||||
pOperator->exec = hashDistinct;
|
||||
pOperator->cleanup = destroyDistinctOperatorInfo;
|
||||
|
||||
appendUpstream(pOperator, upstream);
|
||||
return pOperator;
|
||||
}
|
||||
|
||||
|
@ -6434,7 +6495,7 @@ static int32_t updateOutputBufForTopBotQuery(SQueriedTableInfo* pTableInfo, SCol
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
// TODO tag length should be passed from client
|
||||
// TODO tag length should be passed from client, refactor
|
||||
int32_t createQueryFunc(SQueriedTableInfo* pTableInfo, int32_t numOfOutput, SExprInfo** pExprInfo,
|
||||
SSqlExpr** pExprMsg, SColumnInfo* pTagCols, int32_t queryType, void* pMsg) {
|
||||
*pExprInfo = NULL;
|
||||
|
@ -6605,13 +6666,13 @@ int32_t createIndirectQueryFuncExprFromMsg(SQueryTableMsg* pQueryMsg, int32_t nu
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
SSqlGroupbyExpr *createGroupbyExprFromMsg(SQueryTableMsg *pQueryMsg, SColIndex *pColIndex, int32_t *code) {
|
||||
SGroupbyExpr *createGroupbyExprFromMsg(SQueryTableMsg *pQueryMsg, SColIndex *pColIndex, int32_t *code) {
|
||||
if (pQueryMsg->numOfGroupCols == 0) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// using group by tag columns
|
||||
SSqlGroupbyExpr *pGroupbyExpr = (SSqlGroupbyExpr *)calloc(1, sizeof(SSqlGroupbyExpr));
|
||||
SGroupbyExpr *pGroupbyExpr = (SGroupbyExpr *)calloc(1, sizeof(SGroupbyExpr));
|
||||
if (pGroupbyExpr == NULL) {
|
||||
*code = TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
return NULL;
|
||||
|
@ -6629,8 +6690,7 @@ SSqlGroupbyExpr *createGroupbyExprFromMsg(SQueryTableMsg *pQueryMsg, SColIndex *
|
|||
return pGroupbyExpr;
|
||||
}
|
||||
|
||||
static int32_t doCreateFilterInfo(SColumnInfo* pCols, int32_t numOfCols, int32_t numOfFilterCols,
|
||||
SSingleColumnFilterInfo** pFilterInfo, uint64_t qId) {
|
||||
int32_t doCreateFilterInfo(SColumnInfo* pCols, int32_t numOfCols, int32_t numOfFilterCols, SSingleColumnFilterInfo** pFilterInfo, uint64_t qId) {
|
||||
*pFilterInfo = calloc(1, sizeof(SSingleColumnFilterInfo) * numOfFilterCols);
|
||||
if (pFilterInfo == NULL) {
|
||||
return TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
|
@ -6687,7 +6747,7 @@ void* doDestroyFilterInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFil
|
|||
return NULL;
|
||||
}
|
||||
|
||||
static int32_t createFilterInfo(SQueryAttr* pQueryAttr, uint64_t qId) {
|
||||
int32_t createFilterInfo(SQueryAttr* pQueryAttr, uint64_t qId) {
|
||||
for (int32_t i = 0; i < pQueryAttr->numOfCols; ++i) {
|
||||
if (pQueryAttr->tableCols[i].flist.numOfFilters > 0) {
|
||||
pQueryAttr->numOfFilterCols++;
|
||||
|
@ -6772,7 +6832,7 @@ FORCE_INLINE bool checkQIdEqual(void *qHandle, uint64_t qId) {
|
|||
return ((SQInfo *)qHandle)->qId == qId;
|
||||
}
|
||||
|
||||
SQInfo* createQInfoImpl(SQueryTableMsg* pQueryMsg, SSqlGroupbyExpr* pGroupbyExpr, SExprInfo* pExprs,
|
||||
SQInfo* createQInfoImpl(SQueryTableMsg* pQueryMsg, SGroupbyExpr* pGroupbyExpr, SExprInfo* pExprs,
|
||||
SExprInfo* pSecExprs, STableGroupInfo* pTableGroupInfo, SColumnInfo* pTagCols, int32_t vgId,
|
||||
char* sql, uint64_t *qId) {
|
||||
int16_t numOfCols = pQueryMsg->numOfCols;
|
||||
|
@ -7075,7 +7135,7 @@ static void doDestroyTableQueryInfo(STableGroupInfo* pTableqinfoGroupInfo) {
|
|||
pTableqinfoGroupInfo->numOfTables = 0;
|
||||
}
|
||||
|
||||
static void* destroyQueryFuncExpr(SExprInfo* pExprInfo, int32_t numOfExpr) {
|
||||
void* destroyQueryFuncExpr(SExprInfo* pExprInfo, int32_t numOfExpr) {
|
||||
if (pExprInfo == NULL) {
|
||||
assert(numOfExpr == 0);
|
||||
return NULL;
|
||||
|
@ -7099,6 +7159,20 @@ static void* destroyQueryFuncExpr(SExprInfo* pExprInfo, int32_t numOfExpr) {
|
|||
return NULL;
|
||||
}
|
||||
|
||||
void* freeColumnInfo(SColumnInfo* pColumnInfo, int32_t numOfCols) {
|
||||
if (pColumnInfo != NULL) {
|
||||
assert(numOfCols >= 0);
|
||||
|
||||
for (int32_t i = 0; i < numOfCols; i++) {
|
||||
freeColumnFilterInfo(pColumnInfo[i].flist.filterInfo, pColumnInfo[i].flist.numOfFilters);
|
||||
}
|
||||
|
||||
tfree(pColumnInfo);
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
void freeQInfo(SQInfo *pQInfo) {
|
||||
if (!isValidQInfo(pQInfo)) {
|
||||
return;
|
||||
|
@ -7283,13 +7357,7 @@ void freeQueryAttr(SQueryAttr* pQueryAttr) {
|
|||
tfree(pQueryAttr->tagColList);
|
||||
tfree(pQueryAttr->pFilterInfo);
|
||||
|
||||
if (pQueryAttr->tableCols != NULL) {
|
||||
for (int32_t i = 0; i < pQueryAttr->numOfCols; i++) {
|
||||
SColumnInfo* column = pQueryAttr->tableCols + i;
|
||||
freeColumnFilterInfo(column->flist.filterInfo, column->flist.numOfFilters);
|
||||
}
|
||||
tfree(pQueryAttr->tableCols);
|
||||
}
|
||||
pQueryAttr->tableCols = freeColumnInfo(pQueryAttr->tableCols, pQueryAttr->numOfCols);
|
||||
|
||||
if (pQueryAttr->pGroupbyExpr != NULL) {
|
||||
taosArrayDestroy(pQueryAttr->pGroupbyExpr->columnInfo);
|
||||
|
|
|
@ -363,10 +363,6 @@ SFillInfo* taosCreateFillInfo(int32_t order, TSKEY skey, int32_t numOfTags, int3
|
|||
pFillInfo->rowSize = setTagColumnInfo(pFillInfo, pFillInfo->numOfCols, pFillInfo->alloc);
|
||||
assert(pFillInfo->rowSize > 0);
|
||||
|
||||
for(int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
|
||||
pFillInfo->pData[i] = malloc(pFillInfo->pFillCol[i].col.bytes * pFillInfo->alloc);
|
||||
}
|
||||
|
||||
return pFillInfo;
|
||||
}
|
||||
|
||||
|
@ -392,10 +388,6 @@ void* taosDestroyFillInfo(SFillInfo* pFillInfo) {
|
|||
tfree(pFillInfo->pTags[i].tagVal);
|
||||
}
|
||||
|
||||
for(int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
|
||||
tfree(pFillInfo->pData[i]);
|
||||
}
|
||||
|
||||
tfree(pFillInfo->pTags);
|
||||
|
||||
tfree(pFillInfo->pData);
|
||||
|
@ -417,17 +409,6 @@ void taosFillSetStartInfo(SFillInfo* pFillInfo, int32_t numOfRows, TSKEY endKey)
|
|||
|
||||
pFillInfo->index = 0;
|
||||
pFillInfo->numOfRows = numOfRows;
|
||||
|
||||
// ensure the space
|
||||
if (pFillInfo->alloc < numOfRows) {
|
||||
for(int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
|
||||
char* tmp = realloc(pFillInfo->pData[i], numOfRows*pFillInfo->pFillCol[i].col.bytes);
|
||||
assert(tmp != NULL); // todo handle error
|
||||
|
||||
memset(tmp, 0, numOfRows*pFillInfo->pFillCol[i].col.bytes);
|
||||
pFillInfo->pData[i] = tmp;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
void taosFillSetInputDataBlock(SFillInfo* pFillInfo, const SSDataBlock* pInput) {
|
||||
|
@ -435,16 +416,7 @@ void taosFillSetInputDataBlock(SFillInfo* pFillInfo, const SSDataBlock* pInput)
|
|||
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
|
||||
|
||||
SColumnInfoData* pColData = taosArrayGet(pInput->pDataBlock, i);
|
||||
// pFillInfo->pData[i] = pColData->pData;
|
||||
if (pInput->info.rows > pFillInfo->alloc) {
|
||||
char* t = realloc(pFillInfo->pData[i], pColData->info.bytes * pInput->info.rows);
|
||||
assert(t != NULL);
|
||||
|
||||
pFillInfo->pData[i] = t;
|
||||
pFillInfo->alloc = pInput->info.rows;
|
||||
}
|
||||
|
||||
memcpy(pFillInfo->pData[i], pColData->pData, pColData->info.bytes * pInput->info.rows);
|
||||
pFillInfo->pData[i] = pColData->pData;
|
||||
|
||||
if (TSDB_COL_IS_TAG(pCol->flag)/* || IS_VAR_DATA_TYPE(pCol->col.type)*/) { // copy the tag value to tag value buffer
|
||||
SFillTagColInfo* pTag = &pFillInfo->pTags[pCol->tagIndex];
|
||||
|
@ -454,31 +426,6 @@ void taosFillSetInputDataBlock(SFillInfo* pFillInfo, const SSDataBlock* pInput)
|
|||
}
|
||||
}
|
||||
|
||||
void taosFillCopyInputDataFromOneFilePage(SFillInfo* pFillInfo, const tFilePage* pInput) {
|
||||
assert(pFillInfo->numOfRows == pInput->num);
|
||||
|
||||
for(int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
|
||||
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
|
||||
|
||||
const char* data = pInput->data + pCol->col.offset * pInput->num;
|
||||
if (pInput->num > pFillInfo->alloc) {
|
||||
char* t = realloc(pFillInfo->pData[i], (size_t)(pCol->col.bytes * pInput->num));
|
||||
assert(t != NULL);
|
||||
|
||||
pFillInfo->pData[i] = t;
|
||||
pFillInfo->alloc = (int32_t)pInput->num;
|
||||
}
|
||||
|
||||
memcpy(pFillInfo->pData[i], data, (size_t)(pCol->col.bytes * pInput->num));
|
||||
|
||||
if (TSDB_COL_IS_TAG(pCol->flag)/* || IS_VAR_DATA_TYPE(pCol->col.type)*/) { // copy the tag value to tag value buffer
|
||||
SFillTagColInfo* pTag = &pFillInfo->pTags[pCol->tagIndex];
|
||||
assert (pTag->col.colId == pCol->col.colId);
|
||||
memcpy(pTag->tagVal, data, pCol->col.bytes); // TODO not memcpy??
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
bool taosFillHasMoreResults(SFillInfo* pFillInfo) {
|
||||
int32_t remain = taosNumOfRemainRows(pFillInfo);
|
||||
if (remain > 0) {
|
||||
|
|
|
@ -1,42 +1,521 @@
|
|||
#include "os.h"
|
||||
#include "tsclient.h"
|
||||
#include "tschemautil.h"
|
||||
#include "qPlan.h"
|
||||
#include "qExecutor.h"
|
||||
#include "qUtil.h"
|
||||
#include "texpr.h"
|
||||
#include "tscUtil.h"
|
||||
#include "tsclient.h"
|
||||
|
||||
#define QNODE_PROJECT 1
|
||||
#define QNODE_FILTER 2
|
||||
#define QNODE_RELATION 3
|
||||
#define QNODE_AGGREGATE 4
|
||||
#define QNODE_GROUPBY 5
|
||||
#define QNODE_LIMIT 6
|
||||
#define QNODE_JOIN 7
|
||||
#define QNODE_DIST 8
|
||||
#define QNODE_SORT 9
|
||||
#define QNODE_UNIONALL 10
|
||||
#define QNODE_TIMEWINDOW 11
|
||||
#define QNODE_TAGSCAN 1
|
||||
#define QNODE_TABLESCAN 2
|
||||
#define QNODE_PROJECT 3
|
||||
#define QNODE_AGGREGATE 4
|
||||
#define QNODE_GROUPBY 5
|
||||
#define QNODE_LIMIT 6
|
||||
#define QNODE_JOIN 7
|
||||
#define QNODE_DISTINCT 8
|
||||
#define QNODE_SORT 9
|
||||
#define QNODE_UNIONALL 10
|
||||
#define QNODE_TIMEWINDOW 11
|
||||
#define QNODE_SESSIONWINDOW 12
|
||||
#define QNODE_FILL 13
|
||||
|
||||
typedef struct SQueryNode {
|
||||
int32_t type; // the type of logic node
|
||||
char *name; // the name of logic node
|
||||
typedef struct SFillEssInfo {
|
||||
int32_t fillType; // fill type
|
||||
int64_t *val; // fill value
|
||||
} SFillEssInfo;
|
||||
|
||||
SSchema *pSchema; // the schema of the input SSDatablock
|
||||
int32_t numOfCols; // number of input columns
|
||||
SExprInfo *pExpr; // the query functions or sql aggregations
|
||||
int32_t numOfOutput; // number of result columns, which is also the number of pExprs
|
||||
typedef struct SJoinCond {
|
||||
bool tagExists; // denote if tag condition exists or not
|
||||
SColumn *tagCond[2];
|
||||
SColumn *colCond[2];
|
||||
} SJoinCond;
|
||||
|
||||
// previous operator that generates the result for the current node to process
|
||||
// in case of join, multiple prev nodes exist.
|
||||
struct SQueryNode* prevNode;
|
||||
struct SQueryNode* nextNode;
|
||||
} SQueryNode;
|
||||
static SQueryNode* createQueryNode(int32_t type, const char* name, SQueryNode** prev,
|
||||
int32_t numOfPrev, SExprInfo** pExpr, int32_t numOfOutput, SQueryTableInfo* pTableInfo,
|
||||
void* pExtInfo) {
|
||||
SQueryNode* pNode = calloc(1, sizeof(SQueryNode));
|
||||
|
||||
pNode->info.type = type;
|
||||
pNode->info.name = strdup(name);
|
||||
|
||||
if (pTableInfo->id.uid != 0) { // it is a true table
|
||||
pNode->tableInfo.id = pTableInfo->id;
|
||||
pNode->tableInfo.tableName = strdup(pTableInfo->tableName);
|
||||
}
|
||||
|
||||
pNode->numOfOutput = numOfOutput;
|
||||
pNode->pExpr = calloc(numOfOutput, sizeof(SExprInfo));
|
||||
for(int32_t i = 0; i < numOfOutput; ++i) {
|
||||
tscExprAssign(&pNode->pExpr[i], pExpr[i]);
|
||||
}
|
||||
|
||||
pNode->pPrevNodes = taosArrayInit(4, POINTER_BYTES);
|
||||
for(int32_t i = 0; i < numOfPrev; ++i) {
|
||||
taosArrayPush(pNode->pPrevNodes, &prev[i]);
|
||||
}
|
||||
|
||||
switch(type) {
|
||||
case QNODE_TABLESCAN: {
|
||||
STimeWindow* window = calloc(1, sizeof(STimeWindow));
|
||||
memcpy(window, pExtInfo, sizeof(STimeWindow));
|
||||
pNode->pExtInfo = window;
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_TIMEWINDOW: {
|
||||
SInterval* pInterval = calloc(1, sizeof(SInterval));
|
||||
pNode->pExtInfo = pInterval;
|
||||
memcpy(pInterval, pExtInfo, sizeof(SInterval));
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_GROUPBY: {
|
||||
SGroupbyExpr* p = (SGroupbyExpr*) pExtInfo;
|
||||
SGroupbyExpr* pGroupbyExpr = calloc(1, sizeof(SGroupbyExpr));
|
||||
|
||||
pGroupbyExpr->tableIndex = p->tableIndex;
|
||||
pGroupbyExpr->orderType = p->orderType;
|
||||
pGroupbyExpr->orderIndex = p->orderIndex;
|
||||
pGroupbyExpr->numOfGroupCols = p->numOfGroupCols;
|
||||
pGroupbyExpr->columnInfo = taosArrayDup(p->columnInfo);
|
||||
pNode->pExtInfo = pGroupbyExpr;
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_FILL: { // todo !!
|
||||
pNode->pExtInfo = pExtInfo;
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_LIMIT: {
|
||||
pNode->pExtInfo = calloc(1, sizeof(SLimitVal));
|
||||
memcpy(pNode->pExtInfo, pExtInfo, sizeof(SLimitVal));
|
||||
break;
|
||||
}
|
||||
}
|
||||
return pNode;
|
||||
}
|
||||
|
||||
static SQueryNode* doAddTableColumnNode(SQueryInfo* pQueryInfo, STableMetaInfo* pTableMetaInfo, SQueryTableInfo* info,
|
||||
SArray* pExprs, SArray* tableCols) {
|
||||
if (pQueryInfo->onlyTagQuery) {
|
||||
int32_t num = (int32_t) taosArrayGetSize(pExprs);
|
||||
SQueryNode* pNode = createQueryNode(QNODE_TAGSCAN, "TableTagScan", NULL, 0, pExprs->pData, num, info, NULL);
|
||||
|
||||
if (pQueryInfo->distinctTag) {
|
||||
pNode = createQueryNode(QNODE_DISTINCT, "Distinct", &pNode, 1, pExprs->pData, num, info, NULL);
|
||||
}
|
||||
|
||||
return pNode;
|
||||
}
|
||||
|
||||
STimeWindow* window = &pQueryInfo->window;
|
||||
SQueryNode* pNode = createQueryNode(QNODE_TABLESCAN, "TableScan", NULL, 0, NULL, 0,
|
||||
info, window);
|
||||
if (pQueryInfo->projectionQuery) {
|
||||
int32_t numOfOutput = (int32_t) taosArrayGetSize(pExprs);
|
||||
pNode = createQueryNode(QNODE_PROJECT, "Projection", &pNode, 1, pExprs->pData, numOfOutput, info, NULL);
|
||||
} else {
|
||||
// table source column projection, generate the projection expr
|
||||
int32_t numOfCols = (int32_t) taosArrayGetSize(tableCols);
|
||||
SExprInfo** pExpr = calloc(numOfCols, POINTER_BYTES);
|
||||
SSchema* pSchema = pTableMetaInfo->pTableMeta->schema;
|
||||
|
||||
for (int32_t i = 0; i < numOfCols; ++i) {
|
||||
SColumn* pCol = taosArrayGetP(tableCols, i);
|
||||
|
||||
SColumnIndex index = {.tableIndex = 0, .columnIndex = pCol->columnIndex};
|
||||
SExprInfo* p = tscExprCreate(pQueryInfo, TSDB_FUNC_PRJ, &index, pCol->info.type, pCol->info.bytes,
|
||||
pCol->info.colId, 0, TSDB_COL_NORMAL);
|
||||
strncpy(p->base.aliasName, pSchema[pCol->columnIndex].name, tListLen(p->base.aliasName));
|
||||
|
||||
pExpr[i] = p;
|
||||
}
|
||||
|
||||
pNode = createQueryNode(QNODE_PROJECT, "Projection", &pNode, 1, pExpr, numOfCols, info, NULL);
|
||||
for (int32_t i = 0; i < numOfCols; ++i) {
|
||||
destroyQueryFuncExpr(pExpr[i], 1);
|
||||
}
|
||||
tfree(pExpr);
|
||||
}
|
||||
|
||||
return pNode;
|
||||
}
|
||||
|
||||
static SQueryNode* doCreateQueryPlanForOneTableImpl(SQueryInfo* pQueryInfo, SQueryNode* pNode, SQueryTableInfo* info,
|
||||
SArray* pExprs) {
|
||||
// check for aggregation
|
||||
if (pQueryInfo->interval.interval > 0) {
|
||||
int32_t numOfOutput = (int32_t) taosArrayGetSize(pExprs);
|
||||
|
||||
pNode = createQueryNode(QNODE_TIMEWINDOW, "TimeWindowAgg", &pNode, 1, pExprs->pData, numOfOutput, info,
|
||||
&pQueryInfo->interval);
|
||||
} else if (pQueryInfo->groupbyColumn) {
|
||||
int32_t numOfOutput = (int32_t) taosArrayGetSize(pExprs);
|
||||
pNode = createQueryNode(QNODE_GROUPBY, "Groupby", &pNode, 1, pExprs->pData, numOfOutput, info,
|
||||
&pQueryInfo->groupbyExpr);
|
||||
} else if (pQueryInfo->sessionWindow.gap > 0) {
|
||||
pNode = createQueryNode(QNODE_SESSIONWINDOW, "SessionWindowAgg", &pNode, 1, NULL, 0, info, NULL);
|
||||
} else if (pQueryInfo->simpleAgg) {
|
||||
int32_t numOfOutput = (int32_t) taosArrayGetSize(pExprs);
|
||||
pNode = createQueryNode(QNODE_AGGREGATE, "Aggregate", &pNode, 1, pExprs->pData, numOfOutput, info, NULL);
|
||||
}
|
||||
|
||||
if (pQueryInfo->havingFieldNum > 0 || pQueryInfo->arithmeticOnAgg) {
|
||||
int32_t numOfExpr = (int32_t) taosArrayGetSize(pQueryInfo->exprList1);
|
||||
pNode =
|
||||
createQueryNode(QNODE_PROJECT, "Projection", &pNode, 1, pQueryInfo->exprList1->pData, numOfExpr, info, NULL);
|
||||
}
|
||||
|
||||
if (pQueryInfo->fillType != TSDB_FILL_NONE) {
|
||||
SFillEssInfo* pInfo = calloc(1, sizeof(SFillEssInfo));
|
||||
pInfo->fillType = pQueryInfo->fillType;
|
||||
pInfo->val = calloc(pNode->numOfOutput, sizeof(int64_t));
|
||||
memcpy(pInfo->val, pQueryInfo->fillVal, pNode->numOfOutput);
|
||||
|
||||
pNode = createQueryNode(QNODE_FILL, "Fill", &pNode, 1, NULL, 0, info, pInfo);
|
||||
}
|
||||
|
||||
|
||||
if (pQueryInfo->limit.limit != -1 || pQueryInfo->limit.offset != 0) {
|
||||
pNode = createQueryNode(QNODE_LIMIT, "Limit", &pNode, 1, NULL, 0, info, &pQueryInfo->limit);
|
||||
}
|
||||
|
||||
return pNode;
|
||||
}
|
||||
|
||||
static SQueryNode* doCreateQueryPlanForOneTable(SQueryInfo* pQueryInfo, STableMetaInfo* pTableMetaInfo, SArray* pExprs,
|
||||
SArray* tableCols) {
|
||||
char name[TSDB_TABLE_FNAME_LEN] = {0};
|
||||
tNameExtractFullName(&pTableMetaInfo->name, name);
|
||||
|
||||
SQueryTableInfo info = {.tableName = strdup(name), .id = pTableMetaInfo->pTableMeta->id,};
|
||||
|
||||
// handle the only tag query
|
||||
SQueryNode* pNode = doAddTableColumnNode(pQueryInfo, pTableMetaInfo, &info, pExprs, tableCols);
|
||||
if (pQueryInfo->onlyTagQuery) {
|
||||
tfree(info.tableName);
|
||||
return pNode;
|
||||
}
|
||||
|
||||
SQueryNode* pNode1 = doCreateQueryPlanForOneTableImpl(pQueryInfo, pNode, &info, pExprs);
|
||||
tfree(info.tableName);
|
||||
return pNode1;
|
||||
}
|
||||
|
||||
SArray* createQueryPlanImpl(SQueryInfo* pQueryInfo) {
|
||||
SArray* upstream = NULL;
|
||||
|
||||
if (pQueryInfo->pUpstream != NULL && taosArrayGetSize(pQueryInfo->pUpstream) > 0) { // subquery in the from clause
|
||||
upstream = taosArrayInit(4, POINTER_BYTES);
|
||||
|
||||
size_t size = taosArrayGetSize(pQueryInfo->pUpstream);
|
||||
for(int32_t i = 0; i < size; ++i) {
|
||||
SQueryInfo* pq = taosArrayGet(pQueryInfo->pUpstream, i);
|
||||
SArray* p = createQueryPlanImpl(pq);
|
||||
taosArrayAddBatch(upstream, p->pData, (int32_t) taosArrayGetSize(p));
|
||||
}
|
||||
}
|
||||
|
||||
if (pQueryInfo->numOfTables > 1) { // it is a join query
|
||||
// 1. separate the select clause according to table
|
||||
upstream = taosArrayInit(5, POINTER_BYTES);
|
||||
|
||||
for(int32_t i = 0; i < pQueryInfo->numOfTables; ++i) {
|
||||
STableMetaInfo* pTableMetaInfo = pQueryInfo->pTableMetaInfo[i];
|
||||
uint64_t uid = pTableMetaInfo->pTableMeta->id.uid;
|
||||
|
||||
SArray* exprList = taosArrayInit(4, POINTER_BYTES);
|
||||
if (tscExprCopy(exprList, pQueryInfo->exprList, uid, true) != 0) {
|
||||
terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
|
||||
exit(-1);
|
||||
}
|
||||
|
||||
// 2. create the query execution node
|
||||
char name[TSDB_TABLE_FNAME_LEN] = {0};
|
||||
tNameExtractFullName(&pTableMetaInfo->name, name);
|
||||
SQueryTableInfo info = {.tableName = strdup(name), .id = pTableMetaInfo->pTableMeta->id,};
|
||||
|
||||
// 3. get the required table column list
|
||||
SArray* tableColumnList = taosArrayInit(4, sizeof(SColumn));
|
||||
tscColumnListCopy(tableColumnList, pQueryInfo->colList, uid);
|
||||
|
||||
// 4. add the projection query node
|
||||
SQueryNode* pNode = doAddTableColumnNode(pQueryInfo, pTableMetaInfo, &info, exprList, tableColumnList);
|
||||
taosArrayPush(upstream, &pNode);
|
||||
}
|
||||
|
||||
// 3. add the join node here
|
||||
SQueryTableInfo info = {0};
|
||||
int32_t num = (int32_t) taosArrayGetSize(pQueryInfo->exprList);
|
||||
SQueryNode* pNode = createQueryNode(QNODE_JOIN, "Join", upstream->pData, pQueryInfo->numOfTables,
|
||||
pQueryInfo->exprList->pData, num, &info, NULL);
|
||||
|
||||
// 4. add the aggregation or projection execution node
|
||||
pNode = doCreateQueryPlanForOneTableImpl(pQueryInfo, pNode, &info, pQueryInfo->exprList);
|
||||
upstream = taosArrayInit(5, POINTER_BYTES);
|
||||
taosArrayPush(upstream, &pNode);
|
||||
} else { // only one table, normal query process
|
||||
STableMetaInfo* pTableMetaInfo = pQueryInfo->pTableMetaInfo[0];
|
||||
SQueryNode* pNode = doCreateQueryPlanForOneTable(pQueryInfo, pTableMetaInfo, pQueryInfo->exprList, pQueryInfo->colList);
|
||||
upstream = taosArrayInit(5, POINTER_BYTES);
|
||||
taosArrayPush(upstream, &pNode);
|
||||
}
|
||||
|
||||
return upstream;
|
||||
}
|
||||
|
||||
// TODO create the query plan
|
||||
SQueryNode* qCreateQueryPlan(SQueryInfo* pQueryInfo) {
|
||||
SArray* upstream = createQueryPlanImpl(pQueryInfo);
|
||||
assert(taosArrayGetSize(upstream) == 1);
|
||||
|
||||
SQueryNode* p = taosArrayGetP(upstream, 0);
|
||||
taosArrayDestroy(upstream);
|
||||
|
||||
return p;
|
||||
}
|
||||
|
||||
static void doDestroyQueryNode(SQueryNode* pQueryNode) {
|
||||
tfree(pQueryNode->pExtInfo);
|
||||
tfree(pQueryNode->pSchema);
|
||||
tfree(pQueryNode->info.name);
|
||||
|
||||
tfree(pQueryNode->tableInfo.tableName);
|
||||
|
||||
pQueryNode->pExpr = destroyQueryFuncExpr(pQueryNode->pExpr, pQueryNode->numOfOutput);
|
||||
|
||||
if (pQueryNode->pPrevNodes != NULL) {
|
||||
int32_t size = (int32_t) taosArrayGetSize(pQueryNode->pPrevNodes);
|
||||
for(int32_t i = 0; i < size; ++i) {
|
||||
SQueryNode* p = taosArrayGetP(pQueryNode->pPrevNodes, i);
|
||||
doDestroyQueryNode(p);
|
||||
}
|
||||
|
||||
taosArrayDestroy(pQueryNode->pPrevNodes);
|
||||
}
|
||||
|
||||
tfree(pQueryNode);
|
||||
}
|
||||
|
||||
void* qDestroyQueryPlan(SQueryNode* pQueryNode) {
|
||||
if (pQueryNode == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
doDestroyQueryNode(pQueryNode);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
char* queryPlanToString() {
|
||||
return NULL;
|
||||
bool hasAliasName(SExprInfo* pExpr) {
|
||||
assert(pExpr != NULL);
|
||||
return strncmp(pExpr->base.token, pExpr->base.aliasName, tListLen(pExpr->base.aliasName)) != 0;
|
||||
}
|
||||
|
||||
static int32_t doPrintPlan(char* buf, SQueryNode* pQueryNode, int32_t level, int32_t totalLen) {
|
||||
if (level > 0) {
|
||||
sprintf(buf + totalLen, "%*c", level, ' ');
|
||||
totalLen += level;
|
||||
}
|
||||
|
||||
int32_t len1 = sprintf(buf + totalLen, "%s(", pQueryNode->info.name);
|
||||
int32_t len = len1 + totalLen;
|
||||
|
||||
switch(pQueryNode->info.type) {
|
||||
case QNODE_TABLESCAN: {
|
||||
STimeWindow* win = (STimeWindow*)pQueryNode->pExtInfo;
|
||||
len1 = sprintf(buf + len, "%s #0x%" PRIx64 ") time_range: %" PRId64 " - %" PRId64 "\n",
|
||||
pQueryNode->tableInfo.tableName, pQueryNode->tableInfo.id.uid, win->skey, win->ekey);
|
||||
len += len1;
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_PROJECT: {
|
||||
len1 = sprintf(buf + len, "cols: ");
|
||||
len += len1;
|
||||
|
||||
for(int32_t i = 0; i < pQueryNode->numOfOutput; ++i) {
|
||||
SSqlExpr* p = &pQueryNode->pExpr[i].base;
|
||||
len1 = sprintf(buf + len, "[%s #%d]", p->aliasName, p->resColId);
|
||||
len += len1;
|
||||
|
||||
if (i < pQueryNode->numOfOutput - 1) {
|
||||
len1 = sprintf(buf + len, ", ");
|
||||
len += len1;
|
||||
}
|
||||
}
|
||||
|
||||
len1 = sprintf(buf + len, ")");
|
||||
len += len1;
|
||||
|
||||
//todo print filter info
|
||||
len1 = sprintf(buf + len, " filters:(nil)\n");
|
||||
len += len1;
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_AGGREGATE: {
|
||||
for(int32_t i = 0; i < pQueryNode->numOfOutput; ++i) {
|
||||
SSqlExpr* pExpr = &pQueryNode->pExpr[i].base;
|
||||
if (hasAliasName(&pQueryNode->pExpr[i])) {
|
||||
len1 = sprintf(buf + len,"[%s #%s]", pExpr->token, pExpr->aliasName);
|
||||
} else {
|
||||
len1 = sprintf(buf + len,"[%s]", pExpr->token);
|
||||
}
|
||||
|
||||
len += len1;
|
||||
if (i < pQueryNode->numOfOutput - 1) {
|
||||
len1 = sprintf(buf + len, ", ");
|
||||
len += len1;
|
||||
}
|
||||
}
|
||||
|
||||
len1 = sprintf(buf + len, ")\n");
|
||||
len += len1;
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_TIMEWINDOW: {
|
||||
for(int32_t i = 0; i < pQueryNode->numOfOutput; ++i) {
|
||||
SSqlExpr* pExpr = &pQueryNode->pExpr[i].base;
|
||||
if (hasAliasName(&pQueryNode->pExpr[i])) {
|
||||
len1 = sprintf(buf + len,"[%s #%s]", pExpr->token, pExpr->aliasName);
|
||||
} else {
|
||||
len1 = sprintf(buf + len,"[%s]", pExpr->token);
|
||||
}
|
||||
|
||||
len += len1;
|
||||
if (i < pQueryNode->numOfOutput - 1) {
|
||||
len1 = sprintf(buf + len,", ");
|
||||
len += len1;
|
||||
}
|
||||
}
|
||||
|
||||
len1 = sprintf(buf + len,") ");
|
||||
len += len1;
|
||||
|
||||
SInterval* pInterval = pQueryNode->pExtInfo;
|
||||
len1 = sprintf(buf + len, "interval:%" PRId64 "(%c), sliding:%" PRId64 "(%c), offset:%" PRId64 "\n",
|
||||
pInterval->interval, pInterval->intervalUnit, pInterval->sliding, pInterval->slidingUnit,
|
||||
pInterval->offset);
|
||||
len += len1;
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_GROUPBY: { // todo hide the invisible column
|
||||
for(int32_t i = 0; i < pQueryNode->numOfOutput; ++i) {
|
||||
SSqlExpr* pExpr = &pQueryNode->pExpr[i].base;
|
||||
|
||||
if (hasAliasName(&pQueryNode->pExpr[i])) {
|
||||
len1 = sprintf(buf + len,"[%s #%s]", pExpr->token, pExpr->aliasName);
|
||||
} else {
|
||||
len1 = sprintf(buf + len,"[%s]", pExpr->token);
|
||||
}
|
||||
|
||||
len += len1;
|
||||
if (i < pQueryNode->numOfOutput - 1) {
|
||||
len1 = sprintf(buf + len,", ");
|
||||
len += len1;
|
||||
}
|
||||
}
|
||||
|
||||
SGroupbyExpr* pGroupbyExpr = pQueryNode->pExtInfo;
|
||||
SColIndex* pIndex = taosArrayGet(pGroupbyExpr->columnInfo, 0);
|
||||
|
||||
len1 = sprintf(buf + len,") groupby_col: [%s #%d]\n", pIndex->name, pIndex->colId);
|
||||
len += len1;
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_FILL: {
|
||||
SFillEssInfo* pEssInfo = pQueryNode->pExtInfo;
|
||||
len1 = sprintf(buf + len,"%d", pEssInfo->fillType);
|
||||
len += len1;
|
||||
|
||||
if (pEssInfo->fillType == TSDB_FILL_SET_VALUE) {
|
||||
len1 = sprintf(buf + len,", val:");
|
||||
len += len1;
|
||||
|
||||
// todo get the correct fill data type
|
||||
for(int32_t i = 0; i < pQueryNode->numOfOutput; ++i) {
|
||||
len1 = sprintf(buf + len,"%"PRId64, pEssInfo->val[i]);
|
||||
len += len1;
|
||||
|
||||
if (i < pQueryNode->numOfOutput - 1) {
|
||||
len1 = sprintf(buf + len,", ");
|
||||
len += len1;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
len1 = sprintf(buf + len,")\n");
|
||||
len += len1;
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_LIMIT: {
|
||||
SLimitVal* pVal = pQueryNode->pExtInfo;
|
||||
len1 = sprintf(buf + len,"limit: %"PRId64", offset: %"PRId64")\n", pVal->limit, pVal->offset);
|
||||
len += len1;
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_DISTINCT:
|
||||
case QNODE_TAGSCAN: {
|
||||
len1 = sprintf(buf + len,"cols: ");
|
||||
len += len1;
|
||||
|
||||
for(int32_t i = 0; i < pQueryNode->numOfOutput; ++i) {
|
||||
SSqlExpr* p = &pQueryNode->pExpr[i].base;
|
||||
len1 = sprintf(buf + len,"[%s #%d]", p->aliasName, p->resColId);
|
||||
len += len1;
|
||||
|
||||
if (i < pQueryNode->numOfOutput - 1) {
|
||||
len1 = sprintf(buf + len,", ");
|
||||
len += len1;
|
||||
}
|
||||
}
|
||||
|
||||
len1 = sprintf(buf + len,")\n");
|
||||
len += len1;
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
case QNODE_JOIN: {
|
||||
// print join condition
|
||||
len1 = sprintf(buf + len, ")\n");
|
||||
len += len1;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return len;
|
||||
}
|
||||
|
||||
int32_t queryPlanToStringImpl(char* buf, SQueryNode* pQueryNode, int32_t level, int32_t totalLen) {
|
||||
int32_t len = doPrintPlan(buf, pQueryNode, level, totalLen);
|
||||
|
||||
for(int32_t i = 0; i < taosArrayGetSize(pQueryNode->pPrevNodes); ++i) {
|
||||
SQueryNode* p1 = taosArrayGetP(pQueryNode->pPrevNodes, i);
|
||||
int32_t len1 = queryPlanToStringImpl(buf, p1, level + 1, len);
|
||||
len = len1;
|
||||
}
|
||||
|
||||
return len;
|
||||
}
|
||||
|
||||
char* queryPlanToString(SQueryNode* pQueryNode) {
|
||||
assert(pQueryNode);
|
||||
|
||||
char* buf = calloc(1, 4096);
|
||||
|
||||
int32_t len = sprintf(buf, "===== logic plan =====\n");
|
||||
queryPlanToStringImpl(buf, pQueryNode, 0, len);
|
||||
return buf;
|
||||
}
|
||||
|
||||
SQueryNode* queryPlanFromString() {
|
||||
|
@ -136,8 +615,13 @@ SArray* createExecOperatorPlan(SQueryAttr* pQueryAttr) {
|
|||
taosArrayPush(plan, &op);
|
||||
}
|
||||
} else { // diff/add/multiply/subtract/division
|
||||
op = OP_Arithmetic;
|
||||
taosArrayPush(plan, &op);
|
||||
if (pQueryAttr->numOfFilterCols > 0 && pQueryAttr->vgId == 0) { // todo refactor
|
||||
op = OP_Filter;
|
||||
taosArrayPush(plan, &op);
|
||||
} else {
|
||||
op = OP_Arithmetic;
|
||||
taosArrayPush(plan, &op);
|
||||
}
|
||||
}
|
||||
|
||||
if (pQueryAttr->limit.limit > 0 || pQueryAttr->limit.offset > 0) {
|
||||
|
|
|
@ -229,7 +229,6 @@ tSqlExpr *tSqlExprCreate(tSqlExpr *pLeft, tSqlExpr *pRight, int32_t optrType) {
|
|||
pExpr->flags &= ~(1 << EXPR_FLAG_TS_ERROR);
|
||||
}
|
||||
|
||||
|
||||
switch (optrType) {
|
||||
case TK_PLUS: {
|
||||
pExpr->value.i64 = pLeft->value.i64 + pRight->value.i64;
|
||||
|
@ -325,7 +324,6 @@ static FORCE_INLINE int32_t tStrTokenCompare(SStrToken* left, SStrToken* right)
|
|||
return (left->type == right->type && left->n == right->n && strncasecmp(left->z, right->z, left->n) == 0) ? 0 : 1;
|
||||
}
|
||||
|
||||
|
||||
int32_t tSqlExprCompare(tSqlExpr *left, tSqlExpr *right) {
|
||||
if ((left == NULL && right) || (left && right == NULL)) {
|
||||
return 1;
|
||||
|
@ -389,8 +387,6 @@ int32_t tSqlExprCompare(tSqlExpr *left, tSqlExpr *right) {
|
|||
return 0;
|
||||
}
|
||||
|
||||
|
||||
|
||||
tSqlExpr *tSqlExprClone(tSqlExpr *pSrc) {
|
||||
tSqlExpr *pExpr = calloc(1, sizeof(tSqlExpr));
|
||||
|
||||
|
@ -536,11 +532,11 @@ SArray *tVariantListInsert(SArray *pList, tVariant *pVar, uint8_t sortOrder, int
|
|||
SRelationInfo *setTableNameList(SRelationInfo* pRelationInfo, SStrToken *pName, SStrToken* pAlias) {
|
||||
if (pRelationInfo == NULL) {
|
||||
pRelationInfo = calloc(1, sizeof(SRelationInfo));
|
||||
pRelationInfo->list = taosArrayInit(4, sizeof(STableNamePair));
|
||||
pRelationInfo->list = taosArrayInit(4, sizeof(SRelElementPair));
|
||||
}
|
||||
|
||||
pRelationInfo->type = SQL_NODE_FROM_TABLELIST;
|
||||
STableNamePair p = {.name = *pName};
|
||||
SRelElementPair p = {.tableName = *pName};
|
||||
if (pAlias != NULL) {
|
||||
p.aliasName = *pAlias;
|
||||
} else {
|
||||
|
@ -551,18 +547,6 @@ SRelationInfo *setTableNameList(SRelationInfo* pRelationInfo, SStrToken *pName,
|
|||
return pRelationInfo;
|
||||
}
|
||||
|
||||
SRelationInfo* setSubquery(SRelationInfo* pRelationInfo, SArray* pList) {
|
||||
if (pRelationInfo == NULL) {
|
||||
pRelationInfo = calloc(1, sizeof(SRelationInfo));
|
||||
pRelationInfo->list = taosArrayInit(4, POINTER_BYTES);
|
||||
}
|
||||
|
||||
pRelationInfo->type = SQL_NODE_FROM_SUBQUERY;
|
||||
taosArrayPush(pRelationInfo->list, &pList);
|
||||
|
||||
return pRelationInfo;
|
||||
}
|
||||
|
||||
void* destroyRelationInfo(SRelationInfo* pRelationInfo) {
|
||||
if (pRelationInfo == NULL) {
|
||||
return NULL;
|
||||
|
@ -573,7 +557,7 @@ void* destroyRelationInfo(SRelationInfo* pRelationInfo) {
|
|||
} else {
|
||||
size_t size = taosArrayGetSize(pRelationInfo->list);
|
||||
for(int32_t i = 0; i < size; ++i) {
|
||||
SArray* pa = taosArrayGetP(pRelationInfo->list, 0);
|
||||
SArray* pa = taosArrayGetP(pRelationInfo->list, i);
|
||||
destroyAllSqlNode(pa);
|
||||
}
|
||||
taosArrayDestroy(pRelationInfo->list);
|
||||
|
@ -583,6 +567,24 @@ void* destroyRelationInfo(SRelationInfo* pRelationInfo) {
|
|||
return NULL;
|
||||
}
|
||||
|
||||
SRelationInfo* addSubqueryElem(SRelationInfo* pRelationInfo, SArray* pSub, SStrToken* pAlias) {
|
||||
if (pRelationInfo == NULL) {
|
||||
pRelationInfo = calloc(1, sizeof(SRelationInfo));
|
||||
pRelationInfo->list = taosArrayInit(4, sizeof(SRelElementPair));
|
||||
}
|
||||
|
||||
pRelationInfo->type = SQL_NODE_FROM_SUBQUERY;
|
||||
|
||||
SRelElementPair p = {.pSubquery = pSub};
|
||||
if (pAlias != NULL) {
|
||||
p.aliasName = *pAlias;
|
||||
} else {
|
||||
TPARSER_SET_NONE_TOKEN(p.aliasName);
|
||||
}
|
||||
|
||||
taosArrayPush(pRelationInfo->list, &p);
|
||||
return pRelationInfo;
|
||||
}
|
||||
|
||||
void tSetDbName(SStrToken *pCpxName, SStrToken *pDb) {
|
||||
pCpxName->type = pDb->type;
|
||||
|
@ -724,9 +726,9 @@ void tSetColumnType(TAOS_FIELD *pField, SStrToken *type) {
|
|||
* extract the select info out of sql string
|
||||
*/
|
||||
SSqlNode *tSetQuerySqlNode(SStrToken *pSelectToken, SArray *pSelNodeList, SRelationInfo *pFrom, tSqlExpr *pWhere,
|
||||
SArray *pGroupby, SArray *pSortOrder, SIntervalVal *pInterval,
|
||||
SSessionWindowVal *pSession, SStrToken *pSliding, SArray *pFill, SLimitVal *pLimit,
|
||||
SLimitVal *psLimit, tSqlExpr *pHaving) {
|
||||
SArray *pGroupby, SArray *pSortOrder, SIntervalVal *pInterval, SSessionWindowVal *pSession,
|
||||
SStrToken *pSliding, SArray *pFill, SLimitVal *pLimit, SLimitVal *psLimit,
|
||||
tSqlExpr *pHaving) {
|
||||
assert(pSelNodeList != NULL);
|
||||
|
||||
SSqlNode *pSqlNode = calloc(1, sizeof(SSqlNode));
|
||||
|
|
1667	src/query/src/sql.c
File diff suppressed because it is too large
|
@ -99,47 +99,47 @@ TEST(testCase, db_table_name) {
|
|||
EXPECT_EQ(testValidateName(t4), TSDB_CODE_SUCCESS);
|
||||
|
||||
char t5[] = "table.'def'";
|
||||
EXPECT_EQ(testValidateName(t5), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t5), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t6[] = "'table'.'def'";
|
||||
EXPECT_EQ(testValidateName(t6), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t6), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t7[] = "'_ab1234'.'def'";
|
||||
EXPECT_EQ(testValidateName(t7), TSDB_CODE_SUCCESS);
|
||||
printf("%s\n", t7);
|
||||
|
||||
char t8[] = "'_ab&^%1234'.'def'";
|
||||
EXPECT_EQ(testValidateName(t8), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t8), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t9[] = "'_123'.'gtest中文'";
|
||||
EXPECT_EQ(testValidateName(t9), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t9), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t10[] = "abc.'gtest中文'";
|
||||
EXPECT_EQ(testValidateName(t10), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t10), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t10_1[] = "abc.'中文gtest'";
|
||||
EXPECT_EQ(testValidateName(t10_1), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t10_1), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t11[] = "'192.168.0.1'.abc";
|
||||
EXPECT_EQ(testValidateName(t11), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t11), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t12[] = "192.168.0.1.abc";
|
||||
EXPECT_EQ(testValidateName(t12), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t12), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t13[] = "abc.";
|
||||
EXPECT_EQ(testValidateName(t13), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t13), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t14[] = ".abc";
|
||||
EXPECT_EQ(testValidateName(t14), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t14), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t15[] = ".'abc'";
|
||||
EXPECT_EQ(testValidateName(t15), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t15), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t16[] = ".abc'";
|
||||
EXPECT_EQ(testValidateName(t16), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t16), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t17[] = "123a.\"abc\"";
|
||||
EXPECT_EQ(testValidateName(t17), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t17), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
printf("%s\n", t17);
|
||||
|
||||
char t18[] = "a.\"abc\"";
|
||||
|
@ -147,13 +147,13 @@ TEST(testCase, db_table_name) {
|
|||
printf("%s\n", t18);
|
||||
|
||||
char t19[] = "'_ab1234'.'def'.'ab123'";
|
||||
EXPECT_EQ(testValidateName(t19), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t19), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t20[] = "'_ab1234*&^'";
|
||||
EXPECT_EQ(testValidateName(t20), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t20), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t21[] = "'1234_abc'";
|
||||
EXPECT_EQ(testValidateName(t21), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t21), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
|
||||
// =======Containing capital letters=================
|
||||
|
@ -167,10 +167,10 @@ TEST(testCase, db_table_name) {
|
|||
EXPECT_EQ(testValidateName(t32), TSDB_CODE_SUCCESS);
|
||||
|
||||
char t33[] = "'ABC.def";
|
||||
EXPECT_EQ(testValidateName(t33), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t33), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t33_0[] = "abc.DEF'";
|
||||
EXPECT_EQ(testValidateName(t33_0), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t33_0), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t34[] = "'ABC.def'";
|
||||
//int32_t tmp0 = testValidateName(t34);
|
||||
|
@ -193,136 +193,136 @@ TEST(testCase, db_table_name) {
|
|||
|
||||
// do not use key words
|
||||
char t39[] = "table.'DEF'";
|
||||
EXPECT_EQ(testValidateName(t39), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t39), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t40[] = "'table'.'DEF'";
|
||||
EXPECT_EQ(testValidateName(t40), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t40), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t41[] = "'_abXYZ1234'.'deFF'";
|
||||
EXPECT_EQ(testValidateName(t41), TSDB_CODE_SUCCESS);
|
||||
|
||||
char t42[] = "'_abDEF&^%1234'.'DIef'";
|
||||
EXPECT_EQ(testValidateName(t42), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t42), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t43[] = "'_123'.'Gtest中文'";
|
||||
EXPECT_EQ(testValidateName(t43), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t43), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t44[] = "'aABC'.'Gtest中文'";
|
||||
EXPECT_EQ(testValidateName(t44), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t44), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t45[] = "'ABC'.";
|
||||
EXPECT_EQ(testValidateName(t45), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t45), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t46[] = ".'ABC'";
|
||||
EXPECT_EQ(testValidateName(t46), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t46), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t47[] = "a.\"aTWc\"";
|
||||
EXPECT_EQ(testValidateName(t47), TSDB_CODE_SUCCESS);
|
||||
|
||||
// ================has space =================
|
||||
char t60[] = " ABC ";
|
||||
EXPECT_EQ(testValidateName(t60), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t60), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t60_1[] = " ABC ";
|
||||
EXPECT_EQ(testValidateName(t60_1), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t60_1), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t61[] = "' ABC '";
|
||||
EXPECT_EQ(testValidateName(t61), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t61), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t61_1[] = "' ABC '";
|
||||
EXPECT_EQ(testValidateName(t61_1), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t61_1), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t62[] = " ABC . def ";
|
||||
EXPECT_EQ(testValidateName(t62), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t62), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t63[] = "' ABC . def ";
|
||||
EXPECT_EQ(testValidateName(t63), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t63), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t63_0[] = " abc . DEF ' ";
|
||||
EXPECT_EQ(testValidateName(t63_0), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t63_0), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t64[] = " ' ABC . def ' ";
|
||||
//int32_t tmp1 = testValidateName(t64);
|
||||
EXPECT_EQ(testValidateName(t64), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t64), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t65[] = " ' ABC '. def ";
|
||||
EXPECT_EQ(testValidateName(t65), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t65), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t66[] = "' ABC '.' DEF '";
|
||||
EXPECT_EQ(testValidateName(t66), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t66), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t67[] = "abc . ' DEF '";
|
||||
EXPECT_EQ(testValidateName(t67), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t67), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t68[] = "' abc '.' DEF '";
|
||||
EXPECT_EQ(testValidateName(t68), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t68), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
// do not use key words
|
||||
char t69[] = "table.'DEF'";
|
||||
EXPECT_EQ(testValidateName(t69), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t69), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t70[] = "'table'.'DEF'";
|
||||
EXPECT_EQ(testValidateName(t70), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t70), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t71[] = "'_abXYZ1234 '.' deFF '";
|
||||
EXPECT_EQ(testValidateName(t71), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t71), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t72[] = "'_abDEF&^%1234'.' DIef'";
|
||||
EXPECT_EQ(testValidateName(t72), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t72), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t73[] = "'_123'.' Gtest中文'";
|
||||
EXPECT_EQ(testValidateName(t73), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t73), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t74[] = "' aABC'.'Gtest中文'";
|
||||
EXPECT_EQ(testValidateName(t74), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t74), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t75[] = "' ABC '.";
|
||||
EXPECT_EQ(testValidateName(t75), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t75), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t76[] = ".' ABC'";
|
||||
EXPECT_EQ(testValidateName(t76), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t76), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t77[] = " a . \"aTWc\" ";
|
||||
EXPECT_EQ(testValidateName(t77), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t77), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t78[] = " a.\"aTWc \"";
|
||||
EXPECT_EQ(testValidateName(t78), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t78), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
|
||||
// ===============muti string by space ===================
|
||||
// There's no such case.
|
||||
//char t160[] = "A BC";
|
||||
//EXPECT_EQ(testValidateName(t160), TSDB_CODE_TSC_INVALID_SQL);
|
||||
//EXPECT_EQ(testValidateName(t160), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
//printf("end:%s\n", t160);
|
||||
|
||||
// There's no such case.
|
||||
//char t161[] = "' A BC '";
|
||||
//EXPECT_EQ(testValidateName(t161), TSDB_CODE_TSC_INVALID_SQL);
|
||||
//EXPECT_EQ(testValidateName(t161), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t162[] = " AB C . de f ";
|
||||
EXPECT_EQ(testValidateName(t162), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t162), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t163[] = "' AB C . de f ";
|
||||
EXPECT_EQ(testValidateName(t163), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t163), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t163_0[] = " ab c . DE F ' ";
|
||||
EXPECT_EQ(testValidateName(t163_0), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t163_0), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t164[] = " ' AB C . de f ' ";
|
||||
//int32_t tmp2 = testValidateName(t164);
|
||||
EXPECT_EQ(testValidateName(t164), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t164), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t165[] = " ' A BC '. de f ";
|
||||
EXPECT_EQ(testValidateName(t165), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t165), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t166[] = "' AB C '.' DE F '";
|
||||
EXPECT_EQ(testValidateName(t166), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t166), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t167[] = "ab c . ' D EF '";
|
||||
EXPECT_EQ(testValidateName(t167), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t167), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
char t168[] = "' a bc '.' DE F '";
|
||||
EXPECT_EQ(testValidateName(t168), TSDB_CODE_TSC_INVALID_SQL);
|
||||
EXPECT_EQ(testValidateName(t168), TSDB_CODE_TSC_INVALID_OPERATION);
|
||||
|
||||
}
|
||||
|
||||
|
|
|
@ -709,7 +709,7 @@ static void syncChooseMaster(SSyncNode *pNode) {
|
|||
}
|
||||
|
||||
static SSyncPeer *syncCheckMaster(SSyncNode *pNode) {
|
||||
int32_t onlineNum = 0;
|
||||
int32_t onlineNum = 0, arbOnlineNum = 0;
|
||||
int32_t masterIndex = -1;
|
||||
int32_t replica = pNode->replica;
|
||||
|
||||
|
@ -723,13 +723,15 @@ static SSyncPeer *syncCheckMaster(SSyncNode *pNode) {
|
|||
SSyncPeer *pArb = pNode->peerInfo[TAOS_SYNC_MAX_REPLICA];
|
||||
if (pArb && pArb->role != TAOS_SYNC_ROLE_OFFLINE) {
|
||||
onlineNum++;
|
||||
++arbOnlineNum;
|
||||
replica = pNode->replica + 1;
|
||||
}
|
||||
|
||||
if (onlineNum <= replica * 0.5) {
|
||||
if (nodeRole != TAOS_SYNC_ROLE_UNSYNCED) {
|
||||
if (nodeRole == TAOS_SYNC_ROLE_MASTER && onlineNum == replica * 0.5 && onlineNum >= 1) {
|
||||
if (nodeRole == TAOS_SYNC_ROLE_MASTER && onlineNum == replica * 0.5 && ((replica > 2 && onlineNum - arbOnlineNum > 1) || pNode->replica < 3)) {
|
||||
sInfo("vgId:%d, self keep work as master, online:%d replica:%d", pNode->vgId, onlineNum, replica);
|
||||
masterIndex = pNode->selfIndex;
|
||||
} else {
|
||||
nodeRole = TAOS_SYNC_ROLE_UNSYNCED;
|
||||
sInfo("vgId:%d, self change to unsynced state, online:%d replica:%d", pNode->vgId, onlineNum, replica);
|
||||
|
@ -1002,6 +1004,7 @@ static void syncProcessForwardFromPeer(char *cont, SSyncPeer *pPeer) {
|
|||
if (nodeRole == TAOS_SYNC_ROLE_SLAVE) {
|
||||
// nodeVersion = pHead->version;
|
||||
code = (*pNode->writeToCacheFp)(pNode->vgId, pHead, TAOS_QTYPE_FWD, NULL);
|
||||
syncConfirmForward(pNode->rid, pHead->version, code, false);
|
||||
} else {
|
||||
if (nodeSStatus != TAOS_SYNC_STATUS_INIT) {
|
||||
code = syncSaveIntoBuffer(pPeer, pHead);
|
||||
|
@ -1404,7 +1407,7 @@ static void syncMonitorFwdInfos(void *param, void *tmrId) {
|
|||
pthread_mutex_lock(&pNode->mutex);
|
||||
for (int32_t i = 0; i < pSyncFwds->fwds; ++i) {
|
||||
SFwdInfo *pFwdInfo = pSyncFwds->fwdInfo + (pSyncFwds->first + i) % SYNC_MAX_FWDS;
|
||||
if (ABS(time - pFwdInfo->time) < 2000) break;
|
||||
if (ABS(time - pFwdInfo->time) < 10000) break;
|
||||
|
||||
sDebug("vgId:%d, forward info expired, hver:%" PRIu64 " curtime:%" PRIu64 " savetime:%" PRIu64, pNode->vgId,
|
||||
pFwdInfo->version, time, pFwdInfo->time);
|
||||
|
|
|
@ -36,6 +36,12 @@ typedef struct STable {
|
|||
char* sql;
|
||||
void* cqhandle;
|
||||
SRWLatch latch; // TODO: implement latch functions
|
||||
|
||||
SDataCol *lastCols;
|
||||
int16_t maxColNum;
|
||||
int16_t restoreColumnNum;
|
||||
bool hasRestoreLastColumn;
|
||||
int lastColSVersion;
|
||||
T_REF_DECLARE()
|
||||
} STable;
|
||||
|
||||
|
@ -78,6 +84,11 @@ void tsdbUnRefTable(STable* pTable);
|
|||
void tsdbUpdateTableSchema(STsdbRepo* pRepo, STable* pTable, STSchema* pSchema, bool insertAct);
|
||||
int tsdbRestoreTable(STsdbRepo* pRepo, void* cont, int contLen);
|
||||
void tsdbOrgMeta(STsdbRepo* pRepo);
|
||||
int tsdbInitColIdCacheWithSchema(STable* pTable, STSchema* pSchema);
|
||||
int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId);
|
||||
int tsdbUpdateLastColSchema(STable *pTable, STSchema *pNewSchema);
|
||||
STSchema* tsdbGetTableLatestSchema(STable *pTable);
|
||||
void tsdbFreeLastColumns(STable* pTable);
|
||||
|
||||
static FORCE_INLINE int tsdbCompareSchemaVersion(const void *key1, const void *key2) {
|
||||
if (*(int16_t *)key1 < schemaVersion(*(STSchema **)key2)) {
|
||||
|
|
|
@ -76,6 +76,9 @@ struct STsdbRepo {
|
|||
bool config_changed; // config changed flag
|
||||
pthread_mutex_t save_mutex; // protect save config
|
||||
|
||||
uint8_t hasCachedLastRow;
|
||||
uint8_t hasCachedLastColumn;
|
||||
|
||||
STsdbAppH appH;
|
||||
STsdbStat stat;
|
||||
STsdbMeta* tsdbMeta;
|
||||
|
@ -100,6 +103,7 @@ int tsdbUnlockRepo(STsdbRepo* pRepo);
|
|||
STsdbMeta* tsdbGetMeta(STsdbRepo* pRepo);
|
||||
int tsdbCheckCommit(STsdbRepo* pRepo);
|
||||
int tsdbRestoreInfo(STsdbRepo* pRepo);
|
||||
int tsdbCacheLastData(STsdbRepo *pRepo, STsdbCfg* oldCfg);
|
||||
void tsdbGetRootDir(int repoid, char dirName[]);
|
||||
void tsdbGetDataDir(int repoid, char dirName[]);
|
||||
|
||||
|
|
|
@ -90,6 +90,9 @@ static int tsdbApplyRtn(STsdbRepo *pRepo);
|
|||
static int tsdbApplyRtnOnFSet(STsdbRepo *pRepo, SDFileSet *pSet, SRtn *pRtn);
|
||||
|
||||
void *tsdbCommitData(STsdbRepo *pRepo) {
|
||||
if (pRepo->imem == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
tsdbStartCommit(pRepo);
|
||||
|
||||
// Commit to update meta file
|
||||
|
@ -1149,7 +1152,7 @@ static int tsdbCommitAddBlock(SCommitH *pCommith, const SBlock *pSupBlock, const
|
|||
return -1;
|
||||
}
|
||||
|
||||
if (pSubBlocks && taosArrayPushBatch(pCommith->aSubBlk, pSubBlocks, nSubBlocks) == NULL) {
|
||||
if (pSubBlocks && taosArrayAddBatch(pCommith->aSubBlk, pSubBlocks, nSubBlocks) == NULL) {
|
||||
terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
|
||||
return -1;
|
||||
}
|
||||
|
|
|
@ -113,11 +113,15 @@ int tsdbScheduleCommit(STsdbRepo *pRepo) {
|
|||
}
|
||||
|
||||
static void tsdbApplyRepoConfig(STsdbRepo *pRepo) {
|
||||
pthread_mutex_lock(&pRepo->save_mutex);
|
||||
|
||||
pRepo->config_changed = false;
|
||||
STsdbCfg * pSaveCfg = &pRepo->save_config;
|
||||
|
||||
STsdbCfg oldCfg;
|
||||
int32_t oldTotalBlocks = pRepo->config.totalBlocks;
|
||||
|
||||
memcpy(&oldCfg, &(pRepo->config), sizeof(STsdbCfg));
|
||||
|
||||
pRepo->config.compression = pRepo->save_config.compression;
|
||||
pRepo->config.keep = pRepo->save_config.keep;
|
||||
pRepo->config.keep1 = pRepo->save_config.keep1;
|
||||
|
@ -125,10 +129,12 @@ static void tsdbApplyRepoConfig(STsdbRepo *pRepo) {
|
|||
pRepo->config.cacheLastRow = pRepo->save_config.cacheLastRow;
|
||||
pRepo->config.totalBlocks = pRepo->save_config.totalBlocks;
|
||||
|
||||
tsdbInfo("vgId:%d apply new config: compression(%d), keep(%d,%d,%d), totalBlocks(%d), cacheLastRow(%d),totalBlocks(%d)",
|
||||
pthread_mutex_unlock(&pRepo->save_mutex);
|
||||
|
||||
tsdbInfo("vgId:%d apply new config: compression(%d), keep(%d,%d,%d), totalBlocks(%d), cacheLastRow(%d->%d),totalBlocks(%d->%d)",
|
||||
REPO_ID(pRepo),
|
||||
pSaveCfg->compression, pSaveCfg->keep,pSaveCfg->keep1, pSaveCfg->keep2,
|
||||
pSaveCfg->totalBlocks, pSaveCfg->cacheLastRow, pSaveCfg->totalBlocks);
|
||||
pSaveCfg->totalBlocks, oldCfg.cacheLastRow, pSaveCfg->cacheLastRow, oldTotalBlocks, pSaveCfg->totalBlocks);
|
||||
|
||||
int err = tsdbExpendPool(pRepo, oldTotalBlocks);
|
||||
if (!TAOS_SUCCEEDED(err)) {
|
||||
|
@ -136,6 +142,12 @@ static void tsdbApplyRepoConfig(STsdbRepo *pRepo) {
|
|||
REPO_ID(pRepo), oldTotalBlocks, pSaveCfg->totalBlocks, tstrerror(err));
|
||||
}
|
||||
|
||||
if (oldCfg.cacheLastRow != pRepo->config.cacheLastRow) {
|
||||
if (tsdbLockRepo(pRepo) < 0) return;
|
||||
tsdbCacheLastData(pRepo, &oldCfg);
|
||||
tsdbUnlockRepo(pRepo);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
static void *tsdbLoopCommit(void *arg) {
|
||||
|
@ -166,9 +178,7 @@ static void *tsdbLoopCommit(void *arg) {
|
|||
|
||||
// check if need to apply new config
|
||||
if (pRepo->config_changed) {
|
||||
pthread_mutex_lock(&pRepo->save_mutex);
|
||||
tsdbApplyRepoConfig(pRepo);
|
||||
pthread_mutex_unlock(&pRepo->save_mutex);
|
||||
}
|
||||
|
||||
tsdbCommitData(pRepo);
|
||||
|
|
|
@ -26,6 +26,8 @@ static STsdbRepo *tsdbNewRepo(STsdbCfg *pCfg, STsdbAppH *pAppH);
|
|||
static void tsdbFreeRepo(STsdbRepo *pRepo);
|
||||
static void tsdbStartStream(STsdbRepo *pRepo);
|
||||
static void tsdbStopStream(STsdbRepo *pRepo);
|
||||
static int tsdbRestoreLastColumns(STsdbRepo *pRepo, STable *pTable, SReadH* pReadh);
|
||||
static int tsdbRestoreLastRow(STsdbRepo *pRepo, STable *pTable, SReadH* pReadh, SBlockIdx *pIdx);
|
||||
|
||||
// Function declaration
|
||||
int32_t tsdbCreateRepo(int repoid) {
|
||||
|
@ -267,6 +269,10 @@ int32_t tsdbConfigRepo(STsdbRepo *repo, STsdbCfg *pCfg) {
|
|||
repo->config_changed = true;
|
||||
|
||||
pthread_mutex_unlock(&repo->save_mutex);
|
||||
|
||||
// schedule a commit msg so that the new config will be applied immediately
|
||||
tsdbAsyncCommit(repo);
|
||||
|
||||
return 0;
|
||||
#if 0
|
||||
STsdbRepo *pRepo = (STsdbRepo *)repo;
|
||||
|
@ -511,8 +517,10 @@ static int32_t tsdbCheckAndSetDefaultCfg(STsdbCfg *pCfg) {
|
|||
if (pCfg->update != 0) pCfg->update = 1;
|
||||
|
||||
// update cacheLastRow
|
||||
if (pCfg->cacheLastRow != 0) pCfg->cacheLastRow = 1;
|
||||
|
||||
if (pCfg->cacheLastRow != 0) {
|
||||
if (pCfg->cacheLastRow > 3)
|
||||
pCfg->cacheLastRow = 1;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -545,6 +553,8 @@ static STsdbRepo *tsdbNewRepo(STsdbCfg *pCfg, STsdbAppH *pAppH) {
|
|||
return NULL;
|
||||
}
|
||||
pRepo->config_changed = false;
|
||||
atomic_store_8(&pRepo->hasCachedLastRow, 0);
|
||||
atomic_store_8(&pRepo->hasCachedLastColumn, 0);
|
||||
|
||||
code = tsem_init(&(pRepo->readyToCommit), 0, 1);
|
||||
if (code != 0) {
|
||||
|
@ -614,13 +624,180 @@ static void tsdbStopStream(STsdbRepo *pRepo) {
|
|||
}
|
||||
}
|
||||
|
||||
static int tsdbRestoreLastColumns(STsdbRepo *pRepo, STable *pTable, SReadH* pReadh) {
|
||||
//tsdbInfo("tsdbRestoreLastColumns of table %s", pTable->name->data);
|
||||
|
||||
STSchema *pSchema = tsdbGetTableLatestSchema(pTable);
|
||||
if (pSchema == NULL) {
|
||||
tsdbError("tsdbGetTableLatestSchema of table %s fail", pTable->name->data);
|
||||
return 0;
|
||||
}
|
||||
|
||||
SBlock* pBlock;
|
||||
int numColumns;
|
||||
int32_t blockIdx;
|
||||
SDataStatis* pBlockStatis = NULL;
|
||||
SDataRow row = NULL;
|
||||
// restore last column data with last schema
|
||||
|
||||
int err = 0;
|
||||
|
||||
numColumns = schemaNCols(pSchema);
|
||||
if (numColumns <= pTable->restoreColumnNum) {
|
||||
pTable->hasRestoreLastColumn = true;
|
||||
return 0;
|
||||
}
|
||||
if (pTable->lastColSVersion != schemaVersion(pSchema)) {
|
||||
if (tsdbInitColIdCacheWithSchema(pTable, pSchema) < 0) {
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
row = taosTMalloc(dataRowMaxBytesFromSchema(pSchema));
|
||||
if (row == NULL) {
|
||||
terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
|
||||
err = -1;
|
||||
goto out;
|
||||
}
|
||||
tdInitDataRow(row, pSchema);
|
||||
|
||||
// first load block index info
|
||||
if (tsdbLoadBlockInfo(pReadh, NULL) < 0) {
|
||||
err = -1;
|
||||
goto out;
|
||||
}
|
||||
|
||||
pBlockStatis = calloc(numColumns, sizeof(SDataStatis));
|
||||
if (pBlockStatis == NULL) {
|
||||
terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
|
||||
err = -1;
|
||||
goto out;
|
||||
}
|
||||
memset(pBlockStatis, 0, numColumns * sizeof(SDataStatis));
|
||||
for(int32_t i = 0; i < numColumns; ++i) {
|
||||
STColumn *pCol = schemaColAt(pSchema, i);
|
||||
pBlockStatis[i].colId = pCol->colId;
|
||||
}
|
||||
|
||||
// load blocks starting from the last one and moving backward
|
||||
SBlockIdx *pIdx = pReadh->pBlkIdx;
|
||||
blockIdx = (int32_t)(pIdx->numOfBlocks - 1);
|
||||
|
||||
while (numColumns > pTable->restoreColumnNum && blockIdx >= 0) {
|
||||
bool loadStatisData = false;
|
||||
pBlock = pReadh->pBlkInfo->blocks + blockIdx;
|
||||
blockIdx -= 1;
|
||||
|
||||
// load block data
|
||||
if (tsdbLoadBlockData(pReadh, pBlock, NULL) < 0) {
|
||||
err = -1;
|
||||
goto out;
|
||||
}
|
||||
|
||||
// file block with sub-blocks has no statistics data
|
||||
if (pBlock->numOfSubBlocks <= 1) {
|
||||
tsdbLoadBlockStatis(pReadh, pBlock);
|
||||
tsdbGetBlockStatis(pReadh, pBlockStatis, (int)numColumns);
|
||||
loadStatisData = true;
|
||||
}
|
||||
|
||||
for (int16_t i = 0; i < numColumns && numColumns > pTable->restoreColumnNum; ++i) {
|
||||
STColumn *pCol = schemaColAt(pSchema, i);
|
||||
// ignore loaded columns
|
||||
if (pTable->lastCols[i].bytes != 0) {
|
||||
continue;
|
||||
}
|
||||
|
||||
// ignore block which has no not-null colId column
|
||||
if (loadStatisData && pBlockStatis[i].numOfNull == pBlock->numOfRows) {
|
||||
continue;
|
||||
}
|
||||
|
||||
// OK, scan rows backward to find the latest not-null value for this column
|
||||
for (int32_t rowId = pBlock->numOfRows - 1; rowId >= 0; rowId--) {
|
||||
SDataCol *pDataCol = pReadh->pDCols[0]->cols + i;
|
||||
tdAppendColVal(row, tdGetColDataOfRow(pDataCol, rowId), pCol->type, pCol->bytes, pCol->offset);
|
||||
//SDataCol *pDataCol = readh.pDCols[0]->cols + j;
|
||||
void* value = tdGetRowDataOfCol(row, (int8_t)pCol->type, TD_DATA_ROW_HEAD_SIZE + pCol->offset);
|
||||
if (isNull(value, pCol->type)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
int16_t idx = tsdbGetLastColumnsIndexByColId(pTable, pCol->colId);
|
||||
if (idx == -1) {
|
||||
tsdbError("tsdbRestoreLastColumns restore vgId:%d,table:%s cache column %d fail", REPO_ID(pRepo), pTable->name->data, pCol->colId);
|
||||
continue;
|
||||
}
|
||||
// save not-null column
|
||||
SDataCol *pLastCol = &(pTable->lastCols[idx]);
|
||||
pLastCol->pData = malloc(pCol->bytes);
|
||||
pLastCol->bytes = pCol->bytes;
|
||||
pLastCol->colId = pCol->colId;
|
||||
memcpy(pLastCol->pData, value, pCol->bytes);
|
||||
|
||||
// save row ts(in column 0)
|
||||
pDataCol = pReadh->pDCols[0]->cols + 0;
|
||||
pCol = schemaColAt(pSchema, 0);
|
||||
tdAppendColVal(row, tdGetColDataOfRow(pDataCol, rowId), pCol->type, pCol->bytes, pCol->offset);
|
||||
pLastCol->ts = dataRowKey(row);
|
||||
|
||||
pTable->restoreColumnNum += 1;
|
||||
|
||||
tsdbDebug("tsdbRestoreLastColumns restore vgId:%d,table:%s cache column %d, %" PRId64, REPO_ID(pRepo), pTable->name->data, pLastCol->colId, pLastCol->ts);
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
taosTZfree(row);
|
||||
tfree(pBlockStatis);
|
||||
|
||||
if (err == 0 && numColumns <= pTable->restoreColumnNum) {
|
||||
pTable->hasRestoreLastColumn = true;
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int tsdbRestoreLastRow(STsdbRepo *pRepo, STable *pTable, SReadH* pReadh, SBlockIdx *pIdx) {
|
||||
ASSERT(pTable->lastRow == NULL);
|
||||
if (tsdbLoadBlockInfo(pReadh, NULL) < 0) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
SBlock* pBlock = pReadh->pBlkInfo->blocks + pIdx->numOfBlocks - 1;
|
||||
|
||||
if (tsdbLoadBlockData(pReadh, pBlock, NULL) < 0) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Get the data in row
|
||||
|
||||
STSchema *pSchema = tsdbGetTableSchema(pTable);
|
||||
pTable->lastRow = taosTMalloc(dataRowMaxBytesFromSchema(pSchema));
|
||||
if (pTable->lastRow == NULL) {
|
||||
terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
|
||||
return -1;
|
||||
}
|
||||
|
||||
tdInitDataRow(pTable->lastRow, pSchema);
|
||||
for (int icol = 0; icol < schemaNCols(pSchema); icol++) {
|
||||
STColumn *pCol = schemaColAt(pSchema, icol);
|
||||
SDataCol *pDataCol = pReadh->pDCols[0]->cols + icol;
|
||||
tdAppendColVal(pTable->lastRow, tdGetColDataOfRow(pDataCol, pBlock->numOfRows - 1), pCol->type, pCol->bytes,
|
||||
pCol->offset);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int tsdbRestoreInfo(STsdbRepo *pRepo) {
|
||||
SFSIter fsiter;
|
||||
SReadH readh;
|
||||
SDFileSet *pSet;
|
||||
STsdbMeta *pMeta = pRepo->tsdbMeta;
|
||||
STsdbCfg * pCfg = REPO_CFG(pRepo);
|
||||
SBlock * pBlock;
|
||||
|
||||
if (tsdbInitReadH(&readh, pRepo) < 0) {
|
||||
return -1;
|
||||
|
@ -628,6 +805,14 @@ int tsdbRestoreInfo(STsdbRepo *pRepo) {
|
|||
|
||||
tsdbFSIterInit(&fsiter, REPO_FS(pRepo), TSDB_FS_ITER_BACKWARD);
|
||||
|
||||
if (CACHE_LAST_NULL_COLUMN(pCfg)) {
|
||||
for (int i = 1; i < pMeta->maxTables; i++) {
|
||||
STable *pTable = pMeta->tables[i];
|
||||
if (pTable == NULL) continue;
|
||||
pTable->restoreColumnNum = 0;
|
||||
}
|
||||
}
|
||||
|
||||
while ((pSet = tsdbFSIterNext(&fsiter)) != NULL) {
|
||||
if (tsdbSetAndOpenReadFSet(&readh, pSet) < 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
|
@ -643,6 +828,8 @@ int tsdbRestoreInfo(STsdbRepo *pRepo) {
|
|||
STable *pTable = pMeta->tables[i];
|
||||
if (pTable == NULL) continue;
|
||||
|
||||
//tsdbInfo("tsdbRestoreInfo restore vgId:%d,table:%s", REPO_ID(pRepo), pTable->name->data);
|
||||
|
||||
if (tsdbSetReadTable(&readh, pTable) < 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
|
@ -653,42 +840,155 @@ int tsdbRestoreInfo(STsdbRepo *pRepo) {
|
|||
if (pIdx && lastKey < pIdx->maxKey) {
|
||||
pTable->lastKey = pIdx->maxKey;
|
||||
|
||||
if (pCfg->cacheLastRow) {
|
||||
if (tsdbLoadBlockInfo(&readh, NULL) < 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
|
||||
pBlock = readh.pBlkInfo->blocks + pIdx->numOfBlocks - 1;
|
||||
|
||||
if (tsdbLoadBlockData(&readh, pBlock, NULL) < 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
|
||||
// Get the data in row
|
||||
ASSERT(pTable->lastRow == NULL);
|
||||
STSchema *pSchema = tsdbGetTableSchema(pTable);
|
||||
pTable->lastRow = taosTMalloc(dataRowMaxBytesFromSchema(pSchema));
|
||||
if (pTable->lastRow == NULL) {
|
||||
terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
|
||||
tdInitDataRow(pTable->lastRow, pSchema);
|
||||
for (int icol = 0; icol < schemaNCols(pSchema); icol++) {
|
||||
STColumn *pCol = schemaColAt(pSchema, icol);
|
||||
SDataCol *pDataCol = readh.pDCols[0]->cols + icol;
|
||||
tdAppendColVal(pTable->lastRow, tdGetColDataOfRow(pDataCol, pBlock->numOfRows - 1), pCol->type, pCol->bytes,
|
||||
pCol->offset);
|
||||
}
|
||||
if (CACHE_LAST_ROW(pCfg) && tsdbRestoreLastRow(pRepo, pTable, &readh, pIdx) != 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
// restore NULL columns
|
||||
if (pIdx && CACHE_LAST_NULL_COLUMN(pCfg) && !pTable->hasRestoreLastColumn) {
|
||||
if (tsdbRestoreLastColumns(pRepo, pTable, &readh) != 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
tsdbDestroyReadH(&readh);
|
||||
if (CACHE_LAST_ROW(pCfg)) {
|
||||
atomic_store_8(&pRepo->hasCachedLastRow, 1);
|
||||
}
|
||||
if (CACHE_LAST_NULL_COLUMN(pCfg)) {
|
||||
atomic_store_8(&pRepo->hasCachedLastColumn, 1);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int tsdbCacheLastData(STsdbRepo *pRepo, STsdbCfg* oldCfg) {
|
||||
bool cacheLastRow = false, cacheLastCol = false;
|
||||
SFSIter fsiter;
|
||||
SReadH readh;
|
||||
SDFileSet *pSet;
|
||||
STsdbMeta *pMeta = pRepo->tsdbMeta;
|
||||
int tableNum = 0;
|
||||
int maxTableIdx = 0;
|
||||
int cacheLastRowTableNum = 0;
|
||||
int cacheLastColTableNum = 0;
|
||||
|
||||
bool need_free_last_row = CACHE_LAST_ROW(oldCfg) && !CACHE_LAST_ROW(&(pRepo->config));
|
||||
bool need_free_last_col = CACHE_LAST_NULL_COLUMN(oldCfg) && !CACHE_LAST_NULL_COLUMN(&(pRepo->config));
|
||||
|
||||
if (CACHE_LAST_ROW(&(pRepo->config)) || CACHE_LAST_NULL_COLUMN(&(pRepo->config))) {
|
||||
tsdbInfo("tsdbCacheLastData cache last data since cacheLast option changed");
|
||||
cacheLastRow = !CACHE_LAST_ROW(oldCfg) && CACHE_LAST_ROW(&(pRepo->config));
|
||||
cacheLastCol = !CACHE_LAST_NULL_COLUMN(oldCfg) && CACHE_LAST_NULL_COLUMN(&(pRepo->config));
|
||||
}
|
||||
|
||||
// calc max table idx and table num
|
||||
for (int i = 1; i < pMeta->maxTables; i++) {
|
||||
STable *pTable = pMeta->tables[i];
|
||||
if (pTable == NULL) continue;
|
||||
tableNum += 1;
|
||||
maxTableIdx = i;
|
||||
if (cacheLastCol) {
|
||||
pTable->restoreColumnNum = 0;
|
||||
}
|
||||
}
|
||||
|
||||
// if close last option,need to free data
|
||||
if (need_free_last_row || need_free_last_col) {
|
||||
if (need_free_last_row) {
|
||||
atomic_store_8(&pRepo->hasCachedLastRow, 0);
|
||||
}
|
||||
if (need_free_last_col) {
|
||||
atomic_store_8(&pRepo->hasCachedLastColumn, 0);
|
||||
}
|
||||
tsdbInfo("free cache last data since cacheLast option changed");
|
||||
for (int i = 1; i < maxTableIdx; i++) {
|
||||
STable *pTable = pMeta->tables[i];
|
||||
if (pTable == NULL) continue;
|
||||
if (need_free_last_row) {
|
||||
taosTZfree(pTable->lastRow);
|
||||
pTable->lastRow = NULL;
|
||||
pTable->lastKey = TSKEY_INITIAL_VAL;
|
||||
}
|
||||
if (need_free_last_col) {
|
||||
tsdbFreeLastColumns(pTable);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!cacheLastRow && !cacheLastCol) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
cacheLastRowTableNum = cacheLastRow ? tableNum : 0;
|
||||
cacheLastColTableNum = cacheLastCol ? tableNum : 0;
|
||||
|
||||
if (tsdbInitReadH(&readh, pRepo) < 0) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
tsdbFSIterInit(&fsiter, REPO_FS(pRepo), TSDB_FS_ITER_BACKWARD);
|
||||
|
||||
while ((pSet = tsdbFSIterNext(&fsiter)) != NULL && (cacheLastRowTableNum > 0 || cacheLastColTableNum > 0)) {
|
||||
if (tsdbSetAndOpenReadFSet(&readh, pSet) < 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (tsdbLoadBlockIdx(&readh) < 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
|
||||
for (int i = 1; i <= maxTableIdx; i++) {
|
||||
STable *pTable = pMeta->tables[i];
|
||||
if (pTable == NULL) continue;
|
||||
|
||||
//tsdbInfo("tsdbRestoreInfo restore vgId:%d,table:%s", REPO_ID(pRepo), pTable->name->data);
|
||||
|
||||
if (tsdbSetReadTable(&readh, pTable) < 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
|
||||
SBlockIdx *pIdx = readh.pBlkIdx;
|
||||
|
||||
if (pIdx && cacheLastRowTableNum > 0 && pTable->lastRow == NULL) {
|
||||
pTable->lastKey = pIdx->maxKey;
|
||||
|
||||
if (tsdbRestoreLastRow(pRepo, pTable, &readh, pIdx) != 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
cacheLastRowTableNum -= 1;
|
||||
}
|
||||
|
||||
// restore NULL columns
|
||||
if (pIdx && cacheLastColTableNum > 0 && !pTable->hasRestoreLastColumn) {
|
||||
if (tsdbRestoreLastColumns(pRepo, pTable, &readh) != 0) {
|
||||
tsdbDestroyReadH(&readh);
|
||||
return -1;
|
||||
}
|
||||
if (pTable->hasRestoreLastColumn) {
|
||||
cacheLastColTableNum -= 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
tsdbDestroyReadH(&readh);
|
||||
|
||||
if (cacheLastRow) {
|
||||
atomic_store_8(&pRepo->hasCachedLastRow, 1);
|
||||
}
|
||||
if (cacheLastCol) {
|
||||
atomic_store_8(&pRepo->hasCachedLastColumn, 1);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
|
@ -274,7 +274,7 @@ void *tsdbAllocBytes(STsdbRepo *pRepo, int bytes) {
int tsdbAsyncCommit(STsdbRepo *pRepo) {
  tsem_wait(&(pRepo->readyToCommit));

  ASSERT(pRepo->imem == NULL);
  //ASSERT(pRepo->imem == NULL);
  if (pRepo->mem == NULL) {
    tsem_post(&(pRepo->readyToCommit));
    return 0;

@ -964,6 +964,49 @@ static void tsdbFreeRows(STsdbRepo *pRepo, void **rows, int rowCounter) {
  }
}

static void updateTableLatestColumn(STsdbRepo *pRepo, STable *pTable, SDataRow row) {
  tsdbDebug("vgId:%d updateTableLatestColumn, %s row version:%d", REPO_ID(pRepo), pTable->name->data, dataRowVersion(row));

  STSchema* pSchema = tsdbGetTableLatestSchema(pTable);
  if (tsdbUpdateLastColSchema(pTable, pSchema) < 0) {
    return;
  }

  pSchema = tsdbGetTableSchemaByVersion(pTable, dataRowVersion(row));
  if (pSchema == NULL) {
    return;
  }

  SDataCol *pLatestCols = pTable->lastCols;

  for (int16_t j = 0; j < schemaNCols(pSchema); j++) {
    STColumn *pTCol = schemaColAt(pSchema, j);
    // ignore not exist colId
    int16_t idx = tsdbGetLastColumnsIndexByColId(pTable, pTCol->colId);
    if (idx == -1) {
      continue;
    }

    void* value = tdGetRowDataOfCol(row, (int8_t)pTCol->type, TD_DATA_ROW_HEAD_SIZE + pSchema->columns[j].offset);
    if (isNull(value, pTCol->type)) {
      continue;
    }

    SDataCol *pDataCol = &(pLatestCols[idx]);
    if (pDataCol->pData == NULL) {
      pDataCol->pData = malloc(pSchema->columns[j].bytes);
      pDataCol->bytes = pSchema->columns[j].bytes;
    } else if (pDataCol->bytes < pSchema->columns[j].bytes) {
      pDataCol->pData = realloc(pDataCol->pData, pSchema->columns[j].bytes);
      pDataCol->bytes = pSchema->columns[j].bytes;
    }

    memcpy(pDataCol->pData, value, pDataCol->bytes);
    //tsdbInfo("updateTableLatestColumn vgId:%d cache column %d for %d,%s", REPO_ID(pRepo), j, pDataCol->bytes, (char*)pDataCol->pData);
    pDataCol->ts = dataRowKey(row);
  }
}

static int tsdbUpdateTableLatestInfo(STsdbRepo *pRepo, STable *pTable, SDataRow row) {
  STsdbCfg *pCfg = &pRepo->config;

@ -977,7 +1020,7 @@ static int tsdbUpdateTableLatestInfo(STsdbRepo *pRepo, STable *pTable, SDataRow
  }

  if (tsdbGetTableLastKeyImpl(pTable) < dataRowKey(row)) {
    if (pCfg->cacheLastRow || pTable->lastRow != NULL) {
    if (CACHE_LAST_ROW(pCfg) || pTable->lastRow != NULL) {
      SDataRow nrow = pTable->lastRow;
      if (taosTSizeof(nrow) < dataRowLen(row)) {
        SDataRow orow = nrow;
@ -1002,7 +1045,10 @@ static int tsdbUpdateTableLatestInfo(STsdbRepo *pRepo, STable *pTable, SDataRow
    } else {
      pTable->lastKey = dataRowKey(row);
    }
  }

    if (CACHE_LAST_NULL_COLUMN(pCfg)) {
      updateTableLatestColumn(pRepo, pTable, row);
    }
  }
  return 0;
}

@ -589,6 +589,131 @@ void tsdbUnRefTable(STable *pTable) {
|
|||
}
|
||||
}
|
||||
|
||||
void tsdbFreeLastColumns(STable* pTable) {
|
||||
if (pTable->lastCols == NULL) {
|
||||
return;
|
||||
}
|
||||
|
||||
for (int i = 0; i < pTable->maxColNum; ++i) {
|
||||
if (pTable->lastCols[i].bytes == 0) {
|
||||
continue;
|
||||
}
|
||||
tfree(pTable->lastCols[i].pData);
|
||||
pTable->lastCols[i].bytes = 0;
|
||||
pTable->lastCols[i].pData = NULL;
|
||||
}
|
||||
tfree(pTable->lastCols);
|
||||
pTable->lastCols = NULL;
|
||||
pTable->maxColNum = 0;
|
||||
pTable->lastColSVersion = -1;
|
||||
pTable->restoreColumnNum = 0;
|
||||
}
|
||||
|
||||
int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId) {
|
||||
if (pTable->lastCols == NULL) {
|
||||
return -1;
|
||||
}
|
||||
for (int16_t i = 0; i < pTable->maxColNum; ++i) {
|
||||
if (pTable->lastCols[i].colId == colId) {
|
||||
return i;
|
||||
}
|
||||
}
|
||||
|
||||
return -1;
|
||||
}
|
||||
|
||||
int tsdbInitColIdCacheWithSchema(STable* pTable, STSchema* pSchema) {
|
||||
ASSERT(pTable->lastCols == NULL);
|
||||
|
||||
int16_t numOfColumn = pSchema->numOfCols;
|
||||
|
||||
pTable->lastCols = (SDataCol*)malloc(numOfColumn * sizeof(SDataCol));
|
||||
if (pTable->lastCols == NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
for (int16_t i = 0; i < numOfColumn; ++i) {
|
||||
STColumn *pCol = schemaColAt(pSchema, i);
|
||||
SDataCol* pDataCol = &(pTable->lastCols[i]);
|
||||
pDataCol->bytes = 0;
|
||||
pDataCol->pData = NULL;
|
||||
pDataCol->colId = pCol->colId;
|
||||
}
|
||||
|
||||
pTable->lastColSVersion = schemaVersion(pSchema);
|
||||
pTable->maxColNum = numOfColumn;
|
||||
pTable->restoreColumnNum = 0;
|
||||
return 0;
|
||||
}
|
||||
|
||||
STSchema* tsdbGetTableLatestSchema(STable *pTable) {
|
||||
return tsdbGetTableSchemaByVersion(pTable, -1);
|
||||
}
|
||||
|
||||
int tsdbUpdateLastColSchema(STable *pTable, STSchema *pNewSchema) {
|
||||
if (pTable->lastColSVersion == schemaVersion(pNewSchema)) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
tsdbInfo("tsdbUpdateLastColSchema:%s,%d->%d", pTable->name->data, pTable->lastColSVersion, schemaVersion(pNewSchema));
|
||||
|
||||
int16_t numOfCols = pNewSchema->numOfCols;
|
||||
SDataCol *lastCols = (SDataCol*)malloc(numOfCols * sizeof(SDataCol));
|
||||
if (lastCols == NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
TSDB_WLOCK_TABLE(pTable);
|
||||
|
||||
for (int16_t i = 0; i < numOfCols; ++i) {
|
||||
STColumn *pCol = schemaColAt(pNewSchema, i);
|
||||
int16_t idx = tsdbGetLastColumnsIndexByColId(pTable, pCol->colId);
|
||||
|
||||
SDataCol* pDataCol = &(lastCols[i]);
|
||||
if (idx != -1) {
|
||||
// move col data to new last column array
|
||||
SDataCol* pOldDataCol = &(pTable->lastCols[idx]);
|
||||
memcpy(pDataCol, pOldDataCol, sizeof(SDataCol));
|
||||
} else {
|
||||
// init new colid data
|
||||
pDataCol->colId = pCol->colId;
|
||||
pDataCol->bytes = 0;
|
||||
pDataCol->pData = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
SDataCol *oldLastCols = pTable->lastCols;
|
||||
int16_t oldLastColNum = pTable->maxColNum;
|
||||
|
||||
pTable->lastColSVersion = schemaVersion(pNewSchema);
|
||||
pTable->lastCols = lastCols;
|
||||
pTable->maxColNum = numOfCols;
|
||||
|
||||
if (oldLastCols == NULL) {
|
||||
TSDB_WUNLOCK_TABLE(pTable);
|
||||
return 0;
|
||||
}
|
||||
|
||||
// free old schema last column datas
|
||||
for (int16_t i = 0; i < oldLastColNum; ++i) {
|
||||
SDataCol* pDataCol = &(oldLastCols[i]);
|
||||
if (pDataCol->bytes == 0) {
|
||||
continue;
|
||||
}
|
||||
int16_t idx = tsdbGetLastColumnsIndexByColId(pTable, pDataCol->colId);
|
||||
if (idx != -1) {
|
||||
continue;
|
||||
}
|
||||
|
||||
// free not exist column data
|
||||
tfree(pDataCol->pData);
|
||||
}
|
||||
TSDB_WUNLOCK_TABLE(pTable);
|
||||
tfree(oldLastCols);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema, bool insertAct) {
|
||||
ASSERT(TABLE_TYPE(pTable) != TSDB_STREAM_TABLE && TABLE_TYPE(pTable) != TSDB_SUPER_TABLE);
|
||||
STsdbMeta *pMeta = pRepo->tsdbMeta;
|
||||
|
@ -672,6 +797,10 @@ static STable *tsdbNewTable() {
|
|||
|
||||
pTable->lastKey = TSKEY_INITIAL_VAL;
|
||||
|
||||
pTable->lastCols = NULL;
|
||||
pTable->restoreColumnNum = 0;
|
||||
pTable->maxColNum = 0;
|
||||
pTable->lastColSVersion = -1;
|
||||
return pTable;
|
||||
}
|
||||
|
||||
|
@ -787,6 +916,8 @@ static void tsdbFreeTable(STable *pTable) {
|
|||
tSkipListDestroy(pTable->pIndex);
|
||||
taosTZfree(pTable->lastRow);
|
||||
tfree(pTable->sql);
|
||||
|
||||
tsdbFreeLastColumns(pTable);
|
||||
free(pTable);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -62,6 +62,7 @@ typedef struct SLoadCompBlockInfo {
|
|||
int32_t fileId;
|
||||
} SLoadCompBlockInfo;
|
||||
|
||||
|
||||
typedef struct STableCheckInfo {
|
||||
STableId tableId;
|
||||
TSKEY lastKey;
|
||||
|
@ -107,7 +108,7 @@ typedef struct STsdbQueryHandle {
|
|||
SArray* pTableCheckInfo; // SArray<STableCheckInfo>
|
||||
int32_t activeIndex;
|
||||
bool checkFiles; // check file stage
|
||||
bool cachelastrow; // check if last row cached
|
||||
int8_t cachelastrow; // check if last row cached
|
||||
bool loadExternalRow; // load time window external data rows
|
||||
bool currentLoadExternalRows; // current load external rows
|
||||
int32_t loadType; // block load type
|
||||
|
@ -117,7 +118,6 @@ typedef struct STsdbQueryHandle {
|
|||
SFSIter fileIter;
|
||||
SReadH rhelper;
|
||||
STableBlockInfo* pDataBlockInfo;
|
||||
|
||||
SDataCols *pDataCols; // in order to hold current file data block
|
||||
int32_t allocSize; // allocated data block size
|
||||
SMemRef *pMemRef;
|
||||
|
@ -138,6 +138,7 @@ typedef struct STableGroupSupporter {
|
|||
|
||||
static STimeWindow updateLastrowForEachGroup(STableGroupInfo *groupList);
|
||||
static int32_t checkForCachedLastRow(STsdbQueryHandle* pQueryHandle, STableGroupInfo *groupList);
|
||||
static int32_t checkForCachedLast(STsdbQueryHandle* pQueryHandle);
|
||||
static int32_t tsdbGetCachedLastRow(STable* pTable, SDataRow* pRes, TSKEY* lastKey);
|
||||
|
||||
static void changeQueryHandleForInterpQuery(TsdbQueryHandleT pHandle);
|
||||
|
@ -512,6 +513,8 @@ void tsdbResetQueryHandleForNewTable(TsdbQueryHandleT queryHandle, STsdbQueryCon
|
|||
pQueryHandle->next = doFreeColumnInfoData(pQueryHandle->next);
|
||||
}
|
||||
|
||||
|
||||
|
||||
TsdbQueryHandleT tsdbQueryLastRow(STsdbRepo *tsdb, STsdbQueryCond *pCond, STableGroupInfo *groupList, uint64_t qId, SMemRef* pMemRef) {
|
||||
pCond->twindow = updateLastrowForEachGroup(groupList);
|
||||
|
||||
|
@ -528,10 +531,30 @@ TsdbQueryHandleT tsdbQueryLastRow(STsdbRepo *tsdb, STsdbQueryCond *pCond, STable
|
|||
}
|
||||
|
||||
assert(pCond->order == TSDB_ORDER_ASC && pCond->twindow.skey <= pCond->twindow.ekey);
|
||||
pQueryHandle->type = TSDB_QUERY_TYPE_LAST;
|
||||
if (pQueryHandle->cachelastrow) {
|
||||
pQueryHandle->type = TSDB_QUERY_TYPE_LAST;
|
||||
}
|
||||
|
||||
return pQueryHandle;
|
||||
}
|
||||
|
||||
|
||||
TsdbQueryHandleT tsdbQueryCacheLast(STsdbRepo *tsdb, STsdbQueryCond *pCond, STableGroupInfo *groupList, uint64_t qId, SMemRef* pMemRef) {
|
||||
STsdbQueryHandle *pQueryHandle = (STsdbQueryHandle*) tsdbQueryTables(tsdb, pCond, groupList, qId, pMemRef);
|
||||
int32_t code = checkForCachedLast(pQueryHandle);
|
||||
if (code != TSDB_CODE_SUCCESS) { // set the numOfTables to be 0
|
||||
terrno = code;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
if (pQueryHandle->cachelastrow) {
|
||||
pQueryHandle->type = TSDB_QUERY_TYPE_LAST;
|
||||
}
|
||||
|
||||
return pQueryHandle;
|
||||
}
|
||||
|
||||
|
||||
SArray* tsdbGetQueriedTableList(TsdbQueryHandleT *pHandle) {
|
||||
assert(pHandle != NULL);
|
||||
|
||||
|
@ -2460,6 +2483,159 @@ static bool loadCachedLastRow(STsdbQueryHandle* pQueryHandle) {
|
|||
return false;
|
||||
}
|
||||
|
||||
|
||||
|
||||
static bool loadCachedLast(STsdbQueryHandle* pQueryHandle) {
|
||||
// the last row is cached in buffer, return it directly.
|
||||
// here note that the pQueryHandle->window must be the TS_INITIALIZER
|
||||
int32_t tgNumOfCols = (int32_t)QH_GET_NUM_OF_COLS(pQueryHandle);
|
||||
size_t numOfTables = taosArrayGetSize(pQueryHandle->pTableCheckInfo);
|
||||
int32_t numOfRows = 0;
|
||||
assert(numOfTables > 0 && tgNumOfCols > 0);
|
||||
SQueryFilePos* cur = &pQueryHandle->cur;
|
||||
TSKEY priKey = TSKEY_INITIAL_VAL;
|
||||
int32_t priIdx = -1;
|
||||
SColumnInfoData* pColInfo = NULL;
|
||||
|
||||
while (++pQueryHandle->activeIndex < numOfTables) {
|
||||
STableCheckInfo* pCheckInfo = taosArrayGet(pQueryHandle->pTableCheckInfo, pQueryHandle->activeIndex);
|
||||
STable* pTable = pCheckInfo->pTableObj;
|
||||
char* pData = NULL;
|
||||
|
||||
int32_t numOfCols = pTable->maxColNum;
|
||||
|
||||
if (pTable->lastCols == NULL || pTable->maxColNum <= 0) {
|
||||
tsdbWarn("no last cached for table, uid:%" PRIu64 ",tid:%d", pTable->tableId.uid, pTable->tableId.tid);
|
||||
continue;
|
||||
}
|
||||
|
||||
int32_t i = 0, j = 0;
|
||||
while(i < tgNumOfCols && j < numOfCols) {
|
||||
pColInfo = taosArrayGet(pQueryHandle->pColumns, i);
|
||||
if (pTable->lastCols[j].colId < pColInfo->info.colId) {
|
||||
j++;
|
||||
continue;
|
||||
} else if (pTable->lastCols[j].colId > pColInfo->info.colId) {
|
||||
i++;
|
||||
continue;
|
||||
}
|
||||
|
||||
pData = (char*)pColInfo->pData + numOfRows * pColInfo->info.bytes;
|
||||
|
||||
if (pTable->lastCols[j].bytes > 0) {
|
||||
void* value = pTable->lastCols[j].pData;
|
||||
switch (pColInfo->info.type) {
|
||||
case TSDB_DATA_TYPE_BINARY:
|
||||
case TSDB_DATA_TYPE_NCHAR:
|
||||
memcpy(pData, value, varDataTLen(value));
|
||||
break;
|
||||
case TSDB_DATA_TYPE_NULL:
|
||||
case TSDB_DATA_TYPE_BOOL:
|
||||
case TSDB_DATA_TYPE_TINYINT:
|
||||
case TSDB_DATA_TYPE_UTINYINT:
|
||||
*(uint8_t *)pData = *(uint8_t *)value;
|
||||
break;
|
||||
case TSDB_DATA_TYPE_SMALLINT:
|
||||
case TSDB_DATA_TYPE_USMALLINT:
|
||||
*(uint16_t *)pData = *(uint16_t *)value;
|
||||
break;
|
||||
case TSDB_DATA_TYPE_INT:
|
||||
case TSDB_DATA_TYPE_UINT:
|
||||
*(uint32_t *)pData = *(uint32_t *)value;
|
||||
break;
|
||||
case TSDB_DATA_TYPE_BIGINT:
|
||||
case TSDB_DATA_TYPE_UBIGINT:
|
||||
*(uint64_t *)pData = *(uint64_t *)value;
|
||||
break;
|
||||
case TSDB_DATA_TYPE_FLOAT:
|
||||
SET_FLOAT_PTR(pData, value);
|
||||
break;
|
||||
case TSDB_DATA_TYPE_DOUBLE:
|
||||
SET_DOUBLE_PTR(pData, value);
|
||||
break;
|
||||
case TSDB_DATA_TYPE_TIMESTAMP:
|
||||
if (pColInfo->info.colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
|
||||
priKey = tdGetKey(*(TKEY *)value);
|
||||
priIdx = i;
|
||||
|
||||
i++;
|
||||
j++;
|
||||
continue;
|
||||
} else {
|
||||
*(TSKEY *)pData = *(TSKEY *)value;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
memcpy(pData, value, pColInfo->info.bytes);
|
||||
}
|
||||
|
||||
for (int32_t n = 0; n < tgNumOfCols; ++n) {
|
||||
if (n == i) {
|
||||
continue;
|
||||
}
|
||||
|
||||
pColInfo = taosArrayGet(pQueryHandle->pColumns, n);
|
||||
pData = (char*)pColInfo->pData + numOfRows * pColInfo->info.bytes;
|
||||
|
||||
if (pColInfo->info.colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
|
||||
*(TSKEY *)pData = pTable->lastCols[j].ts;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (pColInfo->info.type == TSDB_DATA_TYPE_BINARY || pColInfo->info.type == TSDB_DATA_TYPE_NCHAR) {
|
||||
setVardataNull(pData, pColInfo->info.type);
|
||||
} else {
|
||||
setNull(pData, pColInfo->info.type, pColInfo->info.bytes);
|
||||
}
|
||||
}
|
||||
|
||||
numOfRows++;
|
||||
assert(numOfRows < pQueryHandle->outputCapacity);
|
||||
}
|
||||
|
||||
i++;
|
||||
j++;
|
||||
}
|
||||
|
||||
// leave the real ts column as the last row, because last function only (not stable) use the last row as res
|
||||
if (priKey != TSKEY_INITIAL_VAL) {
|
||||
pColInfo = taosArrayGet(pQueryHandle->pColumns, priIdx);
|
||||
pData = (char*)pColInfo->pData + numOfRows * pColInfo->info.bytes;
|
||||
|
||||
*(TSKEY *)pData = priKey;
|
||||
|
||||
for (int32_t n = 0; n < tgNumOfCols; ++n) {
|
||||
if (n == priIdx) {
|
||||
continue;
|
||||
}
|
||||
|
||||
pColInfo = taosArrayGet(pQueryHandle->pColumns, n);
|
||||
pData = (char*)pColInfo->pData + numOfRows * pColInfo->info.bytes;
|
||||
|
||||
assert (pColInfo->info.colId != PRIMARYKEY_TIMESTAMP_COL_INDEX);
|
||||
|
||||
if (pColInfo->info.type == TSDB_DATA_TYPE_BINARY || pColInfo->info.type == TSDB_DATA_TYPE_NCHAR) {
|
||||
setVardataNull(pData, pColInfo->info.type);
|
||||
} else {
|
||||
setNull(pData, pColInfo->info.type, pColInfo->info.bytes);
|
||||
}
|
||||
}
|
||||
|
||||
numOfRows++;
|
||||
}
|
||||
|
||||
if (numOfRows > 0) {
|
||||
cur->rows = numOfRows;
|
||||
cur->mixBlock = true;
|
||||
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
|
||||
static bool loadDataBlockFromTableSeq(STsdbQueryHandle* pQueryHandle) {
|
||||
size_t numOfTables = taosArrayGetSize(pQueryHandle->pTableCheckInfo);
|
||||
assert(numOfTables > 0);
|
||||
|
@ -2496,8 +2672,12 @@ bool tsdbNextDataBlock(TsdbQueryHandleT pHandle) {
|
|||
int64_t stime = taosGetTimestampUs();
|
||||
int64_t elapsedTime = stime;
|
||||
|
||||
if (pQueryHandle->type == TSDB_QUERY_TYPE_LAST && pQueryHandle->cachelastrow) {
|
||||
return loadCachedLastRow(pQueryHandle);
|
||||
if (pQueryHandle->type == TSDB_QUERY_TYPE_LAST) {
|
||||
if (pQueryHandle->cachelastrow == 1) {
|
||||
return loadCachedLastRow(pQueryHandle);
|
||||
} else if (pQueryHandle->cachelastrow == 2) {
|
||||
return loadCachedLast(pQueryHandle);
|
||||
}
|
||||
}
|
||||
|
||||
if (pQueryHandle->loadType == BLOCK_LOAD_TABLE_SEQ_ORDER) {
|
||||
|
@ -2695,6 +2875,10 @@ int32_t tsdbGetCachedLastRow(STable* pTable, SDataRow* pRes, TSKEY* lastKey) {
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
bool isTsdbCacheLastRow(TsdbQueryHandleT* pQueryHandle) {
|
||||
return ((STsdbQueryHandle *)pQueryHandle)->cachelastrow > 0;
|
||||
}
|
||||
|
||||
int32_t checkForCachedLastRow(STsdbQueryHandle* pQueryHandle, STableGroupInfo *groupList) {
|
||||
assert(pQueryHandle != NULL && groupList != NULL);
|
||||
|
||||
|
@ -2706,11 +2890,15 @@ int32_t checkForCachedLastRow(STsdbQueryHandle* pQueryHandle, STableGroupInfo *g
|
|||
|
||||
STableKeyInfo* pInfo = (STableKeyInfo*)taosArrayGet(group, 0);
|
||||
|
||||
int32_t code = tsdbGetCachedLastRow(pInfo->pTable, &pRow, &key);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
pQueryHandle->cachelastrow = false;
|
||||
} else {
|
||||
pQueryHandle->cachelastrow = (pRow != NULL);
|
||||
int32_t code = 0;
|
||||
|
||||
if (((STable*)pInfo->pTable)->lastRow) {
|
||||
code = tsdbGetCachedLastRow(pInfo->pTable, &pRow, &key);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
pQueryHandle->cachelastrow = 0;
|
||||
} else {
|
||||
pQueryHandle->cachelastrow = 1;
|
||||
}
|
||||
}
|
||||
|
||||
// update the tsdb query time range
|
||||
|
@ -2724,6 +2912,26 @@ int32_t checkForCachedLastRow(STsdbQueryHandle* pQueryHandle, STableGroupInfo *g
|
|||
return code;
|
||||
}
|
||||
|
||||
int32_t checkForCachedLast(STsdbQueryHandle* pQueryHandle) {
|
||||
assert(pQueryHandle != NULL);
|
||||
|
||||
int32_t code = 0;
|
||||
|
||||
if (pQueryHandle->pTsdb && atomic_load_8(&pQueryHandle->pTsdb->hasCachedLastColumn)){
|
||||
pQueryHandle->cachelastrow = 2;
|
||||
}
|
||||
|
||||
// update the tsdb query time range
|
||||
if (pQueryHandle->cachelastrow) {
|
||||
pQueryHandle->window = TSWINDOW_INITIALIZER;
|
||||
pQueryHandle->checkFiles = false;
|
||||
pQueryHandle->activeIndex = -1; // start from -1
|
||||
}
|
||||
|
||||
return code;
|
||||
}
|
||||
|
||||
|
||||
STimeWindow updateLastrowForEachGroup(STableGroupInfo *groupList) {
|
||||
STimeWindow window = {INT64_MAX, INT64_MIN};
|
||||
|
||||
|
|
|
@ -148,6 +148,7 @@ int32_t taosHashGetMaxOverflowLinkLength(const SHashObj *pHashObj);
size_t taosHashGetMemSize(const SHashObj *pHashObj);

void *taosHashIterate(SHashObj *pHashObj, void *p);

void taosHashCancelIterate(SHashObj *pHashObj, void *p);

#ifdef __cplusplus

@ -50,7 +50,15 @@ void* taosArrayInit(size_t size, size_t elemSize);
 * @param nEles
 * @return
 */
void *taosArrayPushBatch(SArray *pArray, const void *pData, int nEles);
void *taosArrayAddBatch(SArray *pArray, const void *pData, int nEles);

/**
 * add all element from the source array list into the destination
 * @param pArray
 * @param pInput
 * @return
 */
void* taosArrayAddAll(SArray* pArray, const SArray* pInput);

/**
 *

@ -59,7 +67,7 @@ void *taosArrayPushBatch(SArray *pArray, const void *pData, int nEles);
 * @return
 */
static FORCE_INLINE void* taosArrayPush(SArray* pArray, const void* pData) {
  return taosArrayPushBatch(pArray, pData, 1);
  return taosArrayAddBatch(pArray, pData, 1);
}

/**

@ -37,8 +37,6 @@ typedef struct SStrToken {
  char *z;
} SStrToken;

extern const char escapeChar[];

/**
 * check if it is a number or not
 * @param pToken

@ -47,8 +45,6 @@ extern const char escapeChar[];
#define isNumber(tk) \
  ((tk)->type == TK_INTEGER || (tk)->type == TK_FLOAT || (tk)->type == TK_HEX || (tk)->type == TK_BIN)

#define GET_ESCAPE_CHAR(c) (escapeChar[(uint8_t)(c)])

/**
 * tokenizer for sql string
 * @param z

@ -56,7 +56,7 @@ static int32_t taosArrayResize(SArray* pArray) {
  return 0;
}

void* taosArrayPushBatch(SArray* pArray, const void* pData, int nEles) {
void* taosArrayAddBatch(SArray* pArray, const void* pData, int nEles) {
  if (pArray == NULL || pData == NULL) {
    return NULL;
  }

@ -82,6 +82,10 @@ void* taosArrayPushBatch(SArray* pArray, const void* pData, int nEles) {
  return dst;
}

void* taosArrayAddAll(SArray* pArray, const SArray* pInput) {
  return taosArrayAddBatch(pArray, pInput->pData, (int32_t) taosArrayGetSize(pInput));
}

void* taosArrayPop(SArray* pArray) {
  assert( pArray != NULL );

@ -613,7 +613,7 @@ void doCleanupDataCache(SCacheObj *pCacheObj) {

  // todo memory leak if there are object with refcount greater than 0 in hash table?
  taosHashCleanup(pCacheObj->pHashTable);
  taosTrashcanEmpty(pCacheObj, true);
  taosTrashcanEmpty(pCacheObj, false);

  __cache_lock_destroy(pCacheObj);

@ -83,7 +83,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_REF_ALREADY_EXIST, "Ref is already there"
TAOS_DEFINE_ERROR(TSDB_CODE_REF_NOT_EXIST, "Ref is not there")

//client
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_SQL, "Invalid SQL statement")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_OPERATION, "Invalid operation")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_QHANDLE, "Invalid qhandle")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_TIME_STAMP, "Invalid combination of client/service time")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_VALUE, "Invalid value in client")

@ -232,18 +232,6 @@ static const char isIdChar[] = {
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 7x */
};

const char escapeChar[] = {
    /* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */
    0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, /* 0x */
    0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F, /* 1x */
    0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2A, 0x2B, 0x2C, 0x2D, 0x2E, 0x2F, /* 2x */
    0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A, 0x3B, 0x3C, 0x3D, 0x3E, 0x3F, /* 3x */
    0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F,/* 4x */
    0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5A, 0x5B, 0x5C, 0x5D, 0x5E, 0x5F,/* 5x */
    0x60, 0x07, 0x08, 0x63, 0x64, 0x65, 0x0C, 0x67, 0x68, 0x69, 0x6A, 0x6B, 0x6C, 0x6D, 0x0A, 0x6F,/* 6x */
    0x70, 0x71, 0x0D, 0x73, 0x09, 0x75, 0x0B, 0x77, 0x78, 0x79, 0x7A, 0x7B, 0x7C, 0x7D, 0x7E, 0x7F,/* 7x */
};

static void* keywordHashTable = NULL;

static void doInitKeywordsTable(void) {

@ -593,7 +581,6 @@ SStrToken tscReplaceStrToken(char **str, SStrToken *token, const char* newToken)
  return ntoken;
}


SStrToken tStrGetToken(char* str, int32_t* i, bool isPrevOptr) {
  SStrToken t0 = {0};

@ -454,7 +454,11 @@ void vnodeDestroy(SVnodeObj *pVnode) {
  }

  if (pVnode->tsdb) {
    code = tsdbCloseRepo(pVnode->tsdb, 1);
    // the deleted vnode does not need to commit, so as to speed up the deletion
    int toCommit = 1;
    if (pVnode->dropped) toCommit = 0;

    code = tsdbCloseRepo(pVnode->tsdb, toCommit);
    pVnode->tsdb = NULL;
  }

@ -357,7 +357,7 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {

  // kill current query and free corresponding resources.
  if (pRetrieve->free == 1) {
    vWarn("vgId:%d, QInfo:%"PRIu64 "-%p, retrieve msg received to kill query and free qhandle", pVnode->vgId, pRetrieve->qId, *handle);
    vWarn("vgId:%d, QInfo:%"PRIx64 "-%p, retrieve msg received to kill query and free qhandle", pVnode->vgId, pRetrieve->qId, *handle);
    qKillQuery(*handle);
    qReleaseQInfo(pVnode->qMgmt, (void **)&handle, true);

@ -126,11 +126,16 @@ void vnodeStopSyncFile(int32_t vgId, uint64_t fversion) {
}

void vnodeConfirmForard(int32_t vgId, void *wparam, int32_t code) {
  void *pVnode = vnodeAcquire(vgId);
  SVnodeObj *pVnode = vnodeAcquire(vgId);
  if (pVnode == NULL) {
    vError("vgId:%d, vnode not found while confirm forward", vgId);
  }

  if (code == TSDB_CODE_SYN_CONFIRM_EXPIRED && pVnode->status == TAOS_VN_STATUS_CLOSING) {
    vDebug("vgId:%d, db:%s, vnode is closing while confirm forward", vgId, pVnode->db);
    code = TSDB_CODE_VND_IS_CLOSING;
  }

  dnodeSendRpcVWriteRsp(pVnode, wparam, code);
  vnodeRelease(pVnode);
}

@ -5,6 +5,8 @@ IF (TD_LINUX)
AUX_SOURCE_DIRECTORY(. SRC)
ADD_EXECUTABLE(demo apitest.c)
TARGET_LINK_LIBRARIES(demo taos_static trpc tutil pthread )
ADD_EXECUTABLE(subscribe subscribe.c)
TARGET_LINK_LIBRARIES(subscribe taos_static trpc tutil pthread )
ADD_EXECUTABLE(epoll epoll.c)
TARGET_LINK_LIBRARIES(epoll taos_static trpc tutil pthread )
ENDIF ()

@ -7,7 +7,6 @@
|
|||
#include <taos.h>
|
||||
#include <unistd.h>
|
||||
|
||||
|
||||
static void prepare_data(TAOS* taos) {
|
||||
TAOS_RES *result;
|
||||
result = taos_query(taos, "drop database if exists test;");
|
||||
|
@ -69,7 +68,6 @@ static void prepare_data(TAOS* taos) {
|
|||
usleep(1000000);
|
||||
}
|
||||
|
||||
|
||||
static int print_result(TAOS_RES* res, int blockFetch) {
|
||||
TAOS_ROW row = NULL;
|
||||
int num_fields = taos_num_fields(res);
|
||||
|
@ -99,7 +97,6 @@ static int print_result(TAOS_RES* res, int blockFetch) {
|
|||
return nRows;
|
||||
}
|
||||
|
||||
|
||||
static void check_row_count(int line, TAOS_RES* res, int expected) {
|
||||
int actual = print_result(res, expected % 2);
|
||||
if (actual != expected) {
|
||||
|
@ -109,7 +106,6 @@ static void check_row_count(int line, TAOS_RES* res, int expected) {
|
|||
}
|
||||
}
|
||||
|
||||
|
||||
static void verify_query(TAOS* taos) {
|
||||
prepare_data(taos);
|
||||
|
||||
|
@ -153,7 +149,6 @@ static void verify_query(TAOS* taos) {
|
|||
taos_free_result(res);
|
||||
}
|
||||
|
||||
|
||||
void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
|
||||
int rows = print_result(res, *(int*)param);
|
||||
printf("%d rows consumed in subscribe_callback\n", rows);
|
||||
|
@ -235,10 +230,10 @@ static void verify_subscribe(TAOS* taos) {
|
|||
taos_unsubscribe(tsub, 0);
|
||||
}
|
||||
|
||||
|
||||
void verify_prepare(TAOS* taos) {
|
||||
TAOS_RES* result = taos_query(taos, "drop database if exists test;");
|
||||
taos_free_result(result);
|
||||
|
||||
usleep(100000);
|
||||
result = taos_query(taos, "create database test;");
|
||||
|
||||
|
@ -248,6 +243,7 @@ void verify_prepare(TAOS* taos) {
|
|||
taos_free_result(result);
|
||||
return;
|
||||
}
|
||||
|
||||
taos_free_result(result);
|
||||
|
||||
usleep(100000);
|
||||
|
@ -369,6 +365,7 @@ void verify_prepare(TAOS* taos) {
|
|||
taos_stmt_add_batch(stmt);
|
||||
}
|
||||
if (taos_stmt_execute(stmt) != 0) {
|
||||
taos_stmt_close(stmt);
|
||||
printf("\033[31mfailed to execute insert statement.\033[0m\n");
|
||||
return;
|
||||
}
|
||||
|
@ -381,6 +378,7 @@ void verify_prepare(TAOS* taos) {
|
|||
v.v2 = 15;
|
||||
taos_stmt_bind_param(stmt, params + 2);
|
||||
if (taos_stmt_execute(stmt) != 0) {
|
||||
taos_stmt_close(stmt);
|
||||
printf("\033[31mfailed to execute select statement.\033[0m\n");
|
||||
return;
|
||||
}
|
||||
|
|
|
@ -0,0 +1,309 @@
|
|||
def pre_test(){
|
||||
|
||||
sh '''
|
||||
sudo rmtaos||echo 'no taosd installed'
|
||||
'''
|
||||
sh '''
|
||||
cd ${WKC}
|
||||
git reset --hard
|
||||
git checkout $BRANCH_NAME
|
||||
git pull
|
||||
git submodule update
|
||||
cd ${WK}
|
||||
git reset --hard
|
||||
git checkout $BRANCH_NAME
|
||||
git pull
|
||||
export TZ=Asia/Harbin
|
||||
date
|
||||
rm -rf ${WK}/debug
|
||||
mkdir debug
|
||||
cd debug
|
||||
cmake .. > /dev/null
|
||||
make > /dev/null
|
||||
make install > /dev/null
|
||||
pip3 install ${WKC}/src/connector/python/linux/python3/
|
||||
'''
|
||||
return 1
|
||||
}
|
||||
pipeline {
|
||||
agent none
|
||||
environment{
|
||||
|
||||
WK = '/var/lib/jenkins/workspace/TDinternal'
|
||||
WKC= '/var/lib/jenkins/workspace/TDinternal/community'
|
||||
}
|
||||
|
||||
stages {
|
||||
stage('Parallel test stage') {
|
||||
parallel {
|
||||
stage('pytest') {
|
||||
agent{label 'slam1'}
|
||||
steps {
|
||||
pre_test()
|
||||
sh '''
|
||||
cd ${WKC}/tests
|
||||
find pytest -name '*'sql|xargs rm -rf
|
||||
./test-all.sh pytest
|
||||
date'''
|
||||
}
|
||||
}
|
||||
stage('test_b1') {
|
||||
agent{label 'slam2'}
|
||||
steps {
|
||||
pre_test()
|
||||
|
||||
sh '''
|
||||
cd ${WKC}/tests
|
||||
./test-all.sh b1
|
||||
date'''
|
||||
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
stage('test_crash_gen') {
|
||||
agent{label "slam3"}
|
||||
steps {
|
||||
pre_test()
|
||||
sh '''
|
||||
cd ${WKC}/tests/pytest
|
||||
'''
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WKC}/tests/pytest
|
||||
./crash_gen.sh -a -p -t 4 -s 2000
|
||||
'''
|
||||
}
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WKC}/tests/pytest
|
||||
rm -rf /var/lib/taos/*
|
||||
rm -rf /var/log/taos/*
|
||||
./handle_crash_gen_val_log.sh
|
||||
'''
|
||||
}
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WKC}/tests/pytest
|
||||
rm -rf /var/lib/taos/*
|
||||
rm -rf /var/log/taos/*
|
||||
./handle_taosd_val_log.sh
|
||||
'''
|
||||
}
|
||||
|
||||
sh'''
|
||||
systemctl start taosd
|
||||
sleep 10
|
||||
'''
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WKC}/tests/gotest
|
||||
bash batchtest.sh
|
||||
'''
|
||||
}
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WKC}/tests/examples/python/PYTHONConnectorChecker
|
||||
python3 PythonChecker.py
|
||||
'''
|
||||
}
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WKC}/tests/examples/JDBC/JDBCDemo/
|
||||
mvn clean package assembly:single -DskipTests >/dev/null
|
||||
java -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
|
||||
'''
|
||||
}
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WKC}/src/connector/jdbc
|
||||
mvn clean package -Dmaven.test.skip=true >/dev/null
|
||||
cd ${WKC}/tests/examples/JDBC/JDBCDemo/
|
||||
java --class-path=../../../../src/connector/jdbc/target:$JAVA_HOME/jre/lib/ext -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
|
||||
'''
|
||||
}
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cp -rf ${WKC}/tests/examples/nodejs ${JENKINS_HOME}/workspace/
|
||||
cd ${JENKINS_HOME}/workspace/nodejs
|
||||
node nodejsChecker.js host=localhost
|
||||
'''
|
||||
}
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${JENKINS_HOME}/workspace/C#NET/src/CheckC#
|
||||
dotnet run
|
||||
'''
|
||||
}
|
||||
sh '''
|
||||
systemctl stop taosd
|
||||
cd ${WKC}/tests
|
||||
./test-all.sh b2
|
||||
date
|
||||
'''
|
||||
sh '''
|
||||
cd ${WKC}/tests
|
||||
./test-all.sh full unit
|
||||
date'''
|
||||
}
|
||||
}
|
||||
|
||||
stage('test_valgrind') {
|
||||
agent{label "slam4"}
|
||||
|
||||
steps {
|
||||
pre_test()
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WKC}/tests/pytest
|
||||
nohup taosd >/dev/null &
|
||||
sleep 10
|
||||
python3 concurrent_inquiry.py -c 1
|
||||
|
||||
'''
|
||||
}
|
||||
sh '''
|
||||
cd ${WKC}/tests
|
||||
./test-all.sh full jdbc
|
||||
date'''
|
||||
sh '''
|
||||
cd ${WKC}/tests/pytest
|
||||
./valgrind-test.sh 2>&1 > mem-error-out.log
|
||||
./handle_val_log.sh
|
||||
|
||||
date
|
||||
cd ${WKC}/tests
|
||||
./test-all.sh b3
|
||||
date'''
|
||||
sh '''
|
||||
date
|
||||
cd ${WKC}/tests
|
||||
./test-all.sh full example
|
||||
date'''
|
||||
}
|
||||
}
|
||||
|
||||
stage('arm64_build'){
|
||||
agent{label 'arm64'}
|
||||
steps{
|
||||
sh '''
|
||||
cd ${WK}
|
||||
git fetch
|
||||
git checkout develop
|
||||
git pull
|
||||
cd ${WKC}
|
||||
git fetch
|
||||
git checkout develop
|
||||
git pull
|
||||
git submodule update
|
||||
cd ${WKC}/packaging
|
||||
./release.sh -v cluster -c aarch64 -n 2.0.0.0 -m 2.0.0.0
|
||||
|
||||
'''
|
||||
}
|
||||
}
|
||||
stage('arm32_build'){
|
||||
agent{label 'arm32'}
|
||||
steps{
|
||||
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
|
||||
sh '''
|
||||
cd ${WK}
|
||||
git fetch
|
||||
git checkout develop
|
||||
git pull
|
||||
cd ${WKC}
|
||||
git fetch
|
||||
git checkout develop
|
||||
git pull
|
||||
git submodule update
|
||||
cd ${WKC}/packaging
|
||||
./release.sh -v cluster -c aarch32 -n 2.0.0.0 -m 2.0.0.0
|
||||
|
||||
'''
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
post {
|
||||
success {
|
||||
emailext (
|
||||
subject: "SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
|
||||
body: '''<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
</head>
|
||||
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
|
||||
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
|
||||
<tr>
|
||||
<td><br />
|
||||
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
|
||||
<hr size="2" width="100%" align="center" /></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>
|
||||
<ul>
|
||||
<div style="font-size:18px">
|
||||
<li>构建名称>>分支:${PROJECT_NAME}</li>
|
||||
<li>构建结果:<span style="color:green"> Successful </span></li>
|
||||
<li>构建编号:${BUILD_NUMBER}</li>
|
||||
<li>触发用户:${CAUSE}</li>
|
||||
<li>变更概要:${CHANGES}</li>
|
||||
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
|
||||
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
|
||||
<li>变更集:${JELLY_SCRIPT}</li>
|
||||
</div>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
</table></font>
|
||||
</body>
|
||||
</html>''',
|
||||
to: "yqliu@taosdata.com,pxiao@taosdata.com",
|
||||
from: "support@taosdata.com"
|
||||
)
|
||||
}
|
||||
failure {
|
||||
emailext (
|
||||
subject: "FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
|
||||
body: '''<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
</head>
|
||||
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
|
||||
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
|
||||
<tr>
|
||||
<td><br />
|
||||
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
|
||||
<hr size="2" width="100%" align="center" /></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td>
|
||||
<ul>
|
||||
<div style="font-size:18px">
|
||||
<li>构建名称>>分支:${PROJECT_NAME}</li>
|
||||
<li>构建结果:<span style="color:green"> Successful </span></li>
|
||||
<li>构建编号:${BUILD_NUMBER}</li>
|
||||
<li>触发用户:${CAUSE}</li>
|
||||
<li>变更概要:${CHANGES}</li>
|
||||
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
|
||||
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
|
||||
<li>变更集:${JELLY_SCRIPT}</li>
|
||||
</div>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
</table></font>
|
||||
</body>
|
||||
</html>''',
|
||||
to: "yqliu@taosdata.com,pxiao@taosdata.com",
|
||||
from: "support@taosdata.com"
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
|
@ -64,18 +64,25 @@ function runQueryPerfTest {
  [ -f $PERFORMANCE_TEST_REPORT ] && rm $PERFORMANCE_TEST_REPORT
  nohup $WORK_DIR/TDengine/debug/build/bin/taosd -c /etc/taosperf/ > /dev/null 2>&1 &
  echoInfo "Wait TDengine to start"
  sleep 300
  sleep 60
  echoInfo "Run Performance Test"
  cd $WORK_DIR/TDengine/tests/pytest

  python3 query/queryPerformance.py -c $LOCAL_COMMIT | tee -a $PERFORMANCE_TEST_REPORT

  mkdir -p /var/lib/perf/
  mkdir -p /var/log/perf/
  rm -rf /var/lib/perf/*
  rm -rf /var/log/perf/*
  nohup $WORK_DIR/TDengine/debug/build/bin/taosd -c /etc/perf/ > /dev/null 2>&1 &
  echoInfo "Wait TDengine to start"
  sleep 10
  echoInfo "Run Performance Test"
  cd $WORK_DIR/TDengine/tests/pytest

  python3 insert/insertFromCSVPerformance.py -c $LOCAL_COMMIT | tee -a $PERFORMANCE_TEST_REPORT

  python3 tools/taosdemoPerformance.py -c $LOCAL_COMMIT | tee -a $PERFORMANCE_TEST_REPORT

  #python3 perfbenchmark/joinPerformance.py | tee -a $PERFORMANCE_TEST_REPORT

}

@ -22,7 +22,7 @@ from queue import Queue, Empty
|
|||
from .shared.config import Config
|
||||
from .shared.db import DbTarget, DbConn
|
||||
from .shared.misc import Logging, Helper, CrashGenError, Status, Progress, Dice
|
||||
from .shared.types import DirPath
|
||||
from .shared.types import DirPath, IpcStream
|
||||
|
||||
# from crash_gen.misc import CrashGenError, Dice, Helper, Logging, Progress, Status
|
||||
# from crash_gen.db import DbConn, DbTarget
|
||||
|
@ -177,13 +177,12 @@ quorum 2
|
|||
return "127.0.0.1"
|
||||
|
||||
def getServiceCmdLine(self): # to start the instance
|
||||
cmdLine = []
|
||||
if Config.getConfig().track_memory_leaks:
|
||||
Logging.info("Invoking VALGRIND on service...")
|
||||
cmdLine = ['valgrind', '--leak-check=yes']
|
||||
# TODO: move "exec -c" into Popen(), we can both "use shell" and NOT fork so ask to lose kill control
|
||||
cmdLine += ["exec " + self.getExecFile(), '-c', self.getCfgDir()] # used in subproce.Popen()
|
||||
return cmdLine
|
||||
return ['exec /usr/bin/valgrind', '--leak-check=yes', self.getExecFile(), '-c', self.getCfgDir()]
|
||||
else:
|
||||
# TODO: move "exec -c" into Popen(), we can both "use shell" and NOT fork so ask to lose kill control
|
||||
return ["exec " + self.getExecFile(), '-c', self.getCfgDir()] # used in subproce.Popen()
|
||||
|
||||
def _getDnodes(self, dbc):
|
||||
dbc.query("show dnodes")
|
||||
|
@ -281,16 +280,16 @@ class TdeSubProcess:
|
|||
return '[TdeSubProc: pid = {}, status = {}]'.format(
|
||||
self.getPid(), self.getStatus() )
|
||||
|
||||
def getStdOut(self) -> BinaryIO :
|
||||
def getIpcStdOut(self) -> IpcStream :
|
||||
if self._popen.universal_newlines : # alias of text_mode
|
||||
raise CrashGenError("We need binary mode for STDOUT IPC")
|
||||
# Logging.info("Type of stdout is: {}".format(type(self._popen.stdout)))
|
||||
return typing.cast(BinaryIO, self._popen.stdout)
|
||||
return typing.cast(IpcStream, self._popen.stdout)
|
||||
|
||||
def getStdErr(self) -> BinaryIO :
|
||||
def getIpcStdErr(self) -> IpcStream :
|
||||
if self._popen.universal_newlines : # alias of text_mode
|
||||
raise CrashGenError("We need binary mode for STDERR IPC")
|
||||
return typing.cast(BinaryIO, self._popen.stderr)
|
||||
return typing.cast(IpcStream, self._popen.stderr)
|
||||
|
||||
# Now it's always running, since we matched the life cycle
|
||||
# def isRunning(self):
|
||||
|
@ -302,11 +301,6 @@ class TdeSubProcess:
|
|||
def _start(self, cmdLine) -> Popen :
|
||||
ON_POSIX = 'posix' in sys.builtin_module_names
|
||||
|
||||
# Sanity check
|
||||
# if self.subProcess: # already there
|
||||
# raise RuntimeError("Corrupt process state")
|
||||
|
||||
|
||||
# Prepare environment variables for coverage information
|
||||
# Ref: https://stackoverflow.com/questions/2231227/python-subprocess-popen-with-a-modified-environment
|
||||
myEnv = os.environ.copy()
|
||||
|
@ -314,9 +308,8 @@ class TdeSubProcess:
|
|||
|
||||
# print(myEnv)
|
||||
# print("Starting TDengine with env: ", myEnv.items())
|
||||
# print("Starting TDengine via Shell: {}".format(cmdLineStr))
|
||||
print("Starting TDengine: {}".format(cmdLine))
|
||||
|
||||
# useShell = True # Needed to pass environments into it
|
||||
return Popen(
|
||||
' '.join(cmdLine), # ' '.join(cmdLine) if useShell else cmdLine,
|
||||
shell=True, # Always use shell, since we need to pass ENV vars
|
||||
|
@ -732,19 +725,19 @@ class ServiceManagerThread:
|
|||
self._ipcQueue = Queue() # type: Queue
|
||||
self._thread = threading.Thread( # First thread captures server OUTPUT
|
||||
target=self.svcOutputReader,
|
||||
args=(subProc.getStdOut(), self._ipcQueue, logDir))
|
||||
args=(subProc.getIpcStdOut(), self._ipcQueue, logDir))
|
||||
self._thread.daemon = True # thread dies with the program
|
||||
self._thread.start()
|
||||
time.sleep(0.01)
|
||||
if not self._thread.is_alive(): # What happened?
|
||||
Logging.info("Failed to started process to monitor STDOUT")
|
||||
Logging.info("Failed to start process to monitor STDOUT")
|
||||
self.stop()
|
||||
raise CrashGenError("Failed to start thread to monitor STDOUT")
|
||||
Logging.info("Successfully started process to monitor STDOUT")
|
||||
|
||||
self._thread2 = threading.Thread( # 2nd thread captures server ERRORs
|
||||
target=self.svcErrorReader,
|
||||
args=(subProc.getStdErr(), self._ipcQueue, logDir))
|
||||
args=(subProc.getIpcStdErr(), self._ipcQueue, logDir))
|
||||
self._thread2.daemon = True # thread dies with the program
|
||||
self._thread2.start()
|
||||
time.sleep(0.01)
|
||||
|
@ -887,14 +880,19 @@ class ServiceManagerThread:
|
|||
print("\nNon-UTF8 server output: {}\n".format(bChunk.decode('cp437')))
|
||||
return None
|
||||
|
||||
def _textChunkGenerator(self, streamIn: BinaryIO, logDir: str, logFile: str
|
||||
def _textChunkGenerator(self, streamIn: IpcStream, logDir: str, logFile: str
|
||||
) -> Generator[TextChunk, None, None]:
|
||||
'''
|
||||
Take an input stream with binary data, produced a generator of decoded
|
||||
"text chunks", and also save the original binary data in a log file.
|
||||
Take an input stream with binary data (likely from Popen), produced a generator of decoded
|
||||
"text chunks".
|
||||
|
||||
Side effect: it also save the original binary data in a log file.
|
||||
'''
|
||||
os.makedirs(logDir, exist_ok=True)
|
||||
logF = open(os.path.join(logDir, logFile), 'wb')
|
||||
if logF is None:
|
||||
Logging.error("Failed to open log file (binary write): {}/{}".format(logDir, logFile))
|
||||
return
|
||||
for bChunk in iter(streamIn.readline, b''):
|
||||
logF.write(bChunk) # Write to log file immediately
|
||||
tChunk = self._decodeBinaryChunk(bChunk) # decode
|
||||
|
@ -902,14 +900,14 @@ class ServiceManagerThread:
|
|||
yield tChunk # TODO: split into actual text lines
|
||||
|
||||
# At the end...
|
||||
streamIn.close() # Close the stream
|
||||
logF.close() # Close the output file
|
||||
streamIn.close() # Close the incoming stream
|
||||
logF.close() # Close the log file
|
||||
|
||||
def svcOutputReader(self, stdOut: BinaryIO, queue, logDir: str):
|
||||
def svcOutputReader(self, ipcStdOut: IpcStream, queue, logDir: str):
|
||||
'''
|
||||
The infinite routine that processes the STDOUT stream for the sub process being managed.
|
||||
|
||||
:param stdOut: the IO stream object used to fetch the data from
|
||||
:param ipcStdOut: the IO stream object used to fetch the data from
|
||||
:param queue: the queue where we dump the roughly parsed chunk-by-chunk text data
|
||||
:param logDir: where we should dump a verbatim output file
|
||||
'''
|
||||
|
@ -917,7 +915,7 @@ class ServiceManagerThread:
|
|||
# Important Reference: https://stackoverflow.com/questions/375427/non-blocking-read-on-a-subprocess-pipe-in-python
|
||||
# print("This is the svcOutput Reader...")
|
||||
# stdOut.readline() # Skip the first output? TODO: remove?
|
||||
for tChunk in self._textChunkGenerator(stdOut, logDir, 'stdout.log') :
|
||||
for tChunk in self._textChunkGenerator(ipcStdOut, logDir, 'stdout.log') :
|
||||
queue.put(tChunk) # tChunk garanteed not to be None
|
||||
self._printProgress("_i")
|
||||
|
||||
|
@ -940,12 +938,12 @@ class ServiceManagerThread:
|
|||
Logging.info("EOF found TDengine STDOUT, marking the process as terminated")
|
||||
self.setStatus(Status.STATUS_STOPPED)
|
||||
|
||||
def svcErrorReader(self, stdErr: BinaryIO, queue, logDir: str):
|
||||
def svcErrorReader(self, ipcStdErr: IpcStream, queue, logDir: str):
|
||||
# os.makedirs(logDir, exist_ok=True)
|
||||
# logFile = os.path.join(logDir,'stderr.log')
|
||||
# fErr = open(logFile, 'wb')
|
||||
# for line in iter(err.readline, b''):
|
||||
for tChunk in self._textChunkGenerator(stdErr, logDir, 'stderr.log') :
|
||||
for tChunk in self._textChunkGenerator(ipcStdErr, logDir, 'stderr.log') :
|
||||
queue.put(tChunk) # tChunk garanteed not to be None
|
||||
# fErr.write(line)
|
||||
Logging.info("TDengine STDERR: {}".format(tChunk))
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
from typing import Any, List, Dict, NewType
|
||||
from typing import Any, BinaryIO, List, Dict, NewType
|
||||
from enum import Enum
|
||||
|
||||
DirPath = NewType('DirPath', str)
|
||||
|
@ -26,3 +26,5 @@ class TdDataType(Enum):
|
|||
|
||||
TdColumns = Dict[str, TdDataType]
|
||||
TdTags = Dict[str, TdDataType]
|
||||
|
||||
IpcStream = NewType('IpcStream', BinaryIO)
|
|
@ -17497,3 +17497,24 @@
   obj:/usr/bin/python3.8
   fun:PyVectorcall_Call
}
{
   <insert_a_suppression_name_here>
   Memcheck:Leak
   match-leak-kinds: definite
   fun:malloc
   fun:__libc_alloc_buffer_allocate
   fun:alloc_buffer_allocate
   fun:__resolv_conf_allocate
   fun:__resolv_conf_load
   fun:__resolv_conf_get_current
   fun:__res_vinit
   fun:maybe_init
   fun:context_get
   fun:context_get
   fun:__resolv_context_get
   fun:gaih_inet.constprop.0
   fun:getaddrinfo
   fun:taosGetFqdn
   fun:taosCheckGlobalCfg
   fun:taos_init_imp
}

@ -183,7 +183,7 @@ python3 ./test.py -f stable/query_after_reset.py
# perfbenchmark
python3 ./test.py -f perfbenchmark/bug3433.py
#python3 ./test.py -f perfbenchmark/bug3589.py

python3 ./test.py -f perfbenchmark/taosdemoInsert.py

#query
python3 ./test.py -f query/filter.py

@ -31,7 +31,7 @@ class insertFromCSVPerformace:
        self.host = "127.0.0.1"
        self.user = "root"
        self.password = "taosdata"
        self.config = "/etc/taosperf"
        self.config = "/etc/perf"
        self.conn = taos.connect(
            self.host,
            self.user,

@ -0,0 +1,387 @@
|
|||
###################################################################
|
||||
# Copyright (c) 2016 by TAOS Technologies, Inc.
|
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import taos
import sys
import os
import json
import argparse
import subprocess
import datetime
import re
import time  # needed for time.sleep() in td4153insert()

from multiprocessing import cpu_count
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
from util.dnodes import TDDnode


class Taosdemo:
    def __init__(self, clearCache, dbName, keep):
        self.clearCache = clearCache
        self.dbname = dbName
        self.drop = "yes"
        self.keep = keep
        self.host = "127.0.0.1"
        self.user = "root"
        self.password = "taosdata"
        # self.config = "/etc/taosperf"
        # self.conn = taos.connect(
        #     self.host,
        #     self.user,
        #     self.password,
        #     self.config)

    # env config
    def getBuildPath(self) -> str:
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/debug/build/bin")]
                    break
        return buildPath

    def getExeToolsDir(self) -> str:
        self.debugdir = self.getBuildPath() + "/debug/build/bin"
        return self.debugdir

    def getCfgDir(self) -> str:
        self.config = self.getBuildPath() + "/sim/dnode1/cfg"
        return self.config

    # taosdemo insert file config
    def dbinfocfg(self) -> dict:
        return {
            "name": self.dbname,
            "drop": self.drop,
            "replica": 1,
            "days": 10,
            "cache": 16,
            "blocks": 8,
            "precision": "ms",
            "keep": self.keep,
            "minRows": 100,
            "maxRows": 4096,
            "comp": 2,
            "walLevel": 1,
            "cachelast": 0,
            "quorum": 1,
            "fsync": 3000,
            "update": 0
        }

    # validate the keyword arguments of column_tag_count(): keys must be
    # TDengine column types, values either an int count or a (count, len)
    # pair for binary/nchar columns
    def type_check(func):
        def wrapper(self, **kwargs):
            num_types = ["int", "float", "bigint", "tinyint", "smallint", "double"]
            str_types = ["binary", "nchar"]
            for k, v in kwargs.items():
                if k.lower() not in num_types and k.lower() not in str_types:
                    return f"args {k} type error, not allowed"
                elif not isinstance(v, (int, list, tuple)):
                    return f"value {v} type error, not allowed"
                elif k.lower() in num_types and not isinstance(v, int):
                    return f"arg {v} takes 1 positional argument must be type int "
                elif isinstance(v, (list, tuple)) and len(v) > 2:
                    return f"arg {v} takes from 1 to 2 positional arguments but more than 2 were given "
                elif isinstance(v, (list, tuple)) and [False for _ in v if not isinstance(_, int)]:
                    return f"arg {v} takes from 1 to 2 positional arguments must be type int "
                else:
                    pass
            return func(self, **kwargs)
        return wrapper

    @type_check
    def column_tag_count(self, **column_tag) -> list:
        init_column_tag = []
        for k, v in column_tag.items():
            if re.search(k, "int, float, bigint, tinyint, smallint, double", re.IGNORECASE):
                init_column_tag.append({"type": k, "count": v})
            elif re.search(k, "binary, nchar", re.IGNORECASE):
                if isinstance(v, int):
                    init_column_tag.append({"type": k, "count": v, "len": 8})
                elif len(v) == 1:
                    init_column_tag.append({"type": k, "count": v[0], "len": 8})
                else:
                    init_column_tag.append({"type": k, "count": v[0], "len": v[1]})
        return init_column_tag

    def stbcfg(self, stb: str, child_tab_count: int, rows: int, prechildtab: str, columns: dict, tags: dict) -> dict:
        return {
            "name": stb,
            "child_table_exists": "no",
            "childtable_count": child_tab_count,
            "childtable_prefix": prechildtab,
            "auto_create_table": "no",
            "batch_create_tbl_num": 10,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": rows,
            "childtable_limit": 0,
            "childtable_offset": 0,
            "rows_per_tbl": 1,
            "max_sql_len": 65480,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 10,
            "start_timestamp": f"{datetime.datetime.now():%F %X}",
            "sample_format": "csv",
            "sample_file": "./sample.csv",
            "tags_file": "",
            "columns": self.column_tag_count(**columns),
            "tags": self.column_tag_count(**tags)
        }

    def schemecfg(self, intcount=1, floatcount=0, bcount=0, tcount=0, scount=0, doublecount=0, binarycount=0, ncharcount=0):
        return {
            "INT": intcount,
            "FLOAT": floatcount,
            "BIGINT": bcount,
            "TINYINT": tcount,
            "SMALLINT": scount,
            "DOUBLE": doublecount,
            "BINARY": binarycount,
            "NCHAR": ncharcount
        }

    def insertcfg(self, db: dict, stbs: list) -> dict:
        return {
            "filetype": "insert",
            "cfgdir": self.config,
            "host": self.host,
            "port": 6030,
            "user": self.user,
            "password": self.password,
            "thread_count": cpu_count(),
            "thread_count_create_tbl": cpu_count(),
            "result_file": "/tmp/insert_res.txt",
            "confirm_parameter_prompt": "no",
            "insert_interval": 0,
            "num_of_records_per_req": 100,
            "max_sql_len": 1024000,
            "databases": [{
                "dbinfo": db,
                "super_tables": stbs
            }]
        }

    def createinsertfile(self, db: dict, stbs: list) -> str:
        date = datetime.datetime.now()
        file_create_table = f"/tmp/insert_{date:%F-%H%M}.json"

        with open(file_create_table, 'w') as f:
            json.dump(self.insertcfg(db, stbs), f)

        return file_create_table

    # taosdemo query file config
    def querysqls(self, sql: str) -> list:
        return [{"sql": sql, "result": ""}]

    def querycfg(self, sql: str) -> dict:
        return {
            "filetype": "query",
            "cfgdir": self.config,
            "host": self.host,
            "port": 6030,
            "user": self.user,
            "password": self.password,
            "confirm_parameter_prompt": "yes",
            "query_times": 10,
            "query_mode": "taosc",
            "databases": self.dbname,
            "specified_table_query": {
                "query_interval": 0,
                "concurrent": cpu_count(),
                "sqls": self.querysqls(sql)
            }
        }

    def createqueryfile(self, sql: str):
        date = datetime.datetime.now()
        file_query_table = f"/tmp/query_{date:%F-%H%M}.json"

        with open(file_query_table, "w") as f:
            json.dump(self.querycfg(sql), f)

        return file_query_table

    # Execute taosdemo, and delete temporary files when finished
    def taosdemotable(self, filepath: str, resultfile="/dev/null"):
        taosdemopath = self.getBuildPath() + "/debug/build/bin"
        with open(filepath, "r") as f:
            filetype = json.load(f)["filetype"]
        if filetype == "insert":
            taosdemo_table_cmd = f"{taosdemopath}/taosdemo -f {filepath} > {resultfile} 2>&1"
        else:
            # query configs ask for confirmation, so feed them "yes"
            taosdemo_table_cmd = f"yes | {taosdemopath}/taosdemo -f {filepath} > {resultfile} 2>&1"
        try:
            _ = subprocess.check_output(taosdemo_table_cmd, shell=True).decode("utf-8")
        except subprocess.CalledProcessError as e:
            _ = e.output

    def droptmpfile(self, filepath: str):
        drop_file_cmd = f"[ -f {filepath} ] && rm -f {filepath}"
        try:
            _ = subprocess.check_output(drop_file_cmd, shell=True).decode("utf-8")
        except subprocess.CalledProcessError as e:
            _ = e.output

    # TODO: complete the data insertion and the client-side query performance checks for TD-4153.
    def td4153insert(self):

        tdLog.printNoPrefix("========== start to create table and insert data ==========")
        self.dbname = "td4153"
        db = self.dbinfocfg()
        stblist = []

        columntype = self.schemecfg(intcount=1, ncharcount=100)
        tagtype = self.schemecfg(intcount=1)
        stbname = "stb1"
        prechild = "t1"
        stable = self.stbcfg(
            stb=stbname,
            prechildtab=prechild,
            child_tab_count=2,
            rows=10000,
            columns=columntype,
            tags=tagtype
        )
        stblist.append(stable)
        insertfile = self.createinsertfile(db=db, stbs=stblist)

        nmon_file = f"/tmp/insert_{datetime.datetime.now():%F-%H%M}.nmon"
        cmd = f"nmon -s5 -F {nmon_file} -m /tmp/"
        try:
            _ = subprocess.check_output(cmd, shell=True).decode("utf-8")
        except subprocess.CalledProcessError as e:
            _ = e.output

        self.taosdemotable(insertfile)
        self.droptmpfile(insertfile)
        self.droptmpfile("/tmp/insert_res.txt")

        # In order to prevent too many performance files from being generated, the nmon file is deleted;
        # the delete statement can be removed during the actual test.
        self.droptmpfile(nmon_file)

        cmd = f"ps -ef|grep -w nmon| grep -v grep | awk '{{print $2}}'"
        try:
            time.sleep(10)
            _ = subprocess.check_output(cmd, shell=True).decode("utf-8")
        except BaseException as e:
            raise e

    def td4153query(self):
        tdLog.printNoPrefix("========== start to query operation ==========")

        sqls = {
            "select_all": "select * from stb1",
            "select_join": "select * from t10, t11 where t10.ts=t11.ts"
        }
        for type, sql in sqls.items():
            result_file = f"/tmp/queryResult_{type}.log"
            query_file = self.createqueryfile(sql)
            try:
                self.taosdemotable(query_file, resultfile=result_file)
            except subprocess.CalledProcessError as e:
                out_put = e.output
            if result_file:
                print(f"execute rows {type.split('_')[1]} sql, the sql is: {sql}")
                # parse the "Spent ... s" lines written by taosdemo into max/min/average latencies
                max_sql_time_cmd = f'''
                grep -o Spent.*s {result_file} |awk 'NR==1{{max=$2;next}}{{max=max>$2?max:$2}}END{{print "Max=",max,"s"}}'
                '''
                max_sql_time = subprocess.check_output(max_sql_time_cmd, shell=True).decode("UTF-8")
                print(f"{type.split('_')[1]} rows sql time : {max_sql_time}")

                min_sql_time_cmd = f'''
                grep -o Spent.*s {result_file} |awk 'NR==1{{min=$2;next}}{{min=min<$2?min:$2}}END{{print "Min=",min,"s"}}'
                '''
                min_sql_time = subprocess.check_output(min_sql_time_cmd, shell=True).decode("UTF-8")
                print(f"{type.split('_')[1]} rows sql time : {min_sql_time}")

                avg_sql_time_cmd = f'''
                grep -o Spent.*s {result_file} |awk '{{sum+=$2}}END{{print "Average=",sum/NR,"s"}}'
                '''
                avg_sql_time = subprocess.check_output(avg_sql_time_cmd, shell=True).decode("UTF-8")
                print(f"{type.split('_')[1]} rows sql time : {avg_sql_time}")

            self.droptmpfile(query_file)
            self.droptmpfile(result_file)

        drop_query_tmt_file_cmd = " find ./ -name 'querySystemInfo-*' -type f -exec rm {} \; "
        try:
            _ = subprocess.check_output(drop_query_tmt_file_cmd, shell=True).decode("utf-8")
        except subprocess.CalledProcessError as e:
            _ = e.output
            pass

    def td4153(self):
        self.td4153insert()
        self.td4153query()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '-r',
        '--remove-cache',
        action='store_true',
        default=False,
        help='clear cache before query (default: False)')
    parser.add_argument(
        '-d',
        '--database-name',
        action='store',
        default='db',
        type=str,
        help='Database name to be created (default: db)')
    parser.add_argument(
        '-k',
        '--keep-time',
        action='store',
        default=3650,
        type=int,
        help='Database keep parameters (default: 3650)')

    args = parser.parse_args()
    taosdemo = Taosdemo(args.remove_cache, args.database_name, args.keep_time)
    # taosdemo.conn = taos.connect(
    #     taosdemo.host,
    #     taosdemo.user,
    #     taosdemo.password,
    #     taosdemo.config
    # )

    debugdir = taosdemo.getExeToolsDir()
    cfgdir = taosdemo.getCfgDir()
    cmd = f"{debugdir}/taosd -c {cfgdir} >/dev/null 2>&1 &"
    try:
        _ = subprocess.check_output(cmd, shell=True).decode("utf-8")
    except subprocess.CalledProcessError as e:
        _ = e.output

    if taosdemo.clearCache:
        # dropping the OS page cache requires root permission
        subprocess.check_output("echo 3 > /proc/sys/vm/drop_caches", shell=True).decode("utf-8")

    taosdemo.td4153()
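A minimal sketch of how the driver above is launched (the file name here is a placeholder, not part of this diff; the `util.*` imports assume it is run from the TDengine tests directory, and `-r` needs root because it echoes into /proc/sys/vm/drop_caches):

```bash
# placeholder file name; -d sets the database name, -k the keep value in days
sudo python3 taosdemo_td4153_perf.py -r -d td4153 -k 3650
```
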
@ -24,7 +24,7 @@ class taosdemoPerformace:
        self.host = "127.0.0.1"
        self.user = "root"
        self.password = "taosdata"
        self.config = "/etc/taosperf"
        self.config = "/etc/perf"
        self.conn = taos.connect(
            self.host,
            self.user,

@ -77,7 +77,7 @@ class taosdemoPerformace:

        insert_data = {
            "filetype": "insert",
            "cfgdir": "/etc/taosperf",
            "cfgdir": "/etc/perf",
            "host": "127.0.0.1",
            "port": 6030,
            "user": "root",
@ -1,3 +1,4 @@
system sh/stop_dnodes.sh

system sh/deploy.sh -n dnode1 -i 1


@ -809,3 +809,5 @@ endi
if $data00 != 1 then
  return -1
endi

print ====================> TODO stddev + normal column filter
@ -0,0 +1,71 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/cfg.sh -n dnode1 -c walLevel -v 0
system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4
system sh/exec.sh -n dnode1 -s start

sleep 100
sql connect
print ======================== dnode1 start

$db = testdb

sql create database $db cachelast 2
sql use $db

sql create stable st2 (ts timestamp, f1 int, f2 double, f3 binary(10), f4 timestamp) tags (id int)

sql create table tb1 using st2 tags (1);
sql create table tb2 using st2 tags (2);
sql create table tb3 using st2 tags (3);
sql create table tb4 using st2 tags (4);
sql create table tb5 using st2 tags (1);
sql create table tb6 using st2 tags (2);
sql create table tb7 using st2 tags (3);
sql create table tb8 using st2 tags (4);
sql create table tb9 using st2 tags (5);
sql create table tba using st2 tags (5);
sql create table tbb using st2 tags (5);
sql create table tbc using st2 tags (5);
sql create table tbd using st2 tags (5);
sql create table tbe using st2 tags (5);

sql insert into tb1 values ("2021-05-09 10:10:10", 1, 2.0, '3', -1000)
sql insert into tb1 values ("2021-05-10 10:10:11", 4, 5.0, NULL, -2000)
sql insert into tb1 values ("2021-05-12 10:10:12", 6, NULL, NULL, -3000)

sql insert into tb2 values ("2021-05-09 10:11:13", -1, -2.0, '-3', -1001)
sql insert into tb2 values ("2021-05-10 10:11:14", -4, -5.0, NULL, -2001)
sql insert into tb2 values ("2021-05-11 10:11:15", -6, -7, '-8', -3001)

sql insert into tb3 values ("2021-05-09 10:12:17", 7, 8.0, '9', -1002)
sql insert into tb3 values ("2021-05-09 10:12:17", 10, 11.0, NULL, -2002)
sql insert into tb3 values ("2021-05-09 10:12:18", 12, NULL, NULL, -3002)

sql insert into tb4 values ("2021-05-09 10:12:19", 13, 14.0, '15', -1003)
sql insert into tb4 values ("2021-05-10 10:12:20", 16, 17.0, NULL, -2003)
sql insert into tb4 values ("2021-05-11 10:12:21", 18, NULL, NULL, -3003)

sql insert into tb5 values ("2021-05-09 10:12:22", 19, 20, '21', -1004)
sql insert into tb6 values ("2021-05-11 10:12:23", 22, 23, NULL, -2004)
sql insert into tb7 values ("2021-05-10 10:12:24", 24, NULL, '25', -3004)
sql insert into tb8 values ("2021-05-11 10:12:25", 26, NULL, '27', -4004)

sql insert into tb9 values ("2021-05-09 10:12:26", 28, 29, '30', -1005)
sql insert into tba values ("2021-05-10 10:12:27", 31, 32, NULL, -2005)
sql insert into tbb values ("2021-05-10 10:12:28", 33, NULL, '35', -3005)
sql insert into tbc values ("2021-05-11 10:12:29", 36, 37, NULL, -4005)
sql insert into tbd values ("2021-05-11 10:12:29", NULL, NULL, NULL, NULL)

run general/parser/last_cache_query.sim

system sh/exec.sh -n dnode1 -s stop -x SIGINT

system sh/exec.sh -n dnode1 -s start

run general/parser/last_cache_query.sim

system sh/exec.sh -n dnode1 -s stop -x SIGINT
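The wrapper above loads the data, runs general/parser/last_cache_query.sim once against the freshly written rows, then restarts the dnode and runs the same query script a second time, so the last() expectations are checked both from the in-memory last-value cache and after a reload from disk. A hedged sketch of how such a case is usually driven (assuming the standard sim harness under tests/script; the exact entry point is not shown in this diff):

```bash
# assumed harness invocation; test.sh itself is not part of this change
./test.sh -f general/parser/last_cache.sim
```
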
@ -0,0 +1,416 @@
sleep 100
sql connect

$db = testdb

sql use $db

print "test tb1"

sql select last(ts) from tb1
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
  print $data00
  return -1
endi


sql select last(f1) from tb1
if $rows != 1 then
  return -1
endi
if $data00 != 6 then
  print $data00
  return -1
endi

sql select last(*) from tb1
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
  print $data00
  return -1
endi
if $data01 != 6 then
  return -1
endi
if $data02 != 5.000000000 then
  print $data02
  return -1
endi
if $data03 != 3 then
  return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
  return -1
endi


sql select last(tb1.*,ts,f4) from tb1
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
  print $data00
  return -1
endi
if $data01 != 6 then
  return -1
endi
if $data02 != 5.000000000 then
  print $data02
  return -1
endi
if $data03 != 3 then
  return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
  return -1
endi
if $data05 != @21-05-12 10:10:12.000@ then
  print $data00
  return -1
endi
if $data06 != @70-01-01 07:59:57.000@ then
  return -1
endi



print "test tb2"

sql select last(ts) from tb2
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-11 10:11:15.000@ then
  print $data00
  return -1
endi


sql select last(f1) from tb2
if $rows != 1 then
  return -1
endi
if $data00 != -6 then
  print $data00
  return -1
endi

sql select last(*) from tb2
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-11 10:11:15.000@ then
  print $data00
  return -1
endi
if $data01 != -6 then
  return -1
endi
if $data02 != -7.000000000 then
  print $data02
  return -1
endi
if $data03 != -8 then
  return -1
endi
if $data04 != @70-01-01 07:59:56.999@ then
  if $data04 != @70-01-01 07:59:57.-01@ then
    return -1
  endi
endi


sql select last(tb2.*,ts,f4) from tb2
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-11 10:11:15.000@ then
  print $data00
  return -1
endi
if $data01 != -6 then
  return -1
endi
if $data02 != -7.000000000 then
  print $data02
  return -1
endi
if $data03 != -8 then
  return -1
endi
if $data04 != @70-01-01 07:59:56.999@ then
  if $data04 != @70-01-01 07:59:57.-01@ then
    return -1
  endi
endi
if $data05 != @21-05-11 10:11:15.000@ then
  print $data00
  return -1
endi
if $data06 != @70-01-01 07:59:56.999@ then
  if $data04 != @70-01-01 07:59:57.-01@ then
    return -1
  endi
endi



print "test tbd"
sql select last(*) from tbd
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-11 10:12:29.000@ then
  print $data00
  return -1
endi
if $data01 != NULL then
  return -1
endi
if $data02 != NULL then
  print $data02
  return -1
endi
if $data03 != NULL then
  return -1
endi
if $data04 != NULL then
  return -1
endi


print "test tbe"
sql select last(*) from tbe
if $rows != 0 then
  return -1
endi


print "test stable"
sql select last(ts) from st2
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
  print $data00
  return -1
endi


sql select last(f1) from st2
if $rows != 1 then
  return -1
endi
if $data00 != 6 then
  print $data00
  return -1
endi

sql select last(*) from st2
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
  print $data00
  return -1
endi
if $data01 != 6 then
  return -1
endi
if $data02 != 37.000000000 then
  print $data02
  return -1
endi
if $data03 != 27 then
  return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
  return -1
endi


sql select last(st2.*,ts,f4) from st2
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
  print $data00
  return -1
endi
if $data01 != 6 then
  return -1
endi
if $data02 != 37.000000000 then
  print $data02
  return -1
endi
if $data03 != 27 then
  return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
  return -1
endi
if $data05 != @21-05-12 10:10:12.000@ then
  print $data00
  return -1
endi
if $data06 != @70-01-01 07:59:57.000@ then
  return -1
endi


sql select last(*) from st2 group by id
if $rows != 5 then
  return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
  return -1
endi
if $data01 != 6 then
  return -1
endi
if $data02 != 5.000000000 then
  print $data02
  return -1
endi
if $data03 != 21 then
  return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
  return -1
endi
if $data05 != 1 then
  return -1
endi
if $data10 != @21-05-11 10:12:23.000@ then
  return -1
endi
if $data11 != 22 then
  return -1
endi
if $data12 != 23.000000000 then
  print $data02
  return -1
endi
if $data13 != -8 then
  return -1
endi
if $data14 != @70-01-01 07:59:57.996@ then
  if $data14 != @70-01-01 07:59:58.-04@ then
    print $data14
    return -1
  endi
endi
if $data15 != 2 then
  return -1
endi
if $data20 != @21-05-10 10:12:24.000@ then
  return -1
endi
if $data21 != 24 then
  return -1
endi
if $data22 != 8.000000000 then
  print $data02
  return -1
endi
if $data23 != 25 then
  return -1
endi
if $data24 != @70-01-01 07:59:56.996@ then
  if $data24 != @70-01-01 07:59:57.-04@ then
    return -1
  endi
endi
if $data25 != 3 then
  return -1
endi
if $data30 != @21-05-11 10:12:25.000@ then
  return -1
endi
if $data31 != 26 then
  return -1
endi
if $data32 != 17.000000000 then
  print $data02
  return -1
endi
if $data33 != 27 then
  return -1
endi
if $data34 != @70-01-01 07:59:55.996@ then
  if $data34 != @70-01-01 07:59:56.-04@ then
    return -1
  endi
endi
if $data35 != 4 then
  return -1
endi
if $data40 != @21-05-11 10:12:29.000@ then
  return -1
endi
if $data41 != 36 then
  return -1
endi
if $data42 != 37.000000000 then
  print $data02
  return -1
endi
if $data43 != 35 then
  return -1
endi
if $data44 != @70-01-01 07:59:55.995@ then
  if $data44 != @70-01-01 07:59:56.-05@ then
    return -1
  endi
endi
if $data45 != 5 then
  return -1
endi


print "test tbn"
sql create table tbn (ts timestamp, f1 int, f2 double, f3 binary(10), f4 timestamp)
sql insert into tbn values ("2021-05-09 10:10:10", 1, 2.0, '3', -1000)
sql insert into tbn values ("2021-05-10 10:10:11", 4, 5.0, NULL, -2000)
sql insert into tbn values ("2021-05-12 10:10:12", 6, NULL, NULL, -3000)
sql insert into tbn values ("2021-05-13 10:10:12", NULL, NULL, NULL, NULL)

sql select last(*) from tbn;
if $rows != 1 then
  return -1
endi
if $data00 != @21-05-13 10:10:12.000@ then
  print $data00
  return -1
endi
if $data01 != 6 then
  return -1
endi
if $data02 != 5.000000000 then
  print $data02
  return -1
endi
if $data03 != 3 then
  return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
  return -1
endi
@ -1,6 +1,5 @@
system sh/stop_dnodes.sh


system sh/deploy.sh -n dnode1 -i 1
system sh/cfg.sh -n dnode1 -c walLevel -v 1
system sh/exec.sh -n dnode1 -s start
@ -0,0 +1,207 @@
system sh/stop_dnodes.sh

system sh/deploy.sh -n dnode1 -i 1
system sh/cfg.sh -n dnode1 -c walLevel -v 1
system sh/exec.sh -n dnode1 -s start

sleep 100
sql connect

print ======================== dnode1 start

$dbPrefix = nest_query
$tbPrefix = nest_tb
$mtPrefix = nest_mt
$tbNum = 10
$rowNum = 10000
$totalNum = $tbNum * $rowNum

print =============== nestquery.sim

$i = 0
$db = $dbPrefix . $i
$mt = $mtPrefix . $i

sql drop database if exists $db
sql create database if not exists $db

sql use $db
sql create table $mt (ts timestamp, c1 int, c2 float, c3 bigint, c4 smallint, c5 tinyint, c6 double, c7 bool, c8 binary(10), c9 nchar(9)) TAGS(t1 int)

$half = $tbNum / 2

$i = 0
while $i < $half
  $tb = $tbPrefix . $i

  $nextSuffix = $i + $half
  $tb1 = $tbPrefix . $nextSuffix

  sql create table $tb using $mt tags( $i )
  sql create table $tb1 using $mt tags( $nextSuffix )

  $x = 0
  while $x < $rowNum
    $y = $x * 60000
    $ms = 1600099200000 + $y
    $c = $x / 100
    $c = $c * 100
    $c = $x - $c
    $binary = 'binary . $c
    $binary = $binary . '
    $nchar = 'nchar . $c
    $nchar = $nchar . '
    sql insert into $tb values ($ms , $c , $c , $c , $c , $c , $c , $c , $binary , $nchar ) $tb1 values ($ms , $c , $c , $c , $c , $c , $c , $c , $binary , $nchar )
    $x = $x + 1
  endw

  $i = $i + 1
endw

sleep 100

$i = 1
$tb = $tbPrefix . $i

print ==============> simple nest query test
sql select count(*) from (select count(*) from nest_mt0)
if $rows != 1 then
  return -1
endi

if $data00 != 1 then
  return -1
endi

sql select count(*) from (select count(*) from nest_mt0 group by tbname)
if $rows != 1 then
  return -1
endi

if $data00 != 10 then
  return -1
endi

sql select count(*) from (select count(*) from nest_mt0 interval(10h) group by tbname)
if $rows != 1 then
  return -1
endi

if $data00 != 170 then
  return -1
endi

sql select sum(a) from (select count(*) a from nest_mt0 interval(10h) group by tbname)
if $rows != 1 then
  return -1
endi

if $data00 != 100000 then
  return -1
endi

print =================> alias name test
sql select ts from (select count(*) a from nest_tb0 interval(1h))
if $rows != 167 then
  return -1
endi

if $data00 != @20-09-15 00:00:00.000@ then
  return -1
endi

sql select count(a) from (select count(*) a from nest_tb0 interval(1h))
if $rows != 1 then
  return -1
endi

if $data00 != 167 then
  return -1
endi

print ================>master query + filter
sql select t.* from (select count(*) a from nest_tb0 interval(10h)) t where t.a <= 520;
if $rows != 2 then
  return -1
endi


print ===================> nest query interval


print ===================> complex query


print ===================> group by + having



print =========================> nest query join
sql select a.ts,a.k,b.ts from (select count(*) k from nest_tb0 interval(30a)) a, (select count(*) f from nest_tb1 interval(30a)) b where a.ts = b.ts ;
if $rows != 10000 then
  return -1
endi

if $data00 != @20-09-15 00:00:00.000@ then
  return -1
endi

if $data01 != 1 then
  return -1
endi

if $data02 != @20-09-15 00:00:00.000@ then
  return -1
endi

if $data10 != @20-09-15 00:01:00.000@ then
  return -1
endi

if $data11 != 1 then
  return -1
endi

if $data12 != @20-09-15 00:01:00.000@ then
  return -1
endi

sql select sum(a.k), sum(b.f) from (select count(*) k from nest_tb0 interval(30a)) a, (select count(*) f from nest_tb1 interval(30a)) b where a.ts = b.ts ;
if $rows != 1 then
  return -1
endi

if $data00 != 10000 then
  return -1
endi

if $data01 != 10000 then
  return -1
endi

sql select a.ts,a.k,b.ts,c.ts,c.ts,c.x from (select count(*) k from nest_tb0 interval(30a)) a, (select count(*) f from nest_tb1 interval(30a)) b, (select count(*) x from nest_tb2 interval(30a)) c where a.ts = b.ts and a.ts = c.ts
if $rows != 10000 then
  return -1
endi

if $data00 != @20-09-15 00:00:00.000@ then
  return -1
endi

if $data01 != 1 then
  return -1
endi

if $data02 != @20-09-15 00:00:00.000@ then
  return -1
endi

if $data03 != @20-09-15 00:00:00.000@ then
  return -1
endi

system sh/exec.sh -n dnode1 -s stop -x SIGINT
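A rough check of where the hard-coded expectations in the nested-query assertions above come from, based on the values set at the top of the script ($tbNum = 10, $rowNum = 10000, one row per minute starting at 1600099200000):

```bash
# 10 tables x 10000 rows                       -> sum(a) over all windows = 100000
# 10000 rows at 1-minute steps ~ 166.7 hours   -> about 17 interval(10h) windows per table
# 10 tables x 17 windows                       -> 170 rows from the grouped interval(10h) subquery
# interval(30a) windows are narrower than the 1-minute step, so each row is its own window:
#   the join on a.ts = b.ts yields 10000 rows, and sum(a.k) = sum(b.f) = 10000
```
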
@ -887,10 +887,16 @@ sql_error select tbname, t1 from select_tags_mt0 interval(1y);
#valid sql: select first(c1), last(c2), count(*) from select_tags_mt0 group by tbname, t1;
#valid sql: select first(c1), tbname, t1 from select_tags_mt0 group by t2;

print ==================================>TD-4231
sql_error select t1,tbname from select_tags_mt0 where c1<0
sql_error select t1,tbname from select_tags_mt0 where c1<0 and tbname in ('select_tags_tb12')

sql select tbname from select_tags_mt0 where tbname in ('select_tags_tb12');

sql_error select first(c1), last(c2), t1 from select_tags_mt0 group by tbname;
sql_error select first(c1), last(c2), tbname, t2 from select_tags_mt0 group by tbname;
sql_error select first(c1), count(*), t2, t1, tbname from select_tags_mt0 group by tbname;
# this sql is valid: select first(c1), t2 from select_tags_mt0 group by tbname;
#valid sql: select first(c1), t2 from select_tags_mt0 group by tbname;

#sql select first(ts), tbname from select_tags_mt0 group by tbname;
#sql select count(c1) from select_tags_mt0 where c1=99 group by tbname;
@ -54,9 +54,10 @@ run general/parser/timestamp.sim
run general/parser/sliding.sim
run general/parser/function.sim
run general/parser/stableOp.sim

run general/parser/having.sim
run general/parser/having_child.sim
run general/parser/slimit_alter_tags.sim
run general/parser/binary_escapeCharacter.sim
run general/parser/between_and.sim
run general/parser/last_cache.sim
@ -158,7 +158,7 @@ if $dnode4Vtatus != offline then
  sleep 2000
  goto wait_dnode4_vgroup_offline
endi
if $dnode3Vtatus != master then
if $dnode3Vtatus != unsynced then
  sleep 2000
  goto wait_dnode4_vgroup_offline
endi