Merge branch '3.0' of https://github.com/taosdata/TDengine into feat/tsdb_snapshot

commit 74b31e8c81

@@ -1,19 +1,52 @@
 ---
 sidebar_label: Cache
 title: Cache
-description: "The latest row of each table is kept in cache to provide high performance query of latest state."
+description: "Caching System inside TDengine"
 ---
 
+To achieve the purpose of high performance data writing and querying, TDengine employs a lot of caching technologies on both the server side and the client side.
 
+## Write Cache
 
 The cache management policy in TDengine is First-In-First-Out (FIFO). FIFO is also known as an insert-driven cache management policy; it differs from read-driven cache management, more commonly known as Least-Recently-Used (LRU). FIFO simply stores the latest data in cache and flushes the oldest data in cache to disk when cache usage reaches a threshold. In IoT use cases, it is the current state, i.e. the latest or most recent data, that is important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
 
-Caching the latest data provides the capability of retrieving data in milliseconds. With this capability, TDengine can be configured properly to be used as a caching system without deploying another separate caching system. This simplifies the system architecture and minimizes operational costs. The cache is emptied after TDengine is restarted. TDengine does not reload data from disk into cache, like a key-value caching system.
+The memory space used by each vnode as write cache is determined when creating a database. Parameters `vgroups` and `buffer` can be used to specify the number of vnodes and the size of write cache for each vnode when creating the database. The total size of write cache for this database is then `vgroups * buffer`.
 
-The memory space used by the TDengine cache is fixed in size and configurable. It should be allocated based on application requirements and system resources. An independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine. There is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode.
-
-The memory pool is divided into blocks and data is stored in row format in memory, and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache` and the number of blocks for each vnode is determined by the parameter `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block needs to ensure that each table can store at least dozens of records, to be efficient.
-
-The `last_row` function can be used to retrieve the last row of a table or an STable to quickly show the current state of devices on a monitoring screen. For example, the below SQL statement retrieves the latest voltage of all meters in San Francisco, California.
-
 ```sql
-select last_row(voltage) from meters where location='California.SanFrancisco';
+create database db0 vgroups 100 buffer 16MB
 ```
 
+The above statement creates a database of 100 vnodes, each with a write cache of 16 MB.
 
+Even though in theory it's always better to have a larger cache, the extra benefit becomes very minor once the cache size grows beyond a threshold. So normally it's enough to use the default value of the `buffer` parameter.
 
+## Read Cache
 
+When creating a database, it's also possible to specify whether to cache the latest data of each sub table, using the parameter `cachelast`. There are 4 cases:
+- 0: No cache for latest data
+- 1: The last row of each table is cached; the `last_row` function benefits significantly from it
+- 2: The latest non-NULL value of each column for each table is cached; the `last` function benefits significantly when there is no `where`, `group by`, `order by` or `interval` clause
+- 3: Both the last row and the latest non-NULL value of each column for each table are cached, identical to the behavior when both 1 and 2 are set
 
+## Meta Cache
 
+To process data writing and querying efficiently, each vnode caches the metadata it has already retrieved. Parameters `pages` and `pagesize` are used to specify the size of the metadata cache for each vnode.
 
+```sql
+create database db0 pages 128 pagesize 16kb
+```
 
+The above statement creates a database db0, each of whose vnodes is allocated a meta cache of `128 * 16 KB = 2 MB`.
 
+## File System Cache
 
+TDengine utilizes WAL to provide basic reliability. The essence of WAL is to append data to a disk file, so the file system cache also plays an important role in write performance. The parameter `wal` can be used to specify the WAL write policy; there are 2 cases:
+- 1: Write data to WAL without calling fsync; the data is actually written to the file system cache without being flushed immediately, which gives better write performance
+- 2: Write data to WAL and invoke fsync; the data is immediately flushed to disk, which gives higher reliability
 
+## Client Cache
 
+To improve the overall efficiency of processing data, besides the above caches, the core library `libtaos.so` (also referred to as `taosc`), on which all client programs depend, also has its own cache. `taosc` caches the metadata of the databases, super tables, and tables that the invoking client has accessed, plus other critical metadata such as the cluster topology.
 
+When multiple client programs access a TDengine cluster and one of the clients modifies some metadata, the cache may become invalid in the other clients. If this happens, the client programs need to execute "reset query cache" to invalidate the whole cache so that `taosc` is forced to pull the metadata again and rebuild the cache.
 
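The write cache described in this hunk follows an insert-driven FIFO policy: the newest rows stay in memory and the oldest are flushed to disk in a batch once the cache fills. A minimal sketch of that policy, with hypothetical names (this is not TDengine's actual implementation):

```python
from collections import deque

class FifoWriteCache:
    """Toy FIFO write cache: newest rows stay in memory; when the cache
    is full, the oldest rows are flushed to 'disk' in one batch."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = deque()
        self.disk = []  # stands in for the data files on disk

    def insert(self, row):
        if len(self.rows) == self.capacity:
            # flush the oldest half of the cache in a single batch
            batch = [self.rows.popleft() for _ in range(self.capacity // 2)]
            self.disk.extend(batch)
        self.rows.append(row)

    def latest(self):
        # the most recent row is always served from memory
        return self.rows[-1]

cache = FifoWriteCache(capacity=4)
for ts in range(6):
    cache.insert(("meter1", ts))
```

Note how a read-driven LRU would instead reorder entries on every query; the FIFO policy never does, because in IoT workloads the newest data is also the most queried.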
@@ -1,21 +1,49 @@
 ---
 sidebar_label: Cache
 title: Cache
-description: "Provides a write-driven cache management mechanism that keeps the most recently written record of each table in cache, enabling high performance queries of the latest state."
+description: "The cache design inside TDengine"
 ---
 
+To achieve efficient writing and querying, TDengine makes full use of various caching technologies. This section describes in detail how caches are used in TDengine.
 
+## Write Cache
 
 TDengine adopts a time-driven cache management strategy (First-In-First-Out, FIFO), also known as a write-driven cache management mechanism. This strategy differs from read-driven cache models (Least-Recently-Used, LRU) in that it keeps the most recently written data in the system cache. When the cache reaches a threshold, the earliest data is written to disk in batches. Generally speaking, for IoT data applications, users care most about the most recently generated data, i.e. the current state. TDengine takes full advantage of this characteristic by keeping the most recently arrived (current state) data in the cache.
 
-TDengine provides millisecond-level data retrieval through its query functions. Keeping the most recently arrived data in cache allows queries on the latest record or batch of records to be answered faster, improving overall query response. In this sense, with proper configuration TDengine can be used as a data cache without deploying an extra caching system, which simplifies the system architecture and reduces operational costs. Note that after TDengine restarts, the cache is emptied and the previously cached data is written to disk in batches; unlike a dedicated key-value caching system, TDengine does not reload previously cached data back into the cache.
+The write cache size of each vnode is determined when the database is created. Two key parameters at database creation, vgroups and buffer, determine how many vgroups handle the data of the database and how much write cache is allocated to each vnode.
 
-TDengine allocates a fixed-size memory space as cache, configurable according to application needs and hardware resources. With a properly sized cache, TDengine can deliver extremely high write and query performance. Each virtual node (vnode) is allocated an independent cache pool when it is created; cache pools are not shared between vnodes, and all tables within a vnode share that vnode's cache pool.
-
-TDengine manages the memory pool in blocks, and data is stored in row format within a block. A vnode's memory pool is allocated in blocks when the vnode is created, and each block is managed FIFO. The block size is set by the system configuration parameter cache, and the number of blocks per vnode by the parameter blocks, so the total cache of a vnode is `cache * blocks`. A cache block should be able to hold at least several dozen records per table to be efficient.
-
-You can use the last_row() function to quickly retrieve the last record of a table or a super table, which is convenient for showing the real-time status or collected values of devices on a dashboard. For example:
-
 ```sql
-select last_row(voltage) from meters where location='California.SanFrancisco';
+create database db0 vgroups 100 buffer 16MB
 ```
 
-This SQL statement retrieves the last recorded voltage of all meters located in San Francisco, California.
+In theory, the larger the cache the better; but beyond a certain threshold, increasing the cache no longer helps write performance, so in general the default value is sufficient.
 
+## Read Cache
 
+When creating a database, you can choose whether to cache the latest data of each sub table in the database. This is set by the parameter cachelast, with 4 options:
+- 0: no caching
+- 1: cache the most recent row of each sub table, which significantly improves the performance of the last_row function
+- 2: cache the most recent non-NULL value of each column of each sub table, which significantly improves the performance of the last function when no special clause (such as WHERE, ORDER BY, GROUP BY, INTERVAL) is involved
+- 3: cache both rows and columns, i.e. the behaviors of cachelast values 1 and 2 take effect at the same time
 
+## Meta Cache
 
+To process queries and writes more efficiently, each vnode caches the metadata it has previously fetched. The metadata cache is determined by two database creation parameters, pages and pagesize.
 
+```sql
+create database db0 pages 128 pagesize 16kb
+```
 
+The above statement creates, for each vnode of database db0, a metadata cache of 128 pages, 16 KB each.
 
+## File System Cache
 
+TDengine uses WAL to provide basic data reliability. Writing to the WAL is essentially appending sequentially to a disk file, so the file system cache also plays a key role in write performance. When creating a database, the wal parameter can be used to choose between performance first and reliability first.
+- 1: write WAL without calling fsync; data newly written to the WAL stays in the file system cache and is not yet flushed to disk, which favors performance
+- 2: write WAL and invoke fsync; data newly written to the WAL is immediately synced to disk, which gives higher reliability
 
+## Client Cache
 
+To further improve overall system efficiency, besides the server-side caches described above, the core library libtaos.so (also known as taosc), which every TDengine client calls, also makes extensive use of caching. taosc caches the metadata of the databases, super tables, and sub tables it has accessed, as well as key metadata such as the cluster topology.
 
+When multiple clients access a TDengine cluster at the same time and one of them modifies some metadata, the metadata cached by other clients may become out of sync or invalid. In that case, the client needs to execute "reset query cache" to invalidate the whole cache, forcing taosc to pull the latest metadata again and rebuild the cache.
 
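The `cachelast` options in the hunk above compose like flags: value 3 behaves as 1 and 2 combined. A small illustrative sketch of that semantics (the constant names are made up for illustration):

```python
CACHE_LAST_ROW = 1  # cache the most recent row per sub table
CACHE_LAST_COL = 2  # cache the most recent non-NULL value per column

def cache_modes(cachelast):
    """Decode a cachelast value (0-3) into the query functions it accelerates."""
    modes = []
    if cachelast & CACHE_LAST_ROW:
        modes.append("last_row")
    if cachelast & CACHE_LAST_COL:
        modes.append("last")
    return modes
```

With this reading, `cache_modes(3)` enables both the row cache and the column cache, matching the documented behavior of options 1 and 2 taking effect together.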
@@ -28,7 +28,6 @@ extern "C" {
 
 typedef struct SIndex               SIndex;
 typedef struct SIndexTerm           SIndexTerm;
-typedef struct SIndexOpts           SIndexOpts;
 typedef struct SIndexMultiTermQuery SIndexMultiTermQuery;
 typedef struct SArray               SIndexMultiTerm;
 
@@ -62,6 +61,9 @@ typedef enum {
   QUERY_MAX
 } EIndexQueryType;
 
+typedef struct SIndexOpts {
+  int32_t cacheSize;  // MB
+} SIndexOpts;
 /*
  * create multi query
  * @param oper (input, relation between querys)
@@ -173,7 +175,7 @@ void indexMultiTermDestroy(SIndexMultiTerm* terms);
  * @param:
  * @param:
  */
-SIndexOpts* indexOptsCreate();
+SIndexOpts* indexOptsCreate(int32_t cacheSize);
 void        indexOptsDestroy(SIndexOpts* opts);
 
 /*
@@ -101,6 +101,7 @@ int32_t countScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam
 int32_t sumScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
 int32_t minScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
 int32_t maxScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
+int32_t avgScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
 int32_t stddevScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
 
 #ifdef __cplusplus
@@ -25,11 +25,6 @@ extern "C" {
 
 extern tsem_t schdRspSem;
 
-typedef struct SSchedulerCfg {
-  uint32_t maxJobNum;
-  int32_t  maxNodeTableNum;
-} SSchedulerCfg;
-
 typedef struct SQueryProfileSummary {
   int64_t startTs;  // Object created and added into the message queue
   int64_t endTs;    // the timestamp when the task is completed
|
@ -84,7 +79,7 @@ typedef struct SSchedulerReq {
|
||||||
} SSchedulerReq;
|
} SSchedulerReq;
|
||||||
|
|
||||||
|
|
||||||
int32_t schedulerInit(SSchedulerCfg *cfg);
|
int32_t schedulerInit(void);
|
||||||
|
|
||||||
int32_t schedulerExecJob(SSchedulerReq *pReq, int64_t *pJob);
|
int32_t schedulerExecJob(SSchedulerReq *pReq, int64_t *pJob);
|
||||||
|
|
||||||
|
@@ -96,6 +91,8 @@ int32_t schedulerGetTasksStatus(int64_t job, SArray *pSub);
 
 void schedulerStopQueryHb(void *pTrans);
 
+int32_t schedulerUpdatePolicy(int32_t policy);
+int32_t schedulerEnableReSchedule(bool enableResche);
 
 /**
  * Cancel query job
@@ -314,6 +314,11 @@ int taos_options_imp(TSDB_OPTION option, const char* str);
 
 void* openTransporter(const char* user, const char* auth, int32_t numOfThreads);
 
+typedef struct AsyncArg {
+  SRpcMsg msg;
+  SEpSet* pEpset;
+} AsyncArg;
+
 bool persistConnForSpecificMsg(void* parenct, tmsg_t msgType);
 void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet);
 
@@ -363,8 +363,7 @@ void taos_init_imp(void) {
   SCatalogCfg cfg = {.maxDBCacheNum = 100, .maxTblCacheNum = 100};
   catalogInit(&cfg);
 
-  SSchedulerCfg scfg = {.maxJobNum = 100};
-  schedulerInit(&scfg);
+  schedulerInit();
   tscDebug("starting to initialize TAOS driver");
 
   taosSetCoreDump(true);
@@ -834,6 +834,7 @@ void schedulerExecCb(SExecResult* pResult, void* param, int32_t code) {
       tscDebug("0x%" PRIx64 " client retry to handle the error, code:%d - %s, tryCount:%d, reqId:0x%" PRIx64,
                pRequest->self, code, tstrerror(code), pRequest->retry, pRequest->requestId);
       pRequest->prevCode = code;
+      schedulerFreeJob(&pRequest->body.queryJob, 0);
       doAsyncQuery(pRequest, true);
       return;
     }
@@ -1266,13 +1267,8 @@ void updateTargetEpSet(SMsgSendInfo* pSendInfo, STscObj* pTscObj, SRpcMsg* pMsg,
   }
 }
 
-typedef struct SchedArg {
-  SRpcMsg msg;
-  SEpSet* pEpset;
-} SchedArg;
-
 int32_t doProcessMsgFromServer(void* param) {
-  SchedArg* arg = (SchedArg*)param;
+  AsyncArg* arg = (AsyncArg*)param;
   SRpcMsg* pMsg = &arg->msg;
   SEpSet*  pEpSet = arg->pEpset;
 
@@ -1335,7 +1331,7 @@ void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
     memcpy((void*)tEpSet, (void*)pEpSet, sizeof(SEpSet));
   }
 
-  SchedArg* arg = taosMemoryCalloc(1, sizeof(SchedArg));
+  AsyncArg* arg = taosMemoryCalloc(1, sizeof(AsyncArg));
   arg->msg = *pMsg;
   arg->pEpset = tEpSet;
 
@@ -131,6 +131,7 @@ void taos_close(TAOS *taos) {
 
   STscObj *pObj = acquireTscObj(*(int64_t *)taos);
   if (NULL == pObj) {
+    taosMemoryFree(taos);
     return;
   }
 
@@ -99,12 +99,12 @@ int metaOpen(SVnode *pVnode, SMeta **ppMeta) {
     goto _err;
   }
 
-  // open pTagIdx
-  // TODO(yihaoDeng), refactor later
   char indexFullPath[128] = {0};
   sprintf(indexFullPath, "%s/%s", pMeta->path, "invert");
   taosMkDir(indexFullPath);
-  ret = indexOpen(indexOptsCreate(), indexFullPath, (SIndex **)&pMeta->pTagIvtIdx);
+
+  SIndexOpts opts = {.cacheSize = 8 * 1024 * 1024};
+  ret = indexOpen(&opts, indexFullPath, (SIndex **)&pMeta->pTagIvtIdx);
   if (ret < 0) {
     metaError("vgId:%d, failed to open meta tag index since %s", TD_VID(pVnode), tstrerror(terrno));
     goto _err;
@@ -8,9 +8,9 @@ target_include_directories(
 
 target_link_libraries(
         command
-        PRIVATE os util nodes catalog function transport qcom
+        PRIVATE os util nodes catalog function transport qcom scheduler
 )
 
 if(${BUILD_TEST})
     ADD_SUBDIRECTORY(test)
 endif(${BUILD_TEST})
@@ -77,6 +77,10 @@ extern "C" {
 #define EXPLAIN_MODE_FORMAT        "mode=%s"
 #define EXPLAIN_STRING_TYPE_FORMAT "%s"
 
+#define COMMAND_RESET_LOG         "resetLog"
+#define COMMAND_SCHEDULE_POLICY   "schedulePolicy"
+#define COMMAND_ENABLE_RESCHEDULE "enableReSchedule"
+
 typedef struct SExplainGroup {
   int32_t   nodeNum;
   int32_t   physiPlanExecNum;
@@ -17,6 +17,8 @@
 #include "catalog.h"
 #include "tdatablock.h"
 #include "tglobal.h"
+#include "commandInt.h"
+#include "scheduler.h"
 
 extern SConfig* tsCfg;
 
@@ -479,7 +481,42 @@ static int32_t execShowCreateSTable(SShowCreateTableStmt* pStmt, SRetrieveTableR
   return execShowCreateTable(pStmt, pRsp);
 }
 
+static int32_t execAlterCmd(char* cmd, char* value, bool* processed) {
+  int32_t code = 0;
+
+  if (0 == strcasecmp(cmd, COMMAND_RESET_LOG)) {
+    taosResetLog();
+    cfgDumpCfg(tsCfg, 0, false);
+  } else if (0 == strcasecmp(cmd, COMMAND_SCHEDULE_POLICY)) {
+    code = schedulerUpdatePolicy(atoi(value));
+  } else if (0 == strcasecmp(cmd, COMMAND_ENABLE_RESCHEDULE)) {
+    code = schedulerEnableReSchedule(atoi(value));
+  } else {
+    goto _return;
+  }
+
+  *processed = true;
+
+_return:
+
+  if (code) {
+    terrno = code;
+  }
+
+  return code;
+}
+
 static int32_t execAlterLocal(SAlterLocalStmt* pStmt) {
+  bool processed = false;
+
+  if (execAlterCmd(pStmt->config, pStmt->value, &processed)) {
+    return terrno;
+  }
+
+  if (processed) {
+    goto _return;
+  }
+
   if (cfgSetItem(tsCfg, pStmt->config, pStmt->value, CFG_STYPE_ALTER_CMD)) {
     return terrno;
   }
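The `execAlterCmd` function added above dispatches on the command string, reports through an out-parameter whether it handled the command, and lets the caller fall through to normal config handling otherwise. The same pattern in a small Python sketch (the handler table and command names here are hypothetical stand-ins, not the TDengine API):

```python
def exec_alter_cmd(cmd, value, handlers):
    """Dispatch cmd to a handler; return (code, processed).

    processed=False tells the caller the command was not special,
    so it should fall through to ordinary config handling."""
    handler = handlers.get(cmd.lower())
    if handler is None:
        return 0, False
    return handler(value), True

# hypothetical handlers standing in for taosResetLog / schedulerUpdatePolicy
handlers = {
    "resetlog": lambda v: 0,
    "schedulepolicy": lambda v: 0 if v.isdigit() else -1,
}
```

The out-flag lets one return value carry the error code while the second answers "was this command consumed?", mirroring how `execAlterLocal` jumps past `cfgSetItem` only when `processed` is set.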
@@ -488,6 +525,8 @@ static int32_t execAlterLocal(SAlterLocalStmt* pStmt) {
     return terrno;
   }
 
+_return:
+
   return TSDB_CODE_SUCCESS;
 }
 
@@ -1994,6 +1994,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .getEnvFunc   = getAvgFuncEnv,
     .initFunc     = avgFunctionSetup,
     .processFunc  = avgFunction,
+    .sprocessFunc = avgScalarFunction,
     .finalizeFunc = avgFinalize,
     .invertFunc   = avgInvertFunction,
     .combineFunc  = avgCombine,
@@ -27,7 +27,7 @@ extern "C" {
 #define DefaultMem 1024 * 1024
 
 static char tmpFile[] = "./index";
-typedef enum WriterType { TMemory, TFile } WriterType;
+typedef enum WriterType { TMEMORY, TFILE } WriterType;
 
 typedef struct IFileCtx {
   int (*write)(struct IFileCtx* ctx, uint8_t* buf, int len);
@@ -35,6 +35,8 @@ typedef struct IFileCtx {
   int (*flush)(struct IFileCtx* ctx);
   int (*readFrom)(struct IFileCtx* ctx, uint8_t* buf, int len, int32_t offset);
   int (*size)(struct IFileCtx* ctx);
+
+  SLRUCache* lru;
   WriterType type;
   union {
     struct {
@@ -24,12 +24,9 @@
 #include "tchecksum.h"
 #include "thash.h"
 #include "tlog.h"
+#include "tlrucache.h"
 #include "tutil.h"
 
-#ifdef USE_LUCENE
-#include <lucene++/Lucene_c.h>
-#endif
-
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -61,28 +58,17 @@ struct SIndex {
   void*     tindex;
   SHashObj* colObj;  // < field name, field id>
 
   int64_t suid;      // current super table id, -1 is normal table
   int32_t cVersion;  // current version allocated to cache
+  SLRUCache* lru;
   char* path;
 
   int8_t        status;
   SIndexStat    stat;
   TdThreadMutex mtx;
   tsem_t        sem;
   bool          quit;
-};
+  SIndexOpts    opts;
 
-struct SIndexOpts {
-#ifdef USE_LUCENE
-  void* opts;
-#endif
-
-#ifdef USE_INVERTED_INDEX
-  int32_t cacheSize;  // MB
-  // add cache module later
-#endif
-  int32_t cacheOpt;  // MB
 };
 
 struct SIndexMultiTermQuery {
@@ -71,6 +71,7 @@ typedef struct TFileReader {
   IFileCtx*   ctx;
   TFileHeader header;
   bool        remove;
+  void*       lru;
 } TFileReader;
 
 typedef struct IndexTFile {
@@ -95,14 +96,14 @@ typedef struct TFileReaderOpt {
 } TFileReaderOpt;
 
 // tfile cache, manage tindex reader
-TFileCache* tfileCacheCreate(const char* path);
+TFileCache* tfileCacheCreate(SIndex* idx, const char* path);
 void        tfileCacheDestroy(TFileCache* tcache);
 TFileReader* tfileCacheGet(TFileCache* tcache, ICacheKey* key);
 void         tfileCachePut(TFileCache* tcache, ICacheKey* key, TFileReader* reader);
 
 TFileReader* tfileGetReaderByCol(IndexTFile* tf, uint64_t suid, char* colName);
 
-TFileReader* tfileReaderOpen(char* path, uint64_t suid, int64_t version, const char* colName);
+TFileReader* tfileReaderOpen(SIndex* idx, uint64_t suid, int64_t version, const char* colName);
 TFileReader* tfileReaderCreate(IFileCtx* ctx);
 void         tfileReaderDestroy(TFileReader* reader);
 int          tfileReaderSearch(TFileReader* reader, SIndexTermQuery* query, SIdxTRslt* tr);
@@ -117,7 +118,7 @@ int tfileWriterPut(TFileWriter* tw, void* data, bool order);
 int tfileWriterFinish(TFileWriter* tw);
 
 //
-IndexTFile* idxTFileCreate(const char* path);
+IndexTFile* idxTFileCreate(SIndex* idx, const char* path);
 void        idxTFileDestroy(IndexTFile* tfile);
 int         idxTFilePut(void* tfile, SIndexTerm* term, uint64_t uid);
 int         idxTFileSearch(void* tfile, SIndexTermQuery* query, SIdxTRslt* tr);
@@ -103,44 +103,59 @@ static void indexWait(void* idx) {
 int indexOpen(SIndexOpts* opts, const char* path, SIndex** index) {
   int ret = TSDB_CODE_SUCCESS;
   taosThreadOnce(&isInit, indexInit);
-  SIndex* sIdx = taosMemoryCalloc(1, sizeof(SIndex));
-  if (sIdx == NULL) {
+  SIndex* idx = taosMemoryCalloc(1, sizeof(SIndex));
+  if (idx == NULL) {
     return TSDB_CODE_OUT_OF_MEMORY;
   }
 
-  sIdx->tindex = idxTFileCreate(path);
-  if (sIdx->tindex == NULL) {
+  idx->lru = taosLRUCacheInit(opts->cacheSize, -1, .5);
+  if (idx->lru == NULL) {
+    ret = TSDB_CODE_OUT_OF_MEMORY;
+    goto END;
+  }
+  taosLRUCacheSetStrictCapacity(idx->lru, false);
+
+  idx->tindex = idxTFileCreate(idx, path);
+  if (idx->tindex == NULL) {
     ret = TSDB_CODE_OUT_OF_MEMORY;
     goto END;
   }
 
-  sIdx->colObj = taosHashInit(8, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
-  sIdx->cVersion = 1;
-  sIdx->path = tstrdup(path);
-  taosThreadMutexInit(&sIdx->mtx, NULL);
-  tsem_init(&sIdx->sem, 0, 0);
+  idx->colObj = taosHashInit(8, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
+  idx->cVersion = 1;
+  idx->path = tstrdup(path);
+  taosThreadMutexInit(&idx->mtx, NULL);
+  tsem_init(&idx->sem, 0, 0);
 
-  sIdx->refId = idxAddRef(sIdx);
-  idxAcquireRef(sIdx->refId);
+  idx->refId = idxAddRef(idx);
+  idx->opts = *opts;
+  idxAcquireRef(idx->refId);
 
-  *index = sIdx;
+  *index = idx;
   return ret;
 
 END:
-  if (sIdx != NULL) {
-    indexClose(sIdx);
+  if (idx != NULL) {
+    indexClose(idx);
   }
   *index = NULL;
   return ret;
 }
 
 void indexDestroy(void* handle) {
-  SIndex* sIdx = handle;
-  taosThreadMutexDestroy(&sIdx->mtx);
-  tsem_destroy(&sIdx->sem);
-  idxTFileDestroy(sIdx->tindex);
-  taosMemoryFree(sIdx->path);
-  taosMemoryFree(sIdx);
+  SIndex* idx = handle;
+  taosThreadMutexDestroy(&idx->mtx);
+  tsem_destroy(&idx->sem);
+  idxTFileDestroy(idx->tindex);
+  taosMemoryFree(idx->path);
+
+  SLRUCache* lru = idx->lru;
+  if (lru != NULL) {
+    taosLRUCacheEraseUnrefEntries(lru);
+    taosLRUCacheCleanup(lru);
+  }
+  idx->lru = NULL;
+  taosMemoryFree(idx);
   return;
 }
 void indexClose(SIndex* sIdx) {
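The `indexOpen` change above attaches an LRU cache to the index via `taosLRUCacheInit` and then disables strict capacity, so an insert that exceeds the byte budget is still accepted rather than rejected. A rough Python sketch of a byte-budgeted LRU with that non-strict behavior (illustrative only; this is not the `tlrucache` implementation, and all names are made up):

```python
from collections import OrderedDict

class LruByteCache:
    """Byte-budgeted LRU: least-recently-used entries are evicted once
    usage exceeds capacity; with strict_capacity off, an oversized
    insert is still accepted instead of failing."""

    def __init__(self, capacity_bytes, strict_capacity=False):
        self.capacity = capacity_bytes
        self.strict = strict_capacity
        self.used = 0
        self.entries = OrderedDict()  # key -> (value, size)

    def put(self, key, value, size):
        if self.strict and size > self.capacity:
            return False  # strict mode rejects entries that cannot fit
        if key in self.entries:
            self.used -= self.entries.pop(key)[1]
        self.entries[key] = (value, size)
        self.used += size
        # evict from the LRU end until we fit (or one entry remains)
        while self.used > self.capacity and len(self.entries) > 1:
            _, (_, old_size) = self.entries.popitem(last=False)
            self.used -= old_size
        return True

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key][0]
```

Disabling strict capacity trades a temporary memory overshoot for availability: an index reader larger than the whole budget can still be cached instead of forcing the open to fail.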
@@ -159,6 +174,7 @@ void indexClose(SIndex* sIdx) {
     taosHashCleanup(sIdx->colObj);
     sIdx->colObj = NULL;
   }
+
   idxReleaseRef(sIdx->refId);
   idxRemoveRef(sIdx->refId);
 }
@@ -234,8 +250,12 @@ int indexSearch(SIndex* index, SIndexMultiTermQuery* multiQuerys, SArray* result
 int indexDelete(SIndex* index, SIndexMultiTermQuery* query) { return 1; }
 // int indexRebuild(SIndex* index, SIndexOpts* opts) { return 0; }
 
-SIndexOpts* indexOptsCreate() { return NULL; }
-void        indexOptsDestroy(SIndexOpts* opts) { return; }
+SIndexOpts* indexOptsCreate(int32_t cacheSize) {
+  SIndexOpts* opts = taosMemoryCalloc(1, sizeof(SIndexOpts));
+  opts->cacheSize = cacheSize;
+  return opts;
+}
+void indexOptsDestroy(SIndexOpts* opts) { return taosMemoryFree(opts); }
 /*
  * @param: oper
  *
@ -641,7 +661,7 @@ static int idxGenTFile(SIndex* sIdx, IndexCache* cache, SArray* batch) {
|
||||||
}
|
}
|
||||||
tfileWriterClose(tw);
|
tfileWriterClose(tw);
|
||||||
|
|
||||||
TFileReader* reader = tfileReaderOpen(sIdx->path, cache->suid, version, cache->colName);
|
TFileReader* reader = tfileReaderOpen(sIdx, cache->suid, version, cache->colName);
|
||||||
if (reader == NULL) {
|
if (reader == NULL) {
|
||||||
return -1;
|
return -1;
|
||||||
}
|
}
|
||||||
|
|
|
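The reworked indexOptsCreate/indexOptsDestroy pair above turns the options from a NULL placeholder into a heap object that records the cache size. A toy sketch of the same create/own/destroy pattern, using standard-library calloc/free as hypothetical stand-ins for taosMemoryCalloc/taosMemoryFree:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
  int cacheSize;  /* mirrors SIndexOpts.cacheSize in the diff */
} Opts;

/* Mirror of the new indexOptsCreate: allocate and record the cache size. */
static Opts* optsCreate(int cacheSize) {
  Opts* opts = calloc(1, sizeof(Opts));
  if (opts != NULL) opts->cacheSize = cacheSize;
  return opts;
}

/* Mirror of the new indexOptsDestroy: the options object owns nothing
 * but itself, so a single free suffices. */
static void optsDestroy(Opts* opts) { free(opts); }
```

The caller that opens the index owns the options for the duration of the call, matching how the tests in this commit stack-allocate SIndexOpts and pass its address to indexOpen.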
@@ -462,8 +462,8 @@ Iterate* idxCacheIteratorCreate(IndexCache* cache) {
   if (cache->imm == NULL) {
     return NULL;
   }
-  Iterate* iiter = taosMemoryCalloc(1, sizeof(Iterate));
-  if (iiter == NULL) {
+  Iterate* iter = taosMemoryCalloc(1, sizeof(Iterate));
+  if (iter == NULL) {
     return NULL;
   }
   taosThreadMutexLock(&cache->mtx);
@@ -471,15 +471,15 @@ Iterate* idxCacheIteratorCreate(IndexCache* cache) {
   idxMemRef(cache->imm);

   MemTable* tbl = cache->imm;
-  iiter->val.val = taosArrayInit(1, sizeof(uint64_t));
-  iiter->val.colVal = NULL;
-  iiter->iter = tbl != NULL ? tSkipListCreateIter(tbl->mem) : NULL;
-  iiter->next = idxCacheIteratorNext;
-  iiter->getValue = idxCacheIteratorGetValue;
+  iter->val.val = taosArrayInit(1, sizeof(uint64_t));
+  iter->val.colVal = NULL;
+  iter->iter = tbl != NULL ? tSkipListCreateIter(tbl->mem) : NULL;
+  iter->next = idxCacheIteratorNext;
+  iter->getValue = idxCacheIteratorGetValue;

   taosThreadMutexUnlock(&cache->mtx);

-  return iiter;
+  return iter;
 }
 void idxCacheIteratorDestroy(Iterate* iter) {
   if (iter == NULL) {
@@ -564,13 +564,13 @@ int idxCachePut(void* cache, SIndexTerm* term, uint64_t uid) {
   idxMemUnRef(tbl);

   taosThreadMutexUnlock(&pCache->mtx);

   idxCacheUnRef(pCache);
   return 0;
   // encode end
 }
 void idxCacheForceToMerge(void* cache) {
   IndexCache* pCache = cache;

   idxCacheRef(pCache);
   taosThreadMutexLock(&pCache->mtx);
@@ -31,7 +31,7 @@ typedef struct SIFParam {
   SHashObj *pFilter;

   SArray *result;
-  char * condValue;
+  char *condValue;

   SIdxFltStatus status;
   uint8_t colValType;
@@ -45,7 +45,7 @@ typedef struct SIFParam {

 typedef struct SIFCtx {
   int32_t code;
-  SHashObj * pRes; /* element is SIFParam */
+  SHashObj *pRes; /* element is SIFParam */
   bool noExec;  // true: just iterate condition tree, and add hint to executor plan
   SIndexMetaArg arg;
   // SIdxFltStatus st;
@@ -137,7 +137,7 @@ static int32_t sifGetValueFromNode(SNode *node, char **value) {
   // covert data From snode;
   SValueNode *vn = (SValueNode *)node;

-  char * pData = nodesGetValueFromNode(vn);
+  char *pData = nodesGetValueFromNode(vn);
   SDataType *pType = &vn->node.resType;
   int32_t type = pType->type;
   int32_t valLen = 0;
@@ -175,7 +175,7 @@ static int32_t sifInitJsonParam(SNode *node, SIFParam *param, SIFCtx *ctx) {
   SOperatorNode *nd = (SOperatorNode *)node;
   assert(nodeType(node) == QUERY_NODE_OPERATOR);
   SColumnNode *l = (SColumnNode *)nd->pLeft;
-  SValueNode * r = (SValueNode *)nd->pRight;
+  SValueNode *r = (SValueNode *)nd->pRight;

   param->colId = l->colId;
   param->colValType = l->node.resType.type;
@@ -357,7 +357,7 @@ static Filter sifGetFilterFunc(EIndexQueryType type, bool *reverse) {
 static int32_t sifDoIndex(SIFParam *left, SIFParam *right, int8_t operType, SIFParam *output) {
   int ret = 0;

-  SIndexMetaArg * arg = &output->arg;
+  SIndexMetaArg *arg = &output->arg;
   EIndexQueryType qtype = 0;
   SIF_ERR_RET(sifGetFuncFromSql(operType, &qtype));
   if (left->colValType == TSDB_DATA_TYPE_JSON) {
@@ -749,7 +749,7 @@ int32_t doFilterTag(SNode *pFilterNode, SIndexMetaArg *metaArg, SArray *result,

   SFilterInfo *filter = NULL;

-  SArray * output = taosArrayInit(8, sizeof(uint64_t));
+  SArray *output = taosArrayInit(8, sizeof(uint64_t));
   SIFParam param = {.arg = *metaArg, .result = output};
   SIF_ERR_RET(sifCalculate((SNode *)pFilterNode, &param));
@@ -772,6 +772,7 @@ void fstBuilderDestroy(FstBuilder* b) {
   if (b == NULL) {
     return;
   }
+  fstBuilderFinish(b);

   idxFileDestroy(b->wrt);
   fstUnFinishedNodesDestroy(b->unfinished);
@@ -1074,8 +1075,8 @@ FStmStBuilder* fstSearchWithState(Fst* fst, FAutoCtx* ctx) {
 }

 FstNode* fstGetRoot(Fst* fst) {
-  CompiledAddr rAddr = fstGetRootAddr(fst);
-  return fstGetNode(fst, rAddr);
+  CompiledAddr addr = fstGetRootAddr(fst);
+  return fstGetNode(fst, addr);
 }

 FstNode* fstGetNode(Fst* fst, CompiledAddr addr) {
@@ -4,8 +4,7 @@
  * This program is free software: you can use, redistribute, and/or modify
  * it under the terms of the GNU Affero General Public License, version 3
  * or later ("AGPL"), as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but WITHOUT
+ * * This program is distributed in the hope that it will be useful, but WITHOUT
  * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
  * FITNESS FOR A PARTICULAR PURPOSE.
  *
@@ -14,13 +13,32 @@
  */

 #include "indexFstFile.h"
+#include "indexComm.h"
 #include "indexFstUtil.h"
 #include "indexInt.h"
+#include "indexUtil.h"
 #include "os.h"
 #include "tutil.h"

+static int32_t kBlockSize = 4096;
+
+typedef struct {
+  int32_t blockId;
+  int32_t nread;
+  char    buf[0];
+} SDataBlock;
+
+static void deleteDataBlockFromLRU(const void* key, size_t keyLen, void* value) { taosMemoryFree(value); }
+
+static void idxGenLRUKey(char* buf, const char* path, int32_t blockId) {
+  char* p = buf;
+  SERIALIZE_STR_VAR_TO_BUF(p, path, strlen(path));
+  SERIALIZE_VAR_TO_BUF(p, '_', char);
+  idxInt2str(blockId, p, 0);
+  return;
+}
 static int idxFileCtxDoWrite(IFileCtx* ctx, uint8_t* buf, int len) {
-  if (ctx->type == TFile) {
+  if (ctx->type == TFILE) {
     assert(len == taosWriteFile(ctx->file.pFile, buf, len));
   } else {
     memcpy(ctx->mem.buf + ctx->offset, buf, len);
@@ -30,7 +48,7 @@ static int idxFileCtxDoWrite(IFileCtx* ctx, uint8_t* buf, int len) {
 }
 static int idxFileCtxDoRead(IFileCtx* ctx, uint8_t* buf, int len) {
   int nRead = 0;
-  if (ctx->type == TFile) {
+  if (ctx->type == TFILE) {
 #ifdef USE_MMAP
     nRead = len < ctx->file.size ? len : ctx->file.size;
     memcpy(buf, ctx->file.ptr, nRead);
@@ -45,24 +63,54 @@ static int idxFileCtxDoRead(IFileCtx* ctx, uint8_t* buf, int len) {
   return nRead;
 }
 static int idxFileCtxDoReadFrom(IFileCtx* ctx, uint8_t* buf, int len, int32_t offset) {
-  int nRead = 0;
-  if (ctx->type == TFile) {
-    // tfLseek(ctx->file.pFile, offset, 0);
-#ifdef USE_MMAP
-    int32_t last = ctx->file.size - offset;
-    nRead = last >= len ? len : last;
-    memcpy(buf, ctx->file.ptr + offset, nRead);
-#else
-    nRead = taosPReadFile(ctx->file.pFile, buf, len, offset);
-#endif
-  } else {
-    // refactor later
-    assert(0);
-  }
-  return nRead;
+  int32_t total = 0, nread = 0;
+  int32_t blkId = offset / kBlockSize;
+  int32_t blkOffset = offset % kBlockSize;
+  int32_t blkLeft = kBlockSize - blkOffset;
+
+  do {
+    char key[128] = {0};
+    idxGenLRUKey(key, ctx->file.buf, blkId);
+    LRUHandle* h = taosLRUCacheLookup(ctx->lru, key, strlen(key));
+
+    if (h) {
+      SDataBlock* blk = taosLRUCacheValue(ctx->lru, h);
+      nread = TMIN(blkLeft, len);
+      memcpy(buf + total, blk->buf + blkOffset, nread);
+      taosLRUCacheRelease(ctx->lru, h, false);
+    } else {
+      int32_t cacheMemSize = sizeof(SDataBlock) + kBlockSize;
+
+      SDataBlock* blk = taosMemoryCalloc(1, cacheMemSize);
+      blk->blockId = blkId;
+      blk->nread = taosPReadFile(ctx->file.pFile, blk->buf, kBlockSize, blkId * kBlockSize);
+      assert(blk->nread <= kBlockSize);
+      nread = TMIN(blkLeft, len);
+
+      if (blk->nread < kBlockSize && blk->nread < len) {
+        break;
+      }
+      memcpy(buf + total, blk->buf + blkOffset, nread);
+
+      LRUStatus s = taosLRUCacheInsert(ctx->lru, key, strlen(key), blk, cacheMemSize, deleteDataBlockFromLRU, NULL,
+                                       TAOS_LRU_PRIORITY_LOW);
+      if (s != TAOS_LRU_STATUS_OK) {
+        return -1;
+      }
+    }
+    total += nread;
+    len -= nread;
+    offset += nread;
+
+    blkId = offset / kBlockSize;
+    blkOffset = offset % kBlockSize;
+    blkLeft = kBlockSize - blkOffset;
+  } while (len > 0);
+  return total;
 }
 static int idxFileCtxGetSize(IFileCtx* ctx) {
-  if (ctx->type == TFile) {
+  if (ctx->type == TFILE) {
     int64_t file_size = 0;
     taosStatFile(ctx->file.buf, &file_size, NULL);
     return (int)file_size;
@@ -70,7 +118,7 @@ static int idxFileCtxGetSize(IFileCtx* ctx) {
   return 0;
 }
 static int idxFileCtxDoFlush(IFileCtx* ctx) {
-  if (ctx->type == TFile) {
+  if (ctx->type == TFILE) {
     taosFsyncFile(ctx->file.pFile);
   } else {
     // do nothing
@@ -85,7 +133,7 @@ IFileCtx* idxFileCtxCreate(WriterType type, const char* path, bool readOnly, int
   }

   ctx->type = type;
-  if (ctx->type == TFile) {
+  if (ctx->type == TFILE) {
     // ugly code, refactor later
     ctx->file.readOnly = readOnly;
     memcpy(ctx->file.buf, path, strlen(path));
@@ -93,8 +141,6 @@ IFileCtx* idxFileCtxCreate(WriterType type, const char* path, bool readOnly, int
       ctx->file.pFile = taosOpenFile(path, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND);
       taosFtruncateFile(ctx->file.pFile, 0);
       taosStatFile(path, &ctx->file.size, NULL);
-      // ctx->file.size = (int)size;
-
     } else {
       ctx->file.pFile = taosOpenFile(path, TD_FILE_READ);

@@ -109,10 +155,11 @@ IFileCtx* idxFileCtxCreate(WriterType type, const char* path, bool readOnly, int
       indexError("failed to open file, error %d", errno);
       goto END;
     }
-  } else if (ctx->type == TMemory) {
+  } else if (ctx->type == TMEMORY) {
     ctx->mem.buf = taosMemoryCalloc(1, sizeof(char) * capacity);
     ctx->mem.cap = capacity;
   }

   ctx->write = idxFileCtxDoWrite;
   ctx->read = idxFileCtxDoRead;
   ctx->flush = idxFileCtxDoFlush;
@@ -124,14 +171,14 @@ IFileCtx* idxFileCtxCreate(WriterType type, const char* path, bool readOnly, int

   return ctx;
 END:
-  if (ctx->type == TMemory) {
+  if (ctx->type == TMEMORY) {
     taosMemoryFree(ctx->mem.buf);
   }
   taosMemoryFree(ctx);
   return NULL;
 }
 void idxFileCtxDestroy(IFileCtx* ctx, bool remove) {
-  if (ctx->type == TMemory) {
+  if (ctx->type == TMEMORY) {
     taosMemoryFree(ctx->mem.buf);
   } else {
     ctx->flush(ctx);
@@ -183,6 +230,7 @@ int idxFileWrite(IdxFstFile* write, uint8_t* buf, uint32_t len) {
   write->summer = taosCalcChecksum(write->summer, buf, len);
   return len;
 }

 int idxFileRead(IdxFstFile* write, uint8_t* buf, uint32_t len) {
   if (write == NULL) {
     return 0;
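The rewritten idxFileCtxDoReadFrom above serves reads through an LRU cache of fixed-size file blocks, splitting each request at block boundaries. A minimal standalone sketch of just that block-splitting arithmetic, with hypothetical names (the real code uses kBlockSize = 4096 and the TMIN macro, and reads each missing block into the cache):

```c
#include <assert.h>

enum { BLOCK_SIZE = 4096 };  /* mirrors kBlockSize in the diff */

/* Split a byte range [offset, offset + len) into block-aligned chunks,
 * mirroring the loop in idxFileCtxDoReadFrom. Records each chunk's block
 * id in blkIds and returns the number of chunks produced. */
static int splitIntoBlocks(int offset, int len, int* blkIds, int maxChunks) {
  int n = 0;
  while (len > 0 && n < maxChunks) {
    int blkId = offset / BLOCK_SIZE;
    int blkOffset = offset % BLOCK_SIZE;
    int blkLeft = BLOCK_SIZE - blkOffset;          /* bytes left in this block */
    int nread = blkLeft < len ? blkLeft : len;     /* TMIN(blkLeft, len) */
    blkIds[n++] = blkId;
    offset += nread;
    len -= nread;
  }
  return n;
}
```

A read of 200 bytes at offset 4000 touches blocks 0 and 1 (96 bytes from the tail of block 0, 104 from the head of block 1), which is exactly why the loop recomputes blkId/blkOffset/blkLeft after each chunk.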
@@ -90,7 +90,7 @@ static int32_t (*tfSearch[][QUERY_MAX])(void* reader, SIndexTerm* tem, SIdxTRslt
     {tfSearchEqual_JSON, tfSearchPrefix_JSON, tfSearchSuffix_JSON, tfSearchRegex_JSON, tfSearchLessThan_JSON,
      tfSearchLessEqual_JSON, tfSearchGreaterThan_JSON, tfSearchGreaterEqual_JSON, tfSearchRange_JSON}};

-TFileCache* tfileCacheCreate(const char* path) {
+TFileCache* tfileCacheCreate(SIndex* idx, const char* path) {
   TFileCache* tcache = taosMemoryCalloc(1, sizeof(TFileCache));
   if (tcache == NULL) {
     return NULL;
@@ -103,17 +103,20 @@ TFileCache* tfileCacheCreate(const char* path) {
   for (size_t i = 0; i < taosArrayGetSize(files); i++) {
     char* file = taosArrayGetP(files, i);

-    IFileCtx* wc = idxFileCtxCreate(TFile, file, true, 1024 * 1024 * 64);
-    if (wc == NULL) {
+    IFileCtx* ctx = idxFileCtxCreate(TFILE, file, true, 1024 * 1024 * 64);
+    if (ctx == NULL) {
       indexError("failed to open index:%s", file);
       goto End;
     }
+    ctx->lru = idx->lru;

-    TFileReader* reader = tfileReaderCreate(wc);
+    TFileReader* reader = tfileReaderCreate(ctx);
     if (reader == NULL) {
       indexInfo("skip invalid file: %s", file);
       continue;
     }
+    reader->lru = idx->lru;

     TFileHeader* header = &reader->header;
     ICacheKey key = {.suid = header->suid, .colName = header->colName, .nColName = (int32_t)strlen(header->colName)};

@@ -160,9 +163,8 @@ TFileReader* tfileCacheGet(TFileCache* tcache, ICacheKey* key) {
   return *reader;
 }
 void tfileCachePut(TFileCache* tcache, ICacheKey* key, TFileReader* reader) {
   char buf[128] = {0};
   int32_t sz = idxSerialCacheKey(key, buf);
-  // remove last version index reader
   TFileReader** p = taosHashGet(tcache->tableCache, buf, sz);
   if (p != NULL && *p != NULL) {
     TFileReader* oldRdr = *p;
@@ -493,7 +495,7 @@ TFileWriter* tfileWriterOpen(char* path, uint64_t suid, int64_t version, const c
   char fullname[256] = {0};
   tfileGenFileFullName(fullname, path, suid, colName, version);
   // indexInfo("open write file name %s", fullname);
-  IFileCtx* wcx = idxFileCtxCreate(TFile, fullname, false, 1024 * 1024 * 64);
+  IFileCtx* wcx = idxFileCtxCreate(TFILE, fullname, false, 1024 * 1024 * 64);
   if (wcx == NULL) {
     return NULL;
   }
@@ -506,16 +508,17 @@ TFileWriter* tfileWriterOpen(char* path, uint64_t suid, int64_t version, const c

   return tfileWriterCreate(wcx, &tfh);
 }
-TFileReader* tfileReaderOpen(char* path, uint64_t suid, int64_t version, const char* colName) {
+TFileReader* tfileReaderOpen(SIndex* idx, uint64_t suid, int64_t version, const char* colName) {
   char fullname[256] = {0};
-  tfileGenFileFullName(fullname, path, suid, colName, version);
+  tfileGenFileFullName(fullname, idx->path, suid, colName, version);

-  IFileCtx* wc = idxFileCtxCreate(TFile, fullname, true, 1024 * 1024 * 1024);
+  IFileCtx* wc = idxFileCtxCreate(TFILE, fullname, true, 1024 * 1024 * 1024);
   if (wc == NULL) {
     terrno = TAOS_SYSTEM_ERROR(errno);
     indexError("failed to open readonly file: %s, reason: %s", fullname, terrstr());
     return NULL;
   }
+  wc->lru = idx->lru;
   indexTrace("open read file name:%s, file size: %" PRId64 "", wc->file.buf, wc->file.size);

   TFileReader* reader = tfileReaderCreate(wc);
@@ -598,17 +601,11 @@ int tfileWriterPut(TFileWriter* tw, void* data, bool order) {
       indexError("failed to write data: %s, offset: %d len: %d", v->colVal, v->offset,
                  (int)taosArrayGetSize(v->tableId));
     } else {
-      // indexInfo("success to write data: %s, offset: %d len: %d", v->colVal, v->offset,
-      //           (int)taosArrayGetSize(v->tableId));
-
-      // indexInfo("tfile write data size: %d", tw->ctx->size(tw->ctx));
+      indexInfo("success to write data: %s, offset: %d len: %d", v->colVal, v->offset,
+                (int)taosArrayGetSize(v->tableId));
     }
   }

-  fstBuilderFinish(tw->fb);
   fstBuilderDestroy(tw->fb);
-  tw->fb = NULL;

   tfileWriteFooter(tw);
   return 0;
 }
@@ -627,8 +624,8 @@ void tfileWriterDestroy(TFileWriter* tw) {
   taosMemoryFree(tw);
 }

-IndexTFile* idxTFileCreate(const char* path) {
-  TFileCache* cache = tfileCacheCreate(path);
+IndexTFile* idxTFileCreate(SIndex* idx, const char* path) {
+  TFileCache* cache = tfileCacheCreate(idx, path);
   if (cache == NULL) {
     return NULL;
   }
@@ -859,18 +856,6 @@ static int tfileWriteData(TFileWriter* write, TFileValue* tval) {
     return 0;
   }
   return -1;
-
-  // if (colType == TSDB_DATA_TYPE_BINARY || colType == TSDB_DATA_TYPE_NCHAR) {
-  //   FstSlice key = fstSliceCreate((uint8_t*)(tval->colVal), (size_t)strlen(tval->colVal));
-  //   if (fstBuilderInsert(write->fb, key, tval->offset)) {
-  //     fstSliceDestroy(&key);
-  //     return 0;
-  //   }
-  //   fstSliceDestroy(&key);
-  //   return -1;
-  //} else {
-  //  // handle other type later
-  //}
 }
 static int tfileWriteFooter(TFileWriter* write) {
   char buf[sizeof(FILE_MAGIC_NUMBER) + 1] = {0};
@@ -887,6 +872,7 @@ static int tfileReaderLoadHeader(TFileReader* reader) {
   char buf[TFILE_HEADER_SIZE] = {0};

   int64_t nread = reader->ctx->readFrom(reader->ctx, buf, sizeof(buf), 0);

   if (nread == -1) {
     indexError("actual Read: %d, to read: %d, errno: %d, filename: %s", (int)(nread), (int)sizeof(buf), errno,
                reader->ctx->file.buf);
@@ -914,7 +900,7 @@ static int tfileReaderLoadFst(TFileReader* reader) {
   int64_t cost = taosGetTimestampUs() - ts;
   indexInfo("nread = %d, and fst offset=%d, fst size: %d, filename: %s, file size: %" PRId64 ", time cost: %" PRId64
             "us",
-            nread, reader->header.fstOffset, fstSize, ctx->file.buf, ctx->file.size, cost);
+            nread, reader->header.fstOffset, fstSize, ctx->file.buf, size, cost);
   // we assuse fst size less than FST_MAX_SIZE
   assert(nread > 0 && nread <= fstSize);
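The hunks above thread the index-wide LRU handle (idx->lru) into each file context so that block reads are cached per file, keyed by file path plus block id as built in idxGenLRUKey. A simplified sketch of that key scheme; buildLRUKey is a hypothetical stand-in that uses snprintf instead of the SERIALIZE_* macros and idxInt2str:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a cache key of the form "<path>_<blockId>", mirroring the
 * layout produced by idxGenLRUKey in the diff. */
static void buildLRUKey(char* buf, size_t cap, const char* path, int blockId) {
  snprintf(buf, cap, "%s_%d", path, blockId);
}
```

Because the path is part of the key, blocks from different tfile readers never collide in the shared cache, and all of a file's cached blocks become unreachable once the file is rewritten under a new versioned name.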
@@ -19,7 +19,7 @@ class FstWriter {
  public:
  FstWriter() {
    taosRemoveFile(fileName.c_str());
-    _wc = idxFileCtxCreate(TFile, fileName.c_str(), false, 64 * 1024 * 1024);
+    _wc = idxFileCtxCreate(TFILE, fileName.c_str(), false, 64 * 1024 * 1024);
    _b = fstBuilderCreate(_wc, 0);
  }
  bool Put(const std::string& key, uint64_t val) {
@@ -34,7 +34,7 @@ class FstWriter {
    return ok;
  }
  ~FstWriter() {
-    fstBuilderFinish(_b);
+    // fstBuilderFinish(_b);
    fstBuilderDestroy(_b);

    idxFileCtxDestroy(_wc, false);
@@ -48,7 +48,7 @@ class FstWriter {
 class FstReadMemory {
  public:
  FstReadMemory(int32_t size, const std::string& fileName = TD_TMP_DIR_PATH "tindex.tindex") {
-    _wc = idxFileCtxCreate(TFile, fileName.c_str(), true, 64 * 1024);
+    _wc = idxFileCtxCreate(TFILE, fileName.c_str(), true, 64 * 1024);
    _w = idxFileCreate(_wc);
    _size = size;
    memset((void*)&_s, 0, sizeof(_s));
@@ -598,7 +598,9 @@ void fst_get(Fst* fst) {
 void validateTFile(char* arg) {
  std::thread threads[NUM_OF_THREAD];
  // std::vector<std::thread> threads;
-  TFileReader* reader = tfileReaderOpen(arg, 0, 20000000, "tag1");
+  SIndex* index = (SIndex*)taosMemoryCalloc(1, sizeof(SIndex));
+  index->path = strdup(arg);
+  TFileReader* reader = tfileReaderOpen(index, 0, 20000000, "tag1");

  for (int i = 0; i < NUM_OF_THREAD; i++) {
    threads[i] = std::thread(fst_get, reader->fst);
@@ -617,7 +619,7 @@ void iterTFileReader(char* path, char* uid, char* colName, char* ver) {
  uint64_t suid = atoi(uid);
  int version = atoi(ver);

-  TFileReader* reader = tfileReaderOpen(path, suid, version, colName);
+  TFileReader* reader = tfileReaderOpen(NULL, suid, version, colName);

  Iterate* iter = tfileIteratorCreate(reader);
  bool tn = iter ? iter->next(iter) : false;
@@ -39,7 +39,7 @@ static void EnvCleanup() {}
 class FstWriter {
  public:
  FstWriter() {
-    _wc = idxFileCtxCreate(TFile, tindex, false, 64 * 1024 * 1024);
+    _wc = idxFileCtxCreate(TFILE, tindex, false, 64 * 1024 * 1024);
    _b = fstBuilderCreate(_wc, 0);
  }
  bool Put(const std::string& key, uint64_t val) {
@@ -54,7 +54,6 @@ class FstWriter {
    return ok;
  }
  ~FstWriter() {
-    fstBuilderFinish(_b);
    fstBuilderDestroy(_b);

    idxFileCtxDestroy(_wc, false);
@@ -68,7 +67,7 @@ class FstWriter {
 class FstReadMemory {
  public:
  FstReadMemory(size_t size) {
-    _wc = idxFileCtxCreate(TFile, tindex, true, 64 * 1024);
+    _wc = idxFileCtxCreate(TFILE, tindex, true, 64 * 1024);
    _w = idxFileCreate(_wc);
    _size = size;
    memset((void*)&_s, 0, sizeof(_s));
@@ -50,7 +50,7 @@ class DebugInfo {
 class FstWriter {
  public:
  FstWriter() {
-    _wc = idxFileCtxCreate(TFile, TD_TMP_DIR_PATH "tindex", false, 64 * 1024 * 1024);
+    _wc = idxFileCtxCreate(TFILE, TD_TMP_DIR_PATH "tindex", false, 64 * 1024 * 1024);
    _b = fstBuilderCreate(NULL, 0);
  }
  bool Put(const std::string& key, uint64_t val) {
@@ -60,7 +60,7 @@ class FstWriter {
    return ok;
  }
  ~FstWriter() {
-    fstBuilderFinish(_b);
+    // fstBuilderFinish(_b);
    fstBuilderDestroy(_b);

    idxFileCtxDestroy(_wc, false);
@@ -74,7 +74,7 @@ class FstWriter {
 class FstReadMemory {
  public:
  FstReadMemory(size_t size) {
-    _wc = idxFileCtxCreate(TFile, TD_TMP_DIR_PATH "tindex", true, 64 * 1024);
+    _wc = idxFileCtxCreate(TFILE, TD_TMP_DIR_PATH "tindex", true, 64 * 1024);
    _w = idxFileCreate(_wc);
    _size = size;
    memset((void*)&_s, 0, sizeof(_s));
@@ -292,14 +292,12 @@ class IndexEnv : public ::testing::Test {
  virtual void SetUp() {
    initLog();
    taosRemoveDir(path);
-    opts = indexOptsCreate();
-    int ret = indexOpen(opts, path, &index);
+    SIndexOpts opts;
+    opts.cacheSize = 1024 * 1024 * 4;
+    int ret = indexOpen(&opts, path, &index);
    assert(ret == 0);
  }
-  virtual void TearDown() {
-    indexClose(index);
-    indexOptsDestroy(opts);
-  }
+  virtual void TearDown() { indexClose(index); }

  const char* path = TD_TMP_DIR_PATH "tindex";
  SIndexOpts* opts;
@@ -391,13 +389,15 @@ class TFileObj {

  fileName_ = path;

-  IFileCtx* ctx = idxFileCtxCreate(TFile, path.c_str(), false, 64 * 1024 * 1024);
+  IFileCtx* ctx = idxFileCtxCreate(TFILE, path.c_str(), false, 64 * 1024 * 1024);
+  ctx->lru = taosLRUCacheInit(1024 * 1024 * 4, -1, .5);

  writer_ = tfileWriterCreate(ctx, &header);
  return writer_ != NULL ? true : false;
 }
 bool InitReader() {
-  IFileCtx* ctx = idxFileCtxCreate(TFile, fileName_.c_str(), true, 64 * 1024 * 1024);
+  IFileCtx* ctx = idxFileCtxCreate(TFILE, fileName_.c_str(), true, 64 * 1024 * 1024);
+  ctx->lru = taosLRUCacheInit(1024 * 1024 * 4, -1, .5);
  reader_ = tfileReaderCreate(ctx);
  return reader_ != NULL ? true : false;
 }
@@ -657,7 +657,7 @@ TEST_F(IndexCacheEnv, cache_test) {
  {
    std::string colVal("v3");
    SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
                                        colVal.c_str(), colVal.size());
    SIndexTermQuery query = {term, QUERY_TERM};
    SArray* ret = (SArray*)taosArrayInit(4, sizeof(suid));
    STermValueType valType;
@@ -672,7 +672,7 @@ TEST_F(IndexCacheEnv, cache_test) {
  {
    std::string colVal("v2");
    SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
                                        colVal.c_str(), colVal.size());
    SIndexTermQuery query = {term, QUERY_TERM};
    SArray* ret = (SArray*)taosArrayInit(4, sizeof(suid));
|
SArray* ret = (SArray*)taosArrayInit(4, sizeof(suid));
|
||||||
STermValueType valType;
|
STermValueType valType;
|
||||||
|
@ -698,6 +698,9 @@ class IndexObj {
|
||||||
taosMkDir(dir.c_str());
|
taosMkDir(dir.c_str());
|
||||||
}
|
}
|
||||||
taosMkDir(dir.c_str());
|
taosMkDir(dir.c_str());
|
||||||
|
SIndexOpts opts;
|
||||||
|
opts.cacheSize = 1024 * 1024 * 4;
|
||||||
|
|
||||||
int ret = indexOpen(&opts, dir.c_str(), &idx);
|
int ret = indexOpen(&opts, dir.c_str(), &idx);
|
||||||
if (ret != 0) {
|
if (ret != 0) {
|
||||||
// opt
|
// opt
|
||||||
|
@ -707,7 +710,7 @@ class IndexObj {
|
||||||
}
|
}
|
||||||
void Del(const std::string& colName, const std::string& colVal, uint64_t uid) {
|
void Del(const std::string& colName, const std::string& colVal, uint64_t uid) {
|
||||||
SIndexTerm* term = indexTermCreateT(0, DEL_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, DEL_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
Put(terms, uid);
|
Put(terms, uid);
|
||||||
|
@ -716,7 +719,7 @@ class IndexObj {
|
||||||
int WriteMillonData(const std::string& colName, const std::string& colVal = "Hello world",
|
int WriteMillonData(const std::string& colName, const std::string& colVal = "Hello world",
|
||||||
size_t numOfTable = 100 * 10000) {
|
size_t numOfTable = 100 * 10000) {
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
for (size_t i = 0; i < numOfTable; i++) {
|
for (size_t i = 0; i < numOfTable; i++) {
|
||||||
|
@ -738,7 +741,7 @@ class IndexObj {
|
||||||
tColVal[taosRand() % colValSize] = 'a' + k % 26;
|
tColVal[taosRand() % colValSize] = 'a' + k % 26;
|
||||||
}
|
}
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
tColVal.c_str(), tColVal.size());
|
tColVal.c_str(), tColVal.size());
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
for (size_t j = 0; j < skip; j++) {
|
for (size_t j = 0; j < skip; j++) {
|
||||||
|
@ -774,7 +777,7 @@ class IndexObj {
|
||||||
int SearchOne(const std::string& colName, const std::string& colVal) {
|
int SearchOne(const std::string& colName, const std::string& colVal) {
|
||||||
SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
|
SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
indexMultiTermQueryAdd(mq, term, QUERY_TERM);
|
indexMultiTermQueryAdd(mq, term, QUERY_TERM);
|
||||||
|
|
||||||
SArray* result = (SArray*)taosArrayInit(1, sizeof(uint64_t));
|
SArray* result = (SArray*)taosArrayInit(1, sizeof(uint64_t));
|
||||||
|
@ -796,7 +799,7 @@ class IndexObj {
|
||||||
int SearchOneTarget(const std::string& colName, const std::string& colVal, uint64_t val) {
|
int SearchOneTarget(const std::string& colName, const std::string& colVal, uint64_t val) {
|
||||||
SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
|
SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
indexMultiTermQueryAdd(mq, term, QUERY_TERM);
|
indexMultiTermQueryAdd(mq, term, QUERY_TERM);
|
||||||
|
|
||||||
SArray* result = (SArray*)taosArrayInit(1, sizeof(uint64_t));
|
SArray* result = (SArray*)taosArrayInit(1, sizeof(uint64_t));
|
||||||
|
@ -821,7 +824,7 @@ class IndexObj {
|
||||||
void PutOne(const std::string& colName, const std::string& colVal) {
|
void PutOne(const std::string& colName, const std::string& colVal) {
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
Put(terms, 10);
|
Put(terms, 10);
|
||||||
indexMultiTermDestroy(terms);
|
indexMultiTermDestroy(terms);
|
||||||
|
@ -829,7 +832,7 @@ class IndexObj {
|
||||||
void PutOneTarge(const std::string& colName, const std::string& colVal, uint64_t val) {
|
void PutOneTarge(const std::string& colName, const std::string& colVal, uint64_t val) {
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
Put(terms, val);
|
Put(terms, val);
|
||||||
indexMultiTermDestroy(terms);
|
indexMultiTermDestroy(terms);
|
||||||
|
@ -845,10 +848,10 @@ class IndexObj {
|
||||||
}
|
}
|
||||||
|
|
||||||
private:
|
private:
|
||||||
SIndexOpts opts;
|
SIndexOpts* opts;
|
||||||
SIndex* idx;
|
SIndex* idx;
|
||||||
int numOfWrite;
|
int numOfWrite;
|
||||||
int numOfRead;
|
int numOfRead;
|
||||||
};
|
};
|
||||||
|
|
||||||
class IndexEnv2 : public ::testing::Test {
|
class IndexEnv2 : public ::testing::Test {
|
||||||
|
@ -875,7 +878,7 @@ TEST_F(IndexEnv2, testIndexOpen) {
|
||||||
std::string colName("tag1"), colVal("Hello");
|
std::string colName("tag1"), colVal("Hello");
|
||||||
|
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
for (size_t i = 0; i < targetSize; i++) {
|
for (size_t i = 0; i < targetSize; i++) {
|
||||||
|
@ -890,7 +893,7 @@ TEST_F(IndexEnv2, testIndexOpen) {
|
||||||
std::string colName("tag1"), colVal("hello");
|
std::string colName("tag1"), colVal("hello");
|
||||||
|
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
for (size_t i = 0; i < size; i++) {
|
for (size_t i = 0; i < size; i++) {
|
||||||
|
@ -905,7 +908,7 @@ TEST_F(IndexEnv2, testIndexOpen) {
|
||||||
std::string colName("tag1"), colVal("Hello");
|
std::string colName("tag1"), colVal("Hello");
|
||||||
|
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
for (size_t i = size * 3; i < size * 4; i++) {
|
for (size_t i = size * 3; i < size * 4; i++) {
|
||||||
|
@ -920,7 +923,7 @@ TEST_F(IndexEnv2, testIndexOpen) {
|
||||||
std::string colName("tag1"), colVal("Hello");
|
std::string colName("tag1"), colVal("Hello");
|
||||||
SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
|
SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
indexMultiTermQueryAdd(mq, term, QUERY_TERM);
|
indexMultiTermQueryAdd(mq, term, QUERY_TERM);
|
||||||
|
|
||||||
SArray* result = (SArray*)taosArrayInit(1, sizeof(uint64_t));
|
SArray* result = (SArray*)taosArrayInit(1, sizeof(uint64_t));
|
||||||
|
@ -943,7 +946,7 @@ TEST_F(IndexEnv2, testEmptyIndexOpen) {
|
||||||
std::string colName("tag1"), colVal("Hello");
|
std::string colName("tag1"), colVal("Hello");
|
||||||
|
|
||||||
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
SIndexTerm* term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
|
||||||
colVal.c_str(), colVal.size());
|
colVal.c_str(), colVal.size());
|
||||||
SIndexMultiTerm* terms = indexMultiTermCreate();
|
SIndexMultiTerm* terms = indexMultiTermCreate();
|
||||||
indexMultiTermAdd(terms, term);
|
indexMultiTermAdd(terms, term);
|
||||||
for (size_t i = 0; i < targetSize; i++) {
|
for (size_t i = 0; i < targetSize; i++) {
|
||||||
|
|
|
@@ -54,13 +54,12 @@ class JsonEnv : public ::testing::Test {
     printf("set up\n");

     initLog();
-    opts = indexOptsCreate();
+    opts = indexOptsCreate(1024 * 1024 * 4);
     int ret = indexJsonOpen(opts, dir.c_str(), &index);
     assert(ret == 0);
   }
   virtual void TearDown() {
     indexJsonClose(index);
-    indexOptsDestroy(opts);
     printf("destory\n");
     taosMsleep(1000);
   }
@@ -71,7 +70,7 @@ class JsonEnv : public ::testing::Test {
 static void WriteData(SIndexJson* index, const std::string& colName, int8_t dtype, void* data, int dlen, int tableId,
                       int8_t operType = ADD_VALUE) {
   SIndexTerm* term = indexTermCreateT(1, (SIndexOperOnColumn)operType, dtype, colName.c_str(), colName.size(),
                                       (const char*)data, dlen);
   SIndexMultiTerm* terms = indexMultiTermCreate();
   indexMultiTermAdd(terms, term);
   indexJsonPut(index, terms, (int64_t)tableId);
@@ -82,7 +81,7 @@ static void WriteData(SIndexJson* index, const std::string& colName, int8_t dtyp
 static void delData(SIndexJson* index, const std::string& colName, int8_t dtype, void* data, int dlen, int tableId,
                     int8_t operType = DEL_VALUE) {
   SIndexTerm* term = indexTermCreateT(1, (SIndexOperOnColumn)operType, dtype, colName.c_str(), colName.size(),
                                       (const char*)data, dlen);
   SIndexMultiTerm* terms = indexMultiTermCreate();
   indexMultiTermAdd(terms, term);
   indexJsonPut(index, terms, (int64_t)tableId);
@@ -108,7 +107,7 @@ TEST_F(JsonEnv, testWrite) {
     std::string colVal("ab");
     for (int i = 0; i < 100; i++) {
       SIndexTerm* term = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
                                           colVal.c_str(), colVal.size());
       SIndexMultiTerm* terms = indexMultiTermCreate();
       indexMultiTermAdd(terms, term);
       indexJsonPut(index, terms, i);
@@ -147,7 +146,7 @@ TEST_F(JsonEnv, testWrite) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
                                      colVal.c_str(), colVal.size());

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_TERM);
@@ -205,7 +204,7 @@ TEST_F(JsonEnv, testWriteMillonData) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
                                      colVal.c_str(), colVal.size());

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_TERM);
@@ -220,7 +219,7 @@ TEST_F(JsonEnv, testWriteMillonData) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
                                      colVal.c_str(), colVal.size());

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_GREATER_THAN);
@@ -235,7 +234,7 @@ TEST_F(JsonEnv, testWriteMillonData) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
                                      colVal.c_str(), colVal.size());

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_GREATER_EQUAL);
@@ -305,7 +304,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {
     int val = 15;
     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_TERM);
@@ -319,7 +318,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_GREATER_THAN);
@@ -334,7 +333,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(int));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_GREATER_EQUAL);
@@ -349,7 +348,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_LESS_THAN);
@@ -364,7 +363,7 @@ TEST_F(JsonEnv, testWriteJsonNumberData) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_LESS_EQUAL);
@@ -407,7 +406,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_TERM);
@@ -421,7 +420,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(int));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_GREATER_THAN);
@@ -436,7 +435,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_GREATER_EQUAL);
@@ -450,7 +449,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_GREATER_THAN);
@@ -464,7 +463,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_LESS_EQUAL);
@@ -493,7 +492,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_LESS_THAN);
@@ -521,7 +520,7 @@ TEST_F(JsonEnv, testWriteJsonTfileAndCache_INT) {

     SIndexMultiTermQuery* mq = indexMultiTermQueryCreate(MUST);
     SIndexTerm* q = indexTermCreateT(1, ADD_VALUE, TSDB_DATA_TYPE_INT, colName.c_str(), colName.size(),
                                      (const char*)&val, sizeof(val));

     SArray* result = taosArrayInit(1, sizeof(uint64_t));
     indexMultiTermQueryAdd(mq, q, QUERY_GREATER_EQUAL);
@@ -388,6 +388,11 @@ static void destroyDataSinkNode(SDataSinkNode* pNode) { nodesDestroyNode((SNode*

 static void destroyExprNode(SExprNode* pExpr) { taosArrayDestroy(pExpr->pAssociation); }

+static void nodesDestroyNodePointer(void* node) {
+  SNode* pNode = *(SNode**)node;
+  nodesDestroyNode(pNode);
+}
+
 void nodesDestroyNode(SNode* pNode) {
   if (NULL == pNode) {
     return;
@@ -718,6 +723,7 @@ void nodesDestroyNode(SNode* pNode) {
       }
       taosArrayDestroy(pQuery->pDbList);
       taosArrayDestroy(pQuery->pTableList);
+      taosArrayDestroyEx(pQuery->pPlaceholderValues, nodesDestroyNodePointer);
       break;
     }
     case QUERY_NODE_LOGIC_PLAN_SCAN: {
@ -118,36 +118,33 @@ static bool needGetTableIndex(SNode* pStmt) {
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t collectMetaKeyFromRealTableImpl(SCollectMetaKeyCxt* pCxt, SRealTableNode* pRealTable,
|
static int32_t collectMetaKeyFromRealTableImpl(SCollectMetaKeyCxt* pCxt, const char* pDb, const char* pTable,
|
||||||
AUTH_TYPE authType) {
|
AUTH_TYPE authType) {
|
||||||
int32_t code = reserveTableMetaInCache(pCxt->pParseCxt->acctId, pRealTable->table.dbName, pRealTable->table.tableName,
|
int32_t code = reserveTableMetaInCache(pCxt->pParseCxt->acctId, pDb, pTable, pCxt->pMetaCache);
|
||||||
pCxt->pMetaCache);
|
|
||||||
if (TSDB_CODE_SUCCESS == code) {
|
if (TSDB_CODE_SUCCESS == code) {
|
||||||
code = reserveTableVgroupInCache(pCxt->pParseCxt->acctId, pRealTable->table.dbName, pRealTable->table.tableName,
|
code = reserveTableVgroupInCache(pCxt->pParseCxt->acctId, pDb, pTable, pCxt->pMetaCache);
|
-                                  pCxt->pMetaCache);
   }
   if (TSDB_CODE_SUCCESS == code) {
-    code = reserveUserAuthInCache(pCxt->pParseCxt->acctId, pCxt->pParseCxt->pUser, pRealTable->table.dbName, authType,
-                                  pCxt->pMetaCache);
+    code = reserveUserAuthInCache(pCxt->pParseCxt->acctId, pCxt->pParseCxt->pUser, pDb, authType, pCxt->pMetaCache);
   }
   if (TSDB_CODE_SUCCESS == code) {
-    code = reserveDbVgInfoInCache(pCxt->pParseCxt->acctId, pRealTable->table.dbName, pCxt->pMetaCache);
+    code = reserveDbVgInfoInCache(pCxt->pParseCxt->acctId, pDb, pCxt->pMetaCache);
   }
   if (TSDB_CODE_SUCCESS == code && needGetTableIndex(pCxt->pStmt)) {
-    code = reserveTableIndexInCache(pCxt->pParseCxt->acctId, pRealTable->table.dbName, pRealTable->table.tableName,
-                                    pCxt->pMetaCache);
+    code = reserveTableIndexInCache(pCxt->pParseCxt->acctId, pDb, pTable, pCxt->pMetaCache);
   }
-  if (TSDB_CODE_SUCCESS == code && (0 == strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_DNODE_VARIABLES))) {
+  if (TSDB_CODE_SUCCESS == code && (0 == strcmp(pTable, TSDB_INS_TABLE_DNODE_VARIABLES))) {
     code = reserveDnodeRequiredInCache(pCxt->pMetaCache);
   }
   if (TSDB_CODE_SUCCESS == code) {
-    code = reserveDbCfgInCache(pCxt->pParseCxt->acctId, pRealTable->table.dbName, pCxt->pMetaCache);
+    code = reserveDbCfgInCache(pCxt->pParseCxt->acctId, pDb, pCxt->pMetaCache);
   }
   return code;
 }
 
 static EDealRes collectMetaKeyFromRealTable(SCollectMetaKeyFromExprCxt* pCxt, SRealTableNode* pRealTable) {
-  pCxt->errCode = collectMetaKeyFromRealTableImpl(pCxt->pComCxt, pRealTable, AUTH_TYPE_READ);
+  pCxt->errCode = collectMetaKeyFromRealTableImpl(pCxt->pComCxt, pRealTable->table.dbName, pRealTable->table.tableName,
+                                                  AUTH_TYPE_READ);
   return TSDB_CODE_SUCCESS == pCxt->errCode ? DEAL_RES_CONTINUE : DEAL_RES_ERROR;
 }
@@ -454,11 +451,13 @@ static int32_t collectMetaKeyFromShowTransactions(SCollectMetaKeyCxt* pCxt, SSho
 }
 
 static int32_t collectMetaKeyFromDelete(SCollectMetaKeyCxt* pCxt, SDeleteStmt* pStmt) {
-  return collectMetaKeyFromRealTableImpl(pCxt, (SRealTableNode*)pStmt->pFromTable, AUTH_TYPE_WRITE);
+  STableNode* pTable = (STableNode*)pStmt->pFromTable;
+  return collectMetaKeyFromRealTableImpl(pCxt, pTable->dbName, pTable->tableName, AUTH_TYPE_WRITE);
 }
 
 static int32_t collectMetaKeyFromInsert(SCollectMetaKeyCxt* pCxt, SInsertStmt* pStmt) {
-  int32_t code = collectMetaKeyFromRealTableImpl(pCxt, (SRealTableNode*)pStmt->pTable, AUTH_TYPE_WRITE);
+  STableNode* pTable = (STableNode*)pStmt->pTable;
+  int32_t code = collectMetaKeyFromRealTableImpl(pCxt, pTable->dbName, pTable->tableName, AUTH_TYPE_WRITE);
   if (TSDB_CODE_SUCCESS == code) {
     code = collectMetaKeyFromQuery(pCxt, pStmt->pQuery);
   }
@@ -471,14 +470,7 @@ static int32_t collectMetaKeyFromShowBlockDist(SCollectMetaKeyCxt* pCxt, SShowTa
   strcpy(name.tname, pStmt->tableName);
   int32_t code = catalogRemoveTableMeta(pCxt->pParseCxt->pCatalog, &name);
   if (TSDB_CODE_SUCCESS == code) {
-    code = reserveTableMetaInCache(pCxt->pParseCxt->acctId, pStmt->dbName, pStmt->tableName, pCxt->pMetaCache);
-  }
-  if (TSDB_CODE_SUCCESS == code) {
-    code = reserveTableVgroupInCache(pCxt->pParseCxt->acctId, pStmt->dbName, pStmt->tableName, pCxt->pMetaCache);
-  }
-  if (TSDB_CODE_SUCCESS == code) {
-    code = reserveDbVgInfoInCache(pCxt->pParseCxt->acctId, pStmt->dbName, pCxt->pMetaCache);
+    code = collectMetaKeyFromRealTableImpl(pCxt, pStmt->dbName, pStmt->tableName, AUTH_TYPE_READ);
   }
   return code;
 }
@@ -1177,6 +1177,29 @@ static int32_t translateRepeatScanFunc(STranslateContext* pCxt, SFunctionNode* p
                                    "%s is only supported in single table query", pFunc->functionName);
 }
 
+static bool isStar(SNode* pNode) {
+  return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' == ((SColumnNode*)pNode)->tableAlias[0]) &&
+         (0 == strcmp(((SColumnNode*)pNode)->colName, "*"));
+}
+
+static bool isTableStar(SNode* pNode) {
+  return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' != ((SColumnNode*)pNode)->tableAlias[0]) &&
+         (0 == strcmp(((SColumnNode*)pNode)->colName, "*"));
+}
+
+static int32_t translateMultiResFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
+  if (!fmIsMultiResFunc(pFunc->funcId)) {
+    return TSDB_CODE_SUCCESS;
+  }
+  if (SQL_CLAUSE_SELECT != pCxt->currClause) {
+    SNode* pPara = nodesListGetNode(pFunc->pParameterList, 0);
+    if (isStar(pPara) || isTableStar(pPara)) {
+      return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC,
+                                     "%s(*) is only supported in SELECTed list", pFunc->functionName);
+    }
+  }
+  return TSDB_CODE_SUCCESS;
+}
+
 static void setFuncClassification(SNode* pCurrStmt, SFunctionNode* pFunc) {
   if (NULL != pCurrStmt && QUERY_NODE_SELECT_STMT == nodeType(pCurrStmt)) {
     SSelectStmt* pSelect = (SSelectStmt*)pCurrStmt;
@@ -1311,6 +1334,9 @@ static int32_t translateNoramlFunction(STranslateContext* pCxt, SFunctionNode* p
   if (TSDB_CODE_SUCCESS == code) {
     code = translateRepeatScanFunc(pCxt, pFunc);
   }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = translateMultiResFunc(pCxt, pFunc);
+  }
   if (TSDB_CODE_SUCCESS == code) {
     setFuncClassification(pCxt->pCurrStmt, pFunc);
   }
@@ -1908,16 +1934,6 @@ static int32_t createTableAllCols(STranslateContext* pCxt, SColumnNode* pCol, bo
   return code;
 }
 
-static bool isStar(SNode* pNode) {
-  return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' == ((SColumnNode*)pNode)->tableAlias[0]) &&
-         (0 == strcmp(((SColumnNode*)pNode)->colName, "*"));
-}
-
-static bool isTableStar(SNode* pNode) {
-  return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' != ((SColumnNode*)pNode)->tableAlias[0]) &&
-         (0 == strcmp(((SColumnNode*)pNode)->colName, "*"));
-}
-
 static int32_t createMultiResFuncsParas(STranslateContext* pCxt, SNodeList* pSrcParas, SNodeList** pOutput) {
   int32_t code = TSDB_CODE_SUCCESS;
@@ -159,8 +159,8 @@ void generatePerformanceSchema(MockCatalogService* mcs) {
  * c4      | column | DOUBLE    | 8           |
  * c5      | column | DOUBLE    | 8           |
  */
-void generateTestTables(MockCatalogService* mcs) {
-  ITableBuilder& builder = mcs->createTableBuilder("test", "t1", TSDB_NORMAL_TABLE, 6)
+void generateTestTables(MockCatalogService* mcs, const std::string& db) {
+  ITableBuilder& builder = mcs->createTableBuilder(db, "t1", TSDB_NORMAL_TABLE, 6)
                               .setPrecision(TSDB_TIME_PRECISION_MILLI)
                               .setVgid(1)
                               .addColumn("ts", TSDB_DATA_TYPE_TIMESTAMP)
@@ -193,9 +193,9 @@ void generateTestTables(MockCatalogService* mcs) {
  * jtag    | tag    | json      | --          |
  * Child Table: st2s1, st2s2
  */
-void generateTestStables(MockCatalogService* mcs) {
+void generateTestStables(MockCatalogService* mcs, const std::string& db) {
   {
-    ITableBuilder& builder = mcs->createTableBuilder("test", "st1", TSDB_SUPER_TABLE, 3, 3)
+    ITableBuilder& builder = mcs->createTableBuilder(db, "st1", TSDB_SUPER_TABLE, 3, 3)
                                  .setPrecision(TSDB_TIME_PRECISION_MILLI)
                                  .addColumn("ts", TSDB_DATA_TYPE_TIMESTAMP)
                                  .addColumn("c1", TSDB_DATA_TYPE_INT)
@@ -204,20 +204,20 @@ void generateTestStables(MockCatalogService* mcs) {
                                  .addTag("tag2", TSDB_DATA_TYPE_BINARY, 20)
                                  .addTag("tag3", TSDB_DATA_TYPE_TIMESTAMP);
     builder.done();
-    mcs->createSubTable("test", "st1", "st1s1", 1);
-    mcs->createSubTable("test", "st1", "st1s2", 2);
-    mcs->createSubTable("test", "st1", "st1s3", 1);
+    mcs->createSubTable(db, "st1", "st1s1", 1);
+    mcs->createSubTable(db, "st1", "st1s2", 2);
+    mcs->createSubTable(db, "st1", "st1s3", 1);
   }
   {
-    ITableBuilder& builder = mcs->createTableBuilder("test", "st2", TSDB_SUPER_TABLE, 3, 1)
+    ITableBuilder& builder = mcs->createTableBuilder(db, "st2", TSDB_SUPER_TABLE, 3, 1)
                                  .setPrecision(TSDB_TIME_PRECISION_MILLI)
                                  .addColumn("ts", TSDB_DATA_TYPE_TIMESTAMP)
                                  .addColumn("c1", TSDB_DATA_TYPE_INT)
                                  .addColumn("c2", TSDB_DATA_TYPE_BINARY, 20)
                                  .addTag("jtag", TSDB_DATA_TYPE_JSON);
     builder.done();
-    mcs->createSubTable("test", "st2", "st2s1", 1);
-    mcs->createSubTable("test", "st2", "st2s2", 2);
+    mcs->createSubTable(db, "st2", "st2s1", 1);
+    mcs->createSubTable(db, "st2", "st2s2", 2);
   }
 }
 
@@ -237,6 +237,11 @@ void generateDatabases(MockCatalogService* mcs) {
   mcs->createDatabase(TSDB_INFORMATION_SCHEMA_DB);
   mcs->createDatabase(TSDB_PERFORMANCE_SCHEMA_DB);
   mcs->createDatabase("test");
+  generateTestTables(g_mockCatalogService.get(), "test");
+  generateTestStables(g_mockCatalogService.get(), "test");
+  mcs->createDatabase("cache_db", false, 1);
+  generateTestTables(g_mockCatalogService.get(), "cache_db");
+  generateTestStables(g_mockCatalogService.get(), "cache_db");
   mcs->createDatabase("rollup_db", true);
 }
 
@@ -369,11 +374,8 @@ void generateMetaData() {
   generateDatabases(g_mockCatalogService.get());
   generateInformationSchema(g_mockCatalogService.get());
   generatePerformanceSchema(g_mockCatalogService.get());
-  generateTestTables(g_mockCatalogService.get());
-  generateTestStables(g_mockCatalogService.get());
   generateFunctions(g_mockCatalogService.get());
   generateDnodes(g_mockCatalogService.get());
-  g_mockCatalogService->showTables();
 }
 
 void destroyMetaDataEnv() { g_mockCatalogService.reset(); }
@@ -334,11 +334,12 @@ class MockCatalogServiceImpl {
     dnode_.insert(std::make_pair(dnodeId, epSet));
   }
 
-  void createDatabase(const std::string& db, bool rollup) {
+  void createDatabase(const std::string& db, bool rollup, int8_t cacheLast) {
     SDbCfgInfo cfg = {0};
     if (rollup) {
       cfg.pRetensions = taosArrayInit(TARRAY_MIN_SIZE, sizeof(SRetention));
     }
+    cfg.cacheLast = cacheLast;
     dbCfg_.insert(std::make_pair(db, cfg));
   }
@@ -627,7 +628,9 @@ void MockCatalogService::createDnode(int32_t dnodeId, const std::string& host, i
   impl_->createDnode(dnodeId, host, port);
 }
 
-void MockCatalogService::createDatabase(const std::string& db, bool rollup) { impl_->createDatabase(db, rollup); }
+void MockCatalogService::createDatabase(const std::string& db, bool rollup, int8_t cacheLast) {
+  impl_->createDatabase(db, rollup, cacheLast);
+}
 
 int32_t MockCatalogService::catalogGetTableMeta(const SName* pTableName, STableMeta** pTableMeta) const {
   return impl_->catalogGetTableMeta(pTableName, pTableMeta);
@@ -63,7 +63,7 @@ class MockCatalogService {
   void createFunction(const std::string& func, int8_t funcType, int8_t outputType, int32_t outputLen, int32_t bufSize);
   void createSmaIndex(const SMCreateSmaReq* pReq);
   void createDnode(int32_t dnodeId, const std::string& host, int16_t port);
-  void createDatabase(const std::string& db, bool rollup = false);
+  void createDatabase(const std::string& db, bool rollup = false, int8_t cacheLast = 0);
 
   int32_t catalogGetTableMeta(const SName* pTableName, STableMeta** pTableMeta) const;
   int32_t catalogGetTableHashVgroup(const SName* pTableName, SVgroupInfo* vgInfo) const;
@@ -179,6 +179,12 @@ TEST_F(ParserShowToUseTest, showTables) {
   run("SHOW test.tables like 'c%'");
 }
 
+TEST_F(ParserShowToUseTest, showTableDistributed) {
+  useDb("root", "test");
+
+  run("SHOW TABLE DISTRIBUTED st1");
+}
+
 // todo SHOW topics
 
 TEST_F(ParserShowToUseTest, showUsers) {
@@ -2011,11 +2011,13 @@ static int32_t lastRowScanOptimize(SOptimizeContext* pCxt, SLogicSubplan* pLogic
   SNode* pNode = NULL;
   FOREACH(pNode, pAgg->pAggFuncs) {
     SFunctionNode* pFunc = (SFunctionNode*)pNode;
-    int32_t len = snprintf(pFunc->functionName, sizeof(pFunc->functionName), "_cache_last_row");
-    pFunc->functionName[len] = '\0';
-    int32_t code = fmGetFuncInfo(pFunc, NULL, 0);
-    if (TSDB_CODE_SUCCESS != code) {
-      return code;
+    if (FUNCTION_TYPE_LAST_ROW == pFunc->funcType) {
+      int32_t len = snprintf(pFunc->functionName, sizeof(pFunc->functionName), "_cache_last_row");
+      pFunc->functionName[len] = '\0';
+      int32_t code = fmGetFuncInfo(pFunc, NULL, 0);
+      if (TSDB_CODE_SUCCESS != code) {
+        return code;
+      }
     }
   }
   pAgg->hasLastRow = false;
@@ -98,6 +98,24 @@ TEST_F(PlanBasicTest, interpFunc) {
 }
 
 TEST_F(PlanBasicTest, lastRowFunc) {
+  useDb("root", "cache_db");
+
+  run("SELECT LAST_ROW(c1) FROM t1");
+
+  run("SELECT LAST_ROW(*) FROM t1");
+
+  run("SELECT LAST_ROW(c1, c2) FROM t1");
+
+  run("SELECT LAST_ROW(c1), c2 FROM t1");
+
+  run("SELECT LAST_ROW(c1) FROM st1");
+
+  run("SELECT LAST_ROW(c1) FROM st1 PARTITION BY TBNAME");
+
+  run("SELECT LAST_ROW(c1), SUM(c3) FROM t1");
+}
+
+TEST_F(PlanBasicTest, lastRowFuncWithoutCache) {
   useDb("root", "test");
 
   run("SELECT LAST_ROW(c1) FROM t1");
@@ -378,6 +378,7 @@ void qwDbgDumpMgmtInfo(SQWorker *mgmt);
 int32_t qwDbgValidateStatus(QW_FPARAMS_DEF, int8_t oriStatus, int8_t newStatus, bool *ignore);
 int32_t qwDbgBuildAndSendRedirectRsp(int32_t rspType, SRpcHandleInfo *pConn, int32_t code, SEpSet *pEpSet);
 int32_t qwAddTaskCtx(QW_FPARAMS_DEF);
+int32_t qwDbgResponseRedirect(SQWMsg *qwMsg, SQWTaskCtx *ctx);
 
 
 #ifdef __cplusplus
@@ -24,7 +24,7 @@ extern "C" {
 #include "dataSinkMgt.h"
 
 int32_t qwAbortPrerocessQuery(QW_FPARAMS_DEF);
-int32_t qwPrerocessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg);
+int32_t qwPreprocessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg);
 int32_t qwProcessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg, const char* sql);
 int32_t qwProcessCQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg);
 int32_t qwProcessReady(QW_FPARAMS_DEF, SQWMsg *qwMsg);
@@ -9,7 +9,7 @@
 #include "tmsg.h"
 #include "tname.h"
 
-SQWDebug gQWDebug = {.statusEnable = true, .dumpEnable = false, .tmp = true};
+SQWDebug gQWDebug = {.statusEnable = true, .dumpEnable = false, .tmp = false};
 
 int32_t qwDbgValidateStatus(QW_FPARAMS_DEF, int8_t oriStatus, int8_t newStatus, bool *ignore) {
   if (!gQWDebug.statusEnable) {
@@ -147,9 +147,9 @@ int32_t qwDbgBuildAndSendRedirectRsp(int32_t rspType, SRpcHandleInfo *pConn, int
   return TSDB_CODE_SUCCESS;
 }
 
-int32_t qwDbgResponseREdirect(SQWMsg *qwMsg, SQWTaskCtx *ctx) {
+int32_t qwDbgResponseRedirect(SQWMsg *qwMsg, SQWTaskCtx *ctx) {
   if (gQWDebug.tmp) {
-    if (TDMT_SCH_QUERY == qwMsg->msgType) {
+    if (TDMT_SCH_QUERY == qwMsg->msgType && (0 == taosRand() % 3)) {
       SEpSet epSet = {0};
       epSet.inUse = 1;
       epSet.numOfEps = 3;
@@ -159,16 +159,15 @@ int32_t qwDbgResponseREdirect(SQWMsg *qwMsg, SQWTaskCtx *ctx) {
       epSet.eps[1].port = 7200;
       strcpy(epSet.eps[2].fqdn, "localhost");
       epSet.eps[2].port = 7300;
 
+      ctx->phase = QW_PHASE_POST_QUERY;
       qwDbgBuildAndSendRedirectRsp(qwMsg->msgType + 1, &qwMsg->connInfo, TSDB_CODE_RPC_REDIRECT, &epSet);
-      gQWDebug.tmp = false;
       return TSDB_CODE_SUCCESS;
     }
 
-    if (TDMT_SCH_MERGE_QUERY == qwMsg->msgType) {
+    if (TDMT_SCH_MERGE_QUERY == qwMsg->msgType && (0 == taosRand() % 3)) {
       ctx->phase = QW_PHASE_POST_QUERY;
       qwDbgBuildAndSendRedirectRsp(qwMsg->msgType + 1, &qwMsg->connInfo, TSDB_CODE_RPC_REDIRECT, NULL);
-      gQWDebug.tmp = false;
       return TSDB_CODE_SUCCESS;
     }
   }
@@ -315,10 +315,10 @@ int32_t qWorkerPreprocessQueryMsg(void *qWorkerMgmt, SRpcMsg *pMsg) {
   int64_t rId = msg->refId;
   int32_t eId = msg->execId;
 
-  SQWMsg qwMsg = {.msg = msg->msg + msg->sqlLen, .msgLen = msg->phyLen, .connInfo = pMsg->info};
+  SQWMsg qwMsg = {.msgType = pMsg->msgType, .msg = msg->msg + msg->sqlLen, .msgLen = msg->phyLen, .connInfo = pMsg->info};
 
   QW_SCH_TASK_DLOG("prerocessQuery start, handle:%p", pMsg->info.handle);
-  QW_ERR_RET(qwPrerocessQuery(QW_FPARAMS(), &qwMsg));
+  QW_ERR_RET(qwPreprocessQuery(QW_FPARAMS(), &qwMsg));
   QW_SCH_TASK_DLOG("prerocessQuery end, handle:%p", pMsg->info.handle);
 
   return TSDB_CODE_SUCCESS;
@@ -469,7 +469,7 @@ int32_t qwAbortPrerocessQuery(QW_FPARAMS_DEF) {
 }
 
 
-int32_t qwPrerocessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
+int32_t qwPreprocessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
   int32_t code = 0;
   bool queryRsped = false;
   SSubplan *plan = NULL;
@@ -488,6 +488,8 @@ int32_t qwPrerocessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
 
   QW_ERR_JRET(qwAddTaskStatus(QW_FPARAMS(), JOB_TASK_STATUS_INIT));
 
+  qwDbgResponseRedirect(qwMsg, ctx);
+
 _return:
 
   if (ctx) {
@@ -1919,6 +1919,113 @@ int32_t maxScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *
   return doMinMaxScalarFunction(pInput, inputNum, pOutput, false);
 }
 
+int32_t avgScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
+  SColumnInfoData *pInputData = pInput->columnData;
+  SColumnInfoData *pOutputData = pOutput->columnData;
+
+  int32_t type = GET_PARAM_TYPE(pInput);
+  int64_t count = 0;
+  bool hasNull = false;
+
+  for (int32_t i = 0; i < pInput->numOfRows; ++i) {
+    if (colDataIsNull_s(pInputData, i)) {
+      hasNull = true;
+      break;
+    }
+
+    switch(type) {
+      case TSDB_DATA_TYPE_TINYINT: {
+        int8_t *in = (int8_t *)pInputData->pData;
+        int64_t *out = (int64_t *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_SMALLINT: {
+        int16_t *in = (int16_t *)pInputData->pData;
+        int64_t *out = (int64_t *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_INT: {
+        int32_t *in = (int32_t *)pInputData->pData;
+        int64_t *out = (int64_t *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_BIGINT: {
+        int64_t *in = (int64_t *)pInputData->pData;
+        int64_t *out = (int64_t *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_UTINYINT: {
+        uint8_t *in = (uint8_t *)pInputData->pData;
+        uint64_t *out = (uint64_t *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_USMALLINT: {
+        uint16_t *in = (uint16_t *)pInputData->pData;
+        uint64_t *out = (uint64_t *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_UINT: {
+        uint32_t *in = (uint32_t *)pInputData->pData;
+        uint64_t *out = (uint64_t *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_UBIGINT: {
+        uint64_t *in = (uint64_t *)pInputData->pData;
+        uint64_t *out = (uint64_t *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_FLOAT: {
+        float *in = (float *)pInputData->pData;
+        float *out = (float *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+      case TSDB_DATA_TYPE_DOUBLE: {
+        double *in = (double *)pInputData->pData;
+        double *out = (double *)pOutputData->pData;
+        *out += in[i];
+        count++;
+        break;
+      }
+    }
+  }
+
+  if (hasNull) {
+    colDataAppendNULL(pOutputData, 0);
+  } else {
+    if (IS_SIGNED_NUMERIC_TYPE(type)) {
+      int64_t *out = (int64_t *)pOutputData->pData;
+      *(double *)out = *out / (double)count;
+    } else if (IS_UNSIGNED_NUMERIC_TYPE(type)) {
+      uint64_t *out = (uint64_t *)pOutputData->pData;
+      *(double *)out = *out / (double)count;
+    } else if (IS_FLOAT_TYPE(type)) {
+      double *out = (double *)pOutputData->pData;
+      *(double *)out = *out / (double)count;
+    }
+  }
+
+  pOutput->numOfRows = 1;
+  return TSDB_CODE_SUCCESS;
+}
+
 int32_t stddevScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
   SColumnInfoData *pInputData = pInput->columnData;
   SColumnInfoData *pOutputData = pOutput->columnData;
@@ -2031,3 +2138,4 @@ int32_t stddevScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarPara
   pOutput->numOfRows = 1;
   return TSDB_CODE_SUCCESS;
 }
+
@ -28,15 +28,6 @@ extern "C" {
|
||||||
#include "trpc.h"
|
#include "trpc.h"
|
||||||
#include "command.h"
|
#include "command.h"
|
||||||
|
|
||||||
#define SCHEDULE_DEFAULT_MAX_JOB_NUM 1000
|
|
||||||
#define SCHEDULE_DEFAULT_MAX_TASK_NUM 1000
|
|
||||||
#define SCHEDULE_DEFAULT_MAX_NODE_TABLE_NUM 200 // unit is TSDB_TABLE_NUM_UNIT
|
|
||||||
|
|
||||||
#define SCH_DEFAULT_TASK_TIMEOUT_USEC 10000000
|
|
||||||
#define SCH_MAX_TASK_TIMEOUT_USEC 60000000
|
|
||||||
|
|
||||||
#define SCH_MAX_CANDIDATE_EP_NUM TSDB_MAX_REPLICA
|
|
||||||
|
|
||||||
enum {
|
enum {
|
||||||
SCH_READ = 1,
|
SCH_READ = 1,
|
||||||
SCH_WRITE,
|
SCH_WRITE,
|
||||||
|
@ -54,6 +45,24 @@ typedef enum {
|
||||||
SCH_OP_GET_STATUS,
|
SCH_OP_GET_STATUS,
|
||||||
} SCH_OP_TYPE;
|
} SCH_OP_TYPE;
|
||||||
|
|
||||||
|
typedef enum {
|
||||||
|
SCH_LOAD_SEQ = 1,
|
||||||
|
SCH_RANDOM,
|
||||||
|
SCH_ALL,
|
||||||
|
} SCH_POLICY;
|
||||||
|
|
||||||
|
#define SCHEDULE_DEFAULT_MAX_JOB_NUM 1000
|
||||||
|
#define SCHEDULE_DEFAULT_MAX_TASK_NUM 1000
|
||||||
|
#define SCHEDULE_DEFAULT_MAX_NODE_TABLE_NUM 200 // unit is TSDB_TABLE_NUM_UNIT
|
||||||
|
#define SCHEDULE_DEFAULT_POLICY SCH_LOAD_SEQ
|
||||||
|
|
||||||
|
#define SCH_DEFAULT_TASK_TIMEOUT_USEC 10000000
|
||||||
|
#define SCH_MAX_TASK_TIMEOUT_USEC 60000000
|
||||||
|
#define SCH_MAX_CANDIDATE_EP_NUM TSDB_MAX_REPLICA
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
typedef struct SSchDebug {
|
typedef struct SSchDebug {
|
||||||
bool lockEnable;
|
bool lockEnable;
|
||||||
bool apiEnable;
|
bool apiEnable;
|
||||||
|
@ -126,6 +135,13 @@ typedef struct SSchStatusFps {
|
||||||
schStatusEventFp eventFp;
|
schStatusEventFp eventFp;
|
||||||
} SSchStatusFps;
|
} SSchStatusFps;
|
||||||
|
|
||||||
|
typedef struct SSchedulerCfg {
|
||||||
|
uint32_t maxJobNum;
|
||||||
|
int32_t maxNodeTableNum;
|
||||||
|
SCH_POLICY schPolicy;
|
||||||
|
bool enableReSchedule;
|
||||||
|
} SSchedulerCfg;
|
||||||
|
|
||||||
typedef struct SSchedulerMgmt {
|
typedef struct SSchedulerMgmt {
|
||||||
uint64_t taskId; // sequential taksId
|
uint64_t taskId; // sequential taksId
|
||||||
uint64_t sId; // schedulerId
|
uint64_t sId; // schedulerId
|
||||||
|
@ -184,34 +200,36 @@ typedef struct SSchLevel {
|
||||||
|
|
||||||
typedef struct SSchTaskProfile {
|
typedef struct SSchTaskProfile {
|
||||||
int64_t startTs;
|
int64_t startTs;
|
||||||
int64_t* execTime;
|
SArray* execTime;
|
||||||
int64_t waitTime;
|
int64_t waitTime;
|
||||||
int64_t endTs;
|
int64_t endTs;
|
||||||
} SSchTaskProfile;
|
} SSchTaskProfile;
|
||||||
|
|
||||||
typedef struct SSchTask {
|
typedef struct SSchTask {
|
||||||
uint64_t taskId; // task id
|
uint64_t taskId; // task id
|
||||||
SRWLatch lock; // task reentrant lock
|
SRWLatch lock; // task reentrant lock
|
||||||
int32_t maxExecTimes; // task may exec times
|
int32_t maxExecTimes; // task max exec times
|
||||||
int32_t execId; // task current execute try index
|
int32_t maxRetryTimes; // task max retry times
|
||||||
SSchLevel *level; // level
|
int32_t retryTimes; // task retry times
|
||||||
SRWLatch planLock; // task update plan lock
|
int32_t execId; // task current execute index
|
||||||
SSubplan *plan; // subplan
|
SSchLevel *level; // level
|
||||||
char *msg; // operator tree
|
   SRWLatch planLock; // task update plan lock
   SSubplan *plan; // subplan
   char *msg; // operator tree
   int32_t msgLen; // msg length
   int8_t status; // task status
   int32_t lastMsgType; // last sent msg type
-  int64_t timeoutUsec; // taks timeout useconds before reschedule
+  int64_t timeoutUsec; // task timeout useconds before reschedule
   SQueryNodeAddr succeedAddr; // task executed success node address
   int8_t candidateIdx; // current try condidation index
   SArray *candidateAddrs; // condidate node addresses, element is SQueryNodeAddr
   SHashObj *execNodes; // all tried node for current task, element is SSchNodeInfo
   SSchTaskProfile profile; // task execution profile
   int32_t childReady; // child task ready number
   SArray *children; // the datasource tasks,from which to fetch the result, element is SQueryTask*
   SArray *parents; // the data destination tasks, get data from current task, element is SQueryTask*
   void* handle; // task send handle
   bool registerdHb; // registered in hb
 } SSchTask;
 
 typedef struct SSchJobAttr {
@@ -265,7 +283,7 @@ typedef struct SSchJob {
 
 extern SSchedulerMgmt schMgmt;
 
-#define SCH_TASK_TIMEOUT(_task) ((taosGetTimestampUs() - (_task)->profile.execTime[(_task)->execId % (_task)->maxExecTimes]) > (_task)->timeoutUsec)
+#define SCH_TASK_TIMEOUT(_task) ((taosGetTimestampUs() - *(int64_t*)taosArrayGet((_task)->profile.execTime, (_task)->execId)) > (_task)->timeoutUsec)
 
 #define SCH_TASK_READY_FOR_LAUNCH(readyNum, task) ((readyNum) >= taosArrayGetSize((task)->children))
 
@@ -299,7 +317,6 @@ extern SSchedulerMgmt schMgmt;
 #define SCH_TASK_NEED_FLOW_CTRL(_job, _task) (SCH_IS_DATA_BIND_QRY_TASK(_task) && SCH_JOB_NEED_FLOW_CTRL(_job) && SCH_IS_LEVEL_UNFINISHED((_task)->level))
 #define SCH_FETCH_TYPE(_pSrcTask) (SCH_IS_DATA_BIND_QRY_TASK(_pSrcTask) ? TDMT_SCH_FETCH : TDMT_SCH_MERGE_FETCH)
 #define SCH_TASK_NEED_FETCH(_task) ((_task)->plan->subplanType != SUBPLAN_TYPE_MODIFY)
-#define SCH_TASK_MAX_EXEC_TIMES(_levelIdx, _levelNum) (SCH_MAX_CANDIDATE_EP_NUM * ((_levelNum) - (_levelIdx)))
 
 #define SCH_SET_JOB_TYPE(_job, type) do { if ((type) != SUBPLAN_TYPE_MODIFY) { (_job)->attr.queryJob = true; } } while (0)
 #define SCH_IS_QUERY_JOB(_job) ((_job)->attr.queryJob)
@@ -321,8 +338,7 @@ extern SSchedulerMgmt schMgmt;
 #define SCH_LOG_TASK_START_TS(_task) \
   do { \
     int64_t us = taosGetTimestampUs(); \
-    int32_t idx = (_task)->execId % (_task)->maxExecTimes; \
-    (_task)->profile.execTime[idx] = us; \
+    taosArrayPush((_task)->profile.execTime, &us); \
     if (0 == (_task)->execId) { \
       (_task)->profile.startTs = us; \
     } \
@@ -331,8 +347,7 @@ extern SSchedulerMgmt schMgmt;
 #define SCH_LOG_TASK_WAIT_TS(_task) \
   do { \
     int64_t us = taosGetTimestampUs(); \
-    int32_t idx = (_task)->execId % (_task)->maxExecTimes; \
-    (_task)->profile.waitTime += us - (_task)->profile.execTime[idx]; \
+    (_task)->profile.waitTime += us - *(int64_t*)taosArrayGet((_task)->profile.execTime, (_task)->execId); \
   } while (0)
@@ -340,7 +355,8 @@ extern SSchedulerMgmt schMgmt;
   do { \
     int64_t us = taosGetTimestampUs(); \
     int32_t idx = (_task)->execId % (_task)->maxExecTimes; \
-    (_task)->profile.execTime[idx] = us - (_task)->profile.execTime[idx]; \
+    int64_t *startts = taosArrayGet((_task)->profile.execTime, (_task)->execId); \
+    *startts = us - *startts; \
     (_task)->profile.endTs = us; \
   } while (0)
@@ -471,9 +487,11 @@ void schFreeTask(SSchJob *pJob, SSchTask *pTask);
 void schDropTaskInHashList(SSchJob *pJob, SHashObj *list);
 int32_t schLaunchLevelTasks(SSchJob *pJob, SSchLevel *level);
 int32_t schGetTaskFromList(SHashObj *pTaskList, uint64_t taskId, SSchTask **pTask);
-int32_t schInitTask(SSchJob *pJob, SSchTask *pTask, SSubplan *pPlan, SSchLevel *pLevel, int32_t levelNum);
+int32_t schInitTask(SSchJob *pJob, SSchTask *pTask, SSubplan *pPlan, SSchLevel *pLevel);
 int32_t schSwitchTaskCandidateAddr(SSchJob *pJob, SSchTask *pTask);
 void schDirectPostJobRes(SSchedulerReq* pReq, int32_t errCode);
+int32_t schHandleJobFailure(SSchJob *pJob, int32_t errCode);
+int32_t schHandleJobDrop(SSchJob *pJob, int32_t errCode);
 bool schChkCurrentOp(SSchJob *pJob, int32_t op, bool sync);
 
 extern SSchDebug gSCHDebug;
@@ -343,7 +343,7 @@ int32_t schValidateAndBuildJob(SQueryPlan *pDag, SSchJob *pJob) {
       SCH_ERR_JRET(TSDB_CODE_QRY_OUT_OF_MEMORY);
     }
 
-    SCH_ERR_JRET(schInitTask(pJob, pTask, plan, pLevel, levelNum));
+    SCH_ERR_JRET(schInitTask(pJob, pTask, plan, pLevel));
 
     SCH_ERR_JRET(schAppendJobDataSrc(pJob, pTask));
 
@@ -476,7 +476,7 @@ _return:
   SCH_UNLOCK(SCH_WRITE, &pJob->opStatus.lock);
 }
 
-int32_t schProcessOnJobFailureImpl(SSchJob *pJob, int32_t status, int32_t errCode) {
+int32_t schProcessOnJobFailure(SSchJob *pJob, int32_t errCode) {
   schUpdateJobErrCode(pJob, errCode);
 
   int32_t code = atomic_load_32(&pJob->errCode);
@@ -489,21 +489,29 @@ int32_t schProcessOnJobFailureImpl(SSchJob *pJob, int32_t status, int32_t errCode) {
     SCH_RET(TSDB_CODE_SCH_IGNORE_ERROR);
   }
 
-// Note: no more task error processing, handled in function internal
-int32_t schProcessOnJobFailure(SSchJob *pJob, int32_t errCode) {
+int32_t schHandleJobFailure(SSchJob *pJob, int32_t errCode) {
   if (TSDB_CODE_SCH_IGNORE_ERROR == errCode) {
     return TSDB_CODE_SCH_IGNORE_ERROR;
   }
 
-  schProcessOnJobFailureImpl(pJob, JOB_TASK_STATUS_FAIL, errCode);
+  schSwitchJobStatus(pJob, JOB_TASK_STATUS_FAIL, &errCode);
   return TSDB_CODE_SCH_IGNORE_ERROR;
 }
 
-// Note: no more error processing, handled in function internal
 int32_t schProcessOnJobDropped(SSchJob *pJob, int32_t errCode) {
-  SCH_RET(schProcessOnJobFailureImpl(pJob, JOB_TASK_STATUS_DROP, errCode));
+  SCH_RET(schProcessOnJobFailure(pJob, errCode));
 }
 
+int32_t schHandleJobDrop(SSchJob *pJob, int32_t errCode) {
+  if (TSDB_CODE_SCH_IGNORE_ERROR == errCode) {
+    return TSDB_CODE_SCH_IGNORE_ERROR;
+  }
+
+  schSwitchJobStatus(pJob, JOB_TASK_STATUS_DROP, &errCode);
+  return TSDB_CODE_SCH_IGNORE_ERROR;
+}
+
 int32_t schProcessOnJobPartialSuccess(SSchJob *pJob) {
   schPostJobRes(pJob, SCH_OP_EXEC);
@@ -828,7 +836,7 @@ void schProcessOnOpEnd(SSchJob *pJob, SCH_OP_TYPE type, SSchedulerReq* pReq, int32_t errCode) {
 }
 
 if (errCode) {
-  schSwitchJobStatus(pJob, JOB_TASK_STATUS_FAIL, (void*)&errCode);
+  schHandleJobFailure(pJob, errCode);
 }
 
 SCH_JOB_DLOG("job end %s operation with code %s", schGetOpStr(type), tstrerror(errCode));
 
@@ -907,7 +915,7 @@ void schProcessOnCbEnd(SSchJob *pJob, SSchTask *pTask, int32_t errCode) {
 }
 
 if (errCode) {
-  schSwitchJobStatus(pJob, JOB_TASK_STATUS_FAIL, (void*)&errCode);
+  schHandleJobFailure(pJob, errCode);
 }
 
 if (pJob) {
@@ -42,32 +42,47 @@ void schFreeTask(SSchJob *pJob, SSchTask *pTask) {
     taosHashCleanup(pTask->execNodes);
   }
 
-  taosMemoryFree(pTask->profile.execTime);
+  taosArrayDestroy(pTask->profile.execTime);
 }
 
-int32_t schInitTask(SSchJob *pJob, SSchTask *pTask, SSubplan *pPlan, SSchLevel *pLevel, int32_t levelNum) {
+void schInitTaskRetryTimes(SSchJob *pJob, SSchTask *pTask, SSchLevel *pLevel) {
+  if (SCH_IS_DATA_BIND_TASK(pTask) || (!SCH_IS_QUERY_JOB(pJob)) || (SCH_ALL != schMgmt.cfg.schPolicy)) {
+    pTask->maxRetryTimes = SCH_MAX_CANDIDATE_EP_NUM;
+  } else {
+    int32_t nodeNum = taosArrayGetSize(pJob->nodeList);
+    pTask->maxRetryTimes = TMAX(nodeNum, SCH_MAX_CANDIDATE_EP_NUM);
+  }
+
+  pTask->maxExecTimes = pTask->maxRetryTimes * (pLevel->level + 1);
+}
+
+int32_t schInitTask(SSchJob *pJob, SSchTask *pTask, SSubplan *pPlan, SSchLevel *pLevel) {
   int32_t code = 0;
 
   pTask->plan = pPlan;
   pTask->level = pLevel;
   pTask->execId = -1;
-  pTask->maxExecTimes = SCH_TASK_MAX_EXEC_TIMES(pLevel->level, levelNum);
   pTask->timeoutUsec = SCH_DEFAULT_TASK_TIMEOUT_USEC;
   pTask->taskId = schGenTaskId();
   pTask->execNodes =
       taosHashInit(SCH_MAX_CANDIDATE_EP_NUM, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), true, HASH_NO_LOCK);
-  pTask->profile.execTime = taosMemoryCalloc(pTask->maxExecTimes, sizeof(int64_t));
+
+  schInitTaskRetryTimes(pJob, pTask, pLevel);
+
+  pTask->profile.execTime = taosArrayInit(pTask->maxExecTimes, sizeof(int64_t));
   if (NULL == pTask->execNodes || NULL == pTask->profile.execTime) {
     SCH_ERR_JRET(TSDB_CODE_QRY_OUT_OF_MEMORY);
   }
 
   SCH_SET_TASK_STATUS(pTask, JOB_TASK_STATUS_INIT);
 
+  SCH_TASK_DLOG("task initialized, max times %d:%d", pTask->maxRetryTimes, pTask->maxExecTimes);
+
   return TSDB_CODE_SUCCESS;
 
 _return:
 
-  taosMemoryFreeClear(pTask->profile.execTime);
+  taosArrayDestroy(pTask->profile.execTime);
   taosHashCleanup(pTask->execNodes);
 
   SCH_RET(code);
@@ -105,7 +120,7 @@ int32_t schDropTaskExecNode(SSchJob *pJob, SSchTask *pTask, void *handle, int32_t execId) {
   }
 
   if (taosHashRemove(pTask->execNodes, &execId, sizeof(execId))) {
-    SCH_TASK_ELOG("fail to remove execId %d from execNodeList", execId);
+    SCH_TASK_DLOG("execId %d already not in execNodeList", execId);
   } else {
     SCH_TASK_DLOG("execId %d removed from execNodeList", execId);
   }
 
@@ -235,7 +250,7 @@ int32_t schProcessOnTaskSuccess(SSchJob *pJob, SSchTask *pTask) {
   }
 
   if (pTask->level->taskFailed > 0) {
-    SCH_RET(schSwitchJobStatus(pJob, JOB_TASK_STATUS_FAIL, NULL));
+    SCH_RET(schHandleJobFailure(pJob, pJob->errCode));
   } else {
     SCH_RET(schSwitchJobStatus(pJob, JOB_TASK_STATUS_PART_SUCC, NULL));
   }
@@ -285,6 +300,10 @@ int32_t schProcessOnTaskSuccess(SSchJob *pJob, SSchTask *pTask) {
 }
 
 int32_t schRescheduleTask(SSchJob *pJob, SSchTask *pTask) {
+  if (!schMgmt.cfg.enableReSchedule) {
+    return TSDB_CODE_SUCCESS;
+  }
+
   if (SCH_IS_DATA_BIND_TASK(pTask)) {
     return TSDB_CODE_SUCCESS;
   }
@@ -304,13 +323,17 @@ int32_t schRescheduleTask(SSchJob *pJob, SSchTask *pTask) {
 int32_t schDoTaskRedirect(SSchJob *pJob, SSchTask *pTask, SDataBuf *pData, int32_t rspCode) {
   int32_t code = 0;
 
-  if ((pTask->execId + 1) >= pTask->maxExecTimes) {
-    SCH_TASK_DLOG("task no more retry since reach max try times, execId:%d", pTask->execId);
-    schSwitchJobStatus(pJob, JOB_TASK_STATUS_FAIL, (void *)&rspCode);
-    return TSDB_CODE_SUCCESS;
+  SCH_TASK_DLOG("task will be redirected now, status:%s", SCH_GET_TASK_STATUS_STR(pTask));
+
+  if (NULL == pData) {
+    pTask->retryTimes = 0;
   }
 
-  SCH_TASK_DLOG("task will be redirected now, status:%s", SCH_GET_TASK_STATUS_STR(pTask));
+  if (((pTask->execId + 1) >= pTask->maxExecTimes) || ((pTask->retryTimes + 1) > pTask->maxRetryTimes)) {
+    SCH_TASK_DLOG("task no more retry since reach max times %d:%d, execId %d", pTask->maxRetryTimes, pTask->maxExecTimes, pTask->execId);
+    schHandleJobFailure(pJob, rspCode);
+    return TSDB_CODE_SUCCESS;
+  }
 
   schDropTaskOnExecNode(pJob, pTask);
   taosHashClear(pTask->execNodes);
@@ -493,9 +516,15 @@ int32_t schTaskCheckSetRetry(SSchJob *pJob, SSchTask *pTask, int32_t errCode, bool *needRetry) {
     }
   }
 
+  if ((pTask->retryTimes + 1) > pTask->maxRetryTimes) {
+    *needRetry = false;
+    SCH_TASK_DLOG("task no more retry since reach max retry times, retryTimes:%d/%d", pTask->retryTimes, pTask->maxRetryTimes);
+    return TSDB_CODE_SUCCESS;
+  }
+
   if ((pTask->execId + 1) >= pTask->maxExecTimes) {
     *needRetry = false;
-    SCH_TASK_DLOG("task no more retry since reach max try times, execId:%d", pTask->execId);
+    SCH_TASK_DLOG("task no more retry since reach max exec times, execId:%d/%d", pTask->execId, pTask->maxExecTimes);
     return TSDB_CODE_SUCCESS;
   }
@@ -649,10 +678,31 @@ int32_t schUpdateTaskCandidateAddr(SSchJob *pJob, SSchTask *pTask, SEpSet *pEpSet) {
 
 int32_t schSwitchTaskCandidateAddr(SSchJob *pJob, SSchTask *pTask) {
   int32_t candidateNum = taosArrayGetSize(pTask->candidateAddrs);
-  if (++pTask->candidateIdx >= candidateNum) {
-    pTask->candidateIdx = 0;
+  if (candidateNum <= 1) {
+    goto _return;
   }
-  SCH_TASK_DLOG("switch task candiateIdx to %d", pTask->candidateIdx);
+
+  switch (schMgmt.cfg.schPolicy) {
+    case SCH_LOAD_SEQ:
+    case SCH_ALL:
+    default:
+      if (++pTask->candidateIdx >= candidateNum) {
+        pTask->candidateIdx = 0;
+      }
+      break;
+    case SCH_RANDOM: {
+      int32_t lastIdx = pTask->candidateIdx;
+      while (lastIdx == pTask->candidateIdx) {
+        pTask->candidateIdx = taosRand() % candidateNum;
+      }
+      break;
+    }
+  }
+
+_return:
+
+  SCH_TASK_DLOG("switch task candiateIdx to %d/%d", pTask->candidateIdx, candidateNum);
+
   return TSDB_CODE_SUCCESS;
 }
@@ -739,8 +789,9 @@ int32_t schLaunchTaskImpl(SSchJob *pJob, SSchTask *pTask) {
 
   atomic_add_fetch_32(&pTask->level->taskLaunchedNum, 1);
   pTask->execId++;
+  pTask->retryTimes++;
 
-  SCH_TASK_DLOG("start to launch task's %dth exec", pTask->execId);
+  SCH_TASK_DLOG("start to launch task, execId %d, retry %d", pTask->execId, pTask->retryTimes);
 
   SCH_LOG_TASK_START_TS(pTask);
@@ -22,26 +22,19 @@ SSchedulerMgmt schMgmt = {
     .jobRef = -1,
 };
 
-int32_t schedulerInit(SSchedulerCfg *cfg) {
+int32_t schedulerInit() {
   if (schMgmt.jobRef >= 0) {
     qError("scheduler already initialized");
     return TSDB_CODE_QRY_INVALID_INPUT;
   }
 
-  if (cfg) {
-    schMgmt.cfg = *cfg;
-
-    if (schMgmt.cfg.maxJobNum == 0) {
-      schMgmt.cfg.maxJobNum = SCHEDULE_DEFAULT_MAX_JOB_NUM;
-    }
-    if (schMgmt.cfg.maxNodeTableNum <= 0) {
-      schMgmt.cfg.maxNodeTableNum = SCHEDULE_DEFAULT_MAX_NODE_TABLE_NUM;
-    }
-  } else {
-    schMgmt.cfg.maxJobNum = SCHEDULE_DEFAULT_MAX_JOB_NUM;
-    schMgmt.cfg.maxNodeTableNum = SCHEDULE_DEFAULT_MAX_NODE_TABLE_NUM;
-  }
+  schMgmt.cfg.maxJobNum = SCHEDULE_DEFAULT_MAX_JOB_NUM;
+  schMgmt.cfg.maxNodeTableNum = SCHEDULE_DEFAULT_MAX_NODE_TABLE_NUM;
+  schMgmt.cfg.schPolicy = SCHEDULE_DEFAULT_POLICY;
+  schMgmt.cfg.enableReSchedule = true;
+
+  qDebug("schedule policy init to %d", schMgmt.cfg.schPolicy);
 
   schMgmt.jobRef = taosOpenRef(schMgmt.cfg.maxJobNum, schFreeJobImpl);
   if (schMgmt.jobRef < 0) {
     qError("init schduler jobRef failed, num:%u", schMgmt.cfg.maxJobNum);
@@ -130,6 +123,26 @@ void schedulerStopQueryHb(void *pTrans) {
   schCleanClusterHb(pTrans);
 }
 
+int32_t schedulerUpdatePolicy(int32_t policy) {
+  switch (policy) {
+    case SCH_LOAD_SEQ:
+    case SCH_RANDOM:
+    case SCH_ALL:
+      schMgmt.cfg.schPolicy = policy;
+      qDebug("schedule policy updated to %d", schMgmt.cfg.schPolicy);
+      break;
+    default:
+      return TSDB_CODE_TSC_INVALID_INPUT;
+  }
+
+  return TSDB_CODE_SUCCESS;
+}
+
+int32_t schedulerEnableReSchedule(bool enableResche) {
+  schMgmt.cfg.enableReSchedule = enableResche;
+  return TSDB_CODE_SUCCESS;
+}
+
 void schedulerFreeJob(int64_t* jobId, int32_t errCode) {
   if (0 == *jobId) {
     return;
@@ -141,7 +154,7 @@ void schedulerFreeJob(int64_t* jobId, int32_t errCode) {
     return;
   }
 
-  schSwitchJobStatus(pJob, JOB_TASK_STATUS_DROP, (void*)&errCode);
+  schHandleJobDrop(pJob, errCode);
 
   schReleaseJob(*jobId);
   *jobId = 0;
@@ -477,7 +477,7 @@ void* schtRunJobThread(void *aa) {
   schtInitLogFile();
 
-  int32_t code = schedulerInit(NULL);
+  int32_t code = schedulerInit();
   assert(code == 0);
 
@@ -649,7 +649,7 @@ TEST(queryTest, normalCase) {
   qnodeAddr.port = 6031;
   taosArrayPush(qnodeList, &qnodeAddr);
 
-  int32_t code = schedulerInit(NULL);
+  int32_t code = schedulerInit();
   ASSERT_EQ(code, 0);
 
   schtBuildQueryDag(&dag);
@@ -756,7 +756,7 @@ TEST(queryTest, readyFirstCase) {
   qnodeAddr.port = 6031;
   taosArrayPush(qnodeList, &qnodeAddr);
 
-  int32_t code = schedulerInit(NULL);
+  int32_t code = schedulerInit();
   ASSERT_EQ(code, 0);
 
   schtBuildQueryDag(&dag);
@@ -866,7 +866,7 @@ TEST(queryTest, flowCtrlCase) {
   qnodeAddr.port = 6031;
   taosArrayPush(qnodeList, &qnodeAddr);
 
-  int32_t code = schedulerInit(NULL);
+  int32_t code = schedulerInit();
   ASSERT_EQ(code, 0);
 
   schtBuildQueryFlowCtrlDag(&dag);
@@ -975,7 +975,7 @@ TEST(insertTest, normalCase) {
   qnodeAddr.port = 6031;
   taosArrayPush(qnodeList, &qnodeAddr);
 
-  int32_t code = schedulerInit(NULL);
+  int32_t code = schedulerInit();
   ASSERT_EQ(code, 0);
 
   schtBuildInsertDag(&dag);
@@ -1853,8 +1853,8 @@ void syncNodeDoConfigChange(SSyncNode* pSyncNode, SSyncCfg* pNewConfig, SyncIndex lastConfigChangeIndex) {
     syncNodeBecomeLeader(pSyncNode, tmpbuf);
 
     // Raft 3.6.2 Committing entries from previous terms
-    syncNodeReplicate(pSyncNode);
     syncNodeAppendNoop(pSyncNode);
+    syncNodeReplicate(pSyncNode);
     syncMaybeAdvanceCommitIndex(pSyncNode);
 
   } else {
 
@@ -2029,8 +2029,8 @@ void syncNodeCandidate2Leader(SSyncNode* pSyncNode) {
   syncNodeLog2("==state change syncNodeCandidate2Leader==", pSyncNode);
 
   // Raft 3.6.2 Committing entries from previous terms
-  syncNodeReplicate(pSyncNode);
   syncNodeAppendNoop(pSyncNode);
+  syncNodeReplicate(pSyncNode);
   syncMaybeAdvanceCommitIndex(pSyncNode);
 }
@@ -771,6 +771,7 @@ static void cliHandleRelease(SCliMsg* pMsg, SCliThrd* pThrd) {
   SExHandle* exh = transAcquireExHandle(transGetRefMgt(), refId);
   if (exh == NULL) {
     tDebug("%" PRId64 " already release", refId);
+    destroyCmsg(pMsg);
     return;
   }
@@ -135,7 +135,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TSC_STMT_API_ERROR, "Stmt API usage error")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_STMT_TBNAME_ERROR, "Stmt table name not set")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_STMT_CLAUSE_ERROR, "not supported stmt clause")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_QUERY_KILLED, "Query killed")
-TAOS_DEFINE_ERROR(TSDB_CODE_TSC_NO_EXEC_NODE, "No available execution node")
+TAOS_DEFINE_ERROR(TSDB_CODE_TSC_NO_EXEC_NODE, "No available execution node in current query policy configuration")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_NOT_STABLE_ERROR, "Table is not a super table")
 
 // mnode-common
@@ -2685,6 +2685,8 @@ int main(int argc, char *argv[])
 
   runAll(taos);
 
+  taos_close(taos);
+
   return 0;
 }
@@ -82,12 +82,12 @@
 ./test.sh -f tsim/insert/update0.sim
 
 # ---- parser
-# ./test.sh -f tsim/parser/alter.sim
+./test.sh -f tsim/parser/alter.sim
-# ./test.sh -f tsim/parser/alter1.sim
+# nojira ./test.sh -f tsim/parser/alter1.sim
-## ./test.sh -f tsim/parser/alter__for_community_version.sim
+./test.sh -f tsim/parser/alter__for_community_version.sim
-## ./test.sh -f tsim/parser/alter_column.sim
+./test.sh -f tsim/parser/alter_column.sim
-# ./test.sh -f tsim/parser/alter_stable.sim
+./test.sh -f tsim/parser/alter_stable.sim
-# ./test.sh -f tsim/parser/auto_create_tb.sim
+# jira ./test.sh -f tsim/parser/auto_create_tb.sim
 # ./test.sh -f tsim/parser/auto_create_tb_drop_tb.sim
 # ./test.sh -f tsim/parser/between_and.sim
 # ./test.sh -f tsim/parser/binary_escapeCharacter.sim
@@ -205,7 +205,7 @@
 ./test.sh -f tsim/qnode/basic1.sim
 
 # ---- snode
-# ./test.sh -f tsim/snode/basic1.sim
+# unsupport ./test.sh -f tsim/snode/basic1.sim
 
 # ---- bnode
 ./test.sh -f tsim/bnode/basic1.sim
@@ -235,9 +235,9 @@
 ./test.sh -f tsim/table/createmulti.sim
 ./test.sh -f tsim/table/date.sim
 ./test.sh -f tsim/table/db.table.sim
-# ./test.sh -f tsim/table/delete_reuse1.sim
+./test.sh -f tsim/table/delete_reuse1.sim
-# ./test.sh -f tsim/table/delete_reuse2.sim
+./test.sh -f tsim/table/delete_reuse2.sim
-# ./test.sh -f tsim/table/delete_writing.sim
+./test.sh -f tsim/table/delete_writing.sim
 ./test.sh -f tsim/table/describe.sim
 ./test.sh -f tsim/table/double.sim
 ./test.sh -f tsim/table/float.sim
@@ -314,12 +314,12 @@
 ./test.sh -f tsim/db/basic3.sim -m
 ./test.sh -f tsim/db/error1.sim -m
 ./test.sh -f tsim/insert/backquote.sim -m
-# ./test.sh -f tsim/parser/fourArithmetic-basic.sim -m
+# nojira ./test.sh -f tsim/parser/fourArithmetic-basic.sim -m
 ./test.sh -f tsim/query/interval-offset.sim -m
 ./test.sh -f tsim/tmq/basic3.sim -m
 ./test.sh -f tsim/stable/vnode3.sim -m
 ./test.sh -f tsim/qnode/basic1.sim -m
-#./test.sh -f tsim/mnode/basic1.sim -m
+# nojira ./test.sh -f tsim/mnode/basic1.sim -m
 
 # --- sma
 ./test.sh -f tsim/sma/drop_sma.sim
@@ -333,13 +333,13 @@
 ./test.sh -f tsim/valgrind/checkError3.sim
 
 # --- vnode
-# ./test.sh -f tsim/vnode/replica3_basic.sim
+# unsupport ./test.sh -f tsim/vnode/replica3_basic.sim
-# ./test.sh -f tsim/vnode/replica3_repeat.sim
+# unsupport ./test.sh -f tsim/vnode/replica3_repeat.sim
-# ./test.sh -f tsim/vnode/replica3_vgroup.sim
+# unsupport ./test.sh -f tsim/vnode/replica3_vgroup.sim
-# ./test.sh -f tsim/vnode/replica3_many.sim
+# unsupport ./test.sh -f tsim/vnode/replica3_many.sim
-# ./test.sh -f tsim/vnode/replica3_import.sim
+# unsupport ./test.sh -f tsim/vnode/replica3_import.sim
-# ./test.sh -f tsim/vnode/stable_balance_replica1.sim
+# unsupport ./test.sh -f tsim/vnode/stable_balance_replica1.sim
-# ./test.sh -f tsim/vnode/stable_dnode2_stop.sim
+# unsupport ./test.sh -f tsim/vnode/stable_dnode2_stop.sim
 ./test.sh -f tsim/vnode/stable_dnode2.sim
 ./test.sh -f tsim/vnode/stable_dnode3.sim
 ./test.sh -f tsim/vnode/stable_replica3_dnode6.sim
@@ -350,7 +350,6 @@
 ./test.sh -f tsim/sync/3Replica5VgElect.sim
 ./test.sh -f tsim/sync/oneReplica1VgElect.sim
 ./test.sh -f tsim/sync/oneReplica5VgElect.sim
-# ./test.sh -f tsim/sync/3Replica5VgElect3mnode.sim
 
 # --- catalog
 ./test.sh -f tsim/catalog/alterInCurrent.sim
@@ -382,7 +381,7 @@
 
 # ---- compute
 ./test.sh -f tsim/compute/avg.sim
-# jira ./test.sh -f tsim/compute/block_dist.sim
+./test.sh -f tsim/compute/block_dist.sim
 ./test.sh -f tsim/compute/bottom.sim
 ./test.sh -f tsim/compute/count.sim
 ./test.sh -f tsim/compute/diff.sim
@@ -433,12 +432,6 @@
 # ---- wal
 ./test.sh -f tsim/wal/kill.sim
 
-# ---- issue
-#./test.sh -f tsim/issue/TD-2677.sim
-#./test.sh -f tsim/issue/TD-2680.sim
-#./test.sh -f tsim/issue/TD-2713.sim
-#./test.sh -f tsim/issue/TD-3300.sim
-
 # ---- tag
 ./test.sh -f tsim/tag/3.sim
 ./test.sh -f tsim/tag/4.sim
@@ -451,18 +444,18 @@
 ./test.sh -f tsim/tag/bool_binary.sim
 ./test.sh -f tsim/tag/bool_int.sim
 ./test.sh -f tsim/tag/bool.sim
-#./test.sh -f tsim/tag/change.sim
+# ./test.sh -f tsim/tag/change.sim
-#./test.sh -f tsim/tag/column.sim
+# ./test.sh -f tsim/tag/column.sim
-#./test.sh -f tsim/tag/commit.sim
+# ./test.sh -f tsim/tag/commit.sim
-#./test.sh -f tsim/tag/create.sim
+# ./test.sh -f tsim/tag/create.sim
-#./test.sh -f tsim/tag/delete.sim
+# ./test.sh -f tsim/tag/delete.sim
 # jira ./test.sh -f tsim/tag/double.sim
-#./test.sh -f tsim/tag/filter.sim
+# ./test.sh -f tsim/tag/filter.sim
 # jira ./test.sh -f tsim/tag/float.sim
 ./test.sh -f tsim/tag/int_binary.sim
 ./test.sh -f tsim/tag/int_float.sim
 ./test.sh -f tsim/tag/int.sim
-#./test.sh -f tsim/tag/set.sim
+# ./test.sh -f tsim/tag/set.sim
 ./test.sh -f tsim/tag/smallint.sim
 ./test.sh -f tsim/tag/tinyint.sim
@@ -1,5 +1,6 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
+system sh/cfg.sh -n dnode1 -c debugflag -v 131
system sh/exec.sh -n dnode1 -s start
sql connect
@@ -80,11 +81,11 @@ $nt = $ntPrefix . $i

#sql select _block_dist() from $nt
print show table distributed $nt
-sql show table distributed $nt
+sql_error show table distributed $nt

-if $rows == 0 then
+#if $rows == 0 then
-return -1
+# return -1
-endi
+#endi

print ============== TD-5998
sql_error select _block_dist() from (select * from $nt)
@@ -1,111 +0,0 @@
-system sh/stop_dnodes.sh
-
-system sh/deploy.sh -n dnode1 -i 1
-system sh/deploy.sh -n dnode2 -i 2
-system sh/deploy.sh -n dnode3 -i 3
-
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 3
-system sh/cfg.sh -n dnode2 -c numOfMnodes -v 3
-system sh/cfg.sh -n dnode3 -c numOfMnodes -v 3
-
-system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode2 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode3 -c mnodeEqualVnodeNum -v 4
-
-print ============== deploy
-
-system sh/exec.sh -n dnode1 -s start
-sql connect
-
-sql create dnode $hostname2
-sql create dnode $hostname3
-system sh/exec.sh -n dnode2 -s start
-system sh/exec.sh -n dnode3 -s start
-
-print =============== step1
-$x = 0
-step1:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step1
-endi
-if $data4_2 != ready then
-goto step1
-endi
-if $data4_3 != ready then
-goto step1
-endi
-
-sql show mnodes
-$mnode1Role = $data2_1
-print mnode1Role $mnode1Role
-$mnode2Role = $data2_2
-print mnode2Role $mnode2Role
-$mnode3Role = $data2_3
-print mnode3Role $mnode3Role
-
-if $mnode1Role != master then
-goto step1
-endi
-if $mnode2Role != slave then
-goto step1
-endi
-if $mnode3Role != slave then
-goto step1
-endi
-
-$x = 1
-show2:
-
-print =============== step1
-sql create database d1 replica 2 quorum 2
-sql create table d1.t1 (ts timestamp, i int)
-sql_error create table d1.t1 (ts timestamp, i int)
-sql insert into d1.t1 values(now, 1)
-sql select * from d1.t1;
-if $rows != 1 then
-return -1
-endi
-
-print =============== step2
-sql create database d2 replica 3 quorum 2
-sql create table d2.t1 (ts timestamp, i int)
-sql_error create table d2.t1 (ts timestamp, i int)
-sql insert into d2.t1 values(now, 1)
-sql select * from d2.t1;
-if $rows != 1 then
-return -1
-endi
-
-print =============== step3
-sql create database d4 replica 1 quorum 1
-sql_error create database d5 replica 1 quorum 2
-sql_error create database d6 replica 1 quorum 3
-sql_error create database d7 replica 1 quorum 4
-sql_error create database d8 replica 1 quorum 0
-sql create database d9 replica 2 quorum 1
-sql create database d10 replica 2 quorum 2
-sql_error create database d11 replica 2 quorum 3
-sql_error create database d12 replica 2 quorum 4
-sql_error create database d12 replica 2 quorum 0
-sql create database d13 replica 3 quorum 1
-sql create database d14 replica 3 quorum 2
-sql_error create database d15 replica 3 quorum 3
-sql_error create database d16 replica 3 quorum 4
-sql_error create database d17 replica 3 quorum 0
-
-
-system sh/exec.sh -n dnode1 -s stop -x SIGINT
-system sh/exec.sh -n dnode2 -s stop -x SIGINT
-system sh/exec.sh -n dnode3 -s stop -x SIGINT
-system sh/exec.sh -n dnode4 -s stop -x SIGINT
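The deleted quorum script above enumerates which `create database ... replica R quorum Q` combinations the server accepts (`sql`) or rejects (`sql_error`). Reading the cases, the rule they encode is: the quorum must be at least 1, no larger than the replica count, and no larger than 2. A minimal shell sketch of that validity check, under those assumptions; `is_valid` is a hypothetical helper, not part of the test framework:

```shell
#!/bin/sh
# Rule implied by the d4..d17 cases above:
# "replica R quorum Q" succeeds iff 1 <= Q <= R and Q <= 2
# (e.g. replica 3 quorum 3 is rejected even though Q <= R).
is_valid() {
  replica=$1; quorum=$2
  [ "$quorum" -ge 1 ] && [ "$quorum" -le "$replica" ] && [ "$quorum" -le 2 ]
}

# Enumerate the same grid the script covers and label each case.
for r in 1 2 3; do
  for q in 0 1 2 3 4; do
    if is_valid "$r" "$q"; then
      echo "replica $r quorum $q -> sql"
    else
      echo "replica $r quorum $q -> sql_error"
    fi
  done
done
```

Running the loop reproduces the accept/reject pattern of the fifteen `create database` cases in the deleted script.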
@@ -1,202 +0,0 @@
-system sh/stop_dnodes.sh
-system sh/deploy.sh -n dnode1 -i 1
-system sh/deploy.sh -n dnode2 -i 2
-system sh/deploy.sh -n dnode3 -i 3
-system sh/deploy.sh -n dnode4 -i 4
-
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode3 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode4 -c numOfMnodes -v 1
-
-system sh/cfg.sh -n dnode1 -c walLevel -v 2
-system sh/cfg.sh -n dnode2 -c walLevel -v 2
-system sh/cfg.sh -n dnode3 -c walLevel -v 2
-system sh/cfg.sh -n dnode4 -c walLevel -v 2
-
-system sh/cfg.sh -n dnode1 -c balanceInterval -v 10
-system sh/cfg.sh -n dnode2 -c balanceInterval -v 10
-system sh/cfg.sh -n dnode3 -c balanceInterval -v 10
-system sh/cfg.sh -n dnode4 -c balanceInterval -v 10
-
-system sh/cfg.sh -n dnode1 -c role -v 1
-system sh/cfg.sh -n dnode2 -c role -v 2
-system sh/cfg.sh -n dnode3 -c role -v 2
-system sh/cfg.sh -n dnode4 -c role -v 2
-
-system sh/cfg.sh -n dnode1 -c arbitrator -v $arbitrator
-system sh/cfg.sh -n dnode2 -c arbitrator -v $arbitrator
-system sh/cfg.sh -n dnode3 -c arbitrator -v $arbitrator
-system sh/cfg.sh -n dnode4 -c arbitrator -v $arbitrator
-
-print ============== step0
-system sh/exec_tarbitrator.sh -s start
-
-print ============== step1
-system sh/exec.sh -n dnode1 -s start
-sql connect
-sql create dnode $hostname2
-sql create dnode $hostname3
-system sh/exec.sh -n dnode2 -s start
-system sh/exec.sh -n dnode3 -s start
-
-$x = 0
-step1:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step1
-endi
-if $data4_2 != ready then
-goto step1
-endi
-if $data4_3 != ready then
-goto step1
-endi
-
-sql show mnodes
-print mnode1 $data2_1
-print mnode1 $data2_2
-print mnode1 $data2_3
-if $data2_1 != master then
-goto step1
-endi
-
-print ============== step2
-sql show dnodes
-if $rows != 4 then
-return -1
-endi
-
-print $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07
-print $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17
-print $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27
-print $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37
-
-if $data30 != 0 then
-return -1
-endi
-
-if $data32 != 0 then
-return -1
-endi
-
-if $data33 != 0 then
-return -1
-endi
-
-if $data34 != ready then
-return -1
-endi
-
-if $data35 != arb then
-return -1
-endi
-
-if $data37 != - then
-return -1
-endi
-
-print ============== step4
-system sh/exec_tarbitrator.sh -s stop
-
-$x = 0
-step4:
-$x = $x + 1
-sleep 1000
-if $x == 20 then
-return -1
-endi
-
-sql show dnodes
-if $rows != 4 then
-return -1
-endi
-
-print $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07
-print $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17
-print $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27
-print $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37
-
-if $data30 != 0 then
-return -1
-endi
-
-if $data32 != 0 then
-return -1
-endi
-
-if $data33 != 0 then
-return -1
-endi
-
-if $data34 != offline then
-goto step4
-endi
-
-if $data35 != arb then
-return -1
-endi
-
-if $data37 != - then
-return -1
-endi
-
-print ============== step5
-system sh/exec_tarbitrator.sh -s start
-
-$x = 0
-step5:
-$x = $x + 1
-sleep 1000
-if $x == 20 then
-return -1
-endi
-
-sql show dnodes
-if $rows != 4 then
-return -1
-endi
-
-print $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07
-print $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17
-print $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27
-print $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37
-
-if $data30 != 0 then
-return -1
-endi
-
-if $data32 != 0 then
-return -1
-endi
-
-if $data33 != 0 then
-return -1
-endi
-
-if $data34 != ready then
-goto step5
-endi
-
-if $data35 != arb then
-return -1
-endi
-
-if $data37 != - then
-return -1
-endi
-
-system sh/exec.sh -n dnode1 -s stop -x SIGINT
-system sh/exec.sh -n dnode2 -s stop -x SIGINT
-system sh/exec.sh -n dnode3 -s stop -x SIGINT
-system sh/exec.sh -n dnode4 -s stop -x SIGINT
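The deleted scripts above lean on one bounded-polling idiom throughout: a labelled step, a one-second sleep, a retry counter, and a `goto` back to the label until the checked state appears or the retry budget runs out. A rough shell equivalent of that idiom, for illustration only; `check_ready` in the usage line is a placeholder probe, not part of the sim framework:

```shell
#!/bin/sh
# Bounded polling, mirroring "step1: ... sleep 1000 ... if $x == 10 then return -1 ... goto step1":
# retry the probe up to $max times, one second apart.
wait_until() {
  max=$1; shift
  x=0
  while [ "$x" -lt "$max" ]; do
    if "$@"; then
      return 0              # probe passed: the awaited state was reached
    fi
    x=$((x + 1))
    sleep 1
  done
  return 1                  # budget exhausted, like the script's "return -1"
}

# usage (placeholder probe): wait_until 10 check_ready dnode1
```

The sim scripts inline this pattern at every step because the DSL has no functions; in shell the helper keeps the retry budget and the probe in one place.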
@@ -1,145 +0,0 @@
-system sh/stop_dnodes.sh
-
-system sh/deploy.sh -n dnode1 -i 1
-system sh/deploy.sh -n dnode2 -i 2
-system sh/deploy.sh -n dnode3 -i 3
-system sh/deploy.sh -n dnode4 -i 4
-
-system sh/cfg.sh -n dnode1 -c wallevel -v 2
-system sh/cfg.sh -n dnode2 -c wallevel -v 2
-system sh/cfg.sh -n dnode3 -c wallevel -v 2
-system sh/cfg.sh -n dnode4 -c wallevel -v 2
-
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 3
-system sh/cfg.sh -n dnode2 -c numOfMnodes -v 3
-system sh/cfg.sh -n dnode3 -c numOfMnodes -v 3
-system sh/cfg.sh -n dnode4 -c numOfMnodes -v 3
-
-system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode2 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode3 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode4 -c mnodeEqualVnodeNum -v 4
-
-system sh/cfg.sh -n dnode1 -c slaveQuery -v 1
-system sh/cfg.sh -n dnode2 -c slaveQuery -v 1
-system sh/cfg.sh -n dnode3 -c slaveQuery -v 1
-system sh/cfg.sh -n dnode4 -c slaveQuery -v 1
-
-print ========= step1
-system sh/exec.sh -n dnode1 -s start
-sql connect
-sql create dnode $hostname2
-sql create dnode $hostname3
-system sh/exec.sh -n dnode2 -s start
-system sh/exec.sh -n dnode3 -s start
-
-$x = 0
-step1:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step1
-endi
-if $data4_2 != ready then
-goto step1
-endi
-if $data4_3 != ready then
-goto step1
-endi
-
-sql show mnodes
-print mnode1 $data2_1
-print mnode1 $data2_2
-print mnode1 $data2_3
-if $data2_1 != master then
-goto step1
-endi
-if $data2_2 != slave then
-goto step1
-endi
-if $data2_3 != slave then
-goto step1
-endi
-
-print ========= step2
-sql create database d1 replica 3
-sql create table d1.t1 (ts timestamp, i int)
-sql insert into d1.t1 values(now, 1)
-
-$x = 0
-step2:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show d1.vgroups
-print online vgroups: $data03
-if $data03 != 3 then
-goto step2
-endi
-sleep 1000
-
-print ========= step3
-$i = 0
-while $i < 100
-$i = $i + 1
-sql select * from d1.t1
-print d1.t1 rows: $rows
-if $rows != 1 then
-return -1
-endi
-endw
-
-print ========= step4
-system sh/exec.sh -n dnode1 -s stop -x SIGINT
-system sh/exec.sh -n dnode2 -s stop -x SIGINT
-system sh/exec.sh -n dnode3 -s stop -x SIGINT
-
-system rm -rf ../../../sim/dnode3/data/vnode/vnode2/tsdb/data/*
-system rm -rf ../../../sim/dnode3/data/vnode/vnode2/version.json
-
-system sh/exec.sh -n dnode1 -s start -x SIGINT
-system sh/exec.sh -n dnode2 -s start -x SIGINT
-system sh/exec.sh -n dnode3 -s start -x SIGINT
-
-$x = 0
-step4:
-$x = $x + 1
-sleep 1000
-if $x == 30 then
-return -1
-endi
-
-sql show d1.vgroups
-print online vgroups: $data03
-if $data03 != 3 then
-goto step4
-endi
-sleep 1000
-
-print ========= step5
-$i = 0
-while $i < 100
-$i = $i + 1
-sql select * from d1.t1
-if $rows != 1 then
-return -1
-endi
-print d1.t1 rows: $rows
-endw
-
-system sh/exec.sh -n dnode1 -s stop -x SIGINT
-system sh/exec.sh -n dnode2 -s stop -x SIGINT
-system sh/exec.sh -n dnode3 -s stop -x SIGINT
-system sh/exec.sh -n dnode4 -s stop -x SIGINT
@@ -1,556 +0,0 @@
-system sh/stop_dnodes.sh
-system sh/deploy.sh -n dnode1 -i 1
-system sh/deploy.sh -n dnode2 -i 2
-system sh/deploy.sh -n dnode3 -i 3
-system sh/deploy.sh -n dnode4 -i 4
-
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode3 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode4 -c numOfMnodes -v 1
-
-system sh/cfg.sh -n dnode1 -c role -v 1
-system sh/cfg.sh -n dnode2 -c role -v 2
-system sh/cfg.sh -n dnode3 -c role -v 2
-system sh/cfg.sh -n dnode4 -c role -v 2
-
-system sh/cfg.sh -n dnode1 -c arbitrator -v $arbitrator
-system sh/cfg.sh -n dnode2 -c arbitrator -v $arbitrator
-system sh/cfg.sh -n dnode3 -c arbitrator -v $arbitrator
-system sh/cfg.sh -n dnode4 -c arbitrator -v $arbitrator
-
-print ============== step0: start tarbitrator
-system sh/exec_tarbitrator.sh -s start
-
-print ============== step1: start dnode1, only deploy mnode
-system sh/exec.sh -n dnode1 -s start
-sql connect
-
-print ============== step2: start dnode2/dnode3
-system sh/exec.sh -n dnode2 -s start
-system sh/exec.sh -n dnode3 -s start
-sql create dnode $hostname2
-sql create dnode $hostname3
-
-$x = 0
-step2:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step2
-endi
-if $data4_2 != ready then
-goto step2
-endi
-if $data4_3 != ready then
-goto step2
-endi
-
-sleep 1000
-
-print ============== step3
-sql create database db replica 2
-sql use db
-
-sql create table stb (ts timestamp, c1 int, c2 int) tags(t1 int)
-sql create table t1 using stb tags(1)
-sql insert into t1 values(1577980800000, 1, 5)
-sql insert into t1 values(1577980800001, 2, 4)
-sql insert into t1 values(1577980800002, 3, 3)
-sql insert into t1 values(1577980800003, 4, 2)
-sql insert into t1 values(1577980800004, 5, 1)
-
-sql show db.vgroups
-if $data04 != 3 then
-return -1
-endi
-if $data06 != 2 then
-return -1
-endi
-if $data05 != master then
-return -1
-endi
-if $data07 != slave then
-return -1
-endi
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-
-system sh/exec.sh -n dnode2 -s stop -x SIGKILL
-system sh/exec.sh -n dnode3 -s stop -x SIGKILL
-
-print ============== step4
-system sh/exec.sh -n dnode2 -s start
-system sh/exec.sh -n dnode3 -s start
-
-$x = 0
-step4:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step4
-endi
-if $data4_2 != ready then
-goto step4
-endi
-if $data4_3 != ready then
-goto step4
-endi
-
-sql show db.vgroups
-if $data04 != 3 then
-goto step4
-endi
-if $data06 != 2 then
-goto step4
-endi
-if $data05 != master then
-goto step4
-endi
-if $data07 != slave then
-goto step4
-endi
-
-sql create table t2 using stb tags(1)
-sql insert into t2 values(1577980800000, 1, 5)
-sql insert into t2 values(1577980800001, 2, 4)
-sql insert into t2 values(1577980800002, 3, 3)
-sql insert into t2 values(1577980800003, 4, 2)
-sql insert into t2 values(1577980800004, 5, 1)
-
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-
-print ============== step5
-system sh/exec.sh -n dnode3 -s stop -x SIGKILL
-
-$x = 0
-step5:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step5
-endi
-if $data4_2 != ready then
-goto step5
-endi
-if $data4_3 != offline then
-goto step5
-endi
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-
-sql show db.vgroups
-if $data04 != 3 then
-goto step5
-endi
-if $data06 != 2 then
-goto step5
-endi
-if $data05 != offline then
-goto step5
-endi
-if $data07 != master then
-goto step5
-endi
-
-print ============== step6
-sql create table t3 using stb tags(1)
-sql insert into t3 values(1577980800000, 1, 5)
-sql insert into t3 values(1577980800001, 2, 4)
-sql insert into t3 values(1577980800002, 3, 3)
-sql insert into t3 values(1577980800003, 4, 2)
-sql insert into t3 values(1577980800004, 5, 1)
-sql insert into t3 values(1577980800010, 11, 5)
-sql insert into t3 values(1577980800011, 12, 4)
-sql insert into t3 values(1577980800012, 13, 3)
-sql insert into t3 values(1577980800013, 14, 2)
-sql insert into t3 values(1577980800014, 15, 1)
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-sql select * from t3
-if $rows != 10 then
-return -1
-endi
-
-system sh/exec.sh -n dnode3 -s start
-
-$x = 0
-step6:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step6
-endi
-if $data4_2 != ready then
-goto step6
-endi
-if $data4_3 != ready then
-goto step6
-endi
-
-sql show db.vgroups
-if $data04 != 3 then
-goto step6
-endi
-if $data06 != 2 then
-goto step6
-endi
-if $data05 != slave then
-goto step6
-endi
-if $data07 != master then
-goto step6
-endi
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-sql select * from t3
-if $rows != 10 then
-return -1
-endi
-
-print ============== step7
-sql create table t4 using stb tags(1)
-sql insert into t4 values(1577980800000, 1, 5)
-sql insert into t4 values(1577980800001, 2, 4)
-sql insert into t4 values(1577980800002, 3, 3)
-sql insert into t4 values(1577980800003, 4, 2)
-sql insert into t4 values(1577980800004, 5, 1)
-sql insert into t4 values(1577980800010, 11, 5)
-sql insert into t4 values(1577980800011, 12, 4)
-sql insert into t4 values(1577980800012, 13, 3)
-sql insert into t4 values(1577980800013, 14, 2)
-sql insert into t4 values(1577980800014, 15, 1)
-sql insert into t4 values(1577980800020, 21, 5)
-sql insert into t4 values(1577980800021, 22, 4)
-sql insert into t4 values(1577980800022, 23, 3)
-sql insert into t4 values(1577980800023, 24, 2)
-sql insert into t4 values(1577980800024, 25, 1)
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-sql select * from t3
-if $rows != 10 then
-return -1
-endi
-sql select * from t4
-if $rows != 15 then
-return -1
-endi
-
-system sh/exec.sh -n dnode2 -s stop -x SIGKILL
-$x = 0
-step7:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step7
-endi
-if $data4_2 != offline then
-goto step7
-endi
-if $data4_3 != ready then
-goto step7
-endi
-
-sql show db.vgroups
-if $data04 != 3 then
-goto step7
-endi
-if $data06 != 2 then
-goto step7
-endi
-if $data05 != master then
-goto step7
-endi
-if $data07 != offline then
-goto step7
-endi
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-sql select * from t3
-if $rows != 10 then
-return -1
-endi
-sql select * from t4
-if $rows != 15 then
-return -1
-endi
-
-print ============== step8
-sql create table t5 using stb tags(1)
-sql insert into t5 values(1577980800000, 1, 5)
-sql insert into t5 values(1577980800001, 2, 4)
-sql insert into t5 values(1577980800002, 3, 3)
-sql insert into t5 values(1577980800003, 4, 2)
-sql insert into t5 values(1577980800004, 5, 1)
-sql insert into t5 values(1577980800010, 11, 5)
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-sql select * from t3
-if $rows != 10 then
-return -1
-endi
-sql select * from t4
-if $rows != 15 then
-return -1
-endi
-sql select * from t5
-if $rows != 6 then
-return -1
-endi
-
-system sh/exec.sh -n dnode2 -s start
-$x = 0
-step8:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step8
-endi
-if $data4_2 != ready then
-goto step8
-endi
-if $data4_3 != ready then
-goto step8
-endi
-
-sql show db.vgroups
-if $data04 != 3 then
-goto step8
-endi
-if $data06 != 2 then
-goto step8
-endi
-if $data05 != master then
-goto step8
-endi
-if $data07 != slave then
-goto step8
-endi
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-sql select * from t3
-if $rows != 10 then
-return -1
-endi
-sql select * from t4
-if $rows != 15 then
-return -1
-endi
-sql select * from t5
-if $rows != 6 then
-return -1
-endi
-
-print ============== step9
-sql create table t6 using stb tags(1)
-sql insert into t6 values(1577980800000, 1, 5)
-sql insert into t6 values(1577980800001, 2, 4)
-sql insert into t6 values(1577980800002, 3, 3)
-sql insert into t6 values(1577980800003, 4, 2)
-sql insert into t6 values(1577980800004, 5, 1)
-sql insert into t6 values(1577980800010, 11, 5)
-sql insert into t6 values(1577980800011, 12, 4)
-
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-sql select * from t3
-if $rows != 10 then
-return -1
-endi
-sql select * from t4
-if $rows != 15 then
-return -1
-endi
-sql select * from t5
-if $rows != 6 then
-return -1
-endi
-sql select * from t6
-if $rows != 7 then
-return -1
-endi
-
-system sh/exec.sh -n dnode3 -s stop -x SIGKILL
-$x = 0
-step9:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-return -1
-endi
-
-sql show dnodes
-print dnode1 $data4_1
-print dnode2 $data4_2
-print dnode3 $data4_3
-
-if $data4_1 != ready then
-goto step9
-endi
-if $data4_2 != ready then
-goto step9
-endi
-if $data4_3 != offline then
-goto step9
-endi
-
-print ============== 2
-sql show db.vgroups
-
-if $data04 != 3 then
-goto step7
-endi
-if $data06 != 2 then
-goto step7
-endi
-if $data05 != offline then
-goto step7
-endi
-if $data07 != master then
-goto step7
-endi
-
-print ============== 3
-sql select * from t1
-if $rows != 5 then
-return -1
-endi
-sql select * from t2
-if $rows != 5 then
-return -1
-endi
-sql select * from t3
-if $rows != 10 then
-return -1
-endi
-sql select * from t4
-if $rows != 15 then
-return -1
-endi
-sql select * from t5
-if $rows != 6 then
-return -1
-endi
-sql select * from t6
-if $rows != 7 then
-return -1
-endi
-
-system sh/exec.sh -n dnode1 -s stop
-system sh/exec.sh -n dnode2 -s stop
-system sh/exec.sh -n dnode3 -s stop
@@ -1,9 +1,6 @@
 system sh/stop_dnodes.sh
-
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
-sleep 100
 sql connect
 
 $dbPrefix = m_alt_db
@@ -40,62 +37,61 @@ sql_error alter database $db keep 20,20,20,20
 sql_error alter database $db keep 365001,365001,365001
 sql alter database $db keep 21
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 21,21,21 then
+if $data27 != 30240m,30240m,30240m then
 return -1
 endi
 sql alter database $db keep 11,12
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 11,12,12 then
+if $data27 != 15840m,17280m,17280m then
 return -1
 endi
 sql alter database $db keep 20,20,20
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 20,20,20 then
+if $data27 != 28800m,28800m,28800m then
 return -1
 endi
 sql alter database $db keep 10,10,10
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 10,10,10 then
+if $data27 != 14400m,14400m,14400m then
 return -1
 endi
 sql alter database $db keep 10,10,11
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 10,10,11 then
+if $data27 != 14400m,14400m,15840m then
 return -1
 endi
 sql alter database $db keep 11,12,13
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 11,12,13 then
+if $data27 != 15840m,17280m,18720m then
 return -1
 endi
 sql alter database $db keep 365000,365000,365000
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 365000,365000,365000 then
+if $data27 != 525600000m,525600000m,525600000m then
 return -1
 endi
 
 
 ##### alter table test, simeplest case
 sql create table tb (ts timestamp, c1 int, c2 int, c3 int)
 sql insert into tb values (now, 1, 1, 1)
@@ -187,7 +183,6 @@ endi
 sql drop table tb
 sql drop table mt
 
-sleep 100
 ### ALTER TABLE WHILE STREAMING [TBASE271]
 #sql create table tb1 (ts timestamp, c1 int, c2 nchar(5), c3 int)
 #sql create table strm as select count(*), avg(c1), first(c2), sum(c3) from tb1 interval(2s)
@@ -195,9 +190,9 @@ sleep 100
 #if $rows != 0 then
 # return -1
 #endi
-##sleep 12000
 #sql insert into tb1 values (now, 1, 'taos', 1)
-#sleep 20000
 #sql select * from strm
 #print rows = $rows
 #if $rows != 1 then
@@ -207,9 +202,9 @@ sleep 100
 # return -1
 #endi
 #sql alter table tb1 drop column c3
-#sleep 500
 #sql insert into tb1 values (now, 2, 'taos')
-#sleep 30000
 #sql select * from strm
 #if $rows != 2 then
 # return -1
@@ -218,9 +213,9 @@ sleep 100
 # return -1
 #endi
 #sql alter table tb1 add column c3 int
-#sleep 500
 #sql insert into tb1 values (now, 3, 'taos', 3);
-#sleep 100
 #sql select * from strm
 #if $rows != 3 then
 # return -1
@@ -259,7 +254,7 @@ sql create database $db
 sql use $db
 sql create table mt (ts timestamp, c1 int, c2 nchar(7), c3 int) tags (t1 int)
 sql create table tb using mt tags(1)
-sleep 100
 sql insert into tb values ('2018-11-01 16:30:00.000', 1, 'insert', 1)
 sql alter table mt drop column c3
 
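The rewritten assertions above expect `show databases` to report retention in minutes (`30240m,30240m,30240m`) where the old test read back the day counts given to `alter database ... keep` (`21,21,21`). A quick sanity check (not part of the test suite; the helper name is mine), assuming the conversion is simply days × 1440 minutes:

```python
def days_to_keep_str(*days):
    """Render a keep tuple the way the new assertions expect it (minutes + 'm')."""
    return ",".join(f"{d * 1440}m" for d in days)

# Each old expected value, converted, matches the new expected string:
print(days_to_keep_str(21, 21, 21))              # 30240m,30240m,30240m
print(days_to_keep_str(11, 12, 12))              # 15840m,17280m,17280m
print(days_to_keep_str(365000, 365000, 365000))  # 525600000m,525600000m,525600000m
```

The same conversion explains every updated `$data27` value in these hunks, including the boundary case `365000` days = `525600000m`.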
@@ -1,11 +1,7 @@
 system sh/stop_dnodes.sh
-
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
-sleep 100
 sql connect
-sql reset query cache
 
 $dbPrefix = alt1_db
 
@@ -87,9 +83,8 @@ if $data13 != NULL then
 return -1
 endi
 
-sleep 100
 print ================== insert values into table
-sql insert into car1 values (now, 1, 1,1 ) (now +1s, 2,2,2,) car2 values (now, 1,3,3)
+sql insert into car1 values (now, 1, 1,1 ) (now +1s, 2,2,2) car2 values (now, 1,3,3)
 
 sql select c1+speed from stb where c1 > 0
 if $rows != 3 then
@@ -1,9 +1,6 @@
 system sh/stop_dnodes.sh
-
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
-sleep 100
 sql connect
 
 $dbPrefix = m_alt_db
@@ -23,10 +20,10 @@ sql drop database if exists $db
 sql create database $db duration 10 keep 20
 sql use $db
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 20 then
+if $data27 != 28800m,28800m,28800m then
 return -1
 endi
 
@@ -47,44 +44,44 @@ sql_error alter database $db keep 20,19,18
 sql_error alter database $db keep 20,20,20,20
 sql_error alter database $db keep 365001,365001,365001
 sql_error alter database $db keep 365001
-sql alter database $db keep 20
+sql_error alter database $db keep 20
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 20 then
+if $data27 != 28800m,28800m,28800m then
 return -1
 endi
 sql alter database $db keep 10
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 10 then
+if $data27 != 14400m,14400m,14400m then
 return -1
 endi
 sql alter database $db keep 11
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 11 then
+if $data27 != 15840m,15840m,15840m then
 return -1
 endi
 sql alter database $db keep 13
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 13 then
+if $data27 != 18720m,18720m,18720m then
 return -1
 endi
 sql alter database $db keep 365000
 sql show databases
-if $rows != 1 then
+if $rows != 3 then
 return -1
 endi
-if $data07 != 365000 then
+if $data27 != 525600000m,525600000m,525600000m then
 return -1
 endi
 
@@ -180,7 +177,6 @@ endi
 sql drop table tb
 sql drop table mt
 
-sleep 100
 ### ALTER TABLE WHILE STREAMING [TBASE271]
 #sql create table tb1 (ts timestamp, c1 int, c2 nchar(5), c3 int)
 #sql create table strm as select count(*), avg(c1), first(c2), sum(c3) from tb1 interval(2s)
@@ -188,9 +184,7 @@ sleep 100
 #if $rows != 0 then
 # return -1
 #endi
-##sleep 12000
 #sql insert into tb1 values (now, 1, 'taos', 1)
-#sleep 20000
 #sql select * from strm
 #print rows = $rows
 #if $rows != 1 then
@@ -200,9 +194,7 @@ sleep 100
 # return -1
 #endi
 #sql alter table tb1 drop column c3
-#sleep 500
 #sql insert into tb1 values (now, 2, 'taos')
-#sleep 30000
 #sql select * from strm
 #if $rows != 2 then
 # return -1
@@ -211,9 +203,7 @@ sleep 100
 # return -1
 #endi
 #sql alter table tb1 add column c3 int
-#sleep 500
 #sql insert into tb1 values (now, 3, 'taos', 3);
-#sleep 100
 #sql select * from strm
 #if $rows != 3 then
 # return -1
@@ -252,7 +242,6 @@ sql create database $db
 sql use $db
 sql create table mt (ts timestamp, c1 int, c2 nchar(7), c3 int) tags (t1 int)
 sql create table tb using mt tags(1)
-sleep 100
 sql insert into tb values ('2018-11-01 16:30:00.000', 1, 'insert', 1)
 sql alter table mt drop column c3
 
@@ -1,9 +1,6 @@
 system sh/stop_dnodes.sh
-
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
-sleep 100
 sql connect
 
 $dbPrefix = m_alt_db
@@ -26,51 +23,20 @@ sql use $db
 sql create table tb (ts timestamp, c1 int, c2 binary(10), c3 nchar(10))
 sql insert into tb values (now, 1, "1", "1")
 sql alter table tb modify column c2 binary(20);
-if $rows != 0 then
-return -1
-endi
 sql alter table tb modify column c3 nchar(20);
-if $rows != 0 then
-return -1
-endi
 
 
 sql create stable stb (ts timestamp, c1 int, c2 binary(10), c3 nchar(10)) tags(id1 int, id2 binary(10), id3 nchar(10))
 sql create table tb1 using stb tags(1, "a", "b")
 sql insert into tb1 values (now, 1, "1", "1")
 sql alter stable stb modify column c2 binary(20);
-if $rows != 0 then
-return -1
-endi
 sql alter table stb modify column c2 binary(30);
-if $rows != 0 then
-return -1
-endi
 sql alter stable stb modify column c3 nchar(20);
-if $rows != 0 then
-return -1
-endi
 sql alter table stb modify column c3 nchar(30);
-if $rows != 0 then
-return -1
-endi
 
 sql alter table stb modify tag id2 binary(11);
-if $rows != 0 then
-return -1
-endi
-sql alter stable stb modify tag id2 binary(11);
-if $rows != 0 then
-return -1
-endi
+sql_error alter stable stb modify tag id2 binary(11);
 sql alter table stb modify tag id3 nchar(11);
-if $rows != 0 then
-return -1
-endi
-sql alter stable stb modify tag id3 nchar(11);
-if $rows != 0 then
-return -1
-endi
+sql_error alter stable stb modify tag id3 nchar(11);
 
 ##### ILLEGAL OPERATIONS
 
@@ -82,14 +48,14 @@ sql_error alter table tb modify column c2 binary(10);
 sql_error alter table tb modify column c2 binary(9);
 sql_error alter table tb modify column c2 binary(-9);
 sql_error alter table tb modify column c2 binary(0);
-sql_error alter table tb modify column c2 binary(17000);
+sql alter table tb modify column c2 binary(17000);
 sql_error alter table tb modify column c2 nchar(30);
 sql_error alter table tb modify column c3 double;
 sql_error alter table tb modify column c3 nchar(10);
 sql_error alter table tb modify column c3 nchar(0);
 sql_error alter table tb modify column c3 nchar(-1);
 sql_error alter table tb modify column c3 binary(80);
-sql_error alter table tb modify column c3 nchar(17000);
+sql alter table tb modify column c3 nchar(17000);
 sql_error alter table tb modify column c3 nchar(100), c2 binary(30);
 sql_error alter table tb modify column c1 nchar(100), c2 binary(30);
 sql_error alter stable tb modify column c2 binary(30);
@@ -1,9 +1,6 @@
 system sh/stop_dnodes.sh
-
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
-sleep 100
 sql connect
 
 print ========== alter_stable.sim
@@ -13,19 +10,19 @@ sql drop database if exists $db
 sql create database $db
 sql use $db
 
-##### alter stable test : change tag name
+##### alter stable test : rename tag name
-# case-1 change tag name: new name inclue old name
+# case-1 rename tag name: new name inclue old name
 sql create table mt1 (ts timestamp, c1 int) tags (a int)
-sql alter table mt1 change tag a abcd
+sql alter table mt1 rename tag a abcd
-sql alter table mt1 change tag abcd a
+sql alter table mt1 rename tag abcd a
-sql_error alter table mt1 change tag a 1
+sql_error alter table mt1 rename tag a 1
 
 sql_error create table mtx1 (ts timestamp, c1 int) tags (123 int)
 
 sql_error create table mt2 (ts timestamp, c1 int) tags (abc012345678901234567890123456789012345678901234567890123456789def int)
 sql create table mt3 (ts timestamp, c1 int) tags (abc012345678901234567890123456789012345678901234567890123456789 int)
-sql_error alter table mt3 change tag abc012345678901234567890123456789012345678901234567890123456789 abcdefg012345678901234567890123456789012345678901234567890123456789
+sql_error alter table mt3 rename tag abc012345678901234567890123456789012345678901234567890123456789 abcdefg012345678901234567890123456789012345678901234567890123456789
-sql alter table mt3 change tag abc012345678901234567890123456789012345678901234567890123456789 abcdefg0123456789012345678901234567890123456789
+sql alter table mt3 rename tag abc012345678901234567890123456789012345678901234567890123456789 abcdefg0123456789012345678901234567890123456789
 
 # case-2 set tag value
 sql create table mt4 (ts timestamp, c1 int) tags (name binary(16), len int)
@@ -37,7 +34,7 @@ sql alter table tb1 set tag len = 379
 
 # case TD-5594
 sql create stable st5520(ts timestamp, f int) tags(t0 bool, t1 nchar(4093), t2 nchar(1))
-sql_error alter stable st5520 modify tag t2 nchar(2);
+sql alter stable st5520 modify tag t2 nchar(2);
 # test end
 sql drop database $db
 
@@ -1,11 +1,8 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2
 system sh/exec.sh -n dnode1 -s start
 
-sleep 100
 sql connect
 
 print ======================== dnode1 start
 
 $dbPrefix = ac_db
@@ -153,36 +150,37 @@ print $rows $data00 $data10 $data20
 if $rows != 3 then
 return -1
 endi
-if $data00 != tb1 then
+if $data(tb1)[0] != tb1 then
 return -1
 endi
-if $data10 != tb2 then
+if $data(tb2)[0] != tb2 then
 return -1
 endi
-if $data20 != tb3 then
+if $data(tb3)[0] != tb3 then
 return -1
 endi
 
-sql select ts,c1,c2,c3,c4,c5,c7,c8,c9 from $stb
+sql select c1,c1,c2,c3,c4,c5,c7,c8,c9 from $stb
+print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
+print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
+print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
 
 if $rows != 3 then
 return -1
 endi
-#if $data00 != @18-09-17 09:00:00.000@ then
-# return -1
-#endi
-if $data01 != 1 then
+if $data(1)[1] != 1 then
 return -1
 endi
-if $data08 != 涛思数据1 then
+if $data(1)[8] != 涛思数据1 then
 return -1
 endi
-if $data14 != 2.000000000 then
+if $data(2)[4] != 2.000000000 then
 return -1
 endi
-if $data18 != 涛思数据2 then
+if $data(2)[8] != 涛思数据2 then
 return -1
 endi
-if $data28 != 涛思数据3 then
+if $data(3)[8] != 涛思数据3 then
 return -1
 endi
 
@@ -208,12 +206,7 @@ endi
 
 print ================== restart server to commit data into disk
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
-sleep 500
 system sh/exec.sh -n dnode1 -s start
-print ================== server restart completed
-sql connect
-sleep 100
-sql use $db
 
 #### auto create multiple tables
 sql insert into tb1 using $stb tags(1) values ( $ts0 , 1, 1, 1, 1, 'bin1', 1, 1, 1, '涛思数据1') tb2 using $stb tags(2) values ( $ts0 , 2, 2, 2, 2, 'bin2', 2, 2, 2, '涛思数据2') tb3 using $stb tags(3) values ( $ts0 , 3, 3, 3, 3, 'bin3', 3, 3, 3, '涛思数据3')
@@ -221,36 +214,37 @@ sql show tables
 if $rows != 3 then
 return -1
 endi
-if $data00 != tb1 then
+if $data(tb1)[0] != tb1 then
 return -1
 endi
-if $data10 != tb2 then
+if $data(tb2)[0] != tb2 then
 return -1
 endi
-if $data20 != tb3 then
+if $data(tb3)[0] != tb3 then
 return -1
 endi
 
-sql select ts,c1,c2,c3,c4,c5,c7,c8,c9 from $stb
+sql select c1,c1,c2,c3,c4,c5,c7,c8,c9 from $stb
+print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
+print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
+print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
 
 if $rows != 3 then
 return -1
 endi
-#if $data00 != @18-09-17 09:00:00.000@ then
-# return -1
-#endi
-if $data01 != 1 then
+if $data(1)[1] != 1 then
 return -1
 endi
-if $data08 != 涛思数据1 then
+if $data(1)[8] != 涛思数据1 then
 return -1
 endi
-if $data14 != 2.000000000 then
+if $data(2)[4] != 2.000000000 then
 return -1
 endi
-if $data18 != 涛思数据2 then
+if $data(2)[8] != 涛思数据2 then
 return -1
 endi
-if $data28 != 涛思数据3 then
+if $data(3)[8] != 涛思数据3 then
 return -1
 endi
 
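The hunks above replace positional cell references (`$data00 != tb1`) with keyed ones (`$data(tb1)[0] != tb1`), which keeps the assertion valid even if `show tables` returns rows in a different order. A minimal illustration of the difference, in Python rather than the sim language (the data is made up):

```python
# Rows as a result set might arrive: note tb1 is NOT first.
rows = [["tb2", "2022-01-01"], ["tb1", "2022-01-01"], ["tb3", "2022-01-01"]]

# Positional access, like the old `$data00 != tb1` check: order-dependent.
positional_ok = rows[0][0] == "tb1"   # False for this ordering

# Keyed access, like the new `$data(tb1)[0]` check: order-independent.
by_key = {r[0]: r for r in rows}
keyed_ok = by_key["tb1"][0] == "tb1"  # True regardless of row order

print(positional_ok, keyed_ok)
```

The same reasoning applies to the `$data(1)[8]`-style references: they address a row by key and a cell by index instead of relying on a fixed row position.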
@@ -6,32 +6,32 @@ system sh/exec.sh -n dnode2 -s start
 sql connect
 
 print =============== show dnodes
-sql show dnodes;
-if $rows != 1 then
-return -1
-endi
-
-if $data00 != 1 then
-return -1
-endi
-
-sql show mnodes;
-if $rows != 1 then
-return -1
-endi
-
-if $data00 != 1 then
-return -1
-endi
-
-if $data02 != leader then
-return -1
-endi
-
-print =============== create dnodes
 sql create dnode $hostname port 7200
-sleep 2000
 
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
+return -1
+endi
+sql show dnodes
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 2 then
+return -1
+endi
+if $data(1)[4] != ready then
+goto step1
+endi
+if $data(2)[4] != ready then
+goto step1
+endi
 
+print =============== show dnodes
 sql show dnodes;
 if $rows != 2 then
 return -1
@@ -125,7 +125,6 @@ system sh/exec.sh -n dnode2 -s stop -x SIGINT
 system sh/exec.sh -n dnode1 -s start
 system sh/exec.sh -n dnode2 -s start
 
-sleep 2000
 sql show qnodes
 if $rows != 2 then
 return -1
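The change above replaces a fixed `sleep 2000` after `create dnode` with a bounded poll: loop up to ten times, re-running `show dnodes` until every dnode reports `ready`, and fail only if the budget is exhausted. A minimal sketch of that pattern (Python; the function and stub names are mine, not part of the test framework):

```python
import time

def wait_until_ready(fetch_status, attempts=10, interval=1.0):
    """Poll like the `step1:` loop: succeed as soon as every node is ready,
    give up after a fixed number of attempts instead of sleeping blindly."""
    for _ in range(attempts):
        if all(s == "ready" for s in fetch_status()):
            return True
        time.sleep(interval)
    return False

# Stub that becomes ready on the third poll, standing in for `show dnodes`:
polls = iter([["ready", "offline"], ["ready", "offline"], ["ready", "ready"]])
print(wait_until_ready(lambda: next(polls), interval=0.0))  # True
```

Compared with a fixed sleep, this finishes as soon as the cluster is up and gives a deterministic failure point when it never comes up.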
@@ -6,32 +6,32 @@ system sh/exec.sh -n dnode2 -s start
 sql connect
 
 print =============== show dnodes
-sql show dnodes;
-if $rows != 1 then
-return -1
-endi
-
-if $data00 != 1 then
-return -1
-endi
-
-sql show mnodes;
-if $rows != 1 then
-return -1
-endi
-
-if $data00 != 1 then
-return -1
-endi
-
-if $data02 != leader then
-return -1
-endi
-
-print =============== create dnodes
 sql create dnode $hostname port 7200
-sleep 2000
 
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
+return -1
+endi
+sql show dnodes
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 2 then
+return -1
+endi
+if $data(1)[4] != ready then
+goto step1
+endi
+if $data(2)[4] != ready then
+goto step1
+endi
 
+print =============== show dnodes
 sql show dnodes;
 if $rows != 2 then
 return -1
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $replica = 3
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $replica = 3
@@ -3,66 +3,44 @@ system sh/deploy.sh -n dnode1 -i 1
 system sh/deploy.sh -n dnode2 -i 2
 system sh/deploy.sh -n dnode3 -i 3
 system sh/deploy.sh -n dnode4 -i 4

 system sh/cfg.sh -n dnode1 -c supportVnodes -v 0

 system sh/exec.sh -n dnode1 -s start
 system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-### create clusters using four dnodes;
-
-
-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> 1-dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $replica = 3
@@ -11,57 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-print ===> create clusters using four dnodes;
-
-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> 1-dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $replica = 3
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $replica = 3
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $vgroups = 1
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $vgroups = 1
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $replica = 1
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
-#sql connect
+sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $vgroups = 1
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start

-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
-
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400

-$loop_cnt = 0
-check_dnode_ready_1:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnodes not ready!
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][4] != ready then
-goto check_dnode_ready_1
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
+if $rows != 4 then
+return -1
 endi
-if $data[1][4] != ready then
-goto check_dnode_ready_1
+if $data(1)[4] != ready then
+goto step1
 endi
-if $data[2][4] != ready then
-goto check_dnode_ready_1
+if $data(2)[4] != ready then
+goto step1
 endi
-if $data[3][4] != ready then
-goto check_dnode_ready_1
+if $data(3)[4] != ready then
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi

 $vgroups = 1
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start
 
-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
 
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400
 
-$loop_cnt = 0
+$x = 0
-check_dnode_ready_1:
+step1:
-$loop_cnt = $loop_cnt + 1
+$x = $x + 1
-sleep 200
+sleep 1000
-if $loop_cnt == 10 then
+if $x == 10 then
-print ====> dnodes not ready!
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
-if $data[0][4] != ready then
+if $rows != 4 then
-goto check_dnode_ready_1
+return -1
 endi
-if $data[1][4] != ready then
+if $data(1)[4] != ready then
-goto check_dnode_ready_1
+goto step1
 endi
-if $data[2][4] != ready then
+if $data(2)[4] != ready then
-goto check_dnode_ready_1
+goto step1
 endi
-if $data[3][4] != ready then
+if $data(3)[4] != ready then
-goto check_dnode_ready_1
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi
 
 $replica = 3
@@ -11,55 +11,38 @@ system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
 system sh/exec.sh -n dnode4 -s start
 
-$loop_cnt = 0
-check_dnode_ready:
-$loop_cnt = $loop_cnt + 1
-sleep 200
-if $loop_cnt == 10 then
-print ====> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
-if $data[0][0] != 1 then
-return -1
-endi
-if $data[0][4] != ready then
-goto check_dnode_ready
-endi
 
 sql connect
 sql create dnode $hostname port 7200
 sql create dnode $hostname port 7300
 sql create dnode $hostname port 7400
 
-$loop_cnt = 0
+$x = 0
-check_dnode_ready_1:
+step1:
-$loop_cnt = $loop_cnt + 1
+$x = $x + 1
-sleep 200
+sleep 1000
-if $loop_cnt == 10 then
+if $x == 10 then
-print ====> dnodes not ready!
+print ====> dnode not ready!
 return -1
 endi
 sql show dnodes
-print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $data00 $data01 $data02 $data03 $data04 $data05
-print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $data10 $data11 $data12 $data13 $data14 $data15
-print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $data20 $data21 $data22 $data23 $data24 $data25
-print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+print ===> $data30 $data31 $data32 $data33 $data34 $data35
-if $data[0][4] != ready then
+if $rows != 4 then
-goto check_dnode_ready_1
+return -1
 endi
-if $data[1][4] != ready then
+if $data(1)[4] != ready then
-goto check_dnode_ready_1
+goto step1
 endi
-if $data[2][4] != ready then
+if $data(2)[4] != ready then
-goto check_dnode_ready_1
+goto step1
 endi
-if $data[3][4] != ready then
+if $data(3)[4] != ready then
-goto check_dnode_ready_1
+goto step1
+endi
+if $data(4)[4] != ready then
+goto step1
 endi
 
 $replica = 3
@@ -3,5 +3,6 @@ $x = 1
 begin:
 sql insert into db.tb values(now, $x ) -x begin
 #print ===> insert successed $x
+sleep 100
 $x = $x + 1
 goto begin
@@ -1,28 +1,6 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
-system sh/deploy.sh -n dnode2 -i 2
-system sh/deploy.sh -n dnode3 -i 3
-system sh/deploy.sh -n dnode4 -i 4
 
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode2 -c walLevel -v 1
-system sh/cfg.sh -n dnode3 -c walLevel -v 1
-system sh/cfg.sh -n dnode4 -c walLevel -v 1
 
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode3 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode4 -c numOfMnodes -v 1
 
-system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode2 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode3 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode4 -c mnodeEqualVnodeNum -v 4
 
-print ========= start dnodes
 system sh/exec.sh -n dnode1 -s start
 
-sleep 2000
 sql connect
 
 print ======== step1
@@ -37,7 +15,6 @@ endi
 
 print ======== step2
 sql drop table d1.t1
-sleep 1000
 sql insert into d1.t1 values(now, 2) -x step2
 return -1
 step2:
@@ -58,7 +35,6 @@ $x = 0
 while $x < 20
 
 sql drop table d1.t1
-sleep 1000
 sql insert into d1.t1 values(now, -1) -x step4
 return -1
 step4:
 
@@ -1,28 +1,6 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
-system sh/deploy.sh -n dnode2 -i 2
-system sh/deploy.sh -n dnode3 -i 3
-system sh/deploy.sh -n dnode4 -i 4
 
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode2 -c walLevel -v 1
-system sh/cfg.sh -n dnode3 -c walLevel -v 1
-system sh/cfg.sh -n dnode4 -c walLevel -v 1
 
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode3 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode4 -c numOfMnodes -v 1
 
-system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode2 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode3 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode4 -c mnodeEqualVnodeNum -v 4
 
-print ========= start dnodes
 system sh/exec.sh -n dnode1 -s start
 
-sleep 2000
 sql connect
 
 print ======== step1
@@ -37,7 +15,6 @@ endi
 
 print ======== step2
 sql drop table db1.t1
-sleep 1000
 sql_error insert into db1.t1 values(now, 2)
 
 print ========= step3
@@ -59,7 +36,6 @@ while $x < 20
 $tb = tb . $x
 sql drop table $tb
 
-sleep 1000
 sql_error insert into $tb values(now, -1)
 
 step4:
 
@@ -1,23 +1,7 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
-system sh/deploy.sh -n dnode2 -i 2
+system sh/exec.sh -n dnode1 -s start
-system sh/deploy.sh -n dnode3 -i 3
+sql connect
-system sh/deploy.sh -n dnode4 -i 4
 
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode2 -c walLevel -v 1
-system sh/cfg.sh -n dnode3 -c walLevel -v 1
-system sh/cfg.sh -n dnode4 -c walLevel -v 1
 
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode3 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode4 -c numOfMnodes -v 1
 
-system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode2 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode3 -c mnodeEqualVnodeNum -v 4
-system sh/cfg.sh -n dnode4 -c mnodeEqualVnodeNum -v 4
 
 print ========= start dnodes
 system sh/exec.sh -n dnode1 -s start
@@ -28,12 +12,12 @@ sql create table db.tb (ts timestamp, i int)
 sql insert into db.tb values(now, 1)
 
 print ======== start back
-run_back general/table/back_insert.sim
+run_back tsim/table/back_insert.sim
 sleep 1000
 
 print ======== step1
 $x = 1
-while $x < 15
+while $x < 10
 
 print drop table times $x
 sql drop table db.tb -x step1
 
@@ -24,6 +24,7 @@ class TDTestCase:
         tdLog.debug("start to execute %s" % __file__)
         tdSql.init(conn.cursor())
         self.dbname = 'db'
+        self.delaytime = 10
     def get_database_info(self):
         tdSql.query('select database()')
         tdSql.checkData(0,0,None)
@@ -43,14 +44,15 @@ class TDTestCase:
     def get_server_status(self):
         tdSql.query('select server_status()')
         tdSql.checkData(0,0,1)
-        tdDnodes.stoptaosd(1)
+        #!for bug
+        # tdDnodes.stoptaosd(1)
+        # sleep(self.delaytime)
+        # tdSql.error('select server_status()')
 
-        tdSql.query('select server_status()')
-        print(tdSql.queryResult)
     def run(self):
         self.get_database_info()
         self.check_version()
-        # self.get_server_status()
+        self.get_server_status()
     def stop(self):
         tdSql.close()
         tdLog.success("%s successfully executed" % __file__)
 
@@ -85,7 +85,7 @@ class TDTestCase:
         # tdSql.checkRows(10)
 
         # test partition interval Pseudo time-column
-        tdSql.query("SELECT count(ms1)/144 FROM (SELECT _wstartts as ts1,model, fleet,avg(status) AS ms1 FROM diagnostics WHERE ts >= '2016-01-01T00:00:00Z' AND ts < '2016-01-05T00:00:01Z' partition by model, fleet interval(10m)) WHERE ts1 >= '2016-01-01T00:00:00Z' AND ts1 < '2016-01-05T00:00:01Z' AND ms1<1;")
+        tdSql.query("SELECT count(ms1)/144 FROM (SELECT _wstart as ts1,model, fleet,avg(status) AS ms1 FROM diagnostics WHERE ts >= '2016-01-01T00:00:00Z' AND ts < '2016-01-05T00:00:01Z' partition by model, fleet interval(10m)) WHERE ts1 >= '2016-01-01T00:00:00Z' AND ts1 < '2016-01-05T00:00:01Z' AND ms1<1;")
 
 
         # test
 
@@ -114,7 +114,7 @@ ELSE ()
 COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-s -w -X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}"
 COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -o taosadapter-debug -ldflags "-X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}"
 INSTALL_COMMAND
-COMMAND curl -sL https://github.com/upx/upx/releases/download/v3.96/upx-3.96-${PLATFORM_ARCH_STR}_linux.tar.xz -o upx.tar.xz && tar -xvJf upx.tar.xz -C ${CMAKE_BINARY_DIR} --strip-components 1 > /dev/null && ${CMAKE_BINARY_DIR}/upx taosadapter || :
+COMMAND wget -c https://github.com/upx/upx/releases/download/v3.96/upx-3.96-${PLATFORM_ARCH_STR}_linux.tar.xz -O ${CMAKE_CURRENT_SOURCE_DIR}/upx.tar.xz && tar -xvJf ${CMAKE_CURRENT_SOURCE_DIR}/upx.tar.xz -C ${CMAKE_CURRENT_SOURCE_DIR} --strip-components 1 > /dev/null && ${CMAKE_CURRENT_SOURCE_DIR}/upx taosadapter || :
 COMMAND cmake -E copy taosadapter ${CMAKE_BINARY_DIR}/build/bin
 COMMAND cmake -E make_directory ${CMAKE_BINARY_DIR}/test/cfg/
 COMMAND cmake -E copy ./example/config/taosadapter.toml ${CMAKE_BINARY_DIR}/test/cfg/
Reference in New Issue