merge 3.0
This commit is contained in:
commit
1f47241fc9
|
@ -338,6 +338,14 @@ pipeline {
|
|||
changeRequest()
|
||||
}
|
||||
steps {
|
||||
script {
|
||||
def linux_node_ip = sh (
|
||||
script: 'ip addr|grep 192|grep -v virbr|awk "{print \\\$2}"|sed "s/\\/.*//"',
|
||||
returnStdout: true
|
||||
).trim()
|
||||
echo "${linux_node_ip}"
|
||||
echo "${WKDIR}/restore.sh -p ${BRANCH_NAME} -n ${BUILD_ID} -c {container name}"
|
||||
}
|
||||
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
|
||||
timeout(time: 120, unit: 'MINUTES'){
|
||||
pre_test()
|
||||
|
|
|
@ -22,7 +22,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
|
|||
|
||||
**Virtual node (vnode)**: To better support data sharding, load balancing and prevent data from overheating or skewing, data nodes are virtualized into multiple virtual nodes (vnode, V2, V3, V4, etc. in the figure). Each vnode is a relatively independent work unit, which is the basic unit of time-series data storage and has independent running threads, memory space and persistent storage path. A vnode contains a certain number of tables (data collection points). When a new table is created, the system checks whether a new vnode needs to be created. The number of vnodes that can be created on a data node depends on the capacity of the hardware of the physical node where the data node is located. A vnode belongs to only one DB, but a DB can have multiple vnodes. In addition to the stored time-series data, a vnode also stores the schema and tag values of the included tables. A virtual node is uniquely identified in the system by the EP of the data node and the VGroup ID to which it belongs and is created and managed by the management node.
|
||||
|
||||
**Management node (mnode)**: A virtual logical unit responsible for monitoring and maintaining the running status of all data nodes and load balancing among nodes (M in the figure). At the same time, the management node is also responsible for the storage and management of metadata (including users, databases, tables, static tags, etc.), so it is also called Meta Node. Multiple (up to 5) mnodes can be configured in a TDengine cluster, and they are automatically constructed into a virtual management node group (M0, M1, M2 in the figure). The leader/follower mechanism is adopted for the mnode group and the data synchronization is carried out in a strongly consistent way. Any data update operation can only be executed on the leader. The creation of mnode cluster is completed automatically by the system without manual intervention. There is at most one mnode on each dnode, which is uniquely identified by the EP of the data node to which it belongs. Each dnode automatically obtains the EP of the dnode where all mnodes in the whole cluster are located, through internal messaging interaction.
|
||||
**Management node (mnode)**: A virtual logical unit responsible for monitoring and maintaining the running status of all data nodes and load balancing among nodes (M in the figure). At the same time, the management node is also responsible for the storage and management of metadata (including users, databases, tables, static tags, etc.), so it is also called Meta Node. Multiple (up to 3) mnodes can be configured in a TDengine cluster, and they are automatically constructed into a virtual management node group (M0, M1, M2 in the figure). The leader/follower mechanism is adopted for the mnode group and the data synchronization is carried out in a strongly consistent way. Any data update operation can only be executed on the leader. The creation of mnode cluster is completed automatically by the system without manual intervention. There is at most one mnode on each dnode, which is uniquely identified by the EP of the data node to which it belongs. Each dnode automatically obtains the EP of the dnode where all mnodes in the whole cluster are located, through internal messaging interaction.
|
||||
|
||||
**Virtual node group (VGroup)**: Vnodes on different data nodes can form a virtual node group to ensure the high availability of the system. The virtual node group is managed in a leader/follower mechanism. Write operations can only be performed on the leader vnode, and then replicated to follower vnodes, thus ensuring that one single replica of data is copied on multiple physical nodes. The number of virtual nodes in a vgroup equals the number of data replicas. If the number of replicas of a DB is N, the system must have at least N data nodes. The number of replicas can be specified by the parameter `“replica”` when creating a DB, and the default is 1. Using the multi-replication feature of TDengine, the same high data reliability can be achieved without the need for expensive storage devices such as disk arrays. Virtual node groups are created and managed by the management node, and the management node assigns a system unique ID, aka VGroup ID. If two virtual nodes have the same vnode group ID, it means that they belong to the same group and the data is backed up to each other. The number of virtual nodes in a virtual node group can be dynamically changed, allowing only one, that is, no data replication. VGroup ID is never changed. Even if a virtual node group is deleted, its ID will not be reused.
|
||||
|
||||
|
|
|
@ -23,7 +23,7 @@ TDengine 分布式架构的逻辑结构图如下:
|
|||
|
||||
**虚拟节点(vnode):** 为更好的支持数据分片、负载均衡,防止数据过热或倾斜,数据节点被虚拟化成多个虚拟节点(vnode,图中 V2,V3,V4 等)。每个 vnode 都是一个相对独立的工作单元,是时序数据存储的基本单元,具有独立的运行线程、内存空间与持久化存储的路径。一个 vnode 包含一定数量的表(数据采集点)。当创建一张新表时,系统会检查是否需要创建新的 vnode。一个数据节点上能创建的 vnode 的数量取决于该数据节点所在物理节点的硬件资源。一个 vnode 只属于一个 DB,但一个 DB 可以有多个 vnode。一个 vnode 除存储的时序数据外,也保存有所包含的表的 schema、标签值等。一个虚拟节点由所属的数据节点的 EP,以及所属的 VGroup ID 在系统内唯一标识,由管理节点创建并管理。
|
||||
|
||||
**管理节点(mnode):** 一个虚拟的逻辑单元,负责所有数据节点运行状态的监控和维护,以及节点之间的负载均衡(图中 M)。同时,管理节点也负责元数据(包括用户、数据库、表、静态标签等)的存储和管理,因此也称为 Meta Node。TDengine 集群中可配置多个(开源版最多不超过 3 个)mnode,它们自动构建成为一个虚拟管理节点组(图中 M0,M1,M2)。mnode 间采用 master/slave 的机制进行管理,而且采取强一致方式进行数据同步,任何数据更新操作只能在 Master 上进行。mnode 集群的创建由系统自动完成,无需人工干预。每个 dnode 上至多有一个 mnode,由所属的数据节点的 EP 来唯一标识。每个 dnode 通过内部消息交互自动获取整个集群中所有 mnode 所在的 dnode 的 EP。
|
||||
**管理节点(mnode):** 一个虚拟的逻辑单元,负责所有数据节点运行状态的监控和维护,以及节点之间的负载均衡(图中 M)。同时,管理节点也负责元数据(包括用户、数据库、表、静态标签等)的存储和管理,因此也称为 Meta Node。TDengine 集群中可配置多个(最多不超过 3 个)mnode,它们自动构建成为一个虚拟管理节点组(图中 M0,M1,M2)。mnode 间采用 master/slave 的机制进行管理,而且采取强一致方式进行数据同步,任何数据更新操作只能在 Master 上进行。mnode 集群的创建由系统自动完成,无需人工干预。每个 dnode 上至多有一个 mnode,由所属的数据节点的 EP 来唯一标识。每个 dnode 通过内部消息交互自动获取整个集群中所有 mnode 所在的 dnode 的 EP。
|
||||
|
||||
**虚拟节点组(VGroup):** 不同数据节点上的 vnode 可以组成一个虚拟节点组(vgroup)来保证系统的高可靠。虚拟节点组内采取 master/slave 的方式进行管理。写操作只能在 master vnode 上进行,系统采用异步复制的方式将数据同步到 slave vnode,这样确保了一份数据在多个物理节点上有拷贝。一个 vgroup 里虚拟节点个数就是数据的副本数。如果一个 DB 的副本数为 N,系统必须有至少 N 数据节点。副本数在创建 DB 时通过参数 replica 可以指定,缺省为 1。使用 TDengine 的多副本特性,可以不再需要昂贵的磁盘阵列等存储设备,就可以获得同样的数据高可靠性。虚拟节点组由管理节点创建、管理,并且由管理节点分配一个系统唯一的 ID,VGroup ID。如果两个虚拟节点的 VGroup ID 相同,说明他们属于同一个组,数据互为备份。虚拟节点组里虚拟节点的个数是可以动态改变的,容许只有一个,也就是没有数据复制。VGroup ID 是永远不变的,即使一个虚拟节点组被删除,它的 ID 也不会被收回重复利用。
|
||||
|
||||
|
|
|
@ -128,6 +128,7 @@ typedef struct setConfRet {
|
|||
|
||||
DLL_EXPORT void taos_cleanup(void);
|
||||
DLL_EXPORT int taos_options(TSDB_OPTION option, const void *arg, ...);
|
||||
DLL_EXPORT setConfRet taos_set_config(const char *config);
|
||||
DLL_EXPORT int taos_init(void);
|
||||
DLL_EXPORT TAOS *taos_connect(const char *ip, const char *user, const char *pass, const char *db, uint16_t port);
|
||||
DLL_EXPORT TAOS *taos_connect_auth(const char *ip, const char *user, const char *auth, const char *db, uint16_t port);
|
||||
|
|
|
@ -78,7 +78,6 @@ int32_t tEncodeTag(SEncoder *pEncoder, const STag *pTag);
|
|||
int32_t tDecodeTag(SDecoder *pDecoder, STag **ppTag);
|
||||
int32_t tTagToValArray(const STag *pTag, SArray **ppArray);
|
||||
void debugPrintSTag(STag *pTag, const char *tag, int32_t ln); // TODO: remove
|
||||
void debugCheckTags(STag *pTag); // TODO: remove
|
||||
|
||||
// STRUCT =================
|
||||
struct STColumn {
|
||||
|
|
|
@ -1886,7 +1886,7 @@ typedef struct SVCreateStbReq {
|
|||
int8_t rollup;
|
||||
SSchemaWrapper schemaRow;
|
||||
SSchemaWrapper schemaTag;
|
||||
SRSmaParam pRSmaParam;
|
||||
SRSmaParam rsmaParam;
|
||||
} SVCreateStbReq;
|
||||
|
||||
int tEncodeSVCreateStbReq(SEncoder* pCoder, const SVCreateStbReq* pReq);
|
||||
|
|
|
@ -0,0 +1,266 @@
|
|||
/*
|
||||
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
|
||||
*
|
||||
* This program is free software: you can use, redistribute, and/or modify
|
||||
* it under the terms of the GNU Affero General Public License, version 3
|
||||
* or later ("AGPL"), as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE.
|
||||
*
|
||||
* You should have received a copy of the GNU Affero General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#ifndef TDENGINE_TAOSUDF_H
|
||||
#define TDENGINE_TAOSUDF_H
|
||||
|
||||
#include <stdint.h>
|
||||
#include <stdbool.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
|
||||
#include <taos.h>
|
||||
#include <taoserror.h>
|
||||
|
||||
#ifdef __cplusplus
|
||||
extern "C" {
|
||||
#endif
|
||||
|
||||
#if defined(__GNUC__)
|
||||
#define FORCE_INLINE inline __attribute__((always_inline))
|
||||
#else
|
||||
#define FORCE_INLINE
|
||||
#endif
|
||||
typedef struct SUdfColumnMeta {
|
||||
int16_t type;
|
||||
int32_t bytes;
|
||||
uint8_t precision;
|
||||
uint8_t scale;
|
||||
} SUdfColumnMeta;
|
||||
|
||||
typedef struct SUdfColumnData {
|
||||
int32_t numOfRows;
|
||||
int32_t rowsAlloc;
|
||||
union {
|
||||
struct {
|
||||
int32_t nullBitmapLen;
|
||||
char *nullBitmap;
|
||||
int32_t dataLen;
|
||||
char *data;
|
||||
} fixLenCol;
|
||||
|
||||
struct {
|
||||
int32_t varOffsetsLen;
|
||||
int32_t *varOffsets;
|
||||
int32_t payloadLen;
|
||||
char *payload;
|
||||
int32_t payloadAllocLen;
|
||||
} varLenCol;
|
||||
};
|
||||
} SUdfColumnData;
|
||||
|
||||
|
||||
typedef struct SUdfColumn {
|
||||
SUdfColumnMeta colMeta;
|
||||
bool hasNull;
|
||||
SUdfColumnData colData;
|
||||
} SUdfColumn;
|
||||
|
||||
typedef struct SUdfDataBlock {
|
||||
int32_t numOfRows;
|
||||
int32_t numOfCols;
|
||||
SUdfColumn **udfCols;
|
||||
} SUdfDataBlock;
|
||||
|
||||
typedef struct SUdfInterBuf {
|
||||
int32_t bufLen;
|
||||
char* buf;
|
||||
int8_t numOfResult; //zero or one
|
||||
} SUdfInterBuf;
|
||||
typedef void *UdfcFuncHandle;
|
||||
|
||||
// dynamic lib init and destroy
|
||||
typedef int32_t (*TUdfInitFunc)();
|
||||
typedef int32_t (*TUdfDestroyFunc)();
|
||||
|
||||
#define UDF_MEMORY_EXP_GROWTH 1.5
|
||||
#define NBIT (3u)
|
||||
#define BitPos(_n) ((_n) & ((1 << NBIT) - 1))
|
||||
#define BMCharPos(bm_, r_) ((bm_)[(r_) >> NBIT])
|
||||
#define BitmapLen(_n) (((_n) + ((1 << NBIT) - 1)) >> NBIT)
|
||||
|
||||
#define udfColDataIsNull_var(pColumn, row) ((pColumn->colData.varLenCol.varOffsets)[row] == -1)
|
||||
#define udfColDataIsNull_f(pColumn, row) ((BMCharPos(pColumn->colData.fixLenCol.nullBitmap, row) & (1u << (7u - BitPos(row)))) == (1u << (7u - BitPos(row))))
|
||||
#define udfColDataSetNull_f(pColumn, row) \
|
||||
do { \
|
||||
BMCharPos(pColumn->colData.fixLenCol.nullBitmap, row) |= (1u << (7u - BitPos(row))); \
|
||||
} while (0)
|
||||
|
||||
#define udfColDataSetNotNull_f(pColumn, r_) \
|
||||
do { \
|
||||
BMCharPos(pColumn->colData.fixLenCol.nullBitmap, r_) &= ~(1u << (7u - BitPos(r_))); \
|
||||
} while (0)
|
||||
#define udfColDataSetNull_var(pColumn, row) ((pColumn->colData.varLenCol.varOffsets)[row] = -1)
|
||||
|
||||
typedef uint16_t VarDataLenT; // maxVarDataLen: 32767
|
||||
#define VARSTR_HEADER_SIZE sizeof(VarDataLenT)
|
||||
#define varDataLen(v) ((VarDataLenT *)(v))[0]
|
||||
#define varDataVal(v) ((char *)(v) + VARSTR_HEADER_SIZE)
|
||||
#define varDataTLen(v) (sizeof(VarDataLenT) + varDataLen(v))
|
||||
#define varDataCopy(dst, v) memcpy((dst), (void *)(v), varDataTLen(v))
|
||||
#define varDataLenByData(v) (*(VarDataLenT *)(((char *)(v)) - VARSTR_HEADER_SIZE))
|
||||
#define varDataSetLen(v, _len) (((VarDataLenT *)(v))[0] = (VarDataLenT)(_len))
|
||||
#define IS_VAR_DATA_TYPE(t) \
|
||||
(((t) == TSDB_DATA_TYPE_VARCHAR) || ((t) == TSDB_DATA_TYPE_NCHAR) || ((t) == TSDB_DATA_TYPE_JSON))
|
||||
#define IS_STR_DATA_TYPE(t) (((t) == TSDB_DATA_TYPE_VARCHAR) || ((t) == TSDB_DATA_TYPE_NCHAR))
|
||||
|
||||
|
||||
static FORCE_INLINE char* udfColDataGetData(const SUdfColumn* pColumn, int32_t row) {
|
||||
if (IS_VAR_DATA_TYPE(pColumn->colMeta.type)) {
|
||||
return pColumn->colData.varLenCol.payload + pColumn->colData.varLenCol.varOffsets[row];
|
||||
} else {
|
||||
return pColumn->colData.fixLenCol.data + pColumn->colMeta.bytes * row;
|
||||
}
|
||||
}
|
||||
|
||||
static FORCE_INLINE bool udfColDataIsNull(const SUdfColumn* pColumn, int32_t row) {
|
||||
if (IS_VAR_DATA_TYPE(pColumn->colMeta.type)) {
|
||||
if (pColumn->colMeta.type == TSDB_DATA_TYPE_JSON) {
|
||||
if (udfColDataIsNull_var(pColumn, row)) {
|
||||
return true;
|
||||
}
|
||||
char* data = udfColDataGetData(pColumn, row);
|
||||
return (*data == TSDB_DATA_TYPE_NULL);
|
||||
} else {
|
||||
return udfColDataIsNull_var(pColumn, row);
|
||||
}
|
||||
} else {
|
||||
return udfColDataIsNull_f(pColumn, row);
|
||||
}
|
||||
}
|
||||
|
||||
static FORCE_INLINE int32_t udfColEnsureCapacity(SUdfColumn* pColumn, int32_t newCapacity) {
|
||||
SUdfColumnMeta *meta = &pColumn->colMeta;
|
||||
SUdfColumnData *data = &pColumn->colData;
|
||||
|
||||
if (newCapacity== 0 || newCapacity <= data->rowsAlloc) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int allocCapacity = (data->rowsAlloc< 8) ? 8 : data->rowsAlloc;
|
||||
while (allocCapacity < newCapacity) {
|
||||
allocCapacity *= UDF_MEMORY_EXP_GROWTH;
|
||||
}
|
||||
|
||||
if (IS_VAR_DATA_TYPE(meta->type)) {
|
||||
char* tmp = (char*)realloc(data->varLenCol.varOffsets, sizeof(int32_t) * allocCapacity);
|
||||
if (tmp == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
data->varLenCol.varOffsets = (int32_t*)tmp;
|
||||
data->varLenCol.varOffsetsLen = sizeof(int32_t) * allocCapacity;
|
||||
// for payload, add data in udfColDataAppend
|
||||
} else {
|
||||
char* tmp = (char*)realloc(data->fixLenCol.nullBitmap, BitmapLen(allocCapacity));
|
||||
if (tmp == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
data->fixLenCol.nullBitmap = tmp;
|
||||
data->fixLenCol.nullBitmapLen = BitmapLen(allocCapacity);
|
||||
if (meta->type == TSDB_DATA_TYPE_NULL) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
tmp = (char*)realloc(data->fixLenCol.data, allocCapacity* meta->bytes);
|
||||
if (tmp == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
data->fixLenCol.data = tmp;
|
||||
data->fixLenCol.dataLen = allocCapacity* meta->bytes;
|
||||
}
|
||||
|
||||
data->rowsAlloc = allocCapacity;
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static FORCE_INLINE void udfColDataSetNull(SUdfColumn* pColumn, int32_t row) {
|
||||
udfColEnsureCapacity(pColumn, row+1);
|
||||
if (IS_VAR_DATA_TYPE(pColumn->colMeta.type)) {
|
||||
udfColDataSetNull_var(pColumn, row);
|
||||
} else {
|
||||
udfColDataSetNull_f(pColumn, row);
|
||||
}
|
||||
pColumn->hasNull = true;
|
||||
}
|
||||
|
||||
static FORCE_INLINE int32_t udfColDataSet(SUdfColumn* pColumn, uint32_t currentRow, const char* pData, bool isNull) {
|
||||
SUdfColumnMeta *meta = &pColumn->colMeta;
|
||||
SUdfColumnData *data = &pColumn->colData;
|
||||
udfColEnsureCapacity(pColumn, currentRow+1);
|
||||
bool isVarCol = IS_VAR_DATA_TYPE(meta->type);
|
||||
if (isNull) {
|
||||
udfColDataSetNull(pColumn, currentRow);
|
||||
} else {
|
||||
if (!isVarCol) {
|
||||
udfColDataSetNotNull_f(pColumn, currentRow);
|
||||
memcpy(data->fixLenCol.data + meta->bytes * currentRow, pData, meta->bytes);
|
||||
} else {
|
||||
int32_t dataLen = varDataTLen(pData);
|
||||
if (meta->type == TSDB_DATA_TYPE_JSON) {
|
||||
if (*pData == TSDB_DATA_TYPE_NULL) {
|
||||
dataLen = 0;
|
||||
} else if (*pData == TSDB_DATA_TYPE_NCHAR) {
|
||||
dataLen = varDataTLen(pData + sizeof(char));
|
||||
} else if (*pData == TSDB_DATA_TYPE_BIGINT || *pData == TSDB_DATA_TYPE_DOUBLE) {
|
||||
dataLen = sizeof(int64_t);
|
||||
} else if (*pData == TSDB_DATA_TYPE_BOOL) {
|
||||
dataLen = sizeof(char);
|
||||
}
|
||||
dataLen += sizeof(char);
|
||||
}
|
||||
|
||||
if (data->varLenCol.payloadAllocLen < data->varLenCol.payloadLen + dataLen) {
|
||||
uint32_t newSize = data->varLenCol.payloadAllocLen;
|
||||
if (newSize <= 1) {
|
||||
newSize = 8;
|
||||
}
|
||||
|
||||
while (newSize < data->varLenCol.payloadLen + dataLen) {
|
||||
newSize = newSize * UDF_MEMORY_EXP_GROWTH;
|
||||
}
|
||||
|
||||
char *buf = (char*)realloc(data->varLenCol.payload, newSize);
|
||||
if (buf == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
data->varLenCol.payload = buf;
|
||||
data->varLenCol.payloadAllocLen = newSize;
|
||||
}
|
||||
|
||||
uint32_t len = data->varLenCol.payloadLen;
|
||||
data->varLenCol.varOffsets[currentRow] = len;
|
||||
|
||||
memcpy(data->varLenCol.payload + len, pData, dataLen);
|
||||
data->varLenCol.payloadLen += dataLen;
|
||||
}
|
||||
}
|
||||
data->numOfRows = (currentRow + 1 > data->numOfRows) ? (currentRow+1) : data->numOfRows;
|
||||
return 0;
|
||||
}
|
||||
|
||||
typedef int32_t (*TUdfScalarProcFunc)(SUdfDataBlock* block, SUdfColumn *resultCol);
|
||||
|
||||
typedef int32_t (*TUdfAggStartFunc)(SUdfInterBuf *buf);
|
||||
typedef int32_t (*TUdfAggProcessFunc)(SUdfDataBlock* block, SUdfInterBuf *interBuf, SUdfInterBuf *newInterBuf);
|
||||
typedef int32_t (*TUdfAggFinishFunc)(SUdfInterBuf* buf, SUdfInterBuf *resultData);
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
#endif
|
||||
|
||||
#endif // TDENGINE_TAOSUDF_H
|
|
@ -16,6 +16,13 @@
|
|||
#ifndef TDENGINE_TUDF_H
|
||||
#define TDENGINE_TUDF_H
|
||||
|
||||
#undef malloc
|
||||
#define malloc malloc
|
||||
#undef free
|
||||
#define free free
|
||||
#undef realloc
|
||||
#define alloc alloc
|
||||
#include <taosudf.h>
|
||||
|
||||
#include <stdint.h>
|
||||
#include <stdbool.h>
|
||||
|
@ -36,56 +43,6 @@ extern "C" {
|
|||
#endif
|
||||
#define UDF_DNODE_ID_ENV_NAME "DNODE_ID"
|
||||
|
||||
//======================================================================================
|
||||
//begin API to taosd and qworker
|
||||
|
||||
typedef struct SUdfColumnMeta {
|
||||
int16_t type;
|
||||
int32_t bytes;
|
||||
uint8_t precision;
|
||||
uint8_t scale;
|
||||
} SUdfColumnMeta;
|
||||
|
||||
typedef struct SUdfColumnData {
|
||||
int32_t numOfRows;
|
||||
int32_t rowsAlloc;
|
||||
union {
|
||||
struct {
|
||||
int32_t nullBitmapLen;
|
||||
char *nullBitmap;
|
||||
int32_t dataLen;
|
||||
char *data;
|
||||
} fixLenCol;
|
||||
|
||||
struct {
|
||||
int32_t varOffsetsLen;
|
||||
int32_t *varOffsets;
|
||||
int32_t payloadLen;
|
||||
char *payload;
|
||||
int32_t payloadAllocLen;
|
||||
} varLenCol;
|
||||
};
|
||||
} SUdfColumnData;
|
||||
|
||||
|
||||
typedef struct SUdfColumn {
|
||||
SUdfColumnMeta colMeta;
|
||||
bool hasNull;
|
||||
SUdfColumnData colData;
|
||||
} SUdfColumn;
|
||||
|
||||
typedef struct SUdfDataBlock {
|
||||
int32_t numOfRows;
|
||||
int32_t numOfCols;
|
||||
SUdfColumn **udfCols;
|
||||
} SUdfDataBlock;
|
||||
|
||||
typedef struct SUdfInterBuf {
|
||||
int32_t bufLen;
|
||||
char* buf;
|
||||
int8_t numOfResult; //zero or one
|
||||
} SUdfInterBuf;
|
||||
typedef void *UdfcFuncHandle;
|
||||
|
||||
//low level APIs
|
||||
/**
|
||||
|
@ -127,177 +84,6 @@ int32_t udfAggFinalize(struct SqlFunctionCtx *pCtx, SSDataBlock* pBlock);
|
|||
int32_t callUdfScalarFunc(char *udfName, SScalarParam *input, int32_t numOfCols, SScalarParam *output);
|
||||
|
||||
int32_t cleanUpUdfs();
|
||||
// end API to taosd and qworker
|
||||
//=============================================================================================================================
|
||||
// begin API to UDF writer.
|
||||
|
||||
// dynamic lib init and destroy
|
||||
typedef int32_t (*TUdfInitFunc)();
|
||||
typedef int32_t (*TUdfDestroyFunc)();
|
||||
|
||||
//TODO: add API to check function arguments type, number etc.
|
||||
|
||||
#define UDF_MEMORY_EXP_GROWTH 1.5
|
||||
|
||||
#define udfColDataIsNull_var(pColumn, row) ((pColumn->colData.varLenCol.varOffsets)[row] == -1)
|
||||
#define udfColDataIsNull_f(pColumn, row) ((BMCharPos(pColumn->colData.fixLenCol.nullBitmap, row) & (1u << (7u - BitPos(row)))) == (1u << (7u - BitPos(row))))
|
||||
#define udfColDataSetNull_f(pColumn, row) \
|
||||
do { \
|
||||
BMCharPos(pColumn->colData.fixLenCol.nullBitmap, row) |= (1u << (7u - BitPos(row))); \
|
||||
} while (0)
|
||||
|
||||
#define udfColDataSetNotNull_f(pColumn, r_) \
|
||||
do { \
|
||||
BMCharPos(pColumn->colData.fixLenCol.nullBitmap, r_) &= ~(1u << (7u - BitPos(r_))); \
|
||||
} while (0)
|
||||
#define udfColDataSetNull_var(pColumn, row) ((pColumn->colData.varLenCol.varOffsets)[row] = -1)
|
||||
|
||||
|
||||
static FORCE_INLINE char* udfColDataGetData(const SUdfColumn* pColumn, int32_t row) {
|
||||
if (IS_VAR_DATA_TYPE(pColumn->colMeta.type)) {
|
||||
return pColumn->colData.varLenCol.payload + pColumn->colData.varLenCol.varOffsets[row];
|
||||
} else {
|
||||
return pColumn->colData.fixLenCol.data + pColumn->colMeta.bytes * row;
|
||||
}
|
||||
}
|
||||
|
||||
static FORCE_INLINE bool udfColDataIsNull(const SUdfColumn* pColumn, int32_t row) {
|
||||
if (IS_VAR_DATA_TYPE(pColumn->colMeta.type)) {
|
||||
if (pColumn->colMeta.type == TSDB_DATA_TYPE_JSON) {
|
||||
if (udfColDataIsNull_var(pColumn, row)) {
|
||||
return true;
|
||||
}
|
||||
char* data = udfColDataGetData(pColumn, row);
|
||||
return (*data == TSDB_DATA_TYPE_NULL);
|
||||
} else {
|
||||
return udfColDataIsNull_var(pColumn, row);
|
||||
}
|
||||
} else {
|
||||
return udfColDataIsNull_f(pColumn, row);
|
||||
}
|
||||
}
|
||||
|
||||
static FORCE_INLINE int32_t udfColEnsureCapacity(SUdfColumn* pColumn, int32_t newCapacity) {
|
||||
SUdfColumnMeta *meta = &pColumn->colMeta;
|
||||
SUdfColumnData *data = &pColumn->colData;
|
||||
|
||||
if (newCapacity== 0 || newCapacity <= data->rowsAlloc) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int allocCapacity = TMAX(data->rowsAlloc, 8);
|
||||
while (allocCapacity < newCapacity) {
|
||||
allocCapacity *= UDF_MEMORY_EXP_GROWTH;
|
||||
}
|
||||
|
||||
if (IS_VAR_DATA_TYPE(meta->type)) {
|
||||
char* tmp = taosMemoryRealloc(data->varLenCol.varOffsets, sizeof(int32_t) * allocCapacity);
|
||||
if (tmp == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
data->varLenCol.varOffsets = (int32_t*)tmp;
|
||||
data->varLenCol.varOffsetsLen = sizeof(int32_t) * allocCapacity;
|
||||
// for payload, add data in udfColDataAppend
|
||||
} else {
|
||||
char* tmp = taosMemoryRealloc(data->fixLenCol.nullBitmap, BitmapLen(allocCapacity));
|
||||
if (tmp == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
data->fixLenCol.nullBitmap = tmp;
|
||||
data->fixLenCol.nullBitmapLen = BitmapLen(allocCapacity);
|
||||
if (meta->type == TSDB_DATA_TYPE_NULL) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
tmp = taosMemoryRealloc(data->fixLenCol.data, allocCapacity* meta->bytes);
|
||||
if (tmp == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
data->fixLenCol.data = tmp;
|
||||
data->fixLenCol.dataLen = allocCapacity* meta->bytes;
|
||||
}
|
||||
|
||||
data->rowsAlloc = allocCapacity;
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static FORCE_INLINE void udfColDataSetNull(SUdfColumn* pColumn, int32_t row) {
|
||||
udfColEnsureCapacity(pColumn, row+1);
|
||||
if (IS_VAR_DATA_TYPE(pColumn->colMeta.type)) {
|
||||
udfColDataSetNull_var(pColumn, row);
|
||||
} else {
|
||||
udfColDataSetNull_f(pColumn, row);
|
||||
}
|
||||
pColumn->hasNull = true;
|
||||
}
|
||||
|
||||
static FORCE_INLINE int32_t udfColDataSet(SUdfColumn* pColumn, uint32_t currentRow, const char* pData, bool isNull) {
|
||||
SUdfColumnMeta *meta = &pColumn->colMeta;
|
||||
SUdfColumnData *data = &pColumn->colData;
|
||||
udfColEnsureCapacity(pColumn, currentRow+1);
|
||||
bool isVarCol = IS_VAR_DATA_TYPE(meta->type);
|
||||
if (isNull) {
|
||||
udfColDataSetNull(pColumn, currentRow);
|
||||
} else {
|
||||
if (!isVarCol) {
|
||||
colDataSetNotNull_f(data->fixLenCol.nullBitmap, currentRow);
|
||||
memcpy(data->fixLenCol.data + meta->bytes * currentRow, pData, meta->bytes);
|
||||
} else {
|
||||
int32_t dataLen = varDataTLen(pData);
|
||||
if (meta->type == TSDB_DATA_TYPE_JSON) {
|
||||
if (*pData == TSDB_DATA_TYPE_NULL) {
|
||||
dataLen = 0;
|
||||
} else if (*pData == TSDB_DATA_TYPE_NCHAR) {
|
||||
dataLen = varDataTLen(pData + CHAR_BYTES);
|
||||
} else if (*pData == TSDB_DATA_TYPE_BIGINT || *pData == TSDB_DATA_TYPE_DOUBLE) {
|
||||
dataLen = LONG_BYTES;
|
||||
} else if (*pData == TSDB_DATA_TYPE_BOOL) {
|
||||
dataLen = CHAR_BYTES;
|
||||
}
|
||||
dataLen += CHAR_BYTES;
|
||||
}
|
||||
|
||||
if (data->varLenCol.payloadAllocLen < data->varLenCol.payloadLen + dataLen) {
|
||||
uint32_t newSize = data->varLenCol.payloadAllocLen;
|
||||
if (newSize <= 1) {
|
||||
newSize = 8;
|
||||
}
|
||||
|
||||
while (newSize < data->varLenCol.payloadLen + dataLen) {
|
||||
newSize = newSize * UDF_MEMORY_EXP_GROWTH;
|
||||
}
|
||||
|
||||
char *buf = taosMemoryRealloc(data->varLenCol.payload, newSize);
|
||||
if (buf == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
data->varLenCol.payload = buf;
|
||||
data->varLenCol.payloadAllocLen = newSize;
|
||||
}
|
||||
|
||||
uint32_t len = data->varLenCol.payloadLen;
|
||||
data->varLenCol.varOffsets[currentRow] = len;
|
||||
|
||||
memcpy(data->varLenCol.payload + len, pData, dataLen);
|
||||
data->varLenCol.payloadLen += dataLen;
|
||||
}
|
||||
}
|
||||
data->numOfRows = TMAX(currentRow + 1, data->numOfRows);
|
||||
return 0;
|
||||
}
|
||||
|
||||
typedef int32_t (*TUdfScalarProcFunc)(SUdfDataBlock* block, SUdfColumn *resultCol);
|
||||
|
||||
typedef int32_t (*TUdfAggStartFunc)(SUdfInterBuf *buf);
|
||||
typedef int32_t (*TUdfAggProcessFunc)(SUdfDataBlock* block, SUdfInterBuf *interBuf, SUdfInterBuf *newInterBuf);
|
||||
typedef int32_t (*TUdfAggFinishFunc)(SUdfInterBuf* buf, SUdfInterBuf *resultData);
|
||||
|
||||
|
||||
// end API to UDF writer
|
||||
//=======================================================================================================================
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
|
|
|
@ -524,7 +524,7 @@ void syncReconfigFinishLog2(char* s, const SyncReconfigFinish* pMsg);
|
|||
int32_t syncNodeOnPingCb(SSyncNode* ths, SyncPing* pMsg);
|
||||
int32_t syncNodeOnPingReplyCb(SSyncNode* ths, SyncPingReply* pMsg);
|
||||
int32_t syncNodeOnTimeoutCb(SSyncNode* ths, SyncTimeout* pMsg);
|
||||
int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg);
|
||||
int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncIndex* pRetIndex);
|
||||
int32_t syncNodeOnRequestVoteCb(SSyncNode* ths, SyncRequestVote* pMsg);
|
||||
int32_t syncNodeOnRequestVoteReplyCb(SSyncNode* ths, SyncRequestVoteReply* pMsg);
|
||||
int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg);
|
||||
|
@ -541,7 +541,7 @@ int32_t syncNodeOnSnapshotRspCb(SSyncNode* ths, SyncSnapshotRsp* pMsg);
|
|||
// -----------------------------------------
|
||||
typedef int32_t (*FpOnPingCb)(SSyncNode* ths, SyncPing* pMsg);
|
||||
typedef int32_t (*FpOnPingReplyCb)(SSyncNode* ths, SyncPingReply* pMsg);
|
||||
typedef int32_t (*FpOnClientRequestCb)(SSyncNode* ths, SyncClientRequest* pMsg);
|
||||
typedef int32_t (*FpOnClientRequestCb)(SSyncNode* ths, SyncClientRequest* pMsg, SyncIndex* pRetIndex);
|
||||
typedef int32_t (*FpOnRequestVoteCb)(SSyncNode* ths, SyncRequestVote* pMsg);
|
||||
typedef int32_t (*FpOnRequestVoteReplyCb)(SSyncNode* ths, SyncRequestVoteReply* pMsg);
|
||||
typedef int32_t (*FpOnAppendEntriesCb)(SSyncNode* ths, SyncAppendEntries* pMsg);
|
||||
|
|
|
@ -69,7 +69,7 @@ typedef struct SRpcMsg {
|
|||
} SRpcMsg;
|
||||
|
||||
typedef void (*RpcCfp)(void *parent, SRpcMsg *, SEpSet *rf);
|
||||
typedef bool (*RpcRfp)(int32_t code);
|
||||
typedef bool (*RpcRfp)(int32_t code, tmsg_t msgType);
|
||||
|
||||
typedef struct SRpcInit {
|
||||
char localFqdn[TSDB_FQDN_LEN];
|
||||
|
|
|
@ -29,6 +29,9 @@ extern "C" {
|
|||
#define tcgetattr TCGETATTR_FUNC_TAOS_FORBID
|
||||
#endif
|
||||
|
||||
#define TAOS_CONSOLE_PROMPT_HEADER "taos> "
|
||||
#define TAOS_CONSOLE_PROMPT_CONTINUE " -> "
|
||||
|
||||
typedef struct TdCmd *TdCmdPtr;
|
||||
|
||||
TdCmdPtr taosOpenCmd(const char* cmd);
|
||||
|
|
|
@ -84,9 +84,12 @@ void closeTransporter(STscObj *pTscObj) {
|
|||
rpcClose(pTscObj->pAppInfo->pTransporter);
|
||||
}
|
||||
|
||||
static bool clientRpcRfp(int32_t code) {
|
||||
static bool clientRpcRfp(int32_t code, tmsg_t msgType) {
|
||||
if (code == TSDB_CODE_RPC_REDIRECT || code == TSDB_CODE_RPC_NETWORK_UNAVAIL || code == TSDB_CODE_NODE_NOT_DEPLOYED ||
|
||||
code == TSDB_CODE_SYN_NOT_LEADER || code == TSDB_CODE_APP_NOT_READY) {
|
||||
if (msgType == TDMT_VND_QUERY || msgType == TDMT_VND_FETCH) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
} else {
|
||||
return false;
|
||||
|
|
|
@ -81,6 +81,19 @@ void taos_cleanup(void) {
|
|||
taosCloseLog();
|
||||
}
|
||||
|
||||
static setConfRet taos_set_config_imp(const char *config){
|
||||
setConfRet ret = {SET_CONF_RET_SUCC, {0}};
|
||||
// TODO: need re-implementation
|
||||
return ret;
|
||||
}
|
||||
|
||||
setConfRet taos_set_config(const char *config){
|
||||
// TODO pthread_mutex_lock(&setConfMutex);
|
||||
setConfRet ret = taos_set_config_imp(config);
|
||||
// pthread_mutex_unlock(&setConfMutex);
|
||||
return ret;
|
||||
}
|
||||
|
||||
TAOS *taos_connect(const char *ip, const char *user, const char *pass, const char *db, uint16_t port) {
|
||||
tscDebug("try to connect to %s:%u, user:%s db:%s", ip, port, user, db);
|
||||
if (user == NULL) {
|
||||
|
|
|
@ -862,21 +862,6 @@ void debugPrintSTag(STag *pTag, const char *tag, int32_t ln) {
|
|||
printf("\n");
|
||||
}
|
||||
|
||||
void debugCheckTags(STag *pTag) {
|
||||
switch (pTag->flags) {
|
||||
case 0x0:
|
||||
case 0x20:
|
||||
case 0x40:
|
||||
case 0x60:
|
||||
break;
|
||||
default:
|
||||
ASSERT(0);
|
||||
}
|
||||
|
||||
ASSERT(pTag->nTag <= 128 && pTag->nTag >= 0);
|
||||
ASSERT(pTag->ver <= 512 && pTag->ver >= 0); // temp condition for pTag->ver
|
||||
}
|
||||
|
||||
static int32_t tPutTagVal(uint8_t *p, STagVal *pTagVal, int8_t isJson) {
|
||||
int32_t n = 0;
|
||||
|
||||
|
@ -999,7 +984,6 @@ int32_t tTagNew(SArray *pArray, int32_t version, int8_t isJson, STag **ppTag) {
|
|||
debugPrintSTag(*ppTag, __func__, __LINE__);
|
||||
#endif
|
||||
|
||||
debugCheckTags(*ppTag); // TODO: remove this line after debug
|
||||
return code;
|
||||
|
||||
_err:
|
||||
|
|
|
@ -3696,7 +3696,7 @@ int32_t tDeserializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *
|
|||
|
||||
if (tDecodeI8(&decoder, &pReq->isTsma) < 0) return -1;
|
||||
if (pReq->isTsma) {
|
||||
if (tDecodeBinaryAlloc(&decoder, &pReq->pTsma, NULL) < 0) return -1;
|
||||
if (tDecodeBinary(&decoder, (uint8_t **)&pReq->pTsma, NULL) < 0) return -1;
|
||||
}
|
||||
|
||||
tEndDecode(&decoder);
|
||||
|
@ -3707,9 +3707,6 @@ int32_t tDeserializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *
|
|||
int32_t tFreeSCreateVnodeReq(SCreateVnodeReq *pReq) {
|
||||
taosArrayDestroy(pReq->pRetensions);
|
||||
pReq->pRetensions = NULL;
|
||||
if (pReq->isTsma) {
|
||||
taosMemoryFreeClear(pReq->pTsma);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -4747,9 +4744,8 @@ int32_t tDecodeSRSmaParam(SDecoder *pCoder, SRSmaParam *pRSmaParam) {
|
|||
if (tDecodeI64v(pCoder, &pRSmaParam->watermark[i]) < 0) return -1;
|
||||
if (tDecodeI32v(pCoder, &pRSmaParam->qmsgLen[i]) < 0) return -1;
|
||||
if (pRSmaParam->qmsgLen[i] > 0) {
|
||||
uint64_t len;
|
||||
if (tDecodeBinaryAlloc(pCoder, (void **)&pRSmaParam->qmsg[i], &len) < 0)
|
||||
return -1; // qmsgLen contains len of '\0'
|
||||
tDecoderMalloc(pCoder, pRSmaParam->qmsgLen[i]);
|
||||
if (tDecodeBinary(pCoder, (uint8_t **)&pRSmaParam->qmsg[i], NULL) < 0) return -1; // qmsgLen contains len of '\0'
|
||||
} else {
|
||||
pRSmaParam->qmsg[i] = NULL;
|
||||
}
|
||||
|
@ -4767,7 +4763,7 @@ int tEncodeSVCreateStbReq(SEncoder *pCoder, const SVCreateStbReq *pReq) {
|
|||
if (tEncodeSSchemaWrapper(pCoder, &pReq->schemaRow) < 0) return -1;
|
||||
if (tEncodeSSchemaWrapper(pCoder, &pReq->schemaTag) < 0) return -1;
|
||||
if (pReq->rollup) {
|
||||
if (tEncodeSRSmaParam(pCoder, &pReq->pRSmaParam) < 0) return -1;
|
||||
if (tEncodeSRSmaParam(pCoder, &pReq->rsmaParam) < 0) return -1;
|
||||
}
|
||||
|
||||
tEndEncode(pCoder);
|
||||
|
@ -4783,7 +4779,7 @@ int tDecodeSVCreateStbReq(SDecoder *pCoder, SVCreateStbReq *pReq) {
|
|||
if (tDecodeSSchemaWrapper(pCoder, &pReq->schemaRow) < 0) return -1;
|
||||
if (tDecodeSSchemaWrapper(pCoder, &pReq->schemaTag) < 0) return -1;
|
||||
if (pReq->rollup) {
|
||||
if (tDecodeSRSmaParam(pCoder, &pReq->pRSmaParam) < 0) return -1;
|
||||
if (tDecodeSRSmaParam(pCoder, &pReq->rsmaParam) < 0) return -1;
|
||||
}
|
||||
|
||||
tEndDecode(pCoder);
|
||||
|
|
|
@ -248,9 +248,12 @@ static inline void dmReleaseHandle(SRpcHandleInfo *pHandle, int8_t type) {
|
|||
}
|
||||
}
|
||||
|
||||
static bool rpcRfp(int32_t code) {
|
||||
static bool rpcRfp(int32_t code, tmsg_t msgType) {
|
||||
if (code == TSDB_CODE_RPC_REDIRECT || code == TSDB_CODE_RPC_NETWORK_UNAVAIL || code == TSDB_CODE_NODE_NOT_DEPLOYED ||
|
||||
code == TSDB_CODE_SYN_NOT_LEADER || code == TSDB_CODE_APP_NOT_READY) {
|
||||
if (msgType == TDMT_VND_QUERY || msgType == TDMT_VND_FETCH) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
} else {
|
||||
return false;
|
||||
|
|
|
@ -39,8 +39,10 @@ typedef struct {
|
|||
int32_t id;
|
||||
int32_t errCode;
|
||||
int32_t acceptableCode;
|
||||
ETrnStage stage;
|
||||
int32_t retryCode;
|
||||
ETrnAct actionType;
|
||||
ETrnStage stage;
|
||||
int8_t reserved;
|
||||
int8_t rawWritten;
|
||||
int8_t msgSent;
|
||||
int8_t msgReceived;
|
||||
|
|
|
@ -442,7 +442,7 @@ int32_t mndProcessSyncMsg(SRpcMsg *pMsg) {
|
|||
syncPingReplyDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_CLIENT_REQUEST) {
|
||||
SyncClientRequest *pSyncMsg = syncClientRequestFromRpcMsg2(pMsg);
|
||||
code = syncNodeOnClientRequestCb(pSyncNode, pSyncMsg);
|
||||
code = syncNodeOnClientRequestCb(pSyncNode, pSyncMsg, NULL);
|
||||
syncClientRequestDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_REQUEST_VOTE) {
|
||||
SyncRequestVote *pSyncMsg = syncRequestVoteFromRpcMsg2(pMsg);
|
||||
|
@ -491,7 +491,7 @@ int32_t mndProcessSyncMsg(SRpcMsg *pMsg) {
|
|||
syncPingReplyDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_CLIENT_REQUEST) {
|
||||
SyncClientRequest *pSyncMsg = syncClientRequestFromRpcMsg2(pMsg);
|
||||
code = syncNodeOnClientRequestCb(pSyncNode, pSyncMsg);
|
||||
code = syncNodeOnClientRequestCb(pSyncNode, pSyncMsg, NULL);
|
||||
syncClientRequestDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_REQUEST_VOTE) {
|
||||
SyncRequestVote *pSyncMsg = syncRequestVoteFromRpcMsg2(pMsg);
|
||||
|
|
|
@ -234,7 +234,7 @@ static int32_t mndProcessRetrieveSysTableReq(SRpcMsg *pReq) {
|
|||
if (retrieveReq.user[0] != 0) {
|
||||
memcpy(pReq->info.conn.user, retrieveReq.user, TSDB_USER_LEN);
|
||||
} else {
|
||||
memcpy(pReq->info.conn.user, TSDB_DEFAULT_USER, TSDB_USER_LEN);
|
||||
memcpy(pReq->info.conn.user, TSDB_DEFAULT_USER, strlen(TSDB_DEFAULT_USER) + 1);
|
||||
}
|
||||
if (mndCheckShowPrivilege(pMnode, pReq->info.conn.user, pShow->type, retrieveReq.db) != 0) {
|
||||
return -1;
|
||||
|
|
|
@ -427,17 +427,17 @@ static void *mndBuildVCreateStbReq(SMnode *pMnode, SVgObj *pVgroup, SStbObj *pSt
|
|||
req.schemaTag.pSchema = pStb->pTags;
|
||||
|
||||
if (req.rollup) {
|
||||
req.pRSmaParam.maxdelay[0] = pStb->maxdelay[0];
|
||||
req.pRSmaParam.maxdelay[1] = pStb->maxdelay[1];
|
||||
req.rsmaParam.maxdelay[0] = pStb->maxdelay[0];
|
||||
req.rsmaParam.maxdelay[1] = pStb->maxdelay[1];
|
||||
if (pStb->ast1Len > 0) {
|
||||
if (mndConvertRsmaTask(&req.pRSmaParam.qmsg[0], &req.pRSmaParam.qmsgLen[0], pStb->pAst1, pStb->uid,
|
||||
STREAM_TRIGGER_WINDOW_CLOSE, req.pRSmaParam.watermark[0]) < 0) {
|
||||
if (mndConvertRsmaTask(&req.rsmaParam.qmsg[0], &req.rsmaParam.qmsgLen[0], pStb->pAst1, pStb->uid,
|
||||
STREAM_TRIGGER_WINDOW_CLOSE, req.rsmaParam.watermark[0]) < 0) {
|
||||
goto _err;
|
||||
}
|
||||
}
|
||||
if (pStb->ast2Len > 0) {
|
||||
if (mndConvertRsmaTask(&req.pRSmaParam.qmsg[1], &req.pRSmaParam.qmsgLen[1], pStb->pAst2, pStb->uid,
|
||||
STREAM_TRIGGER_WINDOW_CLOSE, req.pRSmaParam.watermark[1]) < 0) {
|
||||
if (mndConvertRsmaTask(&req.rsmaParam.qmsg[1], &req.rsmaParam.qmsgLen[1], pStb->pAst2, pStb->uid,
|
||||
STREAM_TRIGGER_WINDOW_CLOSE, req.rsmaParam.watermark[1]) < 0) {
|
||||
goto _err;
|
||||
}
|
||||
}
|
||||
|
@ -470,12 +470,12 @@ static void *mndBuildVCreateStbReq(SMnode *pMnode, SVgObj *pVgroup, SStbObj *pSt
|
|||
tEncoderClear(&encoder);
|
||||
|
||||
*pContLen = contLen;
|
||||
taosMemoryFreeClear(req.pRSmaParam.qmsg[0]);
|
||||
taosMemoryFreeClear(req.pRSmaParam.qmsg[1]);
|
||||
taosMemoryFreeClear(req.rsmaParam.qmsg[0]);
|
||||
taosMemoryFreeClear(req.rsmaParam.qmsg[1]);
|
||||
return pHead;
|
||||
_err:
|
||||
taosMemoryFreeClear(req.pRSmaParam.qmsg[0]);
|
||||
taosMemoryFreeClear(req.pRSmaParam.qmsg[1]);
|
||||
taosMemoryFreeClear(req.rsmaParam.qmsg[0]);
|
||||
taosMemoryFreeClear(req.rsmaParam.qmsg[1]);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
|
|
@ -15,9 +15,9 @@
|
|||
|
||||
#define _DEFAULT_SOURCE
|
||||
#include "mndTrans.h"
|
||||
#include "mndPrivilege.h"
|
||||
#include "mndConsumer.h"
|
||||
#include "mndDb.h"
|
||||
#include "mndPrivilege.h"
|
||||
#include "mndShow.h"
|
||||
#include "mndSync.h"
|
||||
#include "mndUser.h"
|
||||
|
@ -55,7 +55,7 @@ static bool mndTransPerfromFinishedStage(SMnode *pMnode, STrans *pTrans);
|
|||
static bool mndCannotExecuteTransAction(SMnode *pMnode) { return !pMnode->deploy && !mndIsMaster(pMnode); }
|
||||
|
||||
static void mndTransSendRpcRsp(SMnode *pMnode, STrans *pTrans);
|
||||
static int32_t mndProcessTransReq(SRpcMsg *pReq);
|
||||
static int32_t mndProcessTransTimer(SRpcMsg *pReq);
|
||||
static int32_t mndProcessTtl(SRpcMsg *pReq);
|
||||
static int32_t mndProcessKillTransReq(SRpcMsg *pReq);
|
||||
|
||||
|
@ -73,7 +73,7 @@ int32_t mndInitTrans(SMnode *pMnode) {
|
|||
.deleteFp = (SdbDeleteFp)mndTransActionDelete,
|
||||
};
|
||||
|
||||
mndSetMsgHandle(pMnode, TDMT_MND_TRANS_TIMER, mndProcessTransReq);
|
||||
mndSetMsgHandle(pMnode, TDMT_MND_TRANS_TIMER, mndProcessTransTimer);
|
||||
mndSetMsgHandle(pMnode, TDMT_MND_KILL_TRANS, mndProcessKillTransReq);
|
||||
|
||||
mndAddShowRetrieveHandle(pMnode, TSDB_MGMT_TABLE_TRANS, mndRetrieveTrans);
|
||||
|
@ -139,8 +139,10 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) {
|
|||
SDB_SET_INT32(pRaw, dataPos, pAction->id, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->errCode, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->acceptableCode, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->retryCode, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->actionType, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->stage, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->reserved, _OVER)
|
||||
if (pAction->actionType == TRANS_ACTION_RAW) {
|
||||
int32_t len = sdbGetRawTotalSize(pAction->pRaw);
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->rawWritten, _OVER)
|
||||
|
@ -163,8 +165,10 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) {
|
|||
SDB_SET_INT32(pRaw, dataPos, pAction->id, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->errCode, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->acceptableCode, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->retryCode, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->actionType, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->stage, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->reserved, _OVER)
|
||||
if (pAction->actionType == TRANS_ACTION_RAW) {
|
||||
int32_t len = sdbGetRawTotalSize(pAction->pRaw);
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->rawWritten, _OVER)
|
||||
|
@ -187,8 +191,10 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) {
|
|||
SDB_SET_INT32(pRaw, dataPos, pAction->id, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->errCode, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->acceptableCode, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pAction->retryCode, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->actionType, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->stage, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->reserved, _OVER)
|
||||
if (pAction->actionType == TRANS_ACTION_RAW) {
|
||||
int32_t len = sdbGetRawTotalSize(pAction->pRaw);
|
||||
SDB_SET_INT8(pRaw, dataPos, pAction->rawWritten, _OVER)
|
||||
|
@ -291,10 +297,12 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) {
|
|||
SDB_GET_INT32(pRaw, dataPos, &action.id, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.errCode, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.acceptableCode, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.retryCode, _OVER)
|
||||
SDB_GET_INT8(pRaw, dataPos, &actionType, _OVER)
|
||||
action.actionType = actionType;
|
||||
SDB_GET_INT8(pRaw, dataPos, &stage, _OVER)
|
||||
action.stage = stage;
|
||||
SDB_GET_INT8(pRaw, dataPos, &action.reserved, _OVER)
|
||||
if (action.actionType == TRANS_ACTION_RAW) {
|
||||
SDB_GET_INT8(pRaw, dataPos, &action.rawWritten, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &dataLen, _OVER)
|
||||
|
@ -324,10 +332,12 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) {
|
|||
SDB_GET_INT32(pRaw, dataPos, &action.id, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.errCode, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.acceptableCode, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.retryCode, _OVER)
|
||||
SDB_GET_INT8(pRaw, dataPos, &actionType, _OVER)
|
||||
action.actionType = actionType;
|
||||
SDB_GET_INT8(pRaw, dataPos, &stage, _OVER)
|
||||
action.stage = stage;
|
||||
SDB_GET_INT8(pRaw, dataPos, &action.reserved, _OVER)
|
||||
if (action.actionType == TRANS_ACTION_RAW) {
|
||||
SDB_GET_INT8(pRaw, dataPos, &action.rawWritten, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &dataLen, _OVER)
|
||||
|
@ -357,10 +367,12 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) {
|
|||
SDB_GET_INT32(pRaw, dataPos, &action.id, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.errCode, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.acceptableCode, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &action.retryCode, _OVER)
|
||||
SDB_GET_INT8(pRaw, dataPos, &actionType, _OVER)
|
||||
action.actionType = actionType;
|
||||
SDB_GET_INT8(pRaw, dataPos, &stage, _OVER)
|
||||
action.stage = stage;
|
||||
SDB_GET_INT8(pRaw, dataPos, &action.reserved, _OVER)
|
||||
if (action.actionType) {
|
||||
SDB_GET_INT8(pRaw, dataPos, &action.rawWritten, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &dataLen, _OVER)
|
||||
|
@ -463,15 +475,25 @@ static int32_t mndTransActionInsert(SSdb *pSdb, STrans *pTrans) {
|
|||
if (fp) {
|
||||
(*fp)(pSdb->pMnode, pTrans->param, pTrans->paramLen);
|
||||
}
|
||||
pTrans->startFunc = 0;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void mndTransDropData(STrans *pTrans) {
|
||||
if (pTrans->redoActions != NULL) {
|
||||
mndTransDropActions(pTrans->redoActions);
|
||||
pTrans->redoActions = NULL;
|
||||
}
|
||||
if (pTrans->undoActions != NULL) {
|
||||
mndTransDropActions(pTrans->undoActions);
|
||||
pTrans->undoActions = NULL;
|
||||
}
|
||||
if (pTrans->commitActions != NULL) {
|
||||
mndTransDropActions(pTrans->commitActions);
|
||||
pTrans->commitActions = NULL;
|
||||
}
|
||||
if (pTrans->rpcRsp != NULL) {
|
||||
taosMemoryFree(pTrans->rpcRsp);
|
||||
pTrans->rpcRsp = NULL;
|
||||
|
@ -492,6 +514,7 @@ static int32_t mndTransActionDelete(SSdb *pSdb, STrans *pTrans, bool callFunc) {
|
|||
if (fp) {
|
||||
(*fp)(pSdb->pMnode, pTrans->param, pTrans->paramLen);
|
||||
}
|
||||
pTrans->stopFunc = 0;
|
||||
}
|
||||
|
||||
mndTransDropData(pTrans);
|
||||
|
@ -805,7 +828,7 @@ static void mndTransSendRpcRsp(SMnode *pMnode, STrans *pTrans) {
|
|||
sendRsp = true;
|
||||
}
|
||||
} else {
|
||||
if (pTrans->stage == TRN_STAGE_REDO_ACTION && pTrans->failedTimes > 3) {
|
||||
if (pTrans->stage == TRN_STAGE_REDO_ACTION && pTrans->failedTimes > 6) {
|
||||
if (code == 0) code = TSDB_CODE_MND_TRANS_UNKNOW_ERROR;
|
||||
sendRsp = true;
|
||||
}
|
||||
|
@ -875,8 +898,8 @@ int32_t mndTransProcessRsp(SRpcMsg *pRsp) {
|
|||
pAction->errCode = pRsp->code;
|
||||
}
|
||||
|
||||
mDebug("trans:%d, %s:%d response is received, code:0x%x, accept:0x%x", transId, mndTransStr(pAction->stage), action,
|
||||
pRsp->code, pAction->acceptableCode);
|
||||
mDebug("trans:%d, %s:%d response is received, code:0x%x, accept:0x%x retry:0x%x", transId,
|
||||
mndTransStr(pAction->stage), action, pRsp->code, pAction->acceptableCode, pAction->retryCode);
|
||||
mndTransExecute(pMnode, pTrans);
|
||||
|
||||
_OVER:
|
||||
|
@ -884,6 +907,21 @@ _OVER:
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void mndTransResetAction(SMnode *pMnode, STrans *pTrans, STransAction *pAction) {
|
||||
pAction->rawWritten = 0;
|
||||
pAction->msgSent = 0;
|
||||
pAction->msgReceived = 0;
|
||||
if (pAction->errCode == TSDB_CODE_RPC_REDIRECT || pAction->errCode == TSDB_CODE_SYN_NEW_CONFIG_ERROR ||
|
||||
pAction->errCode == TSDB_CODE_SYN_INTERNAL_ERROR || pAction->errCode == TSDB_CODE_SYN_NOT_LEADER) {
|
||||
pAction->epSet.inUse = (pAction->epSet.inUse + 1) % pAction->epSet.numOfEps;
|
||||
mDebug("trans:%d, %s:%d execute status is reset and set epset inuse:%d", pTrans->id, mndTransStr(pAction->stage),
|
||||
pAction->id, pAction->epSet.inUse);
|
||||
} else {
|
||||
mDebug("trans:%d, %s:%d execute status is reset", pTrans->id, mndTransStr(pAction->stage), pAction->id);
|
||||
}
|
||||
pAction->errCode = 0;
|
||||
}
|
||||
|
||||
static void mndTransResetActions(SMnode *pMnode, STrans *pTrans, SArray *pArray) {
|
||||
int32_t numOfActions = taosArrayGetSize(pArray);
|
||||
|
||||
|
@ -894,18 +932,7 @@ static void mndTransResetActions(SMnode *pMnode, STrans *pTrans, SArray *pArray)
|
|||
continue;
|
||||
if (pAction->rawWritten && (pAction->errCode == 0 || pAction->errCode == pAction->acceptableCode)) continue;
|
||||
|
||||
pAction->rawWritten = 0;
|
||||
pAction->msgSent = 0;
|
||||
pAction->msgReceived = 0;
|
||||
if (pAction->errCode == TSDB_CODE_RPC_REDIRECT || pAction->errCode == TSDB_CODE_SYN_NEW_CONFIG_ERROR ||
|
||||
pAction->errCode == TSDB_CODE_SYN_INTERNAL_ERROR || pAction->errCode == TSDB_CODE_SYN_NOT_LEADER) {
|
||||
pAction->epSet.inUse = (pAction->epSet.inUse + 1) % pAction->epSet.numOfEps;
|
||||
mDebug("trans:%d, %s:%d execute status is reset and set epset inuse:%d", pTrans->id, mndTransStr(pAction->stage),
|
||||
action, pAction->epSet.inUse);
|
||||
} else {
|
||||
mDebug("trans:%d, %s:%d execute status is reset", pTrans->id, mndTransStr(pAction->stage), action);
|
||||
}
|
||||
pAction->errCode = 0;
|
||||
mndTransResetAction(pMnode, pTrans, pAction);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1112,9 +1139,9 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans)
|
|||
if (pAction->msgReceived) {
|
||||
if (pAction->errCode != 0 && pAction->errCode != pAction->acceptableCode) {
|
||||
code = pAction->errCode;
|
||||
pAction->msgSent = 0;
|
||||
pAction->msgReceived = 0;
|
||||
mDebug("trans:%d, %s:%d execute status is reset", pTrans->id, mndTransStr(pAction->stage), action);
|
||||
mndTransResetAction(pMnode, pTrans, pAction);
|
||||
} else {
|
||||
mDebug("trans:%d, %s:%d execute successfully", pTrans->id, mndTransStr(pAction->stage), action);
|
||||
}
|
||||
} else {
|
||||
code = TSDB_CODE_ACTION_IN_PROGRESS;
|
||||
|
@ -1123,6 +1150,8 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans)
|
|||
if (pAction->rawWritten) {
|
||||
if (pAction->errCode != 0 && pAction->errCode != pAction->acceptableCode) {
|
||||
code = pAction->errCode;
|
||||
} else {
|
||||
mDebug("trans:%d, %s:%d write successfully", pTrans->id, mndTransStr(pAction->stage), action);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -1156,9 +1185,16 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans)
|
|||
} else if (code == TSDB_CODE_ACTION_IN_PROGRESS) {
|
||||
mDebug("trans:%d, %s:%d is in progress and wait it finish", pTrans->id, mndTransStr(pAction->stage), pAction->id);
|
||||
break;
|
||||
} else if (code == pAction->retryCode) {
|
||||
mDebug("trans:%d, %s:%d receive code:0x%x and retry", pTrans->id, mndTransStr(pAction->stage), pAction->id, code);
|
||||
taosMsleep(300);
|
||||
action--;
|
||||
continue;
|
||||
} else {
|
||||
terrno = code;
|
||||
pTrans->code = code;
|
||||
mDebug("trans:%d, %s:%d receive code:0x%x and wait another schedule, failedTimes:%d", pTrans->id,
|
||||
mndTransStr(pAction->stage), pAction->id, code, pTrans->failedTimes);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
@ -1343,7 +1379,7 @@ void mndTransExecute(SMnode *pMnode, STrans *pTrans) {
|
|||
mndTransSendRpcRsp(pMnode, pTrans);
|
||||
}
|
||||
|
||||
static int32_t mndProcessTransReq(SRpcMsg *pReq) {
|
||||
static int32_t mndProcessTransTimer(SRpcMsg *pReq) {
|
||||
mndTransPullup(pReq->info.node);
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -15,10 +15,10 @@
|
|||
|
||||
#define _DEFAULT_SOURCE
|
||||
#include "mndVgroup.h"
|
||||
#include "mndPrivilege.h"
|
||||
#include "mndDb.h"
|
||||
#include "mndDnode.h"
|
||||
#include "mndMnode.h"
|
||||
#include "mndPrivilege.h"
|
||||
#include "mndShow.h"
|
||||
#include "mndTrans.h"
|
||||
#include "mndUser.h"
|
||||
|
@ -896,6 +896,8 @@ int32_t mndAddAlterVnodeConfirmAction(SMnode *pMnode, STrans *pTrans, SDbObj *pD
|
|||
action.pCont = pHead;
|
||||
action.contLen = contLen;
|
||||
action.msgType = TDMT_VND_ALTER_CONFIRM;
|
||||
// incorrect redirect result will cause this erro
|
||||
action.retryCode = TSDB_CODE_VND_INVALID_VGROUP_ID;
|
||||
|
||||
if (mndTransAppendRedoAction(pTrans, &action) != 0) {
|
||||
taosMemoryFree(pHead);
|
||||
|
@ -942,6 +944,8 @@ static int32_t mndAddSetVnodeStandByAction(SMnode *pMnode, STrans *pTrans, SDbOb
|
|||
action.contLen = contLen;
|
||||
action.msgType = TDMT_SYNC_SET_VNODE_STANDBY;
|
||||
action.acceptableCode = TSDB_CODE_NODE_NOT_DEPLOYED;
|
||||
// Keep retrying until the target vnode is not the leader
|
||||
action.retryCode = TSDB_CODE_SYN_IS_LEADER;
|
||||
|
||||
if (isRedo) {
|
||||
if (mndTransAppendRedoAction(pTrans, &action) != 0) {
|
||||
|
@ -1003,7 +1007,7 @@ int32_t mndSetMoveVgroupInfoToTrans(SMnode *pMnode, STrans *pTrans, SDbObj *pDb,
|
|||
|
||||
mInfo("vgId:%d, will add 1 vnodes", pVgroup->vgId);
|
||||
if (mndAddVnodeToVgroup(pMnode, &newVg, pArray) != 0) return -1;
|
||||
if (mndAddCreateVnodeAction(pMnode, pTrans, pDb, &newVg, &newVg.vnodeGid[1], true) != 0) return -1;
|
||||
if (mndAddCreateVnodeAction(pMnode, pTrans, pDb, &newVg, &newVg.vnodeGid[newVg.replica - 1], true) != 0) return -1;
|
||||
if (mndAddAlterVnodeAction(pMnode, pTrans, pDb, &newVg, TDMT_VND_ALTER_REPLICA) != 0) return -1;
|
||||
if (mndAddAlterVnodeConfirmAction(pMnode, pTrans, pDb, &newVg) != 0) return -1;
|
||||
|
||||
|
@ -1017,10 +1021,19 @@ int32_t mndSetMoveVgroupInfoToTrans(SMnode *pMnode, STrans *pTrans, SDbObj *pDb,
|
|||
if (mndAddDropVnodeAction(pMnode, pTrans, pDb, &newVg, &del, true) != 0) return -1;
|
||||
if (mndAddAlterVnodeConfirmAction(pMnode, pTrans, pDb, &newVg) != 0) return -1;
|
||||
|
||||
{
|
||||
SSdbRaw *pRaw = mndVgroupActionEncode(&newVg);
|
||||
if (pRaw == NULL || mndTransAppendRedolog(pTrans, pRaw) != 0) return -1;
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
pRaw = NULL;
|
||||
}
|
||||
|
||||
{
|
||||
SSdbRaw *pRaw = mndVgroupActionEncode(&newVg);
|
||||
if (pRaw == NULL || mndTransAppendCommitlog(pTrans, pRaw) != 0) return -1;
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
pRaw = NULL;
|
||||
}
|
||||
|
||||
mInfo("vgId:%d, vgroup info after move, replica:%d", newVg.vgId, newVg.replica);
|
||||
for (int32_t i = 0; i < newVg.replica; ++i) {
|
||||
|
@ -1168,10 +1181,19 @@ static int32_t mndRedistributeVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb,
|
|||
if (mndAddDecVgroupReplicaFromTrans(pMnode, pTrans, pDb, &newVg, pOld3->id) != 0) goto _OVER;
|
||||
}
|
||||
|
||||
{
|
||||
pRaw = mndVgroupActionEncode(&newVg);
|
||||
if (pRaw == NULL || mndTransAppendRedolog(pTrans, pRaw) != 0) goto _OVER;
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
pRaw = NULL;
|
||||
}
|
||||
|
||||
{
|
||||
pRaw = mndVgroupActionEncode(&newVg);
|
||||
if (pRaw == NULL || mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER;
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
pRaw = NULL;
|
||||
}
|
||||
|
||||
mInfo("vgId:%d, vgroup info after redistribute, replica:%d", newVg.vgId, newVg.replica);
|
||||
for (int32_t i = 0; i < newVg.replica; ++i) {
@@ -1229,7 +1251,8 @@ static int32_t mndProcessRedistributeVgroupMsg(SRpcMsg *pReq) {
}

if (req.dnodeId1 == pVgroup->vnodeGid[0].dnodeId) {
terrno = TSDB_CODE_MND_VGROUP_UN_CHANGED;
// terrno = TSDB_CODE_MND_VGROUP_UN_CHANGED;
code = 0;
goto _OVER;
}

@@ -1351,7 +1374,8 @@ static int32_t mndProcessRedistributeVgroupMsg(SRpcMsg *pReq) {
}

if (pNew1 == NULL && pOld1 == NULL && pNew2 == NULL && pOld2 == NULL && pNew3 == NULL && pOld3 == NULL) {
terrno = TSDB_CODE_MND_VGROUP_UN_CHANGED;
// terrno = TSDB_CODE_MND_VGROUP_UN_CHANGED;
code = 0;
goto _OVER;
}

@@ -1424,6 +1448,17 @@ int32_t mndBuildAlterVgroupAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, S
} else {
}

{
SSdbRaw *pVgRaw = mndVgroupActionEncode(&newVgroup);
if (pVgRaw == NULL) return -1;
if (mndTransAppendRedolog(pTrans, pVgRaw) != 0) {
sdbFreeRaw(pVgRaw);
return -1;
}
sdbSetRawStatus(pVgRaw, SDB_STATUS_READY);
}

{
SSdbRaw *pVgRaw = mndVgroupActionEncode(&newVgroup);
if (pVgRaw == NULL) return -1;
if (mndTransAppendCommitlog(pTrans, pVgRaw) != 0) {
@@ -1432,6 +1467,7 @@ int32_t mndBuildAlterVgroupAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, S
}
sdbSetRawStatus(pVgRaw, SDB_STATUS_READY);
}
}

return 0;
}

@@ -1538,12 +1574,23 @@ static int32_t mndSetBalanceVgroupInfoToTrans(SMnode *pMnode, STrans *pTrans, SD
if (mndAddIncVgroupReplicaToTrans(pMnode, pTrans, pDb, &newVg, pDst->id) != 0) return -1;
if (mndAddDecVgroupReplicaFromTrans(pMnode, pTrans, pDb, &newVg, pSrc->id) != 0) return -1;

{
SSdbRaw *pRaw = mndVgroupActionEncode(&newVg);
if (pRaw == NULL || mndTransAppendRedolog(pTrans, pRaw) != 0) {
sdbFreeRaw(pRaw);
return -1;
}
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
}

{
SSdbRaw *pRaw = mndVgroupActionEncode(&newVg);
if (pRaw == NULL || mndTransAppendCommitlog(pTrans, pRaw) != 0) {
sdbFreeRaw(pRaw);
return -1;
}
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
}

mInfo("vgId:%d, vgroup info after balance, replica:%d", newVg.vgId, newVg.replica);
for (int32_t i = 0; i < newVg.replica; ++i) {
@@ -1552,7 +1599,8 @@ static int32_t mndSetBalanceVgroupInfoToTrans(SMnode *pMnode, STrans *pTrans, SD
return 0;
}

static int32_t mndBalanceVgroupBetweenDnode(SMnode *pMnode, STrans *pTrans, SDnodeObj *pSrc, SDnodeObj *pDst) {
static int32_t mndBalanceVgroupBetweenDnode(SMnode *pMnode, STrans *pTrans, SDnodeObj *pSrc, SDnodeObj *pDst,
SHashObj *pBalancedVgroups) {
void *pIter = NULL;
int32_t code = -1;
SSdb *pSdb = pMnode->pSdb;
@@ -1561,6 +1609,10 @@ static int32_t mndBalanceVgroupBetweenDnode(SMnode *pMnode, STrans *pTrans, SDno
SVgObj *pVgroup = NULL;
pIter = sdbFetch(pSdb, SDB_VGROUP, pIter, (void **)&pVgroup);
if (pIter == NULL) break;
if (taosHashGet(pBalancedVgroups, &pVgroup->vgId, sizeof(int32_t)) != NULL) {
sdbRelease(pSdb, pVgroup);
continue;
}

bool existInSrc = false;
bool existInDst = false;
@@ -1577,6 +1629,9 @@ static int32_t mndBalanceVgroupBetweenDnode(SMnode *pMnode, STrans *pTrans, SDno

SDbObj *pDb = mndAcquireDb(pMnode, pVgroup->dbName);
code = mndSetBalanceVgroupInfoToTrans(pMnode, pTrans, pDb, pVgroup, pSrc, pDst);
if (code == 0) {
code = taosHashPut(pBalancedVgroups, &pVgroup->vgId, sizeof(int32_t), &pVgroup->vgId, sizeof(int32_t));
}
mndReleaseDb(pMnode, pDb);
sdbRelease(pSdb, pVgroup);
sdbCancelFetch(pSdb, pIter);
@@ -1590,6 +1645,10 @@ static int32_t mndBalanceVgroup(SMnode *pMnode, SRpcMsg *pReq, SArray *pArray) {
int32_t code = -1;
int32_t numOfVgroups = 0;
STrans *pTrans = NULL;
SHashObj *pBalancedVgroups = NULL;

pBalancedVgroups = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), false, HASH_NO_LOCK);
if (pBalancedVgroups == NULL) goto _OVER;

pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, pReq);
if (pTrans == NULL) goto _OVER;
@@ -1613,18 +1672,18 @@ static int32_t mndBalanceVgroup(SMnode *pMnode, SRpcMsg *pReq, SArray *pArray) {
pDst->id, dstScore);

if (srcScore > dstScore - 0.000001) {
code = mndBalanceVgroupBetweenDnode(pMnode, pTrans, pSrc, pDst);
code = mndBalanceVgroupBetweenDnode(pMnode, pTrans, pSrc, pDst, pBalancedVgroups);
if (code == 0) {
pSrc->numOfVnodes--;
pDst->numOfVnodes++;
numOfVgroups++;
continue;
} else {
mError("trans:%d, failed to balance vgroup from dnode:%d to dnode:%d", pTrans->id, pSrc->id, pDst->id);
return -1;
mDebug("trans:%d, no vgroup need to balance from dnode:%d to dnode:%d", pTrans->id, pSrc->id, pDst->id);
break;
}
} else {
mDebug("trans:%d, no vgroup need to balance vgroup any more", pTrans->id);
mDebug("trans:%d, no vgroup need to balance any more", pTrans->id);
break;
}
}
}

@@ -125,6 +125,7 @@ int32_t tsdbGetFileBlocksDistInfo(tsdbReaderT *pReader, STableBlockDistInfo
bool isTsdbCacheLastRow(tsdbReaderT *pReader);
int32_t tsdbGetAllTableList(SMeta *pMeta, uint64_t uid, SArray *list);
int32_t tsdbGetCtbIdList(SMeta *pMeta, int64_t suid, SArray *list);
int32_t tsdbGetStbIdList(SMeta *pMeta, int64_t suid, SArray *list);
void *tsdbGetIdx(SMeta *pMeta);
void *tsdbGetIvtIdx(SMeta *pMeta);
int64_t tsdbGetNumOfRowsInMemTable(tsdbReaderT *pHandle);
@@ -198,15 +199,20 @@ typedef struct {
uint64_t groupId;
} STableKeyInfo;

#define TABLE_ROLLUP_ON ((int8_t)0x1)
#define TABLE_IS_ROLLUP(FLG) (((FLG) & (TABLE_ROLLUP_ON)) != 0)
#define TABLE_SET_ROLLUP(FLG) ((FLG) |= TABLE_ROLLUP_ON)
struct SMetaEntry {
int64_t version;
int8_t type;
int8_t flags; // TODO: need refactor?
tb_uid_t uid;
char *name;
union {
struct {
SSchemaWrapper schemaRow;
SSchemaWrapper schemaTag;
SRSmaParam rsmaParam;
} stbEntry;
struct {
int64_t ctime;

@@ -69,6 +69,7 @@ struct SMeta {
TTB* pUidIdx;
TTB* pNameIdx;
TTB* pCtbIdx;
TTB* pSuidIdx;
// ivt idx and idx
void* pTagIvtIdx;
TTB* pTagIdx;

@@ -60,8 +60,9 @@ struct SRSmaStat {
SSma *pSma;
void *tmrHandle;
tmr_h tmrId;
int8_t tmrStat;
int32_t tmrSeconds;
int8_t triggerStat;
int8_t runningStat;
SHashObj *rsmaInfoHash; // key: stbUid, value: SRSmaInfo;
};

@@ -74,11 +75,19 @@ struct SSmaStat {
};
#define SMA_TSMA_STAT(s) (&(s)->tsmaStat)
#define SMA_RSMA_STAT(s) (&(s)->rsmaStat)
#define SMA_RSMA_INFO_HASH(s) ((s)->rsmaStat.rsmaInfoHash)
#define SMA_RSMA_TMR_HANDLE(s) ((s)->rsmaStat.tmrHandle)
#define SMA_RSMA_TMR_STAT(s) ((s)->rsmaStat.tmrStat)
#define RSMA_INFO_HASH(r) ((r)->rsmaInfoHash)
#define RSMA_TMR_ID(r) ((r)->tmrId)
#define RSMA_TMR_HANDLE(r) ((r)->tmrHandle)
#define RSMA_TRIGGER_STAT(r) (&(r)->triggerStat)
#define RSMA_RUNNING_STAT(r) (&(r)->runningStat)

enum {
TASK_TRIGGER_STAT_INIT = 0,
TASK_TRIGGER_STAT_ACTIVE = 1,
TASK_TRIGGER_STAT_INACTIVE = 2,
TASK_TRIGGER_STAT_CANCELLED = 3,
TASK_TRIGGER_STAT_FINISHED = 4,
};
void tdDestroySmaEnv(SSmaEnv *pSmaEnv);
void *tdFreeSmaEnv(SSmaEnv *pSmaEnv);
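The TASK_TRIGGER_STAT_* values above are driven entirely by single-byte compare-and-swap: the persist trigger is armed INIT -> ACTIVE when the first rollup table registers, a timer callback claims one persistence round with ACTIVE -> INACTIVE, and shutdown is acknowledged with CANCELLED -> FINISHED. Below is a standalone sketch of those transitions; it is illustrative only and not part of this commit, with C11 stdatomic standing in for TDengine's atomic_val_compare_exchange_8 wrapper and hypothetical function names.

// Sketch of the trigger-state transitions around TASK_TRIGGER_STAT_*.
#include <stdatomic.h>
#include <stdint.h>

enum { STAT_INIT = 0, STAT_ACTIVE = 1, STAT_INACTIVE = 2, STAT_CANCELLED = 3, STAT_FINISHED = 4 };

static _Atomic int8_t gTriggerStat = STAT_INIT;

// Arm the persist timer exactly once, when the first rollup table is registered.
static int armPersistTimer(void) {
  int8_t expected = STAT_INIT;
  return atomic_compare_exchange_strong(&gTriggerStat, &expected, STAT_ACTIVE) ? 1 : 0;
}

// Timer callback: claim one persistence round, or acknowledge a pending shutdown.
static void onPersistTimer(void) {
  int8_t expected = STAT_ACTIVE;
  if (atomic_compare_exchange_strong(&gTriggerStat, &expected, STAT_INACTIVE)) {
    // ... run the persistence task, then CAS INACTIVE -> ACTIVE to re-arm ...
  } else if (expected == STAT_CANCELLED) {
    atomic_store(&gTriggerStat, STAT_FINISHED);  // lets the destroy path stop waiting
  }
}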
@@ -167,13 +176,18 @@ static FORCE_INLINE void tdSmaStatSetDropped(STSmaStat *pTStat) {

static int32_t tdDestroySmaState(SSmaStat *pSmaStat, int8_t smaType);
void *tdFreeSmaState(SSmaStat *pSmaStat, int8_t smaType);

void *tdFreeRSmaInfo(SRSmaInfo *pInfo);

int32_t tdProcessRSmaCreateImpl(SSma *pSma, SRSmaParam *param, int64_t suid, const char *tbName);
int32_t tdProcessRSmaRestoreImpl(SSma *pSma);
int32_t tdProcessTSmaCreateImpl(SSma *pSma, int64_t version, const char *pMsg);
int32_t tdProcessTSmaInsertImpl(SSma *pSma, int64_t indexUid, const char *msg);
int32_t tdProcessTSmaGetDaysImpl(SVnodeCfg *pCfg, void *pCont, uint32_t contLen, int32_t *days);

// smaFileUtil ================

#define TD_FILE_HEAD_SIZE 512

typedef struct STFInfo STFInfo;
typedef struct STFile STFile;

@@ -181,7 +195,7 @@ struct STFInfo {
uint32_t magic;
uint32_t ftype;
uint32_t fver;
uint64_t fsize;
int64_t fsize;
};

struct STFile {
@@ -199,11 +213,8 @@ struct STFile {
#define TD_FILE_OPENED(tf) (TD_FILE_PFILE(tf) != NULL)
#define TD_FILE_CLOSED(tf) (!TD_FILE_OPENED(tf))
#define TD_FILE_SET_CLOSED(f) (TD_FILE_PFILE(f) = NULL)
#define TD_FILE_STATE(tf) ((tf)->state)
#define TD_FILE_SET_STATE(tf, s) ((tf)->state = (s))
#define TD_FILE_DID(tf) (TD_FILE_F(tf)->did)
#define TD_FILE_IS_OK(tf) (TD_FILE_STATE(tf) == TD_FILE_STATE_OK)
#define TD_FILE_IS_BAD(tf) (TD_FILE_STATE(tf) == TD_FILE_STATE_BAD)

int32_t tdInitTFile(STFile *pTFile, STfs *pTfs, const char *fname);
int32_t tdCreateTFile(STFile *pTFile, STfs *pTfs, bool updateHeader, int8_t fType);
@@ -212,11 +223,13 @@ int64_t tdReadTFile(STFile *pTFile, void *buf, int64_t nbyte);
int64_t tdSeekTFile(STFile *pTFile, int64_t offset, int whence);
int64_t tdWriteTFile(STFile *pTFile, void *buf, int64_t nbyte);
int64_t tdAppendTFile(STFile *pTFile, void *buf, int64_t nbyte, int64_t *offset);
int64_t tdGetTFileSize(STFile *pTFile, int64_t *size);
int32_t tdRemoveTFile(STFile *pTFile);
int32_t tdLoadTFileHeader(STFile *pTFile, STFInfo *pInfo);
int32_t tdUpdateTFileHeader(STFile *pTFile);
void tdUpdateTFileMagic(STFile *pTFile, void *pCksm);
void tdCloseTFile(STFile *pTFile);

void tdGetVndFileName(int32_t vid, const char *dname, const char *fname, char *outputName);

#ifdef __cplusplus

@@ -76,6 +76,7 @@ void vnodeFree(void* p);

// meta
typedef struct SMCtbCursor SMCtbCursor;
typedef struct SMStbCursor SMStbCursor;
typedef struct STbUidStore STbUidStore;

int metaOpen(SVnode* pVnode, SMeta** ppMeta);
@@ -97,6 +98,9 @@ int metaGetTbNum(SMeta* pMeta);
SMCtbCursor* metaOpenCtbCursor(SMeta* pMeta, tb_uid_t uid);
void metaCloseCtbCursor(SMCtbCursor* pCtbCur);
tb_uid_t metaCtbCursorNext(SMCtbCursor* pCtbCur);
SMStbCursor* metaOpenStbCursor(SMeta* pMeta, tb_uid_t uid);
void metaCloseStbCursor(SMStbCursor* pStbCur);
tb_uid_t metaStbCursorNext(SMStbCursor* pStbCur);
STSma* metaGetSmaInfoByIndex(SMeta* pMeta, int64_t indexUid);
STSmaWrapper* metaGetSmaInfoByTable(SMeta* pMeta, tb_uid_t uid, bool deepCopy);
SArray* metaGetSmaIdsByTable(SMeta* pMeta, tb_uid_t uid);
@@ -158,6 +162,8 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pSchema, bool
// sma
int32_t smaOpen(SVnode* pVnode);
int32_t smaClose(SSma* pSma);
int32_t smaCloseEnv(SSma* pSma);
int32_t smaCloseEx(SSma* pSma);

int32_t tdProcessTSmaCreate(SSma* pSma, int64_t version, const char* msg);
int32_t tdProcessTSmaInsert(SSma* pSma, int64_t indexUid, const char* msg);

@@ -24,22 +24,25 @@ int metaEncodeEntry(SEncoder *pCoder, const SMetaEntry *pME) {
if (tEncodeCStr(pCoder, pME->name) < 0) return -1;

if (pME->type == TSDB_SUPER_TABLE) {
if (tEncodeI8(pCoder, pME->flags) < 0) return -1; // TODO: need refactor?
if (tEncodeSSchemaWrapper(pCoder, &pME->stbEntry.schemaRow) < 0) return -1;
if (tEncodeSSchemaWrapper(pCoder, &pME->stbEntry.schemaTag) < 0) return -1;
if (TABLE_IS_ROLLUP(pME->flags)) {
if (tEncodeSRSmaParam(pCoder, &pME->stbEntry.rsmaParam) < 0) return -1;
}
} else if (pME->type == TSDB_CHILD_TABLE) {
if (tEncodeI64(pCoder, pME->ctbEntry.ctime) < 0) return -1;
if (tEncodeI32(pCoder, pME->ctbEntry.ttlDays) < 0) return -1;
if (tEncodeI32(pCoder, pME->ctbEntry.commentLen) < 0) return -1;
if (tEncodeI32v(pCoder, pME->ctbEntry.commentLen) < 0) return -1;
if (pME->ctbEntry.commentLen > 0){
if (tEncodeCStr(pCoder, pME->ctbEntry.comment) < 0) return -1;
}
if (tEncodeI64(pCoder, pME->ctbEntry.suid) < 0) return -1;
debugCheckTags((STag*)pME->ctbEntry.pTags); // TODO: remove after debug
if (tEncodeTag(pCoder, (const STag *)pME->ctbEntry.pTags) < 0) return -1;
} else if (pME->type == TSDB_NORMAL_TABLE) {
if (tEncodeI64(pCoder, pME->ntbEntry.ctime) < 0) return -1;
if (tEncodeI32(pCoder, pME->ntbEntry.ttlDays) < 0) return -1;
if (tEncodeI32(pCoder, pME->ntbEntry.commentLen) < 0) return -1;
if (tEncodeI32v(pCoder, pME->ntbEntry.commentLen) < 0) return -1;
if (pME->ntbEntry.commentLen > 0){
if (tEncodeCStr(pCoder, pME->ntbEntry.comment) < 0) return -1;
}
@@ -64,23 +67,26 @@ int metaDecodeEntry(SDecoder *pCoder, SMetaEntry *pME) {
if (tDecodeCStr(pCoder, &pME->name) < 0) return -1;

if (pME->type == TSDB_SUPER_TABLE) {
if (tDecodeI8(pCoder, &pME->flags) < 0) return -1; // TODO: need refactor?
if (tDecodeSSchemaWrapperEx(pCoder, &pME->stbEntry.schemaRow) < 0) return -1;
if (tDecodeSSchemaWrapperEx(pCoder, &pME->stbEntry.schemaTag) < 0) return -1;
if (TABLE_IS_ROLLUP(pME->flags)) {
if (tDecodeSRSmaParam(pCoder, &pME->stbEntry.rsmaParam) < 0) return -1;
}
} else if (pME->type == TSDB_CHILD_TABLE) {
if (tDecodeI64(pCoder, &pME->ctbEntry.ctime) < 0) return -1;
if (tDecodeI32(pCoder, &pME->ctbEntry.ttlDays) < 0) return -1;
if (tDecodeI32(pCoder, &pME->ctbEntry.commentLen) < 0) return -1;
if (tDecodeI32v(pCoder, &pME->ctbEntry.commentLen) < 0) return -1;
if (pME->ctbEntry.commentLen > 0){
if (tDecodeCStr(pCoder, &pME->ctbEntry.comment) < 0)
return -1;
}
if (tDecodeI64(pCoder, &pME->ctbEntry.suid) < 0) return -1;
if (tDecodeTag(pCoder, (STag **)&pME->ctbEntry.pTags) < 0) return -1; // (TODO)
debugCheckTags((STag*)pME->ctbEntry.pTags); // TODO: remove after debug
} else if (pME->type == TSDB_NORMAL_TABLE) {
if (tDecodeI64(pCoder, &pME->ntbEntry.ctime) < 0) return -1;
if (tDecodeI32(pCoder, &pME->ntbEntry.ttlDays) < 0) return -1;
if (tDecodeI32(pCoder, &pME->ntbEntry.commentLen) < 0) return -1;
if (tDecodeI32v(pCoder, &pME->ntbEntry.commentLen) < 0) return -1;
if (pME->ntbEntry.commentLen > 0){
if (tDecodeCStr(pCoder, &pME->ntbEntry.comment) < 0) return -1;
}

@@ -92,6 +92,13 @@ int metaOpen(SVnode *pVnode, SMeta **ppMeta) {
goto _err;
}

// open pSuidIdx
ret = tdbTbOpen("suid.idx", sizeof(tb_uid_t), 0, uidIdxKeyCmpr, pMeta->pEnv, &pMeta->pSuidIdx);
if (ret < 0) {
metaError("vgId:%d, failed to open meta super table index since %s", TD_VID(pVnode), tstrerror(terrno));
goto _err;
}

// open pTagIdx
// TODO(yihaoDeng), refactor later
char indexFullPath[128] = {0};
@@ -141,6 +148,7 @@ _err:
if (pMeta->pTagIvtIdx) indexClose(pMeta->pTagIvtIdx);
if (pMeta->pTagIdx) tdbTbClose(pMeta->pTagIdx);
if (pMeta->pCtbIdx) tdbTbClose(pMeta->pCtbIdx);
if (pMeta->pSuidIdx) tdbTbClose(pMeta->pSuidIdx);
if (pMeta->pNameIdx) tdbTbClose(pMeta->pNameIdx);
if (pMeta->pUidIdx) tdbTbClose(pMeta->pUidIdx);
if (pMeta->pSkmDb) tdbTbClose(pMeta->pSkmDb);
@@ -162,6 +170,7 @@ int metaClose(SMeta *pMeta) {
if (pMeta->pTagIdx) tdbTbClose(pMeta->pTagIdx);
#endif
if (pMeta->pCtbIdx) tdbTbClose(pMeta->pCtbIdx);
if (pMeta->pSuidIdx) tdbTbClose(pMeta->pSuidIdx);
if (pMeta->pNameIdx) tdbTbClose(pMeta->pNameIdx);
if (pMeta->pUidIdx) tdbTbClose(pMeta->pUidIdx);
if (pMeta->pSkmDb) tdbTbClose(pMeta->pSkmDb);

@@ -325,6 +325,72 @@ tb_uid_t metaCtbCursorNext(SMCtbCursor *pCtbCur) {
return pCtbIdxKey->uid;
}

struct SMStbCursor {
SMeta *pMeta;
TBC *pCur;
tb_uid_t suid;
void *pKey;
void *pVal;
int kLen;
int vLen;
};

SMStbCursor *metaOpenStbCursor(SMeta *pMeta, tb_uid_t suid) {
SMStbCursor *pStbCur = NULL;
int ret = 0;
int c = 0;

pStbCur = (SMStbCursor *)taosMemoryCalloc(1, sizeof(*pStbCur));
if (pStbCur == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return NULL;
}

pStbCur->pMeta = pMeta;
pStbCur->suid = suid;
metaRLock(pMeta);

ret = tdbTbcOpen(pMeta->pSuidIdx, &pStbCur->pCur, NULL);
if (ret < 0) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
metaULock(pMeta);
taosMemoryFree(pStbCur);
return NULL;
}

// move to the suid
tdbTbcMoveTo(pStbCur->pCur, &suid, sizeof(suid), &c);
if (c > 0) {
tdbTbcMoveToNext(pStbCur->pCur);
}

return pStbCur;
}

void metaCloseStbCursor(SMStbCursor *pStbCur) {
if (pStbCur) {
if (pStbCur->pMeta) metaULock(pStbCur->pMeta);
if (pStbCur->pCur) {
tdbTbcClose(pStbCur->pCur);

tdbFree(pStbCur->pKey);
tdbFree(pStbCur->pVal);
}

taosMemoryFree(pStbCur);
}
}

tb_uid_t metaStbCursorNext(SMStbCursor *pStbCur) {
int ret;

ret = tdbTbcNext(pStbCur->pCur, &pStbCur->pKey, &pStbCur->kLen, &pStbCur->pVal, &pStbCur->vLen);
if (ret < 0) {
return 0;
}
return *(tb_uid_t*)pStbCur->pKey;
}

STSchema *metaGetTbTSchema(SMeta *pMeta, tb_uid_t uid, int32_t sver) {
// SMetaReader mr = {0};
STSchema * pTSchema = NULL;

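A minimal usage sketch for the new super-table cursor added above (not part of this commit; the helper name walkAllStbUids is hypothetical, while the meta API is the one declared in this change):

// Sketch: enumerate every super-table uid through the pSuidIdx-backed cursor.
// metaStbCursorNext() returns 0 once the cursor is exhausted.
static void walkAllStbUids(SMeta *pMeta) {
  SMStbCursor *pStbCur = metaOpenStbCursor(pMeta, 0);  // start from the smallest suid
  if (pStbCur == NULL) return;                         // terrno is set by the callee

  tb_uid_t suid;
  while ((suid = metaStbCursorNext(pStbCur)) != 0) {
    // ... use suid, e.g. collect it into an SArray as tsdbGetStbIdList() would ...
  }

  metaCloseStbCursor(pStbCur);  // releases the read lock taken in metaOpenStbCursor()
}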
@@ -23,6 +23,7 @@ static int metaUpdateNameIdx(SMeta *pMeta, const SMetaEntry *pME);
static int metaUpdateTtlIdx(SMeta *pMeta, const SMetaEntry *pME);
static int metaSaveToSkmDb(SMeta *pMeta, const SMetaEntry *pME);
static int metaUpdateCtbIdx(SMeta *pMeta, const SMetaEntry *pME);
static int metaUpdateSuidIdx(SMeta *pMeta, const SMetaEntry *pME);
static int metaUpdateTagIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry);
static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type);

@@ -138,6 +139,10 @@ int metaCreateSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
me.name = pReq->name;
me.stbEntry.schemaRow = pReq->schemaRow;
me.stbEntry.schemaTag = pReq->schemaTag;
if (pReq->rollup) {
TABLE_SET_ROLLUP(me.flags);
me.stbEntry.rsmaParam = pReq->rsmaParam;
}

if (metaHandleEntry(pMeta, &me) < 0) goto _err;

@@ -209,6 +214,7 @@ _drop_super_table:
&pMeta->txn);
tdbTbDelete(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pMeta->txn);
tdbTbDelete(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), &pMeta->txn);
tdbTbDelete(pMeta->pSuidIdx, &pReq->suid, sizeof(tb_uid_t), &pMeta->txn);

metaULock(pMeta);

@@ -436,11 +442,13 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type) {
tdbTbDelete(pMeta->pUidIdx, &uid, sizeof(uid), &pMeta->txn);
if(e.type != TSDB_SUPER_TABLE) metaDeleteTtlIdx(pMeta, &e);

if (e.type == TSDB_CHILD_TABLE) {
tdbTbDelete(pMeta->pCtbIdx, &(SCtbIdxKey){.suid = e.ctbEntry.suid, .uid = uid}, sizeof(SCtbIdxKey), &pMeta->txn);
} else if (e.type == TSDB_NORMAL_TABLE) {
// drop schema.db (todo)
} else if (e.type == TSDB_SUPER_TABLE) {
tdbTbDelete(pMeta->pSuidIdx, &e.uid, sizeof(tb_uid_t), &pMeta->txn);
// drop schema.db (todo)
}

@@ -911,6 +919,10 @@ static int metaUpdateUidIdx(SMeta *pMeta, const SMetaEntry *pME) {
return tdbTbInsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &pME->version, sizeof(int64_t), &pMeta->txn);
}

static int metaUpdateSuidIdx(SMeta *pMeta, const SMetaEntry *pME) {
return tdbTbInsert(pMeta->pSuidIdx, &pME->uid, sizeof(tb_uid_t), NULL, 0, &pMeta->txn);
}

static int metaUpdateNameIdx(SMeta *pMeta, const SMetaEntry *pME) {
return tdbTbInsert(pMeta->pNameIdx, pME->name, strlen(pME->name) + 1, &pME->uid, sizeof(tb_uid_t), &pMeta->txn);
}
@@ -1081,6 +1093,10 @@ static int metaHandleEntry(SMeta *pMeta, const SMetaEntry *pME) {
} else {
// update schema.db
if (metaSaveToSkmDb(pMeta, pME) < 0) goto _err;

if (pME->type == TSDB_SUPER_TABLE) {
if (metaUpdateSuidIdx(pMeta, pME) < 0) goto _err;
}
}

if (pME->type != TSDB_SUPER_TABLE) {

@@ -126,22 +126,21 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS
}

if (smaType == TSDB_SMA_TYPE_ROLLUP) {
SMA_RSMA_STAT(*pSmaStat)->pSma = (SSma*)pSma;
SRSmaStat *pRSmaStat = (SRSmaStat *)(*pSmaStat);
pRSmaStat->pSma = (SSma *)pSma;
// init timer
SMA_RSMA_TMR_HANDLE(*pSmaStat) = taosTmrInit(10000, 100, 10000, "RSMA_G");
if (!SMA_RSMA_TMR_HANDLE(*pSmaStat)) {
RSMA_TMR_HANDLE(pRSmaStat) = taosTmrInit(10000, 100, 10000, "RSMA");
if (!RSMA_TMR_HANDLE(pRSmaStat)) {
taosMemoryFreeClear(*pSmaStat);
return TSDB_CODE_FAILED;
}

atomic_store_8(&SMA_RSMA_TMR_STAT(*pSmaStat), TASK_TRIGGER_STATUS__ACTIVE);

// init hash
SMA_RSMA_INFO_HASH(*pSmaStat) = taosHashInit(
RSMA_INFO_HASH(pRSmaStat) = taosHashInit(
RSMA_TASK_INFO_HASH_SLOT, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_ENTRY_LOCK);
if (!SMA_RSMA_INFO_HASH(*pSmaStat)) {
if (SMA_RSMA_TMR_HANDLE(*pSmaStat)) {
taosTmrCleanUp(SMA_RSMA_TMR_HANDLE(*pSmaStat));
if (!RSMA_INFO_HASH(pRSmaStat)) {
if (RSMA_TMR_HANDLE(pRSmaStat)) {
taosTmrCleanUp(RSMA_TMR_HANDLE(pRSmaStat));
}
taosMemoryFreeClear(*pSmaStat);
return TSDB_CODE_FAILED;
@@ -155,12 +154,80 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS
return TSDB_CODE_SUCCESS;
}

static void *tdFreeTSmaStat(STSmaStat *pStat) {
static void tdDestroyTSmaStat(STSmaStat *pStat) {
if (pStat) {
smaDebug("destroy tsma stat");
tDestroyTSma(pStat->pTSma);
taosMemoryFreeClear(pStat->pTSma);
taosMemoryFreeClear(pStat);
taosMemoryFreeClear(pStat->pTSchema);
}
}

static void *tdFreeTSmaStat(STSmaStat *pStat) {
tdDestroyTSmaStat(pStat);
taosMemoryFreeClear(pStat);
return NULL;
}

static void tdDestroyRSmaStat(SRSmaStat *pStat) {
if (pStat) {
smaDebug("vgId:%d destroy rsma stat", SMA_VID(pStat->pSma));
// step 1: set persistence task cancelled
atomic_store_8(RSMA_TRIGGER_STAT(pStat), TASK_TRIGGER_STAT_CANCELLED);

// step 2: stop the persistence timer
taosTmrStopA(&RSMA_TMR_ID(pStat));

// step 3: wait the persistence thread to finish
int32_t nLoops = 0;
if (atomic_load_8(RSMA_RUNNING_STAT(pStat)) == 1) {
while (1) {
if (atomic_load_8(RSMA_TRIGGER_STAT(pStat)) == TASK_TRIGGER_STAT_FINISHED) {
break;
} else {
smaDebug("not destroyed since rsma stat in %" PRIi8, atomic_load_8(RSMA_TRIGGER_STAT(pStat)));
}
++nLoops;
if (nLoops > 1000) {
sched_yield();
nLoops = 0;
}
}
}

// step 4: destroy the rsma info and associated fetch tasks
// TODO: use taosHashSetFreeFp when taosHashSetFreeFp is ready.
void *infoHash = taosHashIterate(RSMA_INFO_HASH(pStat), NULL);
while (infoHash) {
SRSmaInfo *pSmaInfo = *(SRSmaInfo **)infoHash;
tdFreeRSmaInfo(pSmaInfo);
infoHash = taosHashIterate(RSMA_INFO_HASH(pStat), infoHash);
}
taosHashCleanup(RSMA_INFO_HASH(pStat));

// step 5: wait all triggered fetch tasks finished
nLoops = 0;
while (1) {
if (T_REF_VAL_GET((SSmaStat *)pStat) == 0) {
break;
}
++nLoops;
if (nLoops > 1000) {
sched_yield();
nLoops = 0;
}
}

// step 6: cleanup the timer handle
if (RSMA_TMR_HANDLE(pStat)) {
taosTmrCleanUp(RSMA_TMR_HANDLE(pStat));
}
}
}

static void *tdFreeRSmaStat(SRSmaStat *pStat) {
tdDestroyRSmaStat(pStat);
taosMemoryFreeClear(pStat);
return NULL;
}

@@ -179,19 +246,9 @@ void *tdFreeSmaState(SSmaStat *pSmaStat, int8_t smaType) {
int32_t tdDestroySmaState(SSmaStat *pSmaStat, int8_t smaType) {
if (pSmaStat) {
if (smaType == TSDB_SMA_TYPE_TIME_RANGE) {
tdFreeTSmaStat(&pSmaStat->tsmaStat);
tdDestroyTSmaStat(SMA_TSMA_STAT(pSmaStat));
} else if (smaType == TSDB_SMA_TYPE_ROLLUP) {
if (SMA_RSMA_TMR_HANDLE(pSmaStat)) {
taosTmrCleanUp(SMA_RSMA_TMR_HANDLE(pSmaStat));
}
// TODO: use taosHashSetFreeFp when taosHashSetFreeFp is ready.
void *infoHash = taosHashIterate(SMA_RSMA_INFO_HASH(pSmaStat), NULL);
while (infoHash) {
SRSmaInfo *pInfoHash = *(SRSmaInfo **)infoHash;
tdFreeRSmaInfo(pInfoHash);
infoHash = taosHashIterate(SMA_RSMA_INFO_HASH(pSmaStat), infoHash);
}
taosHashCleanup(SMA_RSMA_INFO_HASH(pSmaStat));
tdDestroyRSmaStat(SMA_RSMA_STAT(pSmaStat));
} else {
ASSERT(0);
}
@@ -260,34 +317,3 @@ int32_t tdCheckAndInitSmaEnv(SSma *pSma, int8_t smaType) {

return TSDB_CODE_SUCCESS;
};

int32_t smaTimerInit(void **timer, int8_t *initFlag, const char *label) {
int8_t old;
while (1) {
old = atomic_val_compare_exchange_8(initFlag, 0, 2);
if (old != 2) break;
}

if (old == 0) {
*timer = taosTmrInit(10000, 100, 10000, label);
if (!(*timer)) {
atomic_store_8(initFlag, 0);
return -1;
}
atomic_store_8(initFlag, 1);
}
return 0;
}

void smaTimerCleanUp(void *timer, int8_t *initFlag) {
int8_t old;
while (1) {
old = atomic_val_compare_exchange_8(initFlag, 1, 2);
if (old != 2) break;
}

if (old == 1) {
taosTmrCleanUp(timer);
atomic_store_8(initFlag, 0);
}
}

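smaTimerInit() and smaTimerCleanUp() above guard a shared timer with a one-byte flag driven by compare-and-swap: 0 means not initialized, 1 means initialized, and 2 means another thread is mid-transition. A standalone sketch of the same guard follows; it is illustrative only, with C11 stdatomic and malloc standing in for the atomic_* wrappers and taosTmrInit, and timerInitOnce is a hypothetical name.

// Sketch of the CAS-guarded one-time init used by smaTimerInit().
#include <stdatomic.h>
#include <stdlib.h>

static _Atomic char gInitFlag = 0;  // 0: uninit, 1: initialized, 2: transition in progress
static void        *gTimer    = NULL;

int timerInitOnce(void) {
  char old = 0;
  while (!atomic_compare_exchange_strong(&gInitFlag, &old, 2)) {
    if (old != 2) break;  // already initialized (flag is 1): nothing to do
    old = 0;              // another thread is mid-transition: spin and retry
  }
  if (old == 0) {         // this thread won the 0 -> 2 transition and must initialize
    gTimer = malloc(64);  // stand-in for taosTmrInit(10000, 100, 10000, label)
    if (gTimer == NULL) {
      atomic_store(&gInitFlag, 0);  // roll back so a later caller can retry
      return -1;
    }
    atomic_store(&gInitFlag, 1);    // publish the initialized state
  }
  return 0;
}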
@@ -18,6 +18,7 @@

static int32_t smaEvalDays(SRetention *r, int8_t precision);
static int32_t smaSetKeepCfg(STsdbKeepCfg *pKeepCfg, STsdbCfg *pCfg, int type);
static int32_t rsmaRestore(SSma *pSma);

#define SMA_SET_KEEP_CFG(l) \
do { \
@@ -100,6 +101,9 @@ int32_t smaOpen(SVnode *pVnode) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return -1;
}

pVnode->pSma = pSma;

pSma->pVnode = pVnode;
taosThreadMutexInit(&pSma->mutex, NULL);
pSma->locked = false;
@@ -117,37 +121,53 @@ int32_t smaOpen(SVnode *pVnode) {
ASSERT(0);
}
}

// restore the rsma
#if 0
if (rsmaRestore(pSma) < 0) {
goto _err;
}
#endif
}

pVnode->pSma = pSma;
return 0;
_err:
taosMemoryFreeClear(pSma);
return -1;
}

int32_t smaClose(SSma *pSma) {
int32_t smaCloseEnv(SSma *pSma) {
if (pSma) {
SMA_TSMA_ENV(pSma) = tdFreeSmaEnv(SMA_TSMA_ENV(pSma));
SMA_RSMA_ENV(pSma) = tdFreeSmaEnv(SMA_RSMA_ENV(pSma));
}
return 0;
}

int32_t smaCloseEx(SSma *pSma) {
if (pSma) {
taosThreadMutexDestroy(&pSma->mutex);
if SMA_RSMA_TSDB0 (pSma) tsdbClose(&SMA_RSMA_TSDB0(pSma));
if SMA_RSMA_TSDB1 (pSma) tsdbClose(&SMA_RSMA_TSDB1(pSma));
if SMA_RSMA_TSDB2 (pSma) tsdbClose(&SMA_RSMA_TSDB2(pSma));
// SMA_TSMA_ENV(pSma) = tdFreeSmaEnv(SMA_TSMA_ENV(pSma));
// SMA_RSMA_ENV(pSma) = tdFreeSmaEnv(SMA_RSMA_ENV(pSma));
taosMemoryFreeClear(pSma);
}
return 0;
}

int32_t smaClose(SSma *pSma) {
smaCloseEnv(pSma);
smaCloseEx(pSma);
return 0;
}

/**
* @brief rsma env restore
*
* @param pSma
* @return int32_t
*/
int32_t smaRestore(SSma *pSma) {
if (!pSma) return 0;
// iterate all stables to restore the rsma env
static int32_t rsmaRestore(SSma *pSma) {
ASSERT(VND_IS_RSMA(pSma->pVnode));

return TSDB_CODE_SUCCESS;
return tdProcessRSmaRestoreImpl(pSma);
}

@@ -14,11 +14,15 @@
*/

#include "sma.h"
#include "tstream.h"

#define RSMA_QTASKINFO_PERSIST_MS 7200000
#define RSMA_QTASKINFO_BUFSIZE 32768
typedef enum { TD_QTASK_TMP_FILE = 0, TD_QTASK_CUR_FILE } TD_QTASK_FILE_T;
static const char *tdQTaskInfoFname[] = {"qtaskinfo.t", "qtaskinfo"};

typedef struct SRSmaQTaskInfoItem SRSmaQTaskInfoItem;
typedef struct SRSmaQTaskFIter SRSmaQTaskFIter;

static int32_t tdUidStorePut(STbUidStore *pStore, tb_uid_t suid, tb_uid_t *uid);
static int32_t tdUpdateTbUidListImpl(SSma *pSma, tb_uid_t *suid, SArray *tbUids);
static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaInfo *pRSmaInfo, SReadHandle *handle,
@@ -27,17 +31,24 @@ static int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int32_t inputType
tb_uid_t suid, int8_t level);
static void tdRSmaFetchTrigger(void *param, void *tmrId);
static void tdRSmaPersistTrigger(void *param, void *tmrId);
static void *tdRSmaPersistExec(void *param);
static void tdRSmaQTaskGetFName(int32_t vid, int8_t ftype, char *outputName);

static int32_t tdRSmaQTaskInfoIterInit(SRSmaQTaskFIter *pIter, STFile *pTFile);
static int32_t tdRSmaQTaskInfoIterNextBlock(SRSmaQTaskFIter *pIter, bool *isFinish);
static int32_t tdRSmaQTaskInfoIterNext(SRSmaQTaskFIter *pIter, SRSmaQTaskInfoItem *pItem, bool *isEnd);
static int32_t tdRSmaQTaskInfoItemRestore(SSma *pSma, const SRSmaQTaskInfoItem *infoItem);

struct SRSmaInfoItem {
SRSmaInfo *pRsmaInfo;
void *taskInfo; // qTaskInfo_t
void *tmrHandle;
tmr_h tmrId;
int8_t level;
int8_t tmrInitFlag;
int8_t triggerStatus; // TASK_TRIGGER_STATUS__IN_ACTIVE/TASK_TRIGGER_STATUS__ACTIVE
int8_t triggerStat;
int32_t maxDelay;
};

struct SRSmaInfo {
STSchema *pTSchema;
SSma *pSma;
@@ -45,11 +56,38 @@ struct SRSmaInfo {
SRSmaInfoItem items[TSDB_RETENTION_L2];
};

static FORCE_INLINE void tdFreeTaskHandle(qTaskInfo_t *taskHandle) {
struct SRSmaQTaskInfoItem {
int32_t len;
int8_t type;
int64_t suid;
void *qTaskInfo;
};

struct SRSmaQTaskFIter {
STFile *pTFile;
int64_t offset;
int64_t fsize;
int32_t nBytes;
int32_t nAlloc;
char *buf;
// ------------
int32_t nBufPos;
};

static FORCE_INLINE int32_t tdRSmaQTaskInfoContLen(int32_t lenWithHead) {
return lenWithHead - sizeof(int32_t) - sizeof(int8_t) - sizeof(int64_t);
}

static FORCE_INLINE void tdRSmaQTaskInfoIterDestroy(SRSmaQTaskFIter *pIter) { taosMemoryFreeClear(pIter->buf); }

static FORCE_INLINE void tdFreeTaskHandle(qTaskInfo_t *taskHandle, int32_t vgId, int32_t level) {
// Note: free/kill may in RC
qTaskInfo_t otaskHandle = atomic_load_ptr(taskHandle);
if (otaskHandle && atomic_val_compare_exchange_ptr(taskHandle, otaskHandle, NULL)) {
smaDebug("vgId:%d, free qTaskInfo_t %p of level %d", vgId, otaskHandle, level);
qDestroyTask(otaskHandle);
} else {
smaDebug("vgId:%d, not free qTaskInfo_t %p of level %d", vgId, otaskHandle, level);
}
}

@@ -58,14 +96,19 @@ void *tdFreeRSmaInfo(SRSmaInfo *pInfo) {
for (int32_t i = 0; i < TSDB_RETENTION_L2; ++i) {
SRSmaInfoItem *pItem = &pInfo->items[i];
if (pItem->taskInfo) {
tdFreeTaskHandle(pItem->taskInfo);
}
if (pItem->tmrHandle) {
taosTmrCleanUp(pItem->tmrHandle);
smaDebug("vgId:%d, stb %" PRIi64 " stop fetch-timer %p level %d", SMA_VID(pInfo->pSma), pInfo->suid,
pItem->tmrId, i + 1);
taosTmrStopA(&pItem->tmrId);
tdFreeTaskHandle(&pItem->taskInfo, SMA_VID(pInfo->pSma), i + 1);
} else {
smaDebug("vgId:%d, stb %" PRIi64 " no need to destroy rsma info level %d since empty taskInfo",
SMA_VID(pInfo->pSma), pInfo->suid, i + 1);
}
}
taosMemoryFree(pInfo->pTSchema);
taosMemoryFree(pInfo);
} else {
smaDebug("vgId:%d, stb %" PRIi64 " no need to destroy rsma info since empty", SMA_VID(pInfo->pSma), pInfo->suid);
}

return NULL;
@@ -81,9 +124,9 @@ static FORCE_INLINE int32_t tdUidStoreInit(STbUidStore **pStore) {
return TSDB_CODE_SUCCESS;
}

static FORCE_INLINE int32_t tdUpdateTbUidListImpl(SSma *pSma, tb_uid_t *suid, SArray *tbUids) {
static int32_t tdUpdateTbUidListImpl(SSma *pSma, tb_uid_t *suid, SArray *tbUids) {
SSmaEnv *pEnv = SMA_RSMA_ENV(pSma);
SSmaStat *pStat = SMA_ENV_STAT(pEnv);
SRSmaStat *pStat = (SRSmaStat *)SMA_ENV_STAT(pEnv);
SRSmaInfo *pRSmaInfo = NULL;

if (!suid || !tbUids) {
@@ -92,28 +135,32 @@ static FORCE_INLINE int32_t tdUpdateTbUidListImpl(SSma *pSma, tb_uid_t *suid, SA
return TSDB_CODE_FAILED;
}

pRSmaInfo = taosHashGet(SMA_RSMA_INFO_HASH(pStat), suid, sizeof(tb_uid_t));
pRSmaInfo = taosHashGet(RSMA_INFO_HASH(pStat), suid, sizeof(tb_uid_t));
if (!pRSmaInfo || !(pRSmaInfo = *(SRSmaInfo **)pRSmaInfo)) {
smaError("vgId:%d, failed to get rsma info for uid:%" PRIi64, SMA_VID(pSma), *suid);
terrno = TSDB_CODE_RSMA_INVALID_STAT;
return TSDB_CODE_FAILED;
}

if (pRSmaInfo->items[0].taskInfo && (qUpdateQualifiedTableId(pRSmaInfo->items[0].taskInfo, tbUids, true) < 0)) {
if (pRSmaInfo->items[0].taskInfo) {
if ((qUpdateQualifiedTableId(pRSmaInfo->items[0].taskInfo, tbUids, true) < 0)) {
smaError("vgId:%d, update tbUidList failed for uid:%" PRIi64 " since %s", SMA_VID(pSma), *suid, terrstr(terrno));
return TSDB_CODE_FAILED;
} else {
smaDebug("vgId:%d, update tbUidList succeed for qTaskInfo:%p with suid:%" PRIi64 ", uid:%" PRIi64, SMA_VID(pSma),
pRSmaInfo->items[0].taskInfo, *suid, *(int64_t *)taosArrayGet(tbUids, 0));
}
}

if (pRSmaInfo->items[1].taskInfo && (qUpdateQualifiedTableId(pRSmaInfo->items[1].taskInfo, tbUids, true) < 0)) {
if (pRSmaInfo->items[1].taskInfo) {
if ((qUpdateQualifiedTableId(pRSmaInfo->items[1].taskInfo, tbUids, true) < 0)) {
smaError("vgId:%d, update tbUidList failed for uid:%" PRIi64 " since %s", SMA_VID(pSma), *suid, terrstr(terrno));
return TSDB_CODE_FAILED;
} else {
smaDebug("vgId:%d, update tbUidList succeed for qTaskInfo:%p with suid:%" PRIi64 ", uid:%" PRIi64, SMA_VID(pSma),
pRSmaInfo->items[1].taskInfo, *suid, *(int64_t *)taosArrayGet(tbUids, 0));
}
}

return TSDB_CODE_SUCCESS;
}
@@ -159,9 +206,9 @@ int32_t tdFetchTbUidList(SSma *pSma, STbUidStore **ppStore, tb_uid_t suid, tb_ui
return TSDB_CODE_SUCCESS;
}

SSmaStat *pStat = SMA_ENV_STAT(pEnv);
SRSmaStat *pStat = (SRSmaStat *)SMA_ENV_STAT(pEnv);
SHashObj *infoHash = NULL;
if (!pStat || !(infoHash = SMA_RSMA_INFO_HASH(pStat))) {
if (!pStat || !(infoHash = RSMA_INFO_HASH(pStat))) {
terrno = TSDB_CODE_RSMA_INVALID_STAT;
return TSDB_CODE_FAILED;
}
@@ -195,11 +242,11 @@ static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaInfo
if (param->qmsg[idx]) {
SRSmaInfoItem *pItem = &(pRSmaInfo->items[idx]);
pItem->pRsmaInfo = pRSmaInfo;
pItem->taskInfo = qCreateStreamExecTaskInfo(param->qmsg[0], pReadHandle);
pItem->taskInfo = qCreateStreamExecTaskInfo(param->qmsg[idx], pReadHandle);
if (!pItem->taskInfo) {
goto _err;
}
pItem->triggerStatus = TASK_TRIGGER_STATUS__IN_ACTIVE;
pItem->triggerStat = TASK_TRIGGER_STAT_INACTIVE;
if (param->maxdelay[idx] < TSDB_MIN_ROLLUP_MAX_DELAY) {
int64_t msInterval =
convertTimeFromPrecisionToUnit(pRetention[idx + 1].freq, pTsdbCfg->precision, TIME_UNIT_MILLISECOND);
@@ -211,10 +258,6 @@ static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaInfo
pItem->maxDelay = TSDB_MAX_ROLLUP_MAX_DELAY;
}
pItem->level = (idx == 0 ? TSDB_RETENTION_L1 : TSDB_RETENTION_L2);
pItem->tmrHandle = taosTmrInit(10000, 100, 10000, "RSMA");
if (!pItem->tmrHandle) {
goto _err;
}
}
return TSDB_CODE_SUCCESS;
_err:
@@ -222,26 +265,21 @@ _err:
}

/**
* @brief Check and init qTaskInfo_t, only applicable to stable with SRSmaParam.
* @brief for rsam create or restore
*
* @param pTsdb
* @param pMeta
* @param pReq
* @param pSma
* @param param
* @param suid
* @param tbName
* @return int32_t
*/
int32_t tdProcessRSmaCreate(SVnode *pVnode, SVCreateStbReq *pReq) {
SSma *pSma = pVnode->pSma;
if (!pReq->rollup) {
smaTrace("vgId:%d, return directly since no rollup for stable %s %" PRIi64, SMA_VID(pSma), pReq->name, pReq->suid);
return TSDB_CODE_SUCCESS;
}

int32_t tdProcessRSmaCreateImpl(SSma *pSma, SRSmaParam *param, int64_t suid, const char *tbName) {
SVnode *pVnode = pSma->pVnode;
SMeta *pMeta = pVnode->pMeta;
SMsgCb *pMsgCb = &pVnode->msgCb;
SRSmaParam *param = &pReq->pRSmaParam;

if ((param->qmsgLen[0] == 0) && (param->qmsgLen[1] == 0)) {
smaWarn("vgId:%d, no qmsg1/qmsg2 for rollup stable %s %" PRIi64, SMA_VID(pSma), pReq->name, pReq->suid);
smaDebug("vgId:%d, no qmsg1/qmsg2 for rollup table %s %" PRIi64, SMA_VID(pSma), tbName, suid);
return TSDB_CODE_SUCCESS;
}

@@ -251,13 +289,13 @@ int32_t tdProcessRSmaCreate(SVnode *pVnode, SVCreateStbReq *pReq) {
}

SSmaEnv *pEnv = SMA_RSMA_ENV(pSma);
SSmaStat *pStat = SMA_ENV_STAT(pEnv);
SRSmaStat *pStat = (SRSmaStat *)SMA_ENV_STAT(pEnv);
SRSmaInfo *pRSmaInfo = NULL;

pRSmaInfo = taosHashGet(SMA_RSMA_INFO_HASH(pStat), &pReq->suid, sizeof(tb_uid_t));
pRSmaInfo = taosHashGet(RSMA_INFO_HASH(pStat), &suid, sizeof(tb_uid_t));
if (pRSmaInfo) {
ASSERT(0); // TODO: free original pRSmaInfo is exists abnormally
smaWarn("vgId:%d, rsma info already exists for stb: %s, %" PRIi64, SMA_VID(pSma), pReq->name, pReq->suid);
smaDebug("vgId:%d, rsma info already exists for table %s, %" PRIi64, SMA_VID(pSma), tbName, suid);
return TSDB_CODE_SUCCESS;
}

@@ -281,26 +319,33 @@ int32_t tdProcessRSmaCreate(SVnode *pVnode, SVCreateStbReq *pReq) {
.vnode = pVnode,
};

STSchema *pTSchema = metaGetTbTSchema(SMA_META(pSma), pReq->suid, -1);
STSchema *pTSchema = metaGetTbTSchema(SMA_META(pSma), suid, -1);
if (!pTSchema) {
terrno = TSDB_CODE_TDB_IVD_TB_SCHEMA_VERSION;
goto _err;
}
pRSmaInfo->pTSchema = pTSchema;
pRSmaInfo->pSma = pSma;
pRSmaInfo->suid = pReq->suid;
pRSmaInfo->suid = suid;

if (tdSetRSmaInfoItemParams(pSma, param, pRSmaInfo, &handle, 0) < 0) {
goto _err;
}

if (tdSetRSmaInfoItemParams(pSma, param, pRSmaInfo, &handle, 1) < 0) {
goto _err;
}

if (taosHashPut(SMA_RSMA_INFO_HASH(pStat), &pReq->suid, sizeof(tb_uid_t), &pRSmaInfo, sizeof(pRSmaInfo)) < 0) {
if (taosHashPut(RSMA_INFO_HASH(pStat), &suid, sizeof(tb_uid_t), &pRSmaInfo, sizeof(pRSmaInfo)) < 0) {
goto _err;
} else {
smaDebug("vgId:%d, register rsma info succeed for suid:%" PRIi64, SMA_VID(pSma), pReq->suid);
smaDebug("vgId:%d, register rsma info succeed for suid:%" PRIi64, SMA_VID(pSma), suid);
}

// start the persist timer
if (TASK_TRIGGER_STAT_INIT ==
atomic_val_compare_exchange_8(RSMA_TRIGGER_STAT(pStat), TASK_TRIGGER_STAT_INIT, TASK_TRIGGER_STAT_ACTIVE)) {
taosTmrStart(tdRSmaPersistTrigger, RSMA_QTASKINFO_PERSIST_MS, pStat, RSMA_TMR_HANDLE(pStat));
}

return TSDB_CODE_SUCCESS;
@@ -310,6 +355,24 @@ _err:
return TSDB_CODE_FAILED;
}

/**
* @brief Check and init qTaskInfo_t, only applicable to stable with SRSmaParam.
*
* @param pTsdb
* @param pMeta
* @param pReq
* @return int32_t
*/
int32_t tdProcessRSmaCreate(SVnode *pVnode, SVCreateStbReq *pReq) {
SSma *pSma = pVnode->pSma;
if (!pReq->rollup) {
smaTrace("vgId:%d, return directly since no rollup for stable %s %" PRIi64, SMA_VID(pSma), pReq->name, pReq->suid);
return TSDB_CODE_SUCCESS;
}

return tdProcessRSmaCreateImpl(pSma, &pReq->rsmaParam, pReq->suid, pReq->name);
}

/**
* @brief store suid/[uids], prefer to use array and then hash
*
@@ -432,6 +495,16 @@ static int32_t tdFetchSubmitReqSuids(SSubmitReq *pMsg, STbUidStore *pStore) {
return 0;
}

static void tdDestroySDataBlockArray(SArray *pArray) {
#if 0
for (int32_t i = 0; i < taosArrayGetSize(pArray); ++i) {
SSDataBlock *pDataBlock = taosArrayGet(pArray, i);
blockDestroyInner(pDataBlock);
}
#endif
taosArrayDestroy(pArray);
}

static int32_t tdFetchAndSubmitRSmaResult(SRSmaInfoItem *pItem, int8_t blkType) {
SArray *pResult = NULL;
SRSmaInfo *pRSmaInfo = pItem->pRsmaInfo;
@@ -466,6 +539,7 @@ static int32_t tdFetchAndSubmitRSmaResult(SRSmaInfoItem *pItem, int8_t blkType)
#endif
STsdb *sinkTsdb = (pItem->level == TSDB_RETENTION_L1 ? pSma->pRSmaTsdb1 : pSma->pRSmaTsdb2);
SSubmitReq *pReq = NULL;
// TODO: the schema update should be handled
if (buildSubmitReqFromDataBlock(&pReq, pResult, pRSmaInfo->pTSchema, SMA_VID(pSma), pRSmaInfo->suid) < 0) {
goto _err;
}
@@ -476,18 +550,16 @@ static int32_t tdFetchAndSubmitRSmaResult(SRSmaInfoItem *pItem, int8_t blkType)
}

taosMemoryFreeClear(pReq);
} else if (terrno == 0) {
smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched yet", SMA_VID(pSma), pItem->level);
} else {
smaDebug("vgId:%d, no rsma % " PRIi8 " data generated since %s", SMA_VID(pSma), pItem->level, tstrerror(terrno));
smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched since %s", SMA_VID(pSma), pItem->level, tstrerror(terrno));
}

if (blkType == STREAM_DATA_TYPE_SUBMIT_BLOCK) {
atomic_store_8(&pItem->triggerStatus, TASK_TRIGGER_STATUS__ACTIVE);
}

taosArrayDestroy(pResult);
tdDestroySDataBlockArray(pResult);
return TSDB_CODE_SUCCESS;
_err:
taosArrayDestroy(pResult);
tdDestroySDataBlockArray(pResult);
return TSDB_CODE_FAILED;
}

@@ -499,26 +571,38 @@ _err:
*/
static void tdRSmaFetchTrigger(void *param, void *tmrId) {
SRSmaInfoItem *pItem = param;
SSma *pSma = pItem->pRsmaInfo->pSma;
SRSmaStat *pStat = (SRSmaStat *)SMA_ENV_STAT((SSmaEnv *)pSma->pRSmaEnv);

if (atomic_load_8(&pItem->triggerStatus) == TASK_TRIGGER_STATUS__ACTIVE) {
smaWarn("%s:%d THREAD:%" PRIi64 " level %" PRIi8 " status is active for tb suid:%" PRIi64, __func__, __LINE__,
taosGetSelfPthreadId(), pItem->level, pItem->pRsmaInfo->suid);
SSDataBlock dataBlock = {.info.type = STREAM_GET_ALL};

atomic_store_8(&pItem->triggerStatus, TASK_TRIGGER_STATUS__IN_ACTIVE);
qSetStreamInput(pItem->taskInfo, &dataBlock, STREAM_DATA_TYPE_SSDATA_BLOCK, false);

tdFetchAndSubmitRSmaResult(pItem, STREAM_DATA_TYPE_SSDATA_BLOCK);
} else {
smaWarn("%s:%d THREAD:%" PRIi64 " level %" PRIi8 " status is inactive for tb suid:%" PRIi64, __func__, __LINE__,
taosGetSelfPthreadId(), pItem->level, pItem->pRsmaInfo->suid);
int8_t rsmaTriggerStat = atomic_load_8(RSMA_TRIGGER_STAT(pStat));
if (rsmaTriggerStat == TASK_TRIGGER_STAT_CANCELLED || rsmaTriggerStat == TASK_TRIGGER_STAT_FINISHED) {
smaDebug("vgId:%d, level %" PRIi8 " not fetch since stat is cancelled for table suid:%" PRIi64, SMA_VID(pSma),
pItem->level, pItem->pRsmaInfo->suid);
return;
}

// taosTmrReset(tdRSmaFetchTrigger, pItem->maxDelay, pItem, pItem->tmrHandle, &pItem->tmrId);
int8_t fetchTriggerStat =
atomic_val_compare_exchange_8(&pItem->triggerStat, TASK_TRIGGER_STAT_ACTIVE, TASK_TRIGGER_STAT_INACTIVE);
if (fetchTriggerStat == TASK_TRIGGER_STAT_ACTIVE) {
smaDebug("vgId:%d, level %" PRIi8 " stat is active for table suid:%" PRIi64, SMA_VID(pSma), pItem->level,
pItem->pRsmaInfo->suid);

tdRefSmaStat(pSma, (SSmaStat *)pStat);

SSDataBlock dataBlock = {.info.type = STREAM_GET_ALL};
qSetStreamInput(pItem->taskInfo, &dataBlock, STREAM_DATA_TYPE_SSDATA_BLOCK, false);
tdFetchAndSubmitRSmaResult(pItem, STREAM_DATA_TYPE_SSDATA_BLOCK);

tdUnRefSmaStat(pSma, (SSmaStat *)pStat);

} else {
smaDebug("vgId:%d, level %" PRIi8 " stat is inactive for table suid:%" PRIi64, SMA_VID(pSma), pItem->level,
pItem->pRsmaInfo->suid);
}
}

static FORCE_INLINE int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int32_t inputType, SRSmaInfoItem *pItem,
tb_uid_t suid, int8_t level) {
static int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int32_t inputType, SRSmaInfoItem *pItem, tb_uid_t suid,
int8_t level) {
if (!pItem || !pItem->taskInfo) {
smaDebug("vgId:%d, no qTaskInfo to execute rsma %" PRIi8 " task for suid:%" PRIu64, SMA_VID(pSma), level, suid);
return TSDB_CODE_SUCCESS;
@@ -533,14 +617,17 @@ static FORCE_INLINE int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int3
}

tdFetchAndSubmitRSmaResult(pItem, STREAM_DATA_TYPE_SUBMIT_BLOCK);
atomic_store_8(&pItem->triggerStatus, TASK_TRIGGER_STATUS__ACTIVE);
smaWarn("%s:%d THREAD:%" PRIi64 " process rsma insert", __func__, __LINE__, taosGetSelfPthreadId());
atomic_store_8(&pItem->triggerStat, TASK_TRIGGER_STAT_ACTIVE);
smaDebug("vgId:%d, process rsma insert", SMA_VID(pSma));

SSmaEnv *pEnv = SMA_RSMA_ENV(pSma);
SRSmaStat *pStat = SMA_RSMA_STAT(pEnv->pStat);

taosTmrStart(tdRSmaPersistTrigger, 5000, pStat, pStat->tmrHandle);
taosTmrReset(tdRSmaFetchTrigger, pItem->maxDelay, pItem, pItem->tmrHandle, &pItem->tmrId);
if (pStat->tmrHandle) {
taosTmrReset(tdRSmaFetchTrigger, pItem->maxDelay, pItem, pStat->tmrHandle, &pItem->tmrId);
} else {
ASSERT(0);
}

return TSDB_CODE_SUCCESS;
}
@@ -552,10 +639,10 @@ static int32_t tdExecuteRSma(SSma *pSma, const void *pMsg, int32_t inputType, tb
return TSDB_CODE_SUCCESS;
}

SSmaStat *pStat = SMA_ENV_STAT(pEnv);
SRSmaStat *pStat = (SRSmaStat *)SMA_ENV_STAT(pEnv);
SRSmaInfo *pRSmaInfo = NULL;

pRSmaInfo = taosHashGet(SMA_RSMA_INFO_HASH(pStat), &suid, sizeof(tb_uid_t));
pRSmaInfo = taosHashGet(RSMA_INFO_HASH(pStat), &suid, sizeof(tb_uid_t));

if (!pRSmaInfo || !(pRSmaInfo = *(SRSmaInfo **)pRSmaInfo)) {
smaDebug("vgId:%d, return as no rsma info for suid:%" PRIu64, SMA_VID(pSma), suid);
@@ -608,7 +695,7 @@ int32_t tdProcessRSmaSubmit(SSma *pSma, void *pMsg, int32_t inputType) {
return TSDB_CODE_SUCCESS;
}

void tdRSmaQTaskGetFName(int32_t vid, int8_t ftype, char* outputName) {
static void tdRSmaQTaskGetFName(int32_t vid, int8_t ftype, char *outputName) {
tdGetVndFileName(vid, "rsma", tdQTaskInfoFname[ftype], outputName);
}

@@ -618,6 +705,11 @@ static void *tdRSmaPersistExec(void *param) {
SSma *pSma = pRSmaStat->pSma;
STfs *pTfs = pSma->pVnode->pTfs;
int64_t toffset = 0;
bool isFileCreated = false;

if (TASK_TRIGGER_STAT_CANCELLED == atomic_load_8(RSMA_TRIGGER_STAT(pRSmaStat))) {
goto _end;
}

void *infoHash = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), NULL);
if (!infoHash) {
@@ -625,36 +717,82 @@ static void *tdRSmaPersistExec(void *param) {
}

STFile tFile = {0};
int32_t vid = 2;
int32_t vid = SMA_VID(pSma);

while (infoHash) {
SRSmaInfo *pRSmaInfo = *(SRSmaInfo **)infoHash;

#if 0
smaDebug("table %" PRIi64 " sleep 15s start ...", pRSmaInfo->items[0].pRsmaInfo->suid);
for (int32_t i = 15; i > 0; --i) {
taosSsleep(1);
smaDebug("table %" PRIi64 " countdown %d", pRSmaInfo->items[0].pRsmaInfo->suid, i);
}
smaDebug("table %" PRIi64 " sleep 15s end ...", pRSmaInfo->items[0].pRsmaInfo->suid);
#endif
for (int32_t i = 0; i < TSDB_RETENTION_L2; ++i) {
qTaskInfo_t taskInfo = pRSmaInfo->items[i].taskInfo;
if (!taskInfo) {
smaDebug("vgId:%d, table %" PRIi64 " level %d qTaskInfo is NULL", vid, pRSmaInfo->suid, i + 1);
continue;
}
char *pOutput = NULL;
int32_t len = 0;
int8_t type = (int8_t)(i + 1);
if (qSerializeTaskStatus(taskInfo, &pOutput, &len) < 0) {
smaError("vgId:%d, table %" PRIi64 " level %d serialize rsma task failed since %s", vid, pRSmaInfo->suid, i + 1,
terrstr(terrno));
goto _err;
} else {
if (!pOutput) {
smaDebug("vgId:%d, table %" PRIi64
" level %d serialize rsma task success but no output(len %d) and no need to persist",
vid, pRSmaInfo->suid, i + 1, len);
continue;
} else if (len <= 0) {
smaDebug("vgId:%d, table %" PRIi64 " level %d serialize rsma task success with len %d and no need to persist",
vid, pRSmaInfo->suid, i + 1, len);
taosMemoryFree(pOutput);
}
smaDebug("vgId:%d, table %" PRIi64 " level %d serialize rsma task success with len %d and need persist", vid,
pRSmaInfo->suid, i + 1, len);
#if 1
if (qDeserializeTaskStatus(taskInfo, pOutput, len) < 0) {
smaError("vgId:%d, table %" PRIi64 "level %d deserialize rsma task failed since %s", vid, pRSmaInfo->suid,
i + 1, terrstr(terrno));
} else {
smaDebug("vgId:%d, table %" PRIi64 " level %d deserialize rsma task success", vid, pRSmaInfo->suid, i + 1);
}
#endif
}

if (!isFileCreated) {
char qTaskInfoFName[TSDB_FILENAME_LEN];
tdRSmaQTaskGetFName(vid, TD_QTASK_TMP_FILE, qTaskInfoFName);
tdInitTFile(&tFile, pTfs, qTaskInfoFName);
tdCreateTFile(&tFile, pTfs, true, -1);

while (infoHash) {
SRSmaInfo *pRSmaInfo = *(SRSmaInfo **)infoHash;
char *pOutput = NULL;
int32_t len = 0;
if (qSerializeTaskStatus(pRSmaInfo->items[0].taskInfo, &pOutput, &len) < 0) {
smaError("serialize rsma task for table %" PRIi64 " failed since %s", pRSmaInfo->items[0].pRsmaInfo->suid,
terrstr(terrno));
} else {
smaWarn("serialize rsma task for table %" PRIi64 " success and len is %d", pRSmaInfo->items[0].pRsmaInfo->suid,
len);
isFileCreated = true;
}
len += (sizeof(len) + sizeof(type) + sizeof(pRSmaInfo->suid));
tdAppendTFile(&tFile, &len, sizeof(len), &toffset);
tdAppendTFile(&tFile, &type, sizeof(type), &toffset);
tdAppendTFile(&tFile, &pRSmaInfo->suid, sizeof(pRSmaInfo->suid), &toffset);
tdAppendTFile(&tFile, pOutput, len, &toffset);

taosMemoryFree(pOutput);
}
infoHash = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), infoHash);
}
_end:

_normal:
if (isFileCreated) {
if (tdUpdateTFileHeader(&tFile) < 0) {
smaError("vgId:%d, failed to update tfile %s header since %s", vid, TD_FILE_FULL_NAME(&tFile), tstrerror(terrno));
tdCloseTFile(&tFile);
tdRemoveTFile(&tFile);
return NULL;
goto _err;
} else {
smaDebug("vgId:%d, succeed to update tfile %s header", vid, TD_FILE_FULL_NAME(&tFile));
}

tdCloseTFile(&tFile);
@@ -663,29 +801,59 @@ _end:
strncpy(newFName, TD_FILE_FULL_NAME(&tFile), TSDB_FILENAME_LEN);
char *pos = strstr(newFName, tdQTaskInfoFname[TD_QTASK_TMP_FILE]);
strncpy(pos, tdQTaskInfoFname[TD_QTASK_CUR_FILE], TSDB_FILENAME_LEN - POINTER_DISTANCE(pos, newFName));
taosRenameFile(TD_FILE_FULL_NAME(&tFile), newFName);

atomic_store_8(&pRSmaStat->tmrStat, TASK_TRIGGER_STATUS__ACTIVE);
return NULL;
if (taosRenameFile(TD_FILE_FULL_NAME(&tFile), newFName) != 0) {
smaError("vgId:%d, failed to rename %s to %s", vid, TD_FILE_FULL_NAME(&tFile), newFName);
goto _err;
} else {
smaDebug("vgId:%d, succeed to rename %s to %s", vid, TD_FILE_FULL_NAME(&tFile), newFName);
}
}
goto _end;
_err:
atomic_store_8(&pRSmaStat->tmrStat, TASK_TRIGGER_STATUS__ACTIVE);
// remove the .tmp file
if (isFileCreated) {
tdRemoveTFile(&tFile);
}
_end:
if (TASK_TRIGGER_STAT_INACTIVE == atomic_val_compare_exchange_8(RSMA_TRIGGER_STAT(pRSmaStat),
TASK_TRIGGER_STAT_INACTIVE,
TASK_TRIGGER_STAT_ACTIVE)) {
smaDebug("vgId:%d, persist task is active again", vid);
} else if (TASK_TRIGGER_STAT_CANCELLED == atomic_val_compare_exchange_8(RSMA_TRIGGER_STAT(pRSmaStat),
TASK_TRIGGER_STAT_CANCELLED,
TASK_TRIGGER_STAT_FINISHED)) {
smaDebug("vgId:%d, persist task is cancelled", vid);
} else {
smaWarn("vgId:%d, persist task in abnormal stat %" PRIi8, vid, atomic_load_8(RSMA_TRIGGER_STAT(pRSmaStat)));
ASSERT(0);
}
atomic_store_8(RSMA_RUNNING_STAT(pRSmaStat), 0);
taosThreadExit(NULL);
return NULL;
}

static void tdRSmaPersistTask(SRSmaStat *pRSmaStat) {
smaWarn("%s:%d entry ", __func__, __LINE__);
TdThread threadId;
TdThreadAttr thAttr;
taosThreadAttrInit(&thAttr);
taosThreadAttrSetDetachState(&thAttr, PTHREAD_CREATE_DETACHED);
TdThread tid;

if (taosThreadCreate(&threadId, &thAttr, tdRSmaPersistExec, pRSmaStat) != 0) {
smaError("failed to create thread to persist rsma qTaskInfo since %s", strerror(errno));
if (taosThreadCreate(&tid, &thAttr, tdRSmaPersistExec, pRSmaStat) != 0) {
if (TASK_TRIGGER_STAT_INACTIVE == atomic_val_compare_exchange_8(RSMA_TRIGGER_STAT(pRSmaStat),
TASK_TRIGGER_STAT_INACTIVE,
TASK_TRIGGER_STAT_ACTIVE)) {
smaDebug("persist task is active again");
} else if (TASK_TRIGGER_STAT_CANCELLED == atomic_val_compare_exchange_8(RSMA_TRIGGER_STAT(pRSmaStat),
TASK_TRIGGER_STAT_CANCELLED,
TASK_TRIGGER_STAT_FINISHED)) {
smaDebug(" persist task is cancelled and set finished");
} else {
smaWarn("persist task in abnormal stat %" PRIi8, atomic_load_8(RSMA_TRIGGER_STAT(pRSmaStat)));
ASSERT(0);
}
atomic_store_8(RSMA_RUNNING_STAT(pRSmaStat), 0);
}

taosThreadAttrDestroy(&thAttr);
smaWarn("%s:%d end ", __func__, __LINE__);
}

/**
@ -696,17 +864,283 @@ static void tdRSmaPersistTask(SRSmaStat *pRSmaStat) {
|
|||
*/
|
||||
static void tdRSmaPersistTrigger(void *param, void *tmrId) {
|
||||
SRSmaStat *pRSmaStat = param;
|
||||
int8_t tmrStat =
|
||||
atomic_val_compare_exchange_8(RSMA_TRIGGER_STAT(pRSmaStat), TASK_TRIGGER_STAT_ACTIVE, TASK_TRIGGER_STAT_INACTIVE);
|
||||
switch (tmrStat) {
|
||||
case TASK_TRIGGER_STAT_ACTIVE: {
|
||||
atomic_store_8(RSMA_RUNNING_STAT(pRSmaStat), 1);
|
||||
if (TASK_TRIGGER_STAT_CANCELLED != atomic_val_compare_exchange_8(RSMA_TRIGGER_STAT(pRSmaStat),
|
||||
TASK_TRIGGER_STAT_CANCELLED,
|
||||
TASK_TRIGGER_STAT_FINISHED)) {
|
||||
smaDebug("rsma persistence start since active");
|
||||
|
||||
if (atomic_load_8(&pRSmaStat->tmrStat) == TASK_TRIGGER_STATUS__ACTIVE) {
|
||||
smaWarn("%s:%d THREAD:%" PRIi64 " rsma persistence start since active", __func__, __LINE__, taosGetSelfPthreadId());
|
||||
atomic_store_8(&pRSmaStat->tmrStat, TASK_TRIGGER_STATUS__IN_ACTIVE);
|
||||
|
||||
// execution
|
||||
// start persist task
|
||||
tdRSmaPersistTask(pRSmaStat);
|
||||
|
||||
taosTmrReset(tdRSmaPersistTrigger, RSMA_QTASKINFO_PERSIST_MS, pRSmaStat, pRSmaStat->tmrHandle,
|
||||
&pRSmaStat->tmrId);
|
||||
} else {
|
||||
smaWarn("%s:%d THREAD:%" PRIi64 " rsma persistence not start since inactive", __func__, __LINE__,
|
||||
taosGetSelfPthreadId());
|
||||
atomic_store_8(RSMA_RUNNING_STAT(pRSmaStat), 0);
|
||||
}
|
||||
} break;
|
||||
case TASK_TRIGGER_STAT_CANCELLED: {
|
||||
atomic_store_8(RSMA_TRIGGER_STAT(pRSmaStat), TASK_TRIGGER_STAT_FINISHED);
|
||||
smaDebug("rsma persistence not start since cancelled and finished");
|
||||
} break;
|
||||
case TASK_TRIGGER_STAT_INACTIVE: {
|
||||
smaDebug("rsma persistence not start since inactive");
|
||||
} break;
|
||||
case TASK_TRIGGER_STAT_INIT: {
|
||||
smaDebug("rsma persistence not start since init");
|
||||
} break;
|
||||
default: {
|
||||
smaWarn("rsma persistence not start since unknown stat %" PRIi8, tmrStat);
|
||||
ASSERT(0);
|
||||
} break;
|
||||
}
|
||||
}

int32_t tdProcessRSmaRestoreImpl(SSma *pSma) {
SVnode *pVnode = pSma->pVnode;

// step 1: iterate all stables to restore the rsma env

SArray *suidList = taosArrayInit(1, sizeof(tb_uid_t));
if (tsdbGetStbIdList(SMA_META(pSma), 0, suidList) < 0) {
smaError("vgId:%d, failed to restore rsma since get stb id list error: %s", TD_VID(pVnode), terrstr());
return TSDB_CODE_FAILED;
}

taosTmrReset(tdRSmaPersistTrigger, 3600000, pRSmaStat, pRSmaStat->tmrHandle, &pRSmaStat->tmrId);
if (taosArrayGetSize(suidList) == 0) {
smaDebug("vgId:%d no need to restore rsma since empty stb id list", TD_VID(pVnode));
return TSDB_CODE_SUCCESS;
}

SMetaReader mr = {0};
metaReaderInit(&mr, SMA_META(pSma), 0);
for (int32_t i = 0; i < taosArrayGetSize(suidList); ++i) {
tb_uid_t suid = *(tb_uid_t *)taosArrayGet(suidList, i);
smaDebug("suid [%d] is %" PRIi64, i, suid);
if (metaGetTableEntryByUid(&mr, suid) < 0) {
smaError("vgId:%d failed to get table meta for %" PRIi64 " since %s", TD_VID(pVnode), suid, terrstr());
goto _err;
}
ASSERT(mr.me.type == TSDB_SUPER_TABLE);
ASSERT(mr.me.uid == suid);
if (TABLE_IS_ROLLUP(mr.me.flags)) {
SRSmaParam *param = &mr.me.stbEntry.rsmaParam;
for (int i = 0; i < 2; ++i) {
smaDebug("vgId: %d table:%" PRIi64 " maxdelay[%d]:%" PRIi64 " watermark[%d]:%" PRIi64, TD_VID(pSma->pVnode),
suid, i, param->maxdelay[i], i, param->watermark[i]);
}
if (tdProcessRSmaCreateImpl(pSma, &mr.me.stbEntry.rsmaParam, suid, mr.me.name) < 0) {
smaError("vgId:%d failed to restore rsma env for %" PRIi64 " since %s", TD_VID(pVnode), suid, terrstr());
goto _err;
}
}
}

// step 2: retrieve qtaskinfo object from the rsma/qtaskinfo file and restore
STFile tFile = {0};
char qTaskInfoFName[TSDB_FILENAME_LEN];

tdRSmaQTaskGetFName(TD_VID(pVnode), TD_QTASK_CUR_FILE, qTaskInfoFName);
if (tdInitTFile(&tFile, pVnode->pTfs, qTaskInfoFName) < 0) {
goto _err;
}
if (tdOpenTFile(&tFile, TD_FILE_READ) < 0) {
goto _err;
}
SRSmaQTaskFIter fIter = {0};
if (tdRSmaQTaskInfoIterInit(&fIter, &tFile) < 0) {
goto _err;
}
SRSmaQTaskInfoItem infoItem = {0};
bool isEnd = false;
int32_t code = 0;
while ((code = tdRSmaQTaskInfoIterNext(&fIter, &infoItem, &isEnd)) == 0) {
if (isEnd) {
break;
}
if ((code = tdRSmaQTaskInfoItemRestore(pSma, &infoItem)) < 0) {
break;
}
}
tdRSmaQTaskInfoIterDestroy(&fIter);

if (code < 0) {
goto _err;
}

metaReaderClear(&mr);
taosArrayDestroy(suidList);
return TSDB_CODE_SUCCESS;
_err:
ASSERT(0);
metaReaderClear(&mr);
taosArrayDestroy(suidList);
smaError("failed to restore rsma info since %s", terrstr());
return TSDB_CODE_FAILED;
}

static int32_t tdRSmaQTaskInfoItemRestore(SSma *pSma, const SRSmaQTaskInfoItem *infoItem) {
SRSmaStat *pStat = (SRSmaStat *)SMA_ENV_STAT((SSmaEnv *)pSma->pRSmaEnv);
SRSmaInfo *pRSmaInfo = NULL;
void *qTaskInfo = NULL;

pRSmaInfo = taosHashGet(RSMA_INFO_HASH(pStat), &infoItem->suid, sizeof(infoItem->suid));

if (!pRSmaInfo || !(pRSmaInfo = *(SRSmaInfo **)pRSmaInfo)) {
smaDebug("vgId:%d, no restore as no rsma info for suid:%" PRIu64, SMA_VID(pSma), infoItem->suid);
return TSDB_CODE_SUCCESS;
}

if (infoItem->type == 1) {
qTaskInfo = pRSmaInfo->items[0].taskInfo;
} else if (infoItem->type == 2) {
qTaskInfo = pRSmaInfo->items[1].taskInfo;
} else {
ASSERT(0);
}

if (!qTaskInfo) {
smaDebug("vgId:%d, no restore as NULL rsma qTaskInfo for suid:%" PRIu64, SMA_VID(pSma), infoItem->suid);
return TSDB_CODE_SUCCESS;
}

if (qDeserializeTaskStatus(qTaskInfo, infoItem->qTaskInfo, infoItem->len) < 0) {
smaError("vgId:%d, restore rsma failed for suid:%" PRIi64 " level %d since %s", SMA_VID(pSma), infoItem->suid,
infoItem->type, terrstr(terrno));
return TSDB_CODE_FAILED;
}
smaDebug("vgId:%d, restore rsma success for suid:%" PRIi64 " level %d", SMA_VID(pSma), infoItem->suid,
infoItem->type);

return TSDB_CODE_SUCCESS;
}

static int32_t tdRSmaQTaskInfoIterInit(SRSmaQTaskFIter *pIter, STFile *pTFile) {
memset(pIter, 0, sizeof(*pIter));
pIter->pTFile = pTFile;
pIter->offset = TD_FILE_HEAD_SIZE;

if (tdGetTFileSize(pTFile, &pIter->fsize) < 0) {
return TSDB_CODE_FAILED;
}

if ((pIter->fsize - TD_FILE_HEAD_SIZE) < RSMA_QTASKINFO_BUFSIZE) {
pIter->nAlloc = pIter->fsize - TD_FILE_HEAD_SIZE;
} else {
pIter->nAlloc = RSMA_QTASKINFO_BUFSIZE;
}

if (pIter->nAlloc < TD_FILE_HEAD_SIZE) {
pIter->nAlloc = TD_FILE_HEAD_SIZE;
}

pIter->buf = taosMemoryMalloc(pIter->nAlloc);
if (!pIter->buf) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}

return TSDB_CODE_SUCCESS;
}

static int32_t tdRSmaQTaskInfoIterNextBlock(SRSmaQTaskFIter *pIter, bool *isFinish) {
STFile *pTFile = pIter->pTFile;
int64_t nBytes = RSMA_QTASKINFO_BUFSIZE;

if (pIter->offset >= pIter->fsize) {
*isFinish = true;
return TSDB_CODE_SUCCESS;
}

if ((pIter->fsize - pIter->offset) < RSMA_QTASKINFO_BUFSIZE) {
nBytes = pIter->fsize - pIter->offset;
}

if (tdSeekTFile(pTFile, pIter->offset, SEEK_SET) < 0) {
ASSERT(0);
return TSDB_CODE_FAILED;
}

if (tdReadTFile(pTFile, pIter->buf, nBytes) != nBytes) {
ASSERT(0);
return TSDB_CODE_FAILED;
}

int32_t infoLen = 0;
taosDecodeFixedI32(pIter->buf, &infoLen);
if (infoLen > nBytes) {
ASSERT(infoLen > RSMA_QTASKINFO_BUFSIZE);
pIter->nAlloc = infoLen;
void *pBuf = taosMemoryRealloc(pIter->buf, infoLen);
if (!pBuf) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
pIter->buf = pBuf;
nBytes = infoLen;

if (tdSeekTFile(pTFile, pIter->offset, SEEK_SET)) {
ASSERT(0);
return TSDB_CODE_FAILED;
}

if (tdReadTFile(pTFile, pIter->buf, nBytes) != nBytes) {
ASSERT(0);
return TSDB_CODE_FAILED;
}
}

pIter->offset += nBytes;
pIter->nBytes = nBytes;
pIter->nBufPos = 0;

return TSDB_CODE_SUCCESS;
}

static int32_t tdRSmaQTaskInfoIterNext(SRSmaQTaskFIter *pIter, SRSmaQTaskInfoItem *pItem, bool *isEnd) {
while (1) {
// block iter
bool isFinish = false;
if (tdRSmaQTaskInfoIterNextBlock(pIter, &isFinish) < 0) {
ASSERT(0);
return TSDB_CODE_FAILED;
}
if (isFinish) {
*isEnd = true;
return TSDB_CODE_SUCCESS;
}

// consume the block
int32_t qTaskInfoLenWithHead = 0;
pIter->buf = taosDecodeFixedI32(pIter->buf, &qTaskInfoLenWithHead);
if (qTaskInfoLenWithHead < 0) {
terrno = TSDB_CODE_TDB_FILE_CORRUPTED;
return TSDB_CODE_FAILED;
}
while (1) {
if ((pIter->nBufPos + qTaskInfoLenWithHead) <= pIter->nBytes) {
pIter->buf = taosDecodeFixedI8(pIter->buf, &pItem->type);
pIter->buf = taosDecodeFixedI64(pIter->buf, &pItem->suid);
pItem->qTaskInfo = pIter->buf;
pItem->len = tdRSmaQTaskInfoContLen(qTaskInfoLenWithHead);
// do the restore job
printf("%s:%d ###### restore the qtask info offset:%" PRIi64 "\n", __func__, __LINE__, pIter->offset);

pIter->buf = POINTER_SHIFT(pIter->buf, pItem->len);
pIter->nBufPos += qTaskInfoLenWithHead;

pIter->buf = taosDecodeFixedI32(pIter->buf, &qTaskInfoLenWithHead);
continue;
}
// prepare and load next block in the file
pIter->offset -= (pIter->nBytes - pIter->nBufPos);
break;
}
}

return TSDB_CODE_SUCCESS;
}
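/* Editor's note (illustrative, not part of this patch): the iterator above assumes the qtaskinfo
 * file layout produced by the persist task, roughly:
 *
 *   [ file head, TD_FILE_HEAD_SIZE = 512 bytes ][ record 0 ][ record 1 ] ... [ record N ]
 *   record := int32 lenWithHead | int8 type (rsma level) | int64 suid | serialized task status
 *
 * lenWithHead covers the whole record, so the payload length recovered by
 * tdRSmaQTaskInfoContLen() is lenWithHead minus the int32/int8/int64 header fields. When a
 * single record exceeds RSMA_QTASKINFO_BUFSIZE, the block loader reallocates the buffer to
 * lenWithHead and re-reads from the same offset, so a record never straddles two read buffers.
 */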

@ -15,7 +15,7 @@

#include "sma.h"

#define TD_FILE_HEAD_SIZE 512
// smaFileUtil ================

#define TD_FILE_STATE_OK 0
#define TD_FILE_STATE_BAD 1

@ -32,7 +32,7 @@ static int32_t tdEncodeTFInfo(void **buf, STFInfo *pInfo) {
tlen += taosEncodeFixedU32(buf, pInfo->magic);
tlen += taosEncodeFixedU32(buf, pInfo->ftype);
tlen += taosEncodeFixedU32(buf, pInfo->fver);
tlen += taosEncodeFixedU64(buf, pInfo->fsize);
tlen += taosEncodeFixedI64(buf, pInfo->fsize);

return tlen;
}

@ -41,7 +41,7 @@ static void *tdDecodeTFInfo(void *buf, STFInfo *pInfo) {
buf = taosDecodeFixedU32(buf, &(pInfo->magic));
buf = taosDecodeFixedU32(buf, &(pInfo->ftype));
buf = taosDecodeFixedU32(buf, &(pInfo->fver));
buf = taosDecodeFixedU64(buf, &(pInfo->fsize));
buf = taosDecodeFixedI64(buf, &(pInfo->fsize));
return buf;
}

@ -69,6 +69,11 @@ int64_t tdSeekTFile(STFile *pTFile, int64_t offset, int whence) {
return loffset;
}

int64_t tdGetTFileSize(STFile *pTFile, int64_t *size) {
ASSERT(TD_FILE_OPENED(pTFile));
return taosFStatFile(pTFile->pFile, size, NULL);
}

int64_t tdReadTFile(STFile *pTFile, void *buf, int64_t nbyte) {
ASSERT(TD_FILE_OPENED(pTFile));

@ -236,3 +241,6 @@ int32_t tdCreateTFile(STFile *pTFile, STfs *pTfs, bool updateHeader, int8_t fTyp
}

int32_t tdRemoveTFile(STFile *pTFile) { return tfsRemoveFile(TD_FILE_F(pTFile)); }

// smaXXXUtil ================
// ...

@ -2876,6 +2876,33 @@ int32_t tsdbGetCtbIdList(SMeta* pMeta, int64_t suid, SArray* list) {
return TSDB_CODE_SUCCESS;
}

/**
 * @brief Get all suids since suid
 *
 * @param pMeta
 * @param suid return all suids in one vnode if suid is 0
 * @param list
 * @return int32_t
 */
int32_t tsdbGetStbIdList(SMeta* pMeta, int64_t suid, SArray* list) {
SMStbCursor* pCur = metaOpenStbCursor(pMeta, suid);
if (!pCur) {
return TSDB_CODE_FAILED;
}

while (1) {
tb_uid_t id = metaStbCursorNext(pCur);
if (id == 0) {
break;
}

taosArrayPush(list, &id);
}

metaCloseStbCursor(pCur);
return TSDB_CODE_SUCCESS;
}
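/* Editor's note (illustrative usage sketch, not part of this patch): this is how the rsma
 * restore path earlier in this commit consumes tsdbGetStbIdList -- pass suid 0 to collect
 * every super-table id of the vnode, then walk the array:
 *
 *   SArray *suidList = taosArrayInit(1, sizeof(tb_uid_t));
 *   if (tsdbGetStbIdList(pMeta, 0, suidList) < 0) {
 *     taosArrayDestroy(suidList);
 *     return TSDB_CODE_FAILED;
 *   }
 *   for (int32_t i = 0; i < taosArrayGetSize(suidList); ++i) {
 *     tb_uid_t suid = *(tb_uid_t *)taosArrayGet(suidList, i);
 *     // look up the table entry for suid and rebuild its rsma env
 *   }
 *   taosArrayDestroy(suidList);
 */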

static void destroyHelper(void* param) {
if (param == NULL) {
return;
|
|
|
@ -72,7 +72,12 @@ int vnodeBegin(SVnode *pVnode) {
|
|||
return 0;
|
||||
}
|
||||
|
||||
int vnodeShouldCommit(SVnode *pVnode) { return pVnode->inUse->size > pVnode->config.szBuf / 3; }
|
||||
int vnodeShouldCommit(SVnode *pVnode) {
|
||||
if (pVnode->inUse) {
|
||||
return pVnode->inUse->size > pVnode->config.szBuf / 3;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
int vnodeSaveInfo(const char *dir, const SVnodeInfo *pInfo) {
|
||||
char fname[TSDB_FILENAME_LEN];
|
||||
|
|
|
@ -152,12 +152,13 @@ SVnode *vnodeOpen(const char *path, STfs *pTfs, SMsgCb msgCb) {
|
|||
return pVnode;
|
||||
|
||||
_err:
|
||||
if (pVnode->pSma) smaClose(pVnode->pSma);
|
||||
if (pVnode->pQuery) vnodeQueryClose(pVnode);
|
||||
if (pVnode->pTq) tqClose(pVnode->pTq);
|
||||
if (pVnode->pWal) walClose(pVnode->pWal);
|
||||
if (pVnode->pTsdb) tsdbClose(&pVnode->pTsdb);
|
||||
if (pVnode->pMeta) metaClose(pVnode->pMeta);
|
||||
if (pVnode->pSma) smaClose(pVnode->pSma);
|
||||
|
||||
|
||||
tsem_destroy(&(pVnode->canCommit));
|
||||
taosMemoryFree(pVnode);
|
||||
|
@ -166,13 +167,14 @@ _err:
|
|||
|
||||
void vnodeClose(SVnode *pVnode) {
|
||||
if (pVnode) {
|
||||
smaCloseEnv(pVnode->pSma);
|
||||
vnodeCommit(pVnode);
|
||||
vnodeSyncClose(pVnode);
|
||||
vnodeQueryClose(pVnode);
|
||||
walClose(pVnode->pWal);
|
||||
tqClose(pVnode->pTq);
|
||||
if (pVnode->pTsdb) tsdbClose(&pVnode->pTsdb);
|
||||
smaClose(pVnode->pSma);
|
||||
smaCloseEx(pVnode->pSma);
|
||||
metaClose(pVnode->pMeta);
|
||||
vnodeCloseBufPool(pVnode);
|
||||
// destroy handle
|
||||
|
|
|
@ -137,6 +137,25 @@ void vnodeProposeMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
|
|||
vError("vgId:%d, failed to pre-process msg:%p since %s", vgId, pMsg, terrstr());
|
||||
} else {
|
||||
code = syncPropose(pVnode->sync, pMsg, vnodeIsMsgWeak(pMsg->msgType));
|
||||
if (code == 1) {
|
||||
do {
|
||||
static int32_t cnt = 0;
|
||||
if (cnt++ % 1000 == 1) {
|
||||
vInfo("vgId:%d, msg:%p apply right now, apply index:%ld, msgtype:%s,%d", vgId, pMsg,
|
||||
pMsg->info.conn.applyIndex, TMSG_INFO(pMsg->msgType), pMsg->msgType);
|
||||
}
|
||||
} while (0);
|
||||
|
||||
SRpcMsg rsp = {.code = pMsg->code, .info = pMsg->info};
|
||||
if (vnodeProcessWriteReq(pVnode, pMsg, pMsg->info.conn.applyIndex, &rsp) < 0) {
|
||||
rsp.code = terrno;
|
||||
vInfo("vgId:%d, msg:%p failed to apply right now since %s", vgId, pMsg, terrstr());
|
||||
}
|
||||
|
||||
if (rsp.info.handle != NULL) {
|
||||
tmsgSendRsp(&rsp);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -163,11 +182,13 @@ void vnodeProposeMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
|
|||
SRpcMsg rsp = {.code = TSDB_CODE_RPC_REDIRECT, .info = pMsg->info};
|
||||
tmsgSendRedirectRsp(&rsp, &newEpSet);
|
||||
} else {
|
||||
if (code != 1) {
|
||||
if (terrno != 0) code = terrno;
|
||||
vError("vgId:%d, msg:%p failed to propose since %s, code:0x%x", vgId, pMsg, tstrerror(code), code);
|
||||
SRpcMsg rsp = {.code = code, .info = pMsg->info};
|
||||
tmsgSendRsp(&rsp);
|
||||
}
|
||||
}
|
||||
|
||||
vGTrace("vgId:%d, msg:%p is freed, code:0x%x", vgId, pMsg, code);
|
||||
rpcFreeCont(pMsg->pCont);
|
||||
|
@ -260,7 +281,7 @@ int32_t vnodeProcessSyncReq(SVnode *pVnode, SRpcMsg *pMsg, SRpcMsg **pRsp) {
|
|||
SyncClientRequest *pSyncMsg = syncClientRequestFromRpcMsg2(pRpcMsg);
|
||||
assert(pSyncMsg != NULL);
|
||||
|
||||
ret = syncNodeOnClientRequestCb(pSyncNode, pSyncMsg);
|
||||
ret = syncNodeOnClientRequestCb(pSyncNode, pSyncMsg, NULL);
|
||||
syncClientRequestDestroy(pSyncMsg);
|
||||
|
||||
} else if (pRpcMsg->msgType == TDMT_SYNC_REQUEST_VOTE) {
|
||||
|
@ -359,16 +380,10 @@ static void vnodeSyncCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta c
|
|||
SyncIndex beginIndex = SYNC_INDEX_INVALID;
|
||||
char logBuf[256] = {0};
|
||||
|
||||
if (pFsm->FpGetSnapshotInfo != NULL) {
|
||||
(*pFsm->FpGetSnapshotInfo)(pFsm, &snapshot);
|
||||
beginIndex = snapshot.lastApplyIndex;
|
||||
}
|
||||
|
||||
if (cbMeta.index > beginIndex) {
|
||||
snprintf(
|
||||
logBuf, sizeof(logBuf),
|
||||
snprintf(logBuf, sizeof(logBuf),
|
||||
"==callback== ==CommitCb== execute, pFsm:%p, index:%ld, isWeak:%d, code:%d, state:%d %s, beginIndex :%ld\n",
|
||||
pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state, syncUtilState2String(cbMeta.state), beginIndex);
|
||||
pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state, syncUtilState2String(cbMeta.state),
|
||||
beginIndex);
|
||||
syncRpcMsgLog2(logBuf, (SRpcMsg *)pMsg);
|
||||
|
||||
SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
|
||||
|
@ -377,16 +392,6 @@ static void vnodeSyncCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta c
|
|||
syncGetAndDelRespRpc(pVnode->sync, cbMeta.seqNum, &rpcMsg.info);
|
||||
rpcMsg.info.conn.applyIndex = cbMeta.index;
|
||||
tmsgPutToQueue(&pVnode->msgCb, APPLY_QUEUE, &rpcMsg);
|
||||
|
||||
} else {
|
||||
char logBuf[256] = {0};
|
||||
snprintf(logBuf, sizeof(logBuf),
|
||||
"==callback== ==CommitCb== do not execute, pFsm:%p, index:%ld, isWeak:%d, code:%d, state:%d %s, "
|
||||
"beginIndex :%ld\n",
|
||||
pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state, syncUtilState2String(cbMeta.state),
|
||||
beginIndex);
|
||||
syncRpcMsgLog2(logBuf, (SRpcMsg *)pMsg);
|
||||
}
|
||||
}
|
||||
|
||||
static void vnodeSyncPreCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
|
||||
|
|
|
@ -403,7 +403,7 @@ typedef struct SIntervalAggOperatorInfo {
|
|||
// SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
|
||||
SOptrBasicInfo binfo; // basic info
|
||||
SAggSupporter aggSup; // aggregate supporter
|
||||
|
||||
SExprSupp scalarSupp; // supporter for perform scalar function
|
||||
SGroupResInfo groupResInfo; // multiple results build supporter
|
||||
SInterval interval; // interval info
|
||||
int32_t primaryTsIndex; // primary time stamp slot id from result of downstream operator.
|
||||
|
@ -730,15 +730,12 @@ SOperatorInfo* createAggregateOperatorInfo(SOperatorInfo* downstream, SExprInfo*
|
|||
SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhysiNode *pNode, SExecTaskInfo* pTaskInfo);
|
||||
SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhysiNode* pProjPhyNode, SExecTaskInfo* pTaskInfo);
|
||||
SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode* pSortPhyNode, SExecTaskInfo* pTaskInfo);
|
||||
|
||||
SOperatorInfo* createMultiwaySortMergeOperatorInfo(SOperatorInfo** downStreams, int32_t numStreams, SSDataBlock* pInputBlock,
|
||||
SSDataBlock* pResBlock, SArray* pSortInfo, SArray* pColMatchColInfo,
|
||||
SExecTaskInfo* pTaskInfo);
|
||||
SOperatorInfo* createMultiwayMergeOperatorInfo(SOperatorInfo** dowStreams, size_t numStreams, SMergePhysiNode* pMergePhysiNode, SExecTaskInfo* pTaskInfo);
|
||||
SOperatorInfo* createSortedMergeOperatorInfo(SOperatorInfo** downstream, int32_t numOfDownstream, SExprInfo* pExprInfo, int32_t num, SArray* pSortInfo, SArray* pGroupInfo, SExecTaskInfo* pTaskInfo);
|
||||
|
||||
SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
|
||||
SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
|
||||
STimeWindowAggSupp *pTwAggSupp, SExecTaskInfo* pTaskInfo, bool isStream);
|
||||
STimeWindowAggSupp* pTwAggSupp, SIntervalPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo, bool isStream);
|
||||
|
||||
SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
|
||||
SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
|
||||
|
@ -808,9 +805,10 @@ int32_t getMaximumIdleDurationSec();
|
|||
* ops: root operator
|
||||
* data: *data save the result of encode, need to be freed by caller
|
||||
* length: *length save the length of *data
|
||||
* nOptrWithVal: *nOptrWithVal save the number of optr with value
|
||||
* return: result code, 0 means success
|
||||
*/
|
||||
int32_t encodeOperator(SOperatorInfo* ops, char** data, int32_t *length);
|
||||
int32_t encodeOperator(SOperatorInfo* ops, char** data, int32_t *length, int32_t *nOptrWithVal);
|
||||
|
||||
/*
|
||||
* ops: root operator, created by caller
|
||||
|
|
|
@ -159,7 +159,12 @@ int32_t getNumOfTotalRes(SGroupResInfo* pGroupResInfo) {
|
|||
}
|
||||
|
||||
SArray* createSortInfo(SNodeList* pNodeList) {
|
||||
size_t numOfCols = LIST_LENGTH(pNodeList);
|
||||
size_t numOfCols = 0;
|
||||
if (pNodeList != NULL) {
|
||||
numOfCols = LIST_LENGTH(pNodeList);
|
||||
} else {
|
||||
numOfCols = 0;
|
||||
}
|
||||
SArray* pList = taosArrayInit(numOfCols, sizeof(SBlockOrderInfo));
|
||||
if (pList == NULL) {
|
||||
terrno = TSDB_CODE_OUT_OF_MEMORY;
|
||||
|
|
|
@ -222,7 +222,13 @@ int32_t qSerializeTaskStatus(qTaskInfo_t tinfo, char** pOutput, int32_t* len) {
return TSDB_CODE_INVALID_PARA;
}

return encodeOperator(pTaskInfo->pRoot, pOutput, len);
int32_t nOptrWithVal = 0;
int32_t code = encodeOperator(pTaskInfo->pRoot, pOutput, len, &nOptrWithVal);
if ((code == TSDB_CODE_SUCCESS) && (nOptrWithVal == 0)) {
taosMemoryFreeClear(*pOutput);
*len = 0;
}
return code;
}
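/* Editor's note (illustrative, not part of this patch): qSerializeTaskStatus and
 * qDeserializeTaskStatus form the round trip used by the rsma qtaskinfo persistence:
 *
 *   char   *buf = NULL;
 *   int32_t len = 0;
 *   if (qSerializeTaskStatus(tinfo, &buf, &len) == TSDB_CODE_SUCCESS && len > 0) {
 *     // write buf/len to the qtaskinfo file ...
 *     // ... later, after reading the record back:
 *     qDeserializeTaskStatus(tinfo, buf, len);
 *   }
 *   taosMemoryFreeClear(buf);
 *
 * With the nOptrWithVal guard above, a task whose operators produced no encoded state
 * comes back with buf == NULL and len == 0, so callers can skip writing an empty record.
 */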
|
||||
|
||||
int32_t qDeserializeTaskStatus(qTaskInfo_t tinfo, const char* pInput, int32_t len) {
|
||||
|
|
|
@ -273,8 +273,7 @@ SResultRow* doSetResultOutBufByKey(SDiskbasedBuf* pResultBuf, SResultRowInfo* pR
|
|||
}
|
||||
|
||||
// 1. close current opened time window
|
||||
if (pResultRowInfo->cur.pageId != -1 && ((pResult == NULL) || (pResult->pageId != pResultRowInfo->cur.pageId &&
|
||||
pResult->offset != pResultRowInfo->cur.offset))) {
|
||||
if (pResultRowInfo->cur.pageId != -1 && ((pResult == NULL) || (pResult->pageId != pResultRowInfo->cur.pageId))) {
|
||||
SResultRowPosition pos = pResultRowInfo->cur;
|
||||
SFilePage* pPage = getBufPage(pResultBuf, pos.pageId);
|
||||
releaseBufPage(pResultBuf, pPage);
|
||||
|
@ -2929,6 +2928,13 @@ int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* len
|
|||
int32_t totalSize =
|
||||
sizeof(int32_t) + sizeof(int32_t) + size * (sizeof(int32_t) + keyLen + sizeof(int32_t) + pSup->resultRowSize);
|
||||
|
||||
// no result
|
||||
if (getTotalBufSize(pSup->pResultBuf) == 0) {
|
||||
*result = NULL;
|
||||
*length = 0;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
*result = (char*)taosMemoryCalloc(1, totalSize);
|
||||
if (*result == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
|
@ -4217,8 +4223,8 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
|
|||
|
||||
int32_t tsSlotId = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->slotId;
|
||||
bool isStream = (QUERY_NODE_PHYSICAL_PLAN_STREAM_INTERVAL == type);
|
||||
pOptr =
|
||||
createIntervalOperatorInfo(ops[0], pExprInfo, num, pResBlock, &interval, tsSlotId, &as, pTaskInfo, isStream);
|
||||
pOptr = createIntervalOperatorInfo(ops[0], pExprInfo, num, pResBlock, &interval, tsSlotId, &as, pIntervalPhyNode,
|
||||
pTaskInfo, isStream);
|
||||
|
||||
} else if (QUERY_NODE_PHYSICAL_PLAN_MERGE_ALIGNED_INTERVAL == type) {
|
||||
SMergeAlignedIntervalPhysiNode* pIntervalPhyNode = (SMergeAlignedIntervalPhysiNode*)pPhyNode;
|
||||
|
@ -4262,17 +4268,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
|
|||
pOptr = createGroupSortOperatorInfo(ops[0], (SGroupSortPhysiNode*)pPhyNode, pTaskInfo);
|
||||
} else if (QUERY_NODE_PHYSICAL_PLAN_MERGE == type) {
|
||||
SMergePhysiNode* pMergePhyNode = (SMergePhysiNode*)pPhyNode;
|
||||
|
||||
SDataBlockDescNode* pDescNode = pPhyNode->pOutputDataBlockDesc;
|
||||
SSDataBlock* pResBlock = createResDataBlock(pDescNode);
|
||||
|
||||
SArray* sortInfo = createSortInfo(pMergePhyNode->pMergeKeys);
|
||||
int32_t numOfOutputCols = 0;
|
||||
SArray* pColList =
|
||||
extractColMatchInfo(pMergePhyNode->pTargets, pDescNode, &numOfOutputCols, COL_MATCH_FROM_SLOT_ID);
|
||||
SPhysiNode* pChildNode = (SPhysiNode*)nodesListGetNode(pPhyNode->pChildren, 0);
|
||||
SSDataBlock* pInputDataBlock = createResDataBlock(pChildNode->pOutputDataBlockDesc);
|
||||
pOptr = createMultiwaySortMergeOperatorInfo(ops, size, pInputDataBlock, pResBlock, sortInfo, pColList, pTaskInfo);
|
||||
pOptr = createMultiwayMergeOperatorInfo(ops, size, pMergePhyNode, pTaskInfo);
|
||||
} else if (QUERY_NODE_PHYSICAL_PLAN_MERGE_SESSION == type) {
|
||||
SSessionWinodwPhysiNode* pSessionNode = (SSessionWinodwPhysiNode*)pPhyNode;
|
||||
|
||||
|
@ -4465,12 +4461,12 @@ int32_t rebuildReader(SOperatorInfo* pOperator, SSubplan* plan, SReadHandle* pHa
|
|||
return 0;
|
||||
}
|
||||
|
||||
int32_t encodeOperator(SOperatorInfo* ops, char** result, int32_t* length) {
|
||||
int32_t encodeOperator(SOperatorInfo* ops, char** result, int32_t* length, int32_t* nOptrWithVal) {
|
||||
int32_t code = TDB_CODE_SUCCESS;
|
||||
char* pCurrent = NULL;
|
||||
int32_t currLength = 0;
|
||||
if (ops->fpSet.encodeResultRow) {
|
||||
if (result == NULL || length == NULL) {
|
||||
if (result == NULL || length == NULL || nOptrWithVal == NULL) {
|
||||
return TSDB_CODE_TSC_INVALID_INPUT;
|
||||
}
|
||||
code = ops->fpSet.encodeResultRow(ops, &pCurrent, &currLength);
|
||||
|
@ -4481,8 +4477,15 @@ int32_t encodeOperator(SOperatorInfo* ops, char** result, int32_t* length) {
|
|||
*result = NULL;
|
||||
}
|
||||
return code;
|
||||
} else if (currLength == 0) {
|
||||
ASSERT(!pCurrent);
|
||||
goto _downstream;
|
||||
}
|
||||
|
||||
++(*nOptrWithVal);
|
||||
|
||||
ASSERT(currLength >= 0);
|
||||
|
||||
if (*result == NULL) {
|
||||
*result = (char*)taosMemoryCalloc(1, currLength + sizeof(int32_t));
|
||||
if (*result == NULL) {
|
||||
|
@ -4508,8 +4511,9 @@ int32_t encodeOperator(SOperatorInfo* ops, char** result, int32_t* length) {
|
|||
*length = *(int32_t*)(*result);
|
||||
}
|
||||
|
||||
_downstream:
|
||||
for (int32_t i = 0; i < ops->numOfDownstream; ++i) {
|
||||
code = encodeOperator(ops->pDownstream[i], result, length);
|
||||
code = encodeOperator(ops->pDownstream[i], result, length, nOptrWithVal);
|
||||
if (code != TDB_CODE_SUCCESS) {
|
||||
return code;
|
||||
}
|
||||
|
|
|
@ -448,7 +448,7 @@ static SSDataBlock* doTableScanGroup(SOperatorInfo* pOperator) {
|
|||
qDebug("%s start to repeat ascending order scan data blocks due to query func required", GET_TASKID(pTaskInfo));
|
||||
for (int32_t i = 0; i < pTableScanInfo->cond.numOfTWindows; ++i) {
|
||||
STimeWindow* pWin = &pTableScanInfo->cond.twindows[i];
|
||||
qDebug("%s\t qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
|
||||
qDebug("%s qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
|
||||
}
|
||||
// do prepare for the next round table scan operation
|
||||
tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond, 0);
|
||||
|
@ -467,7 +467,7 @@ static SSDataBlock* doTableScanGroup(SOperatorInfo* pOperator) {
|
|||
qDebug("%s start to descending order scan data blocks due to query func required", GET_TASKID(pTaskInfo));
|
||||
for (int32_t i = 0; i < pTableScanInfo->cond.numOfTWindows; ++i) {
|
||||
STimeWindow* pWin = &pTableScanInfo->cond.twindows[i];
|
||||
qDebug("%s\t qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
|
||||
qDebug("%s qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
|
||||
}
|
||||
|
||||
while (pTableScanInfo->scanTimes < total) {
|
||||
|
@ -492,7 +492,7 @@ static SSDataBlock* doTableScanGroup(SOperatorInfo* pOperator) {
|
|||
GET_TASKID(pTaskInfo));
|
||||
for (int32_t i = 0; i < pTableScanInfo->cond.numOfTWindows; ++i) {
|
||||
STimeWindow* pWin = &pTableScanInfo->cond.twindows[i];
|
||||
qDebug("%s\t qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
|
||||
qDebug("%s qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
|
||||
}
|
||||
tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond, 0);
|
||||
pTableScanInfo->curTWinIdx = 0;
|
||||
|
|
|
@ -142,7 +142,7 @@ SSDataBlock* getSortedBlockData(SSortHandle* pHandle, SSDataBlock* pDataBlock, i
|
|||
|
||||
SSDataBlock* loadNextDataBlock(void* param) {
|
||||
SOperatorInfo* pOperator = (SOperatorInfo*)param;
|
||||
SSDataBlock *pBlock = pOperator->fpSet.getNextFn(pOperator);
|
||||
SSDataBlock* pBlock = pOperator->fpSet.getNextFn(pOperator);
|
||||
return pBlock;
|
||||
}
|
||||
|
||||
|
@ -475,8 +475,8 @@ SOperatorInfo* createGroupSortOperatorInfo(SOperatorInfo* downstream, SGroupSort
|
|||
pOperator->exprSupp.numOfExprs = numOfCols;
|
||||
pOperator->pTaskInfo = pTaskInfo;
|
||||
|
||||
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doGroupSort, NULL, NULL, destroyGroupSortOperatorInfo, NULL,
|
||||
NULL, getGroupSortExplainExecInfo);
|
||||
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doGroupSort, NULL, NULL, destroyGroupSortOperatorInfo,
|
||||
NULL, NULL, getGroupSortExplainExecInfo);
|
||||
|
||||
int32_t code = appendDownstream(pOperator, &downstream, 1);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
|
@ -494,7 +494,7 @@ _error:
|
|||
|
||||
//=====================================================================================
|
||||
// Multiway Sort Merge operator
|
||||
typedef struct SMultiwaySortMergeOperatorInfo {
|
||||
typedef struct SMultiwayMergeOperatorInfo {
|
||||
SOptrBasicInfo binfo;
|
||||
|
||||
int32_t bufPageSize;
|
||||
|
@ -506,13 +506,14 @@ typedef struct SMultiwaySortMergeOperatorInfo {
|
|||
|
||||
SSDataBlock* pInputBlock;
|
||||
int64_t startTs; // sort start time
|
||||
bool groupSort;
|
||||
bool hasGroupId;
|
||||
uint64_t groupId;
|
||||
STupleHandle *prefetchedTuple;
|
||||
} SMultiwaySortMergeOperatorInfo;
|
||||
STupleHandle* prefetchedTuple;
|
||||
} SMultiwayMergeOperatorInfo;
|
||||
|
||||
int32_t doOpenMultiwaySortMergeOperator(SOperatorInfo* pOperator) {
|
||||
SMultiwaySortMergeOperatorInfo* pInfo = pOperator->info;
|
||||
int32_t doOpenMultiwayMergeOperator(SOperatorInfo* pOperator) {
|
||||
SMultiwayMergeOperatorInfo* pInfo = pOperator->info;
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
|
||||
if (OPTR_IS_OPENED(pOperator)) {
|
||||
|
@ -527,7 +528,8 @@ int32_t doOpenMultiwaySortMergeOperator(SOperatorInfo* pOperator) {
|
|||
pInfo->pInputBlock, pTaskInfo->id.str);
|
||||
|
||||
tsortSetFetchRawDataFp(pInfo->pSortHandle, loadNextDataBlock, NULL, NULL);
|
||||
tsortSetCompareGroupId(pInfo->pSortHandle, true);
|
||||
|
||||
tsortSetCompareGroupId(pInfo->pSortHandle, pInfo->groupSort);
|
||||
|
||||
for (int32_t i = 0; i < pOperator->numOfDownstream; ++i) {
|
||||
SSortSource* ps = taosMemoryCalloc(1, sizeof(SSortSource));
|
||||
|
@ -550,7 +552,7 @@ int32_t doOpenMultiwaySortMergeOperator(SOperatorInfo* pOperator) {
|
|||
|
||||
SSDataBlock* getMultiwaySortedBlockData(SSortHandle* pHandle, SSDataBlock* pDataBlock, int32_t capacity,
|
||||
SArray* pColMatchInfo, SOperatorInfo* pOperator) {
|
||||
SMultiwaySortMergeOperatorInfo* pInfo = pOperator->info;
|
||||
SMultiwayMergeOperatorInfo* pInfo = pOperator->info;
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
|
||||
blockDataCleanup(pDataBlock);
|
||||
|
@ -564,6 +566,7 @@ SSDataBlock* getMultiwaySortedBlockData(SSortHandle* pHandle, SSDataBlock* pData
|
|||
|
||||
while (1) {
|
||||
STupleHandle* pTupleHandle = NULL;
|
||||
if (pInfo->groupSort) {
|
||||
if (pInfo->prefetchedTuple == NULL) {
|
||||
pTupleHandle = tsortNextTuple(pHandle);
|
||||
} else {
|
||||
|
@ -571,11 +574,18 @@ SSDataBlock* getMultiwaySortedBlockData(SSortHandle* pHandle, SSDataBlock* pData
|
|||
pInfo->groupId = tsortGetGroupId(pTupleHandle);
|
||||
pInfo->prefetchedTuple = NULL;
|
||||
}
|
||||
}
|
||||
else {
|
||||
pTupleHandle = tsortNextTuple(pHandle);
|
||||
pInfo->groupId = 0;
|
||||
}
|
||||
|
||||
if (pTupleHandle == NULL) {
|
||||
break;
|
||||
}
|
||||
|
||||
if (pInfo->groupSort)
|
||||
{
|
||||
uint64_t tupleGroupId = tsortGetGroupId(pTupleHandle);
|
||||
if (!pInfo->hasGroupId) {
|
||||
pInfo->groupId = tupleGroupId;
|
||||
|
@ -587,12 +597,15 @@ SSDataBlock* getMultiwaySortedBlockData(SSortHandle* pHandle, SSDataBlock* pData
|
|||
pInfo->prefetchedTuple = pTupleHandle;
|
||||
break;
|
||||
}
|
||||
} else {
|
||||
appendOneRowToDataBlock(p, pTupleHandle);
|
||||
}
|
||||
if (p->info.rows >= capacity) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (p->info.rows > 0) {// todo extract method
|
||||
if (p->info.rows > 0) { // todo extract method
|
||||
blockDataEnsureCapacity(pDataBlock, p->info.rows);
|
||||
int32_t numOfCols = taosArrayGetSize(pColMatchInfo);
|
||||
for (int32_t i = 0; i < numOfCols; ++i) {
|
||||
|
@ -614,13 +627,13 @@ SSDataBlock* getMultiwaySortedBlockData(SSortHandle* pHandle, SSDataBlock* pData
|
|||
return (pDataBlock->info.rows > 0) ? pDataBlock : NULL;
|
||||
}
|
||||
|
||||
SSDataBlock* doMultiwaySortMerge(SOperatorInfo* pOperator) {
|
||||
SSDataBlock* doMultiwayMerge(SOperatorInfo* pOperator) {
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
SMultiwaySortMergeOperatorInfo* pInfo = pOperator->info;
|
||||
SMultiwayMergeOperatorInfo* pInfo = pOperator->info;
|
||||
|
||||
int32_t code = pOperator->fpSet._openFn(pOperator);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
|
@ -637,8 +650,8 @@ SSDataBlock* doMultiwaySortMerge(SOperatorInfo* pOperator) {
|
|||
return pBlock;
|
||||
}
|
||||
|
||||
void destroyMultiwaySortMergeOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SMultiwaySortMergeOperatorInfo* pInfo = (SMultiwaySortMergeOperatorInfo*)param;
|
||||
void destroyMultiwayMergeOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SMultiwayMergeOperatorInfo* pInfo = (SMultiwayMergeOperatorInfo*)param;
|
||||
pInfo->binfo.pRes = blockDataDestroy(pInfo->binfo.pRes);
|
||||
pInfo->pInputBlock = blockDataDestroy(pInfo->pInputBlock);
|
||||
|
||||
|
@ -646,11 +659,11 @@ void destroyMultiwaySortMergeOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
taosArrayDestroy(pInfo->pColMatchInfo);
|
||||
}
|
||||
|
||||
int32_t getMultiwaySortMergeExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len) {
|
||||
int32_t getMultiwayMergeExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len) {
|
||||
ASSERT(pOptr != NULL);
|
||||
SSortExecInfo* pInfo = taosMemoryCalloc(1, sizeof(SSortExecInfo));
|
||||
|
||||
SMultiwaySortMergeOperatorInfo* pOperatorInfo = (SMultiwaySortMergeOperatorInfo*)pOptr->info;
|
||||
SMultiwayMergeOperatorInfo* pOperatorInfo = (SMultiwayMergeOperatorInfo*)pOptr->info;
|
||||
|
||||
*pInfo = tsortGetSortExecInfo(pOperatorInfo->pSortHandle);
|
||||
*pOptrExplain = pInfo;
|
||||
|
@ -658,24 +671,35 @@ int32_t getMultiwaySortMergeExplainExecInfo(SOperatorInfo* pOptr, void** pOptrEx
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
SOperatorInfo* createMultiwaySortMergeOperatorInfo(SOperatorInfo** downStreams, int32_t numStreams,
|
||||
SSDataBlock* pInputBlock, SSDataBlock* pResBlock, SArray* pSortInfo,
|
||||
SArray* pColMatchColInfo, SExecTaskInfo* pTaskInfo) {
|
||||
SMultiwaySortMergeOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SMultiwaySortMergeOperatorInfo));
|
||||
SOperatorInfo* createMultiwayMergeOperatorInfo(SOperatorInfo** downStreams, size_t numStreams,
|
||||
SMergePhysiNode* pMergePhyNode, SExecTaskInfo* pTaskInfo) {
|
||||
SPhysiNode* pPhyNode = (SPhysiNode*)pMergePhyNode;
|
||||
|
||||
SMultiwayMergeOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SMultiwayMergeOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
SDataBlockDescNode* pDescNode = pPhyNode->pOutputDataBlockDesc;
|
||||
SSDataBlock* pResBlock = createResDataBlock(pDescNode);
|
||||
|
||||
int32_t rowSize = pResBlock->info.rowSize;
|
||||
|
||||
if (pInfo == NULL || pOperator == NULL || rowSize > 100 * 1024 * 1024) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
SArray* pSortInfo = createSortInfo(pMergePhyNode->pMergeKeys);
|
||||
int32_t numOfOutputCols = 0;
|
||||
SArray* pColMatchColInfo =
|
||||
extractColMatchInfo(pMergePhyNode->pTargets, pDescNode, &numOfOutputCols, COL_MATCH_FROM_SLOT_ID);
|
||||
SPhysiNode* pChildNode = (SPhysiNode*)nodesListGetNode(pPhyNode->pChildren, 0);
|
||||
SSDataBlock* pInputBlock = createResDataBlock(pChildNode->pOutputDataBlockDesc);
|
||||
initResultSizeInfo(pOperator, 1024);
|
||||
|
||||
pInfo->groupSort = pMergePhyNode->groupSort;
|
||||
pInfo->binfo.pRes = pResBlock;
|
||||
pInfo->pSortInfo = pSortInfo;
|
||||
pInfo->pColMatchInfo = pColMatchColInfo;
|
||||
pInfo->pInputBlock = pInputBlock;
|
||||
pOperator->name = "MultiwaySortMerge";
|
||||
pOperator->name = "MultiwayMerge";
|
||||
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_MERGE;
|
||||
pOperator->blocking = false;
|
||||
pOperator->status = OP_NOT_OPENED;
|
||||
|
@ -687,9 +711,8 @@ SOperatorInfo* createMultiwaySortMergeOperatorInfo(SOperatorInfo** downStreams,
|
|||
// one additional is reserved for merged result.
|
||||
pInfo->sortBufSize = pInfo->bufPageSize * (numStreams + 1);
|
||||
|
||||
pOperator->fpSet =
|
||||
createOperatorFpSet(doOpenMultiwaySortMergeOperator, doMultiwaySortMerge, NULL, NULL,
|
||||
destroyMultiwaySortMergeOperatorInfo, NULL, NULL, getMultiwaySortMergeExplainExecInfo);
|
||||
pOperator->fpSet = createOperatorFpSet(doOpenMultiwayMergeOperator, doMultiwayMerge, NULL, NULL,
|
||||
destroyMultiwayMergeOperatorInfo, NULL, NULL, getMultiwayMergeExplainExecInfo);
|
||||
|
||||
int32_t code = appendDownstream(pOperator, downStreams, numStreams);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
|
|
|
@ -969,6 +969,12 @@ static int32_t doOpenIntervalAgg(SOperatorInfo* pOperator) {
|
|||
|
||||
getTableScanInfo(pOperator, &pInfo->order, &scanFlag);
|
||||
|
||||
if (pInfo->scalarSupp.pExprInfo != NULL) {
|
||||
SExprSupp* pExprSup =& pInfo->scalarSupp;
|
||||
projectApplyFunctions(pExprSup->pExprInfo, pBlock, pBlock, pExprSup->pCtx,
|
||||
pExprSup->numOfExprs, NULL);
|
||||
}
|
||||
|
||||
// the pDataBlock are always the same one, no need to call this again
|
||||
setInputDataBlock(pOperator, pSup->pCtx, pBlock, pInfo->order, scanFlag, true);
|
||||
hashIntervalAgg(pOperator, &pInfo->binfo.resultRowInfo, pBlock, scanFlag, NULL);
|
||||
|
@ -1381,6 +1387,11 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
|
|||
continue;
|
||||
}
|
||||
|
||||
if (pInfo->scalarSupp.pExprInfo != NULL) {
|
||||
SExprSupp* pExprSup = &pInfo->scalarSupp;
|
||||
projectApplyFunctions(pExprSup->pExprInfo, pBlock, pBlock, pExprSup->pCtx, pExprSup->numOfExprs, NULL);
|
||||
}
|
||||
|
||||
// The timewindow that overlaps the timestamps of the input pBlock need to be recalculated and return to the
|
||||
// caller. Note that all the time window are not close till now.
|
||||
// the pDataBlock are always the same one, no need to call this again
|
||||
|
@ -1498,7 +1509,7 @@ void increaseTs(SqlFunctionCtx* pCtx) {
|
|||
|
||||
SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
|
||||
SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
|
||||
STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo, bool isStream) {
|
||||
STimeWindowAggSupp* pTwAggSupp, SIntervalPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo, bool isStream) {
|
||||
SIntervalAggOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SIntervalAggOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (pInfo == NULL || pOperator == NULL) {
|
||||
|
@ -1511,6 +1522,15 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo*
|
|||
pInfo->execModel = pTaskInfo->execModel;
|
||||
pInfo->twAggSup = *pTwAggSupp;
|
||||
|
||||
if (pPhyNode->window.pExprs != NULL) {
|
||||
int32_t numOfScalar = 0;
|
||||
SExprInfo* pScalarExprInfo = createExprInfo(pPhyNode->window.pExprs, NULL, &numOfScalar);
|
||||
int32_t code = initExprSupp(&pInfo->scalarSupp, pScalarExprInfo, numOfScalar);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
}
|
||||
|
||||
pInfo->primaryTsIndex = primaryTsSlotId;
|
||||
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
|
@ -2473,7 +2493,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
|||
if (IS_FINAL_OP(pInfo)) {
|
||||
int32_t childIndex = getChildIndex(pBlock);
|
||||
SOperatorInfo* pChildOp = taosArrayGetP(pInfo->pChildren, childIndex);
|
||||
SIntervalAggOperatorInfo* pChildInfo = pChildOp->info;
|
||||
SStreamFinalIntervalOperatorInfo* pChildInfo = pChildOp->info;
|
||||
SExprSupp* pChildSup = &pChildOp->exprSupp;
|
||||
|
||||
doClearWindows(&pChildInfo->aggSup, pChildSup, &pChildInfo->interval, pChildInfo->primaryTsIndex,
|
||||
|
|
|
@ -191,6 +191,11 @@ bool getUniqueFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
|
|||
bool uniqueFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
|
||||
int32_t uniqueFunction(SqlFunctionCtx *pCtx);
|
||||
|
||||
bool getModeFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
|
||||
bool modeFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
|
||||
int32_t modeFunction(SqlFunctionCtx *pCtx);
|
||||
int32_t modeFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
|
||||
|
||||
bool getTwaFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
|
||||
bool twaFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
|
||||
int32_t twaFunction(SqlFunctionCtx *pCtx);
|
||||
|
|
|
@ -1045,20 +1045,29 @@ static int32_t translateFirstLastMerge(SFunctionNode* pFunc, char* pErrBuf, int3
|
|||
return translateFirstLastImpl(pFunc, pErrBuf, len, false);
|
||||
}
|
||||
|
||||
static int32_t translateUnique(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
|
||||
static int32_t translateUniqueMode(SFunctionNode* pFunc, char* pErrBuf, int32_t len, bool isUnique) {
|
||||
if (1 != LIST_LENGTH(pFunc->pParameterList)) {
|
||||
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
SNode* pPara = nodesListGetNode(pFunc->pParameterList, 0);
|
||||
if (!nodesExprHasColumn(pPara)) {
|
||||
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR, "The parameters of UNIQUE must contain columns");
|
||||
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR, "The parameters of %s must contain columns",
|
||||
isUnique ? "UNIQUE" : "MODE");
|
||||
}
|
||||
|
||||
pFunc->node.resType = ((SExprNode*)pPara)->resType;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static int32_t translateUnique(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
|
||||
return translateUniqueMode(pFunc, pErrBuf, len, true);
|
||||
}
|
||||
|
||||
static int32_t translateMode(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
|
||||
return translateUniqueMode(pFunc, pErrBuf, len, false);
|
||||
}
|
||||
|
||||
static int32_t translateDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
|
||||
int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
|
||||
if (numOfParams == 0 || numOfParams > 2) {
|
||||
|
@ -2117,6 +2126,16 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
.processFunc = uniqueFunction,
|
||||
.finalizeFunc = NULL
|
||||
},
|
||||
{
|
||||
.name = "mode",
|
||||
.type = FUNCTION_TYPE_MODE,
|
||||
.classification = FUNC_MGT_AGG_FUNC,
|
||||
.translateFunc = translateMode,
|
||||
.getEnvFunc = getModeFuncEnv,
|
||||
.initFunc = modeFunctionSetup,
|
||||
.processFunc = modeFunction,
|
||||
.finalizeFunc = modeFinalize,
|
||||
},
|
||||
{
|
||||
.name = "abs",
|
||||
.type = FUNCTION_TYPE_ABS,
|
||||
|
|
|
@ -32,6 +32,7 @@
|
|||
#define TAIL_MAX_OFFSET 100
|
||||
|
||||
#define UNIQUE_MAX_RESULT_SIZE (1024 * 1024 * 10)
|
||||
#define MODE_MAX_RESULT_SIZE UNIQUE_MAX_RESULT_SIZE
|
||||
|
||||
#define HLL_BUCKET_BITS 14 // The bits of the bucket
|
||||
#define HLL_DATA_BITS (64 - HLL_BUCKET_BITS)
|
||||
|
@ -246,6 +247,19 @@ typedef struct SUniqueInfo {
|
|||
char pItems[];
|
||||
} SUniqueInfo;
|
||||
|
||||
typedef struct SModeItem {
|
||||
int64_t count;
|
||||
char data[];
|
||||
} SModeItem;
|
||||
|
||||
typedef struct SModeInfo {
|
||||
int32_t numOfPoints;
|
||||
uint8_t colType;
|
||||
int16_t colBytes;
|
||||
SHashObj* pHash;
|
||||
char pItems[];
|
||||
} SModeInfo;
|
||||
|
||||
typedef struct SDerivInfo {
|
||||
double prevValue; // previous value
|
||||
TSKEY prevTs; // previous timestamp
|
||||
|
@ -4694,21 +4708,100 @@ int32_t uniqueFunction(SqlFunctionCtx* pCtx) {
|
|||
return pInfo->numOfPoints;
|
||||
}
|
||||
|
||||
int32_t uniqueFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
|
||||
bool getModeFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
|
||||
pEnv->calcMemSize = sizeof(SModeInfo) + MODE_MAX_RESULT_SIZE;
|
||||
return true;
|
||||
}
|
||||
|
||||
bool modeFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResInfo) {
|
||||
if (!functionSetup(pCtx, pResInfo)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
SModeInfo* pInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||
pInfo->numOfPoints = 0;
|
||||
pInfo->colType = pCtx->resDataInfo.type;
|
||||
pInfo->colBytes = pCtx->resDataInfo.bytes;
|
||||
if (pInfo->pHash != NULL) {
|
||||
taosHashClear(pInfo->pHash);
|
||||
} else {
|
||||
pInfo->pHash = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
static void doModeAdd(SModeInfo* pInfo, char* data, bool isNull) {
|
||||
// ignore null elements
|
||||
if (isNull) {
|
||||
return;
|
||||
}
|
||||
|
||||
int32_t hashKeyBytes = IS_VAR_DATA_TYPE(pInfo->colType) ? varDataTLen(data) : pInfo->colBytes;
|
||||
SModeItem** pHashItem = taosHashGet(pInfo->pHash, data, hashKeyBytes);
|
||||
if (pHashItem == NULL) {
|
||||
int32_t size = sizeof(SModeItem) + pInfo->colBytes;
|
||||
SModeItem* pItem = (SModeItem*)(pInfo->pItems + pInfo->numOfPoints * size);
|
||||
memcpy(pItem->data, data, pInfo->colBytes);
|
||||
pItem->count += 1;
|
||||
|
||||
taosHashPut(pInfo->pHash, data, hashKeyBytes, &pItem, sizeof(SModeItem*));
|
||||
pInfo->numOfPoints++;
|
||||
} else {
|
||||
(*pHashItem)->count += 1;
|
||||
}
|
||||
}
|
||||
|
||||
int32_t modeFunction(SqlFunctionCtx* pCtx) {
|
||||
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||
SUniqueInfo* pInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||
SModeInfo* pInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||
|
||||
SInputColumnInfoData* pInput = &pCtx->input;
|
||||
|
||||
SColumnInfoData* pInputCol = pInput->pData[0];
|
||||
SColumnInfoData* pOutput = (SColumnInfoData*)pCtx->pOutput;
|
||||
|
||||
int32_t startOffset = pCtx->offset;
|
||||
for (int32_t i = pInput->startRowIndex; i < pInput->numOfRows + pInput->startRowIndex; ++i) {
|
||||
char* data = colDataGetData(pInputCol, i);
|
||||
doModeAdd(pInfo, data, colDataIsNull_s(pInputCol, i));
|
||||
|
||||
if (sizeof(SModeInfo) + pInfo->numOfPoints * (sizeof(SModeItem) + pInfo->colBytes) >= MODE_MAX_RESULT_SIZE) {
|
||||
taosHashCleanup(pInfo->pHash);
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
}
|
||||
|
||||
SET_VAL(pResInfo, 1, 1);
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t modeFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
|
||||
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||
SModeInfo* pInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||
int32_t slotId = pCtx->pExpr->base.resSchema.slotId;
|
||||
SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);
|
||||
int32_t currentRow = pBlock->info.rows;
|
||||
|
||||
for (int32_t i = 0; i < pResInfo->numOfRes; ++i) {
|
||||
SUniqueItem* pItem = (SUniqueItem*)(pInfo->pItems + i * (sizeof(SUniqueItem) + pInfo->colBytes));
|
||||
colDataAppend(pCol, i, pItem->data, false);
|
||||
// TODO: handle ts output
|
||||
int32_t resIndex;
|
||||
int32_t maxCount = 0;
|
||||
for (int32_t i = 0; i < pInfo->numOfPoints; ++i) {
|
||||
SModeItem* pItem = (SModeItem*)(pInfo->pItems + i * (sizeof(SModeItem) + pInfo->colBytes));
|
||||
if (pItem->count > maxCount) {
|
||||
maxCount = pItem->count;
|
||||
resIndex = i;
|
||||
} else if (pItem->count == maxCount) {
|
||||
resIndex = -1;
|
||||
}
|
||||
}
|
||||
|
||||
SModeItem* pResItem = (SModeItem*)(pInfo->pItems + resIndex * (sizeof(SModeItem) + pInfo->colBytes));
|
||||
colDataAppend(pCol, currentRow, pResItem->data, (resIndex == -1) ? true : false);
|
||||
|
||||
return pResInfo->numOfRes;
|
||||
}
|
||||
|
||||
|
||||
bool getTwaFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
|
||||
pEnv->calcMemSize = sizeof(STwaInfo);
|
||||
return true;
|
||||
|
|
|
@ -110,7 +110,7 @@ static void udfdProcessRpcRsp(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet);
|
|||
static int32_t udfdFillUdfInfoFromMNode(void *clientRpc, char *udfName, SUdf *udf);
|
||||
static int32_t udfdConnectToMnode();
|
||||
static int32_t udfdLoadUdf(char *udfName, SUdf *udf);
|
||||
static bool udfdRpcRfp(int32_t code);
|
||||
static bool udfdRpcRfp(int32_t code, tmsg_t msgType);
|
||||
static int initEpSetFromCfg(const char *firstEp, const char *secondEp, SCorEpSet *pEpSet);
|
||||
static int32_t udfdOpenClientRpc();
|
||||
static int32_t udfdCloseClientRpc();
|
||||
|
@ -546,9 +546,12 @@ int32_t udfdLoadUdf(char *udfName, SUdf *udf) {
|
|||
}
|
||||
return 0;
|
||||
}
|
||||
static bool udfdRpcRfp(int32_t code) {
|
||||
static bool udfdRpcRfp(int32_t code, tmsg_t msgType) {
|
||||
if (code == TSDB_CODE_RPC_REDIRECT || code == TSDB_CODE_RPC_NETWORK_UNAVAIL || code == TSDB_CODE_NODE_NOT_DEPLOYED ||
|
||||
code == TSDB_CODE_SYN_NOT_LEADER || code == TSDB_CODE_APP_NOT_READY) {
|
||||
if (msgType == TDMT_VND_QUERY || msgType == TDMT_VND_FETCH) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
} else {
|
||||
return false;
|
||||
|
|
|
@ -2,12 +2,8 @@
|
|||
#include <stdlib.h>
|
||||
#include <stdio.h>
|
||||
|
||||
#include "tudf.h"
|
||||
#include "taosudf.h"
|
||||
|
||||
#undef malloc
|
||||
#define malloc malloc
|
||||
#undef free
|
||||
#define free free
|
||||
|
||||
DLL_EXPORT int32_t udf1_init() {
|
||||
return 0;
|
||||
|
|
|
@ -1,13 +1,9 @@
|
|||
#include <string.h>
|
||||
#include <stdlib.h>
|
||||
#include <stdio.h>
|
||||
#include <math.h>
|
||||
|
||||
#include "tudf.h"
|
||||
|
||||
#undef malloc
|
||||
#define malloc malloc
|
||||
#undef free
|
||||
#define free free
|
||||
#include "taosudf.h"
|
||||
|
||||
DLL_EXPORT int32_t udf2_init() {
|
||||
return 0;
|
||||
|
|
|
@ -337,7 +337,11 @@ int32_t trimString(const char* src, int32_t len, char* dst, int32_t dlen) {
|
|||
static bool isValidateTag(char* input) {
|
||||
if (!input) return false;
|
||||
for (size_t i = 0; i < strlen(input); ++i) {
|
||||
#ifdef WINDOWS
|
||||
if (input[i] < 0x20 || input[i] > 0x7E) return false;
|
||||
#else
|
||||
if (isprint(input[i]) == 0) return false;
|
||||
#endif
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
@ -377,6 +381,7 @@ int32_t parseJsontoTagData(const char* json, SArray* pTagVals, STag** ppTag, SMs
|
|||
|
||||
char* jsonKey = item->string;
|
||||
if (!isValidateTag(jsonKey)) {
|
||||
fprintf(stdout,"%s(%d) %s %08" PRId64 "\n", __FILE__, __LINE__,__func__,taosGetSelfPthreadId());fflush(stdout);
|
||||
retCode = buildSyntaxErrMsg(pMsgBuf, "json key not validate", jsonKey);
|
||||
goto end;
|
||||
}
|
||||
|
|
|
@ -1616,6 +1616,82 @@ static int32_t rewriteUniqueOptimize(SOptimizeContext* pCxt, SLogicSubplan* pLog
|
|||
return rewriteUniqueOptimizeImpl(pCxt, pLogicSubplan, pIndef);
|
||||
}
|
||||
|
||||
// merge projects
|
||||
static bool mergeProjectsMayBeOptimized(SLogicNode* pNode) {
|
||||
if (QUERY_NODE_LOGIC_PLAN_PROJECT != nodeType(pNode) || 1 != LIST_LENGTH(pNode->pChildren)) {
|
||||
return false;
|
||||
}
|
||||
SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pNode->pChildren, 0);
|
||||
if (QUERY_NODE_LOGIC_PLAN_PROJECT != nodeType(pChild) || 1 < LIST_LENGTH(pChild->pChildren) ||
|
||||
NULL != pChild->pConditions || NULL != pNode->pLimit || NULL != pNode->pSlimit) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
typedef struct SMergeProjectionsContext {
|
||||
SProjectLogicNode* pChildProj;
|
||||
int32_t errCode;
|
||||
} SMergeProjectionsContext;
|
||||
|
||||
static EDealRes mergeProjectionsExpr(SNode** pNode, void* pContext) {
|
||||
SMergeProjectionsContext* pCxt = pContext;
|
||||
SProjectLogicNode* pChildProj = pCxt->pChildProj;
|
||||
if (QUERY_NODE_COLUMN == nodeType(*pNode)) {
|
||||
SNode* pTarget;
|
||||
FOREACH(pTarget, ((SLogicNode*)pChildProj)->pTargets) {
|
||||
if (nodesEqualNode(pTarget, *pNode)) {
|
||||
SNode* pProjection;
|
||||
FOREACH(pProjection, pChildProj->pProjections) {
|
||||
if (0 == strcmp(((SColumnNode*)pTarget)->colName, ((SExprNode*)pProjection)->aliasName)) {
|
||||
SNode* pExpr = nodesCloneNode(pProjection);
|
||||
if (pExpr == NULL) {
|
||||
pCxt->errCode = terrno;
|
||||
return DEAL_RES_ERROR;
|
||||
}
|
||||
nodesDestroyNode(*pNode);
|
||||
*pNode = pExpr;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return DEAL_RES_IGNORE_CHILD;
|
||||
}
|
||||
return DEAL_RES_CONTINUE;
|
||||
}
|
||||
|
||||
static int32_t mergeProjectsOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SLogicNode* pSelfNode) {
|
||||
SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pSelfNode->pChildren, 0);
|
||||
SMergeProjectionsContext cxt = {.pChildProj = (SProjectLogicNode*)pChild, .errCode = TSDB_CODE_SUCCESS};
|
||||
|
||||
nodesRewriteExprs(((SProjectLogicNode*)pSelfNode)->pProjections, mergeProjectionsExpr, &cxt);
|
||||
int32_t code = cxt.errCode;
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
if (1 == LIST_LENGTH(pChild->pChildren)) {
|
||||
SLogicNode* pGrandChild = (SLogicNode*)nodesListGetNode(pChild->pChildren, 0);
|
||||
code = replaceLogicNode(pLogicSubplan, pChild, pGrandChild);
|
||||
} else { // no grand child
|
||||
NODES_CLEAR_LIST(pSelfNode->pChildren);
|
||||
}
|
||||
}
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
NODES_CLEAR_LIST(pChild->pChildren);
|
||||
}
|
||||
nodesDestroyNode((SNode*)pChild);
|
||||
return code;
|
||||
}
|
||||
|
||||
static int32_t mergeProjectsOptimize(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan) {
|
||||
SLogicNode* pProjectNode = optFindPossibleNode(pLogicSubplan->pNode, mergeProjectsMayBeOptimized);
|
||||
if (NULL == pProjectNode) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
return mergeProjectsOptimizeImpl(pCxt, pLogicSubplan, pProjectNode);
|
||||
}
|
||||
|
||||
// clang-format off
|
||||
static const SOptimizeRule optimizeRuleSet[] = {
|
||||
{.pName = "ScanPath", .optimizeFunc = scanPathOptimize},
|
||||
|
@ -1623,6 +1699,7 @@ static const SOptimizeRule optimizeRuleSet[] = {
|
|||
{.pName = "SortPrimaryKey", .optimizeFunc = sortPrimaryKeyOptimize},
|
||||
{.pName = "SmaIndex", .optimizeFunc = smaIndexOptimize},
|
||||
{.pName = "PartitionTags", .optimizeFunc = partTagsOptimize},
|
||||
{.pName = "MergeProjects", .optimizeFunc = mergeProjectsOptimize},
|
||||
{.pName = "EliminateProject", .optimizeFunc = eliminateProjOptimize},
|
||||
{.pName = "EliminateSetOperator", .optimizeFunc = eliminateSetOpOptimize},
|
||||
{.pName = "RewriteTail", .optimizeFunc = rewriteTailOptimize},
|
||||
|
|
|
@ -50,7 +50,7 @@ typedef struct SSyncIO {
|
|||
void *pSyncNode;
|
||||
int32_t (*FpOnSyncPing)(SSyncNode *pSyncNode, SyncPing *pMsg);
|
||||
int32_t (*FpOnSyncPingReply)(SSyncNode *pSyncNode, SyncPingReply *pMsg);
|
||||
int32_t (*FpOnSyncClientRequest)(SSyncNode *pSyncNode, SyncClientRequest *pMsg);
|
||||
int32_t (*FpOnSyncClientRequest)(SSyncNode *pSyncNode, SyncClientRequest *pMsg, SyncIndex *pRetIndex);
|
||||
int32_t (*FpOnSyncRequestVote)(SSyncNode *pSyncNode, SyncRequestVote *pMsg);
|
||||
int32_t (*FpOnSyncRequestVoteReply)(SSyncNode *pSyncNode, SyncRequestVoteReply *pMsg);
|
||||
int32_t (*FpOnSyncAppendEntries)(SSyncNode *pSyncNode, SyncAppendEntries *pMsg);
|
||||
|
|
|
@ -169,7 +169,7 @@ SSyncNode* syncNodeOpen(const SSyncInfo* pSyncInfo);
|
|||
void syncNodeStart(SSyncNode* pSyncNode);
|
||||
void syncNodeStartStandBy(SSyncNode* pSyncNode);
|
||||
void syncNodeClose(SSyncNode* pSyncNode);
|
||||
int32_t syncNodePropose(SSyncNode* pSyncNode, const SRpcMsg* pMsg, bool isWeak);
|
||||
int32_t syncNodePropose(SSyncNode* pSyncNode, SRpcMsg* pMsg, bool isWeak);
|
||||
|
||||
// option
|
||||
bool syncNodeSnapshotEnable(SSyncNode* pSyncNode);
|
||||
|
@ -233,6 +233,7 @@ SyncIndex syncNodeGetPreIndex(SSyncNode* pSyncNode, SyncIndex index);
|
|||
SyncTerm syncNodeGetPreTerm(SSyncNode* pSyncNode, SyncIndex index);
|
||||
int32_t syncNodeGetPreIndexTerm(SSyncNode* pSyncNode, SyncIndex index, SyncIndex* pPreIndex, SyncTerm* pPreTerm);
|
||||
|
||||
bool syncNodeIsOptimizedOneReplica(SSyncNode* ths, SRpcMsg* pMsg);
|
||||
int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex, uint64_t flag);
|
||||
|
||||
int32_t syncNodeUpdateNewConfigIndex(SSyncNode* ths, SSyncCfg* pNewCfg);
|
||||
|
|
|
@ -49,14 +49,14 @@ int32_t raftCfgClose(SRaftCfg *pRaftCfg);
|
|||
int32_t raftCfgPersist(SRaftCfg *pRaftCfg);
|
||||
int32_t raftCfgAddConfigIndex(SRaftCfg *pRaftCfg, SyncIndex configIndex);
|
||||
|
||||
cJSON *syncCfg2Json(SSyncCfg *pSyncCfg);
|
||||
char *syncCfg2Str(SSyncCfg *pSyncCfg);
|
||||
char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg);
|
||||
cJSON * syncCfg2Json(SSyncCfg *pSyncCfg);
|
||||
char * syncCfg2Str(SSyncCfg *pSyncCfg);
|
||||
char * syncCfg2SimpleStr(SSyncCfg *pSyncCfg);
|
||||
int32_t syncCfgFromJson(const cJSON *pRoot, SSyncCfg *pSyncCfg);
|
||||
int32_t syncCfgFromStr(const char *s, SSyncCfg *pSyncCfg);
|
||||
|
||||
cJSON *raftCfg2Json(SRaftCfg *pRaftCfg);
|
||||
char *raftCfg2Str(SRaftCfg *pRaftCfg);
|
||||
cJSON * raftCfg2Json(SRaftCfg *pRaftCfg);
|
||||
char * raftCfg2Str(SRaftCfg *pRaftCfg);
|
||||
int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg);
|
||||
int32_t raftCfgFromStr(const char *s, SRaftCfg *pRaftCfg);
|
||||
|
||||
|
|
|
@@ -56,6 +56,31 @@ void syncEntryPrint2(char* s, const SSyncRaftEntry* pObj);
 void syncEntryLog(const SSyncRaftEntry* pObj);
 void syncEntryLog2(char* s, const SSyncRaftEntry* pObj);

+//-----------------------------------
+typedef struct SRaftEntryCache {
+  SHashObj*     pEntryHash;
+  int32_t       maxCount;
+  int32_t       currentCount;
+  TdThreadMutex mutex;
+  SSyncNode*    pSyncNode;
+} SRaftEntryCache;
+
+SRaftEntryCache* raftCacheCreate(SSyncNode* pSyncNode, int32_t maxCount);
+void raftCacheDestroy(SRaftEntryCache* pCache);
+int32_t raftCachePutEntry(struct SRaftEntryCache* pCache, SSyncRaftEntry* pEntry);
+int32_t raftCacheGetEntry(struct SRaftEntryCache* pCache, SyncIndex index, SSyncRaftEntry** ppEntry);
+int32_t raftCacheGetEntryP(struct SRaftEntryCache* pCache, SyncIndex index, SSyncRaftEntry** ppEntry);
+int32_t raftCacheDelEntry(struct SRaftEntryCache* pCache, SyncIndex index);
+int32_t raftCacheGetAndDel(struct SRaftEntryCache* pCache, SyncIndex index, SSyncRaftEntry** ppEntry);
+int32_t raftCacheClear(struct SRaftEntryCache* pCache);
+
+cJSON* raftCache2Json(SRaftEntryCache* pObj);
+char* raftCache2Str(SRaftEntryCache* pObj);
+void raftCachePrint(SRaftEntryCache* pObj);
+void raftCachePrint2(char* s, SRaftEntryCache* pObj);
+void raftCacheLog(SRaftEntryCache* pObj);
+void raftCacheLog2(char* s, SRaftEntryCache* pObj);
+
 #ifdef __cplusplus
 }
 #endif
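The new SRaftEntryCache above is an index-keyed, mutex-protected hash of raft entries with a hard cap. A minimal usage sketch (illustrative only, not part of the commit; it relies on the return conventions documented in the implementation further down: put returns 1/0/-1, get sets terrno = TSDB_CODE_WAL_LOG_NOT_EXIST on a miss):

// Illustrative only: consult the cache first, fall back to the WAL-backed log store.
static int32_t getEntryWithCache(SSyncNode* pNode, SRaftEntryCache* pCache, SyncIndex index,
                                 SSyncRaftEntry** ppEntry) {
  if (raftCacheGetEntry(pCache, index, ppEntry) == 0) {
    return 0;  // cache hit; *ppEntry is a private copy the caller must free
  }
  // cache miss (terrno == TSDB_CODE_WAL_LOG_NOT_EXIST): read from the log store
  int32_t code = pNode->pLogStore->syncLogGetEntry(pNode->pLogStore, index, ppEntry);
  if (code == 0 && *ppEntry != NULL) {
    (void)raftCachePutEntry(pCache, *ppEntry);  // 0 just means the cache is full; that is fine
  }
  return code;
}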
@@ -31,7 +31,10 @@ extern "C" {
 typedef struct SSyncLogStoreData {
   SSyncNode* pSyncNode;
   SWal*      pWal;
+
+  TdThreadMutex   mutex;
+  SWalReadHandle* pWalHandle;

   // SyncIndex beginIndex; // valid begin index, default 0, may be set beginIndex > 0
 } SSyncLogStoreData;
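The two new fields are what the raftLog/logStore hunks later in this commit lock around: a single cached SWalReadHandle shared by all readers, serialized by the mutex. A hedged sketch of that pattern (the helper name is hypothetical, only the calls are taken from the diff):

// Hypothetical helper showing the intended locking discipline around the shared read handle.
static int32_t readEntryLocked(SSyncLogStoreData* pData, SyncIndex index) {
  taosThreadMutexLock(&(pData->mutex));                        // serialize all readers
  int32_t code = walReadWithHandle(pData->pWalHandle, index);  // reuse the cached handle
  // ... copy the record out while still holding the lock ...
  taosThreadMutexUnlock(&(pData->mutex));
  return code;
}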
@@ -35,12 +35,13 @@ extern "C" {

 #define SYNC_SNAPSHOT_RETRY_MS 5000

 //---------------------------------------------------
 typedef struct SSyncSnapshotSender {
   bool    start;
   int32_t seq;
   int32_t ack;
-  void * pReader;
-  void * pCurrentBlock;
+  void *pReader;
+  void *pCurrentBlock;
   int32_t blockLen;
   SSnapshot snapshot;
   SSyncCfg  lastConfig;

@@ -56,24 +57,24 @@ SSyncSnapshotSender *snapshotSenderCreate(SSyncNode *pSyncNode, int32_t replicaI
 void snapshotSenderDestroy(SSyncSnapshotSender *pSender);
 bool snapshotSenderIsStart(SSyncSnapshotSender *pSender);
 void snapshotSenderStart(SSyncSnapshotSender *pSender, SSnapshot snapshot, void *pReader);
-void snapshotSenderStop(SSyncSnapshotSender *pSender);
+void snapshotSenderStop(SSyncSnapshotSender *pSender, bool finish);
 int32_t snapshotSend(SSyncSnapshotSender *pSender);
 int32_t snapshotReSend(SSyncSnapshotSender *pSender);
-cJSON * snapshotSender2Json(SSyncSnapshotSender *pSender);
-char *  snapshotSender2Str(SSyncSnapshotSender *pSender);
-char *  snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event);

+cJSON *snapshotSender2Json(SSyncSnapshotSender *pSender);
+char *snapshotSender2Str(SSyncSnapshotSender *pSender);
+char *snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event);

 //---------------------------------------------------
 typedef struct SSyncSnapshotReceiver {
   bool start;

   int32_t ack;
-  void * pWriter;
+  void *pWriter;
   SyncTerm  term;
   SyncTerm  privateTerm;
   SSnapshot snapshot;

+  SSyncNode *pSyncNode;
   SRaftId    fromId;
-  SSyncNode *pSyncNode;

 } SSyncSnapshotReceiver;

@@ -81,11 +82,14 @@ SSyncSnapshotReceiver *snapshotReceiverCreate(SSyncNode *pSyncNode, SRaftId from
 void snapshotReceiverDestroy(SSyncSnapshotReceiver *pReceiver);
 void snapshotReceiverStart(SSyncSnapshotReceiver *pReceiver, SyncTerm privateTerm, SyncSnapshotSend *pBeginMsg);
 bool snapshotReceiverIsStart(SSyncSnapshotReceiver *pReceiver);
-void snapshotReceiverStop(SSyncSnapshotReceiver *pReceiver, bool apply);
-cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver);
-char * snapshotReceiver2Str(SSyncSnapshotReceiver *pReceiver);
-char * snapshotReceiver2SimpleStr(SSyncSnapshotReceiver *pReceiver, char *event);
+void snapshotReceiverStop(SSyncSnapshotReceiver *pReceiver);
+
+cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver);
+char *snapshotReceiver2Str(SSyncSnapshotReceiver *pReceiver);
+char *snapshotReceiver2SimpleStr(SSyncSnapshotReceiver *pReceiver, char *event);

 //---------------------------------------------------
 // on message
 int32_t syncNodeOnSnapshotSendCb(SSyncNode *ths, SyncSnapshotSend *pMsg);
 int32_t syncNodeOnSnapshotRspCb(SSyncNode *ths, SyncSnapshotRsp *pMsg);
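Read together, the sender API implies this lifecycle on the leader: obtain a snapshot plus an FSM reader, start the sender (which sends the SEQ_BEGIN message), push blocks as acks arrive, and stop with finish = true once the receiver acks the last block. A sketch under those assumptions (the driver function is hypothetical; in the commit the triggers are the append-entries-reply and snapshot-rsp callbacks):

// Hypothetical driver, shown only to connect the pieces of the new sender API.
void sendSnapshotToReplica(SSyncNode *pNode, SSyncSnapshotSender *pSender) {
  SSnapshot snapshot = {.data = NULL,
                        .lastApplyIndex = SYNC_INDEX_INVALID,
                        .lastApplyTerm = 0,
                        .lastConfigIndex = SYNC_INDEX_INVALID};
  void *pReader = NULL;
  pNode->pFsm->FpGetSnapshot(pNode->pFsm, &snapshot, NULL, &pReader);

  if (pReader != NULL && !snapshotSenderIsStart(pSender)) {
    snapshotSenderStart(pSender, snapshot, pReader);  // sends the SEQ_BEGIN message
  }
  // Later, syncNodeOnSnapshotRspCb() drives snapshotSend()/snapshotReSend() per ack,
  // and finally snapshotSenderStop(pSender, true) once ack == SYNC_SNAPSHOT_SEQ_END.
}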
@ -240,7 +240,10 @@ int32_t syncNodeOnAppendEntriesReplySnapshotCb(SSyncNode* ths, SyncAppendEntries
|
|||
SSyncSnapshotSender* pSender = syncNodeGetSnapshotSender(ths, &(pMsg->srcId));
|
||||
ASSERT(pSender != NULL);
|
||||
|
||||
SSnapshot snapshot;
|
||||
SSnapshot snapshot = {.data = NULL,
|
||||
.lastApplyIndex = SYNC_INDEX_INVALID,
|
||||
.lastApplyTerm = 0,
|
||||
.lastConfigIndex = SYNC_INDEX_INVALID};
|
||||
void* pReader = NULL;
|
||||
ths->pFsm->FpGetSnapshot(ths->pFsm, &snapshot, NULL, &pReader);
|
||||
if (snapshot.lastApplyIndex >= SYNC_INDEX_BEGIN && nextIndex <= snapshot.lastApplyIndex + 1 &&
|
||||
|
@ -249,10 +252,6 @@ int32_t syncNodeOnAppendEntriesReplySnapshotCb(SSyncNode* ths, SyncAppendEntries
|
|||
ASSERT(pReader != NULL);
|
||||
snapshotSenderStart(pSender, snapshot, pReader);
|
||||
|
||||
char* eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender start");
|
||||
syncNodeEventLog(ths, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
|
||||
} else {
|
||||
// no snapshot
|
||||
if (pReader != NULL) {
|
||||
|
@ -260,23 +259,6 @@ int32_t syncNodeOnAppendEntriesReplySnapshotCb(SSyncNode* ths, SyncAppendEntries
|
|||
}
|
||||
}
|
||||
|
||||
/*
|
||||
bool hasSnapshot = syncNodeHasSnapshot(ths);
|
||||
SSnapshot snapshot;
|
||||
ths->pFsm->FpGetSnapshotInfo(ths->pFsm, &snapshot);
|
||||
|
||||
// start sending snapshot first time
|
||||
// start here, stop by receiver
|
||||
if (hasSnapshot && nextIndex <= snapshot.lastApplyIndex + 1 && !snapshotSenderIsStart(pSender) &&
|
||||
pMsg->privateTerm < pSender->privateTerm) {
|
||||
snapshotSenderStart(pSender);
|
||||
|
||||
char* eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender start");
|
||||
syncNodeEventLog(ths, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
}
|
||||
*/
|
||||
|
||||
SyncIndex sentryIndex = pSender->snapshot.lastApplyIndex + 1;
|
||||
|
||||
// update nextIndex to sentryIndex
|
||||
|
@ -300,5 +282,5 @@ int32_t syncNodeOnAppendEntriesReplySnapshotCb(SSyncNode* ths, SyncAppendEntries
|
|||
syncIndexMgrLog2("recv sync-append-entries-reply, after pNextIndex:", ths->pNextIndex);
|
||||
syncIndexMgrLog2("recv sync-append-entries-reply, after pMatchIndex:", ths->pMatchIndex);
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
|
@ -102,6 +102,7 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
|
|||
}
|
||||
}
|
||||
|
||||
// maybe execute fsm
|
||||
if (newCommitIndex > pSyncNode->commitIndex) {
|
||||
SyncIndex beginIndex = pSyncNode->commitIndex + 1;
|
||||
SyncIndex endIndex = newCommitIndex;
|
||||
|
|
|
@ -30,7 +30,7 @@ static int32_t syncIODestroy(SSyncIO *io);
|
|||
static int32_t syncIOStartInternal(SSyncIO *io);
|
||||
static int32_t syncIOStopInternal(SSyncIO *io);
|
||||
|
||||
static void *syncIOConsumerFunc(void *param);
|
||||
static void * syncIOConsumerFunc(void *param);
|
||||
static void syncIOProcessRequest(void *pParent, SRpcMsg *pMsg, SEpSet *pEpSet);
|
||||
static void syncIOProcessReply(void *pParent, SRpcMsg *pMsg, SEpSet *pEpSet);
|
||||
static int32_t syncIOAuth(void *parent, char *meterId, char *spi, char *encrypt, char *secret, char *ckey);
|
||||
|
@ -242,9 +242,9 @@ static int32_t syncIOStopInternal(SSyncIO *io) {
|
|||
}
|
||||
|
||||
static void *syncIOConsumerFunc(void *param) {
|
||||
SSyncIO *io = param;
|
||||
SSyncIO * io = param;
|
||||
STaosQall *qall;
|
||||
SRpcMsg *pRpcMsg, rpcMsg;
|
||||
SRpcMsg * pRpcMsg, rpcMsg;
|
||||
qall = taosAllocateQall();
|
||||
|
||||
while (1) {
|
||||
|
@ -281,7 +281,7 @@ static void *syncIOConsumerFunc(void *param) {
|
|||
if (io->FpOnSyncClientRequest != NULL) {
|
||||
SyncClientRequest *pSyncMsg = syncClientRequestFromRpcMsg2(pRpcMsg);
|
||||
ASSERT(pSyncMsg != NULL);
|
||||
io->FpOnSyncClientRequest(io->pSyncNode, pSyncMsg);
|
||||
io->FpOnSyncClientRequest(io->pSyncNode, pSyncMsg, NULL);
|
||||
syncClientRequestDestroy(pSyncMsg);
|
||||
}
|
||||
|
||||
|
|
|
@ -126,7 +126,7 @@ cJSON *syncIndexMgr2Json(SSyncIndexMgr *pSyncIndexMgr) {
|
|||
|
||||
char *syncIndexMgr2Str(SSyncIndexMgr *pSyncIndexMgr) {
|
||||
cJSON *pJson = syncIndexMgr2Json(pSyncIndexMgr);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
|
|
@ -50,7 +50,7 @@ static int32_t syncNodeAppendNoop(SSyncNode* ths);
|
|||
// process message ----
|
||||
int32_t syncNodeOnPingCb(SSyncNode* ths, SyncPing* pMsg);
|
||||
int32_t syncNodeOnPingReplyCb(SSyncNode* ths, SyncPingReply* pMsg);
|
||||
int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg);
|
||||
int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncIndex* pRetIndex);
|
||||
|
||||
// life cycle
|
||||
static void syncFreeNode(void* param);
|
||||
|
@@ -627,7 +627,7 @@ int32_t syncPropose(int64_t rid, SRpcMsg* pMsg, bool isWeak) {
   return ret;
 }

-int32_t syncNodePropose(SSyncNode* pSyncNode, const SRpcMsg* pMsg, bool isWeak) {
+int32_t syncNodePropose(SSyncNode* pSyncNode, SRpcMsg* pMsg, bool isWeak) {
   int32_t ret = 0;

   char eventLog[128];

@@ -664,13 +664,34 @@ int32_t syncNodePropose(SSyncNode* pSyncNode, const SRpcMsg* pMsg, bool isWeak)
     SRpcMsg rpcMsg;
     syncClientRequest2RpcMsg(pSyncMsg, &rpcMsg);

+    // optimized one replica
+    if (syncNodeIsOptimizedOneReplica(pSyncNode, pMsg)) {
+      SyncIndex retIndex;
+      int32_t   code = syncNodeOnClientRequestCb(pSyncNode, pSyncMsg, &retIndex);
+      if (code == 0) {
+        pMsg->info.conn.applyIndex = retIndex;
+        rpcFreeCont(rpcMsg.pCont);
+        syncRespMgrDel(pSyncNode->pSyncRespMgr, seqNum);
+        ret = 1;
+        sDebug("vgId:%d optimized index:%ld success, msgtype:%s,%d", pSyncNode->vgId, retIndex,
+               TMSG_INFO(pMsg->msgType), pMsg->msgType);
+      } else {
+        ret = -1;
+        terrno = TSDB_CODE_SYN_INTERNAL_ERROR;
+        sError("vgId:%d optimized index:%ld error, msgtype:%s,%d", pSyncNode->vgId, retIndex, TMSG_INFO(pMsg->msgType),
+               pMsg->msgType);
+      }
+
+    } else {
       if (pSyncNode->FpEqMsg != NULL && (*pSyncNode->FpEqMsg)(pSyncNode->msgcb, &rpcMsg) == 0) {
         ret = 0;
       } else {
         ret = -1;
         terrno = TSDB_CODE_SYN_INTERNAL_ERROR;
-        sError("syncPropose pSyncNode->FpEqMsg is NULL");
+        sError("enqueue msg error, FpEqMsg is NULL");
       }
+    }

     syncClientRequestDestroy(pSyncMsg);
     goto _END;
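A sketch of how a caller might interpret the new return values of syncNodePropose(); only the return convention is taken from the hunk above, the surrounding function is hypothetical:

// Hypothetical caller (e.g. a vnode write path); names are illustrative.
int32_t proposeWrite(SSyncNode *pNode, SRpcMsg *pMsg) {
  int32_t ret = syncNodePropose(pNode, pMsg, false);
  if (ret == 1) {
    // one-replica fast path: the entry was appended and committed inline,
    // pMsg->info.conn.applyIndex carries its index, and FpCommitCb will not
    // fire for it -- the caller is expected to apply pMsg itself
    return 0;
  }
  if (ret == 0) {
    return 0;  // enqueued; the apply thread delivers it later through FpCommitCb
  }
  return terrno;  // -1: enqueue or inline handling failed, terrno is already set
}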
@@ -1334,11 +1355,16 @@ void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) {
   int32_t userStrLen = strlen(str);

   SSnapshot snapshot = {.data = NULL, .lastApplyIndex = -1, .lastApplyTerm = 0};
-  if (pSyncNode->pFsm->FpGetSnapshotInfo != NULL) {
+  if (pSyncNode->pFsm != NULL && pSyncNode->pFsm->FpGetSnapshotInfo != NULL) {
     pSyncNode->pFsm->FpGetSnapshotInfo(pSyncNode->pFsm, &snapshot);
   }
-  SyncIndex logLastIndex = pSyncNode->pLogStore->syncLogLastIndex(pSyncNode->pLogStore);
-  SyncIndex logBeginIndex = pSyncNode->pLogStore->syncLogBeginIndex(pSyncNode->pLogStore);
+
+  SyncIndex logLastIndex = SYNC_INDEX_INVALID;
+  SyncIndex logBeginIndex = SYNC_INDEX_INVALID;
+  if (pSyncNode->pLogStore != NULL) {
+    logLastIndex = pSyncNode->pLogStore->syncLogLastIndex(pSyncNode->pLogStore);
+    logBeginIndex = pSyncNode->pLogStore->syncLogBeginIndex(pSyncNode->pLogStore);
+  }

   char* pCfgStr = syncCfg2SimpleStr(&(pSyncNode->pRaftCfg->cfg));
   char* printStr = "";
||||
|
@@ -2377,7 +2403,7 @@ int32_t syncNodeOnPingReplyCb(SSyncNode* ths, SyncPingReply* pMsg) {
 //       /\ UNCHANGED <<messages, serverVars, candidateVars,
 //                      leaderVars, commitIndex>>
 //
-int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg) {
+int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncIndex* pRetIndex) {
   int32_t ret = 0;
   syncClientRequestLog2("==syncNodeOnClientRequestCb==", pMsg);

@@ -2436,6 +2462,14 @@ int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg) {
     rpcFreeCont(rpcMsg.pCont);
   }

+  if (pRetIndex != NULL) {
+    if (ret == 0 && pEntry != NULL) {
+      *pRetIndex = pEntry->index;
+    } else {
+      *pRetIndex = SYNC_INDEX_INVALID;
+    }
+  }
+
   syncEntryDestory(pEntry);
   return ret;
 }
||||
|
@@ -2600,6 +2634,10 @@ static int32_t syncNodeProposeConfigChangeFinish(SSyncNode* ths, SyncReconfigFin
   return 0;
 }

+bool syncNodeIsOptimizedOneReplica(SSyncNode* ths, SRpcMsg* pMsg) {
+  return (ths->replicaNum == 1 && syncUtilUserCommit(pMsg->msgType) && ths->vgId != 1);
+}
+
 int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex, uint64_t flag) {
   int32_t    code = 0;
   ESyncState state = flag;

@@ -2621,7 +2659,20 @@ int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex,
       syncEntry2OriginalRpc(pEntry, &rpcMsg);

       // user commit
-      if (ths->pFsm->FpCommitCb != NULL && syncUtilUserCommit(pEntry->originalRpcType)) {
+      if ((ths->pFsm->FpCommitCb != NULL) && syncUtilUserCommit(pEntry->originalRpcType)) {
+        bool internalExecute = true;
+        if ((ths->replicaNum == 1) && ths->restoreFinish && (ths->vgId != 1)) {
+          internalExecute = false;
+        }
+
+        do {
+          char logBuf[128];
+          snprintf(logBuf, sizeof(logBuf), "index:%ld, internalExecute:%d", i, internalExecute);
+          syncNodeEventLog(ths, logBuf);
+        } while (0);
+
+        // execute fsm in apply thread, or execute outside syncPropose
+        if (internalExecute) {
           SFsmCbMeta cbMeta = {0};
           cbMeta.index = pEntry->index;
           cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);

@@ -2635,6 +2686,7 @@ int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex,

           ths->pFsm->FpCommitCb(ths->pFsm, &rpcMsg, cbMeta);
         }
+      }

       // config change
       if (pEntry->originalRpcType == TDMT_SYNC_CONFIG_CHANGE) {
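One detail worth spelling out: the internalExecute test above is the commit-side counterpart of the one-replica fast path in syncNodePropose(). A hedged restatement (the helper is illustrative, not a function in the commit):

// Illustrative helper; mirrors the inline condition above.
static bool shouldExecuteInApplyThread(SSyncNode* ths) {
  // A single-replica, fully restored, non-mnode (vgId != 1) vgroup returns 1 from
  // syncNodePropose() and lets the caller apply the message itself, so the commit
  // path skips FpCommitCb for that entry and leaves execution to the caller.
  return !((ths->replicaNum == 1) && ths->restoreFinish && (ths->vgId != 1));
}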
|
|
@ -101,7 +101,7 @@ cJSON *syncCfg2Json(SSyncCfg *pSyncCfg) {
|
|||
|
||||
char *syncCfg2Str(SSyncCfg *pSyncCfg) {
|
||||
cJSON *pJson = syncCfg2Json(pSyncCfg);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
@ -109,7 +109,7 @@ char *syncCfg2Str(SSyncCfg *pSyncCfg) {
|
|||
char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg) {
|
||||
if (pSyncCfg != NULL) {
|
||||
int32_t len = 512;
|
||||
char *s = taosMemoryMalloc(len);
|
||||
char * s = taosMemoryMalloc(len);
|
||||
memset(s, 0, len);
|
||||
|
||||
snprintf(s, len, "{replica-num:%d, my-index:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);
|
||||
|
@ -205,7 +205,7 @@ cJSON *raftCfg2Json(SRaftCfg *pRaftCfg) {
|
|||
|
||||
char *raftCfg2Str(SRaftCfg *pRaftCfg) {
|
||||
cJSON *pJson = raftCfg2Json(pRaftCfg);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
@ -271,7 +271,7 @@ int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg) {
|
|||
(pRaftCfg->configIndexArr)[i] = atoll(pIndex->valuestring);
|
||||
}
|
||||
|
||||
cJSON *pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
|
||||
cJSON * pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
|
||||
int32_t code = syncCfgFromJson(pJsonSyncCfg, &(pRaftCfg->cfg));
|
||||
ASSERT(code == 0);
|
||||
|
||||
|
|
|
@@ -180,3 +180,247 @@ void syncEntryLog2(char* s, const SSyncRaftEntry* pObj) {
   sTrace("syncEntryLog2 | len:%zu | %s | %s", strlen(serialized), s, serialized);
   taosMemoryFree(serialized);
 }
+
+//-----------------------------------
+SRaftEntryCache* raftCacheCreate(SSyncNode* pSyncNode, int32_t maxCount) {
+  SRaftEntryCache* pCache = taosMemoryMalloc(sizeof(SRaftEntryCache));
+  if (pCache == NULL) {
+    sError("vgId:%d raft cache create error", pSyncNode->vgId);
+    return NULL;
+  }
+
+  pCache->pEntryHash =
+      taosHashInit(sizeof(SyncIndex), taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
+  if (pCache->pEntryHash == NULL) {
+    sError("vgId:%d raft cache create hash error", pSyncNode->vgId);
+    return NULL;
+  }
+
+  taosThreadMutexInit(&(pCache->mutex), NULL);
+  pCache->maxCount = maxCount;
+  pCache->currentCount = 0;
+  pCache->pSyncNode = pSyncNode;
+
+  return pCache;
+}
+
+void raftCacheDestroy(SRaftEntryCache* pCache) {
+  if (pCache != NULL) {
+    taosThreadMutexLock(&(pCache->mutex));
+    taosHashCleanup(pCache->pEntryHash);
+    taosThreadMutexUnlock(&(pCache->mutex));
+    taosThreadMutexDestroy(&(pCache->mutex));
+    taosMemoryFree(pCache);
+  }
+}
+
+// success, return 1
+// max count, return 0
+// error, return -1
+int32_t raftCachePutEntry(struct SRaftEntryCache* pCache, SSyncRaftEntry* pEntry) {
+  taosThreadMutexLock(&(pCache->mutex));
+
+  if (pCache->currentCount >= pCache->maxCount) {
+    taosThreadMutexUnlock(&(pCache->mutex));
+    return 0;
+  }
+
+  taosHashPut(pCache->pEntryHash, &(pEntry->index), sizeof(pEntry->index), pEntry, pEntry->bytes);
+  ++(pCache->currentCount);
+
+  do {
+    char eventLog[128];
+    snprintf(eventLog, sizeof(eventLog), "raft cache add, type:%s,%d, type2:%s,%d, index:%ld, bytes:%d",
+             TMSG_INFO(pEntry->msgType), pEntry->msgType, TMSG_INFO(pEntry->originalRpcType), pEntry->originalRpcType,
+             pEntry->index, pEntry->bytes);
+    syncNodeEventLog(pCache->pSyncNode, eventLog);
+  } while (0);
+
+  taosThreadMutexUnlock(&(pCache->mutex));
+  return 1;
+}
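A small usage sketch for the tri-state result of raftCachePutEntry() (illustrative; the caller keeps ownership of pEntry because taosHashPut() stores a copy):

// Illustrative: a writer that tolerates a full cache.
static void tryCacheEntry(SRaftEntryCache* pCache, SSyncRaftEntry* pEntry) {
  int32_t code = raftCachePutEntry(pCache, pEntry);
  if (code == 0) {
    // cache full; the entry can still be read back from the WAL, so skipping is acceptable
  } else if (code == -1) {
    sError("raft cache put error, index:%ld", pEntry->index);
  }
  // the caller still owns pEntry in every case
}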
||||
|
||||
// success, return 0
|
||||
// error, return -1
|
||||
// not exist, return -1, terrno = TSDB_CODE_WAL_LOG_NOT_EXIST
|
||||
int32_t raftCacheGetEntry(struct SRaftEntryCache* pCache, SyncIndex index, SSyncRaftEntry** ppEntry) {
|
||||
if (ppEntry == NULL) {
|
||||
return -1;
|
||||
}
|
||||
*ppEntry = NULL;
|
||||
|
||||
taosThreadMutexLock(&(pCache->mutex));
|
||||
void* pTmp = taosHashGet(pCache->pEntryHash, &index, sizeof(index));
|
||||
if (pTmp != NULL) {
|
||||
SSyncRaftEntry* pEntry = pTmp;
|
||||
*ppEntry = taosMemoryMalloc(pEntry->bytes);
|
||||
memcpy(*ppEntry, pTmp, pEntry->bytes);
|
||||
|
||||
do {
|
||||
char eventLog[128];
|
||||
snprintf(eventLog, sizeof(eventLog), "raft cache get, type:%s,%d, type2:%s,%d, index:%ld",
|
||||
TMSG_INFO((*ppEntry)->msgType), (*ppEntry)->msgType, TMSG_INFO((*ppEntry)->originalRpcType),
|
||||
(*ppEntry)->originalRpcType, (*ppEntry)->index);
|
||||
syncNodeEventLog(pCache->pSyncNode, eventLog);
|
||||
} while (0);
|
||||
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
return 0;
|
||||
}
|
||||
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
terrno = TSDB_CODE_WAL_LOG_NOT_EXIST;
|
||||
return -1;
|
||||
}
|
||||
|
||||
// success, return 0
|
||||
// error, return -1
|
||||
// not exist, return -1, terrno = TSDB_CODE_WAL_LOG_NOT_EXIST
|
||||
int32_t raftCacheGetEntryP(struct SRaftEntryCache* pCache, SyncIndex index, SSyncRaftEntry** ppEntry) {
|
||||
if (ppEntry == NULL) {
|
||||
return -1;
|
||||
}
|
||||
*ppEntry = NULL;
|
||||
|
||||
taosThreadMutexLock(&(pCache->mutex));
|
||||
void* pTmp = taosHashGet(pCache->pEntryHash, &index, sizeof(index));
|
||||
if (pTmp != NULL) {
|
||||
SSyncRaftEntry* pEntry = pTmp;
|
||||
*ppEntry = pEntry;
|
||||
|
||||
do {
|
||||
char eventLog[128];
|
||||
snprintf(eventLog, sizeof(eventLog), "raft cache get, type:%s,%d, type2:%s,%d, index:%ld",
|
||||
TMSG_INFO((*ppEntry)->msgType), (*ppEntry)->msgType, TMSG_INFO((*ppEntry)->originalRpcType),
|
||||
(*ppEntry)->originalRpcType, (*ppEntry)->index);
|
||||
syncNodeEventLog(pCache->pSyncNode, eventLog);
|
||||
} while (0);
|
||||
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
return 0;
|
||||
}
|
||||
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
terrno = TSDB_CODE_WAL_LOG_NOT_EXIST;
|
||||
return -1;
|
||||
}
|
||||
|
||||
int32_t raftCacheDelEntry(struct SRaftEntryCache* pCache, SyncIndex index) {
|
||||
taosThreadMutexLock(&(pCache->mutex));
|
||||
taosHashRemove(pCache->pEntryHash, &index, sizeof(index));
|
||||
--(pCache->currentCount);
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
return 0;
|
||||
}
|
||||
|
||||
int32_t raftCacheGetAndDel(struct SRaftEntryCache* pCache, SyncIndex index, SSyncRaftEntry** ppEntry) {
|
||||
if (ppEntry == NULL) {
|
||||
return -1;
|
||||
}
|
||||
*ppEntry = NULL;
|
||||
|
||||
taosThreadMutexLock(&(pCache->mutex));
|
||||
void* pTmp = taosHashGet(pCache->pEntryHash, &index, sizeof(index));
|
||||
if (pTmp != NULL) {
|
||||
SSyncRaftEntry* pEntry = pTmp;
|
||||
*ppEntry = taosMemoryMalloc(pEntry->bytes);
|
||||
memcpy(*ppEntry, pTmp, pEntry->bytes);
|
||||
|
||||
do {
|
||||
char eventLog[128];
|
||||
snprintf(eventLog, sizeof(eventLog), "raft cache get-and-del, type:%s,%d, type2:%s,%d, index:%ld",
|
||||
TMSG_INFO((*ppEntry)->msgType), (*ppEntry)->msgType, TMSG_INFO((*ppEntry)->originalRpcType),
|
||||
(*ppEntry)->originalRpcType, (*ppEntry)->index);
|
||||
syncNodeEventLog(pCache->pSyncNode, eventLog);
|
||||
} while (0);
|
||||
|
||||
taosHashRemove(pCache->pEntryHash, &index, sizeof(index));
|
||||
--(pCache->currentCount);
|
||||
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
return 0;
|
||||
}
|
||||
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
terrno = TSDB_CODE_WAL_LOG_NOT_EXIST;
|
||||
return -1;
|
||||
}
|
||||
|
||||
int32_t raftCacheClear(struct SRaftEntryCache* pCache) {
|
||||
taosThreadMutexLock(&(pCache->mutex));
|
||||
taosHashClear(pCache->pEntryHash);
|
||||
pCache->currentCount = 0;
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
return 0;
|
||||
}
|
||||
|
||||
//-----------------------------------
|
||||
cJSON* raftCache2Json(SRaftEntryCache* pCache) {
|
||||
char u64buf[128] = {0};
|
||||
cJSON* pRoot = cJSON_CreateObject();
|
||||
|
||||
if (pCache != NULL) {
|
||||
taosThreadMutexLock(&(pCache->mutex));
|
||||
|
||||
snprintf(u64buf, sizeof(u64buf), "%p", pCache->pSyncNode);
|
||||
cJSON_AddStringToObject(pRoot, "pSyncNode", u64buf);
|
||||
cJSON_AddNumberToObject(pRoot, "currentCount", pCache->currentCount);
|
||||
cJSON_AddNumberToObject(pRoot, "maxCount", pCache->maxCount);
|
||||
cJSON* pEntries = cJSON_CreateArray();
|
||||
cJSON_AddItemToObject(pRoot, "entries", pEntries);
|
||||
|
||||
SSyncRaftEntry* pIter = (SSyncRaftEntry*)taosHashIterate(pCache->pEntryHash, NULL);
|
||||
if (pIter != NULL) {
|
||||
SSyncRaftEntry* pEntry = (SSyncRaftEntry*)pIter;
|
||||
cJSON_AddItemToArray(pEntries, syncEntry2Json(pEntry));
|
||||
}
|
||||
while (pIter) {
|
||||
pIter = taosHashIterate(pCache->pEntryHash, pIter);
|
||||
if (pIter != NULL) {
|
||||
SSyncRaftEntry* pEntry = (SSyncRaftEntry*)pIter;
|
||||
cJSON_AddItemToArray(pEntries, syncEntry2Json(pEntry));
|
||||
}
|
||||
}
|
||||
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
}
|
||||
|
||||
cJSON* pJson = cJSON_CreateObject();
|
||||
cJSON_AddItemToObject(pJson, "SRaftEntryCache", pRoot);
|
||||
return pJson;
|
||||
}
|
||||
|
||||
char* raftCache2Str(SRaftEntryCache* pCache) {
|
||||
cJSON* pJson = raftCache2Json(pCache);
|
||||
char* serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
||||
void raftCachePrint(SRaftEntryCache* pCache) {
|
||||
char* serialized = raftCache2Str(pCache);
|
||||
printf("raftCachePrint | len:%lu | %s \n", strlen(serialized), serialized);
|
||||
fflush(NULL);
|
||||
taosMemoryFree(serialized);
|
||||
}
|
||||
|
||||
void raftCachePrint2(char* s, SRaftEntryCache* pCache) {
|
||||
char* serialized = raftCache2Str(pCache);
|
||||
printf("raftCachePrint2 | len:%lu | %s | %s \n", strlen(serialized), s, serialized);
|
||||
fflush(NULL);
|
||||
taosMemoryFree(serialized);
|
||||
}
|
||||
|
||||
void raftCacheLog(SRaftEntryCache* pCache) {
|
||||
char* serialized = raftCache2Str(pCache);
|
||||
sTrace("raftCacheLog | len:%lu | %s", strlen(serialized), serialized);
|
||||
taosMemoryFree(serialized);
|
||||
}
|
||||
|
||||
void raftCacheLog2(char* s, SRaftEntryCache* pCache) {
|
||||
if (gRaftDetailLog) {
|
||||
char* serialized = raftCache2Str(pCache);
|
||||
sTraceLong("raftCacheLog2 | len:%lu | %s | %s", strlen(serialized), s, serialized);
|
||||
taosMemoryFree(serialized);
|
||||
}
|
||||
}
|
|
@ -257,6 +257,8 @@ static int32_t raftLogGetEntry(struct SSyncLogStore* pLogStore, SyncIndex index,
|
|||
return -1;
|
||||
}
|
||||
|
||||
taosThreadMutexLock(&(pData->mutex));
|
||||
|
||||
code = walReadWithHandle(pWalHandle, index);
|
||||
if (code != 0) {
|
||||
int32_t err = terrno;
|
||||
|
@ -281,6 +283,7 @@ static int32_t raftLogGetEntry(struct SSyncLogStore* pLogStore, SyncIndex index,
|
|||
terrno = saveErr;
|
||||
*/
|
||||
|
||||
taosThreadMutexUnlock(&(pData->mutex));
|
||||
return code;
|
||||
}
|
||||
|
||||
|
@ -301,6 +304,7 @@ static int32_t raftLogGetEntry(struct SSyncLogStore* pLogStore, SyncIndex index,
|
|||
terrno = saveErr;
|
||||
*/
|
||||
|
||||
taosThreadMutexUnlock(&(pData->mutex));
|
||||
return code;
|
||||
}
|
||||
|
||||
|
@ -364,6 +368,7 @@ SSyncLogStore* logStoreCreate(SSyncNode* pSyncNode) {
|
|||
pData->pWal = pSyncNode->pWal;
|
||||
ASSERT(pData->pWal != NULL);
|
||||
|
||||
taosThreadMutexInit(&(pData->mutex), NULL);
|
||||
pData->pWalHandle = walOpenReadHandle(pData->pWal);
|
||||
ASSERT(pData->pWalHandle != NULL);
|
||||
|
||||
|
@ -408,9 +413,14 @@ SSyncLogStore* logStoreCreate(SSyncNode* pSyncNode) {
|
|||
void logStoreDestory(SSyncLogStore* pLogStore) {
|
||||
if (pLogStore != NULL) {
|
||||
SSyncLogStoreData* pData = pLogStore->data;
|
||||
|
||||
taosThreadMutexLock(&(pData->mutex));
|
||||
if (pData->pWalHandle != NULL) {
|
||||
walCloseReadHandle(pData->pWalHandle);
|
||||
pData->pWalHandle = NULL;
|
||||
}
|
||||
taosThreadMutexUnlock(&(pData->mutex));
|
||||
taosThreadMutexDestroy(&(pData->mutex));
|
||||
|
||||
taosMemoryFree(pLogStore->data);
|
||||
taosMemoryFree(pLogStore);
|
||||
|
@ -460,6 +470,8 @@ SSyncRaftEntry* logStoreGetEntry(SSyncLogStore* pLogStore, SyncIndex index) {
|
|||
SWal* pWal = pData->pWal;
|
||||
|
||||
if (index >= SYNC_INDEX_BEGIN && index <= logStoreLastIndex(pLogStore)) {
|
||||
taosThreadMutexLock(&(pData->mutex));
|
||||
|
||||
// SWalReadHandle* pWalHandle = walOpenReadHandle(pWal);
|
||||
SWalReadHandle* pWalHandle = pData->pWalHandle;
|
||||
ASSERT(pWalHandle != NULL);
|
||||
|
@ -503,6 +515,7 @@ SSyncRaftEntry* logStoreGetEntry(SSyncLogStore* pLogStore, SyncIndex index) {
|
|||
terrno = saveErr;
|
||||
*/
|
||||
|
||||
taosThreadMutexUnlock(&(pData->mutex));
|
||||
return pEntry;
|
||||
|
||||
} else {
|
||||
|
|
|
@ -216,7 +216,7 @@ cJSON *raftStore2Json(SRaftStore *pRaftStore) {
|
|||
|
||||
char *raftStore2Str(SRaftStore *pRaftStore) {
|
||||
cJSON *pJson = raftStore2Json(pRaftStore);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
|
|
@ -32,11 +32,13 @@ SSyncRespMgr *syncRespMgrCreate(void *data, int64_t ttl) {
|
|||
}
|
||||
|
||||
void syncRespMgrDestroy(SSyncRespMgr *pObj) {
|
||||
if (pObj != NULL) {
|
||||
taosThreadMutexLock(&(pObj->mutex));
|
||||
taosHashCleanup(pObj->pRespHash);
|
||||
taosThreadMutexUnlock(&(pObj->mutex));
|
||||
taosThreadMutexDestroy(&(pObj->mutex));
|
||||
taosMemoryFree(pObj);
|
||||
}
|
||||
}
|
||||
|
||||
int64_t syncRespMgrAdd(SSyncRespMgr *pObj, SRespStub *pStub) {
|
||||
|
|
|
@ -21,8 +21,10 @@
|
|||
#include "syncUtil.h"
|
||||
#include "wal.h"
|
||||
|
||||
//----------------------------------
|
||||
static void snapshotReceiverDoStart(SSyncSnapshotReceiver *pReceiver, SyncTerm privateTerm,
|
||||
SyncSnapshotSend *pBeginMsg);
|
||||
static void snapshotReceiverGotData(SSyncSnapshotReceiver *pReceiver, SyncSnapshotSend *pMsg);
|
||||
|
||||
//----------------------------------
|
||||
SSyncSnapshotSender *snapshotSenderCreate(SSyncNode *pSyncNode, int32_t replicaIndex) {
|
||||
|
@ -49,7 +51,7 @@ SSyncSnapshotSender *snapshotSenderCreate(SSyncNode *pSyncNode, int32_t replicaI
|
|||
pSender->pSyncNode->pFsm->FpGetSnapshotInfo(pSender->pSyncNode->pFsm, &(pSender->snapshot));
|
||||
pSender->finish = false;
|
||||
} else {
|
||||
sError("snapshotSenderCreate cannot create sender");
|
||||
sError("vgId:%d, cannot create snapshot sender", pSyncNode->vgId);
|
||||
}
|
||||
|
||||
return pSender;
|
||||
|
@ -57,39 +59,59 @@ SSyncSnapshotSender *snapshotSenderCreate(SSyncNode *pSyncNode, int32_t replicaI
|
|||
|
||||
void snapshotSenderDestroy(SSyncSnapshotSender *pSender) {
|
||||
if (pSender != NULL) {
|
||||
// free current block
|
||||
if (pSender->pCurrentBlock != NULL) {
|
||||
taosMemoryFree(pSender->pCurrentBlock);
|
||||
pSender->pCurrentBlock = NULL;
|
||||
}
|
||||
|
||||
// close reader
|
||||
if (pSender->pReader != NULL) {
|
||||
int32_t ret = pSender->pSyncNode->pFsm->FpSnapshotStopRead(pSender->pSyncNode->pFsm, pSender->pReader);
|
||||
ASSERT(ret == 0);
|
||||
pSender->pReader = NULL;
|
||||
}
|
||||
|
||||
// free sender
|
||||
taosMemoryFree(pSender);
|
||||
}
|
||||
}
|
||||
|
||||
bool snapshotSenderIsStart(SSyncSnapshotSender *pSender) { return pSender->start; }
|
||||
|
||||
// begin send snapshot (current term, seq begin)
|
||||
// begin send snapshot by snapshot, pReader
|
||||
void snapshotSenderStart(SSyncSnapshotSender *pSender, SSnapshot snapshot, void *pReader) {
|
||||
ASSERT(!snapshotSenderIsStart(pSender));
|
||||
|
||||
pSender->seq = SYNC_SNAPSHOT_SEQ_BEGIN;
|
||||
pSender->ack = SYNC_SNAPSHOT_SEQ_INVALID;
|
||||
|
||||
// init snapshot and reader
|
||||
ASSERT(pSender->pReader == NULL);
|
||||
pSender->pReader = pReader;
|
||||
pSender->snapshot = snapshot;
|
||||
|
||||
// init current block
|
||||
if (pSender->pCurrentBlock != NULL) {
|
||||
taosMemoryFree(pSender->pCurrentBlock);
|
||||
}
|
||||
pSender->blockLen = 0;
|
||||
|
||||
// update term
|
||||
pSender->term = pSender->pSyncNode->pRaftStore->currentTerm;
|
||||
++(pSender->privateTerm);
|
||||
|
||||
// update state
|
||||
pSender->finish = false;
|
||||
pSender->start = true;
|
||||
pSender->seq = SYNC_SNAPSHOT_SEQ_BEGIN;
|
||||
pSender->ack = SYNC_SNAPSHOT_SEQ_INVALID;
|
||||
|
||||
// init last config
|
||||
if (pSender->snapshot.lastConfigIndex != SYNC_INDEX_INVALID) {
|
||||
int32_t code = 0;
|
||||
SSyncRaftEntry *pEntry = NULL;
|
||||
bool getLastConfig = false;
|
||||
|
||||
code = pSender->pSyncNode->pLogStore->syncLogGetEntry(pSender->pSyncNode->pLogStore,
|
||||
pSender->snapshot.lastConfigIndex, &pEntry);
|
||||
|
||||
bool getLastConfig = false;
|
||||
if (code == 0) {
|
||||
ASSERT(pEntry != NULL);
|
||||
|
||||
|
@ -106,35 +128,35 @@ void snapshotSenderStart(SSyncSnapshotSender *pSender, SSnapshot snapshot, void
|
|||
syncEntryDestory(pEntry);
|
||||
} else {
|
||||
if (pSender->snapshot.lastConfigIndex == pSender->pSyncNode->pRaftCfg->lastConfigIndex) {
|
||||
sTrace("vgId:%d sync sender get cfg from local", pSender->pSyncNode->vgId);
|
||||
sTrace("vgId:%d, sync sender get cfg from local", pSender->pSyncNode->vgId);
|
||||
pSender->lastConfig = pSender->pSyncNode->pRaftCfg->cfg;
|
||||
getLastConfig = true;
|
||||
}
|
||||
}
|
||||
|
||||
// last config not found in wal, update to -1
|
||||
if (!getLastConfig) {
|
||||
char logBuf[128];
|
||||
snprintf(logBuf, sizeof(logBuf), "snapshot sender update lcindex from %ld to -1",
|
||||
pSender->snapshot.lastConfigIndex);
|
||||
pSender->snapshot.lastConfigIndex = -1;
|
||||
SyncIndex oldLastConfigIndex = pSender->snapshot.lastConfigIndex;
|
||||
SyncIndex newLastConfigIndex = SYNC_INDEX_INVALID;
|
||||
pSender->snapshot.lastConfigIndex = SYNC_INDEX_INVALID;
|
||||
memset(&(pSender->lastConfig), 0, sizeof(SSyncCfg));
|
||||
|
||||
// event log
|
||||
do {
|
||||
char logBuf[128];
|
||||
snprintf(logBuf, sizeof(logBuf), "snapshot sender update lcindex from %ld to %ld", oldLastConfigIndex,
|
||||
newLastConfigIndex);
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, logBuf);
|
||||
syncNodeEventLog(pSender->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
|
||||
memset(&(pSender->lastConfig), 0, sizeof(SSyncCfg));
|
||||
} while (0);
|
||||
}
|
||||
|
||||
} else {
|
||||
// no last config
|
||||
memset(&(pSender->lastConfig), 0, sizeof(SSyncCfg));
|
||||
}
|
||||
|
||||
pSender->sendingMS = SYNC_SNAPSHOT_RETRY_MS;
|
||||
pSender->term = pSender->pSyncNode->pRaftStore->currentTerm;
|
||||
++(pSender->privateTerm);
|
||||
pSender->finish = false;
|
||||
pSender->start = true;
|
||||
|
||||
// build begin msg
|
||||
SyncSnapshotSend *pMsg = syncSnapshotSendBuild(0, pSender->pSyncNode->vgId);
|
||||
pMsg->srcId = pSender->pSyncNode->myRaftId;
|
||||
|
@ -151,40 +173,47 @@ void snapshotSenderStart(SSyncSnapshotSender *pSender, SSnapshot snapshot, void
|
|||
SRpcMsg rpcMsg;
|
||||
syncSnapshotSend2RpcMsg(pMsg, &rpcMsg);
|
||||
syncNodeSendMsgById(&(pMsg->destId), pSender->pSyncNode, &rpcMsg);
|
||||
syncSnapshotSendDestroy(pMsg);
|
||||
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender send");
|
||||
// event log
|
||||
do {
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender start");
|
||||
syncNodeEventLog(pSender->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
|
||||
syncSnapshotSendDestroy(pMsg);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
void snapshotSenderStop(SSyncSnapshotSender *pSender) {
|
||||
void snapshotSenderStop(SSyncSnapshotSender *pSender, bool finish) {
|
||||
// close reader
|
||||
if (pSender->pReader != NULL) {
|
||||
int32_t ret = pSender->pSyncNode->pFsm->FpSnapshotStopRead(pSender->pSyncNode->pFsm, pSender->pReader);
|
||||
ASSERT(ret == 0);
|
||||
pSender->pReader = NULL;
|
||||
}
|
||||
|
||||
// free current block
|
||||
if (pSender->pCurrentBlock != NULL) {
|
||||
taosMemoryFree(pSender->pCurrentBlock);
|
||||
pSender->pCurrentBlock = NULL;
|
||||
pSender->blockLen = 0;
|
||||
}
|
||||
|
||||
// update flag
|
||||
pSender->start = false;
|
||||
pSender->finish = finish;
|
||||
|
||||
if (gRaftDetailLog) {
|
||||
char *s = snapshotSender2Str(pSender);
|
||||
sInfo("snapshotSenderStop %s", s);
|
||||
taosMemoryFree(s);
|
||||
}
|
||||
// event log
|
||||
do {
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender stop");
|
||||
syncNodeEventLog(pSender->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
// when sender receiver ack, call this function to send msg from seq
|
||||
// when sender receive ack, call this function to send msg from seq
|
||||
// seq = ack + 1, already updated
|
||||
int32_t snapshotSend(SSyncSnapshotSender *pSender) {
|
||||
// free memory last time (seq - 1)
|
||||
// free memory last time (current seq - 1)
|
||||
if (pSender->pCurrentBlock != NULL) {
|
||||
taosMemoryFree(pSender->pCurrentBlock);
|
||||
pSender->pCurrentBlock = NULL;
|
||||
|
@ -198,7 +227,7 @@ int32_t snapshotSend(SSyncSnapshotSender *pSender) {
|
|||
if (pSender->blockLen > 0) {
|
||||
// has read data
|
||||
} else {
|
||||
// read finish
|
||||
// read finish, update seq to end
|
||||
pSender->seq = SYNC_SNAPSHOT_SEQ_END;
|
||||
}
|
||||
|
||||
|
@ -219,25 +248,28 @@ int32_t snapshotSend(SSyncSnapshotSender *pSender) {
|
|||
SRpcMsg rpcMsg;
|
||||
syncSnapshotSend2RpcMsg(pMsg, &rpcMsg);
|
||||
syncNodeSendMsgById(&(pMsg->destId), pSender->pSyncNode, &rpcMsg);
|
||||
|
||||
if (pSender->seq == SYNC_SNAPSHOT_SEQ_END) {
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender finish");
|
||||
syncNodeEventLog(pSender->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
|
||||
} else {
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender sending");
|
||||
syncNodeEventLog(pSender->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
}
|
||||
|
||||
syncSnapshotSendDestroy(pMsg);
|
||||
|
||||
// event log
|
||||
do {
|
||||
char *eventLog = NULL;
|
||||
if (pSender->seq == SYNC_SNAPSHOT_SEQ_END) {
|
||||
eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender finish");
|
||||
} else {
|
||||
eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender sending");
|
||||
}
|
||||
syncNodeEventLog(pSender->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
// send snapshot data from cache
|
||||
int32_t snapshotReSend(SSyncSnapshotSender *pSender) {
|
||||
if (pSender->pCurrentBlock != NULL) {
|
||||
// send current block data
|
||||
if (pSender->pCurrentBlock != NULL && pSender->blockLen > 0) {
|
||||
// build msg
|
||||
SyncSnapshotSend *pMsg = syncSnapshotSendBuild(pSender->blockLen, pSender->pSyncNode->vgId);
|
||||
pMsg->srcId = pSender->pSyncNode->myRaftId;
|
||||
pMsg->destId = (pSender->pSyncNode->replicasId)[pSender->replicaIndex];
|
||||
|
@ -249,16 +281,20 @@ int32_t snapshotReSend(SSyncSnapshotSender *pSender) {
|
|||
pMsg->seq = pSender->seq;
|
||||
memcpy(pMsg->data, pSender->pCurrentBlock, pSender->blockLen);
|
||||
|
||||
// send msg
|
||||
SRpcMsg rpcMsg;
|
||||
syncSnapshotSend2RpcMsg(pMsg, &rpcMsg);
|
||||
syncNodeSendMsgById(&(pMsg->destId), pSender->pSyncNode, &rpcMsg);
|
||||
syncSnapshotSendDestroy(pMsg);
|
||||
|
||||
// event log
|
||||
do {
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender resend");
|
||||
syncNodeEventLog(pSender->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
|
||||
syncSnapshotSendDestroy(pMsg);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -294,7 +330,6 @@ cJSON *snapshotSender2Json(SSyncSnapshotSender *pSender) {
|
|||
snprintf(u64buf, sizeof(u64buf), "%lu", pSender->snapshot.lastApplyTerm);
|
||||
cJSON_AddStringToObject(pSnapshot, "lastApplyTerm", u64buf);
|
||||
cJSON_AddItemToObject(pRoot, "snapshot", pSnapshot);
|
||||
|
||||
snprintf(u64buf, sizeof(u64buf), "%lu", pSender->sendingMS);
|
||||
cJSON_AddStringToObject(pRoot, "sendingMS", u64buf);
|
||||
snprintf(u64buf, sizeof(u64buf), "%p", pSender->pSyncNode);
|
||||
|
@ -324,7 +359,7 @@ char *snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event) {
|
|||
char *s = taosMemoryMalloc(len);
|
||||
|
||||
SRaftId destId = pSender->pSyncNode->replicasId[pSender->replicaIndex];
|
||||
char host[128];
|
||||
char host[64];
|
||||
uint16_t port;
|
||||
syncUtilU642Addr(destId.addr, host, sizeof(host), &port);
|
||||
|
||||
|
@ -355,12 +390,12 @@ SSyncSnapshotReceiver *snapshotReceiverCreate(SSyncNode *pSyncNode, SRaftId from
|
|||
pReceiver->term = pSyncNode->pRaftStore->currentTerm;
|
||||
pReceiver->privateTerm = 0;
|
||||
pReceiver->snapshot.data = NULL;
|
||||
pReceiver->snapshot.lastApplyIndex = -1;
|
||||
pReceiver->snapshot.lastApplyIndex = SYNC_INDEX_INVALID;
|
||||
pReceiver->snapshot.lastApplyTerm = 0;
|
||||
pReceiver->snapshot.lastConfigIndex = -1;
|
||||
pReceiver->snapshot.lastConfigIndex = SYNC_INDEX_INVALID;
|
||||
|
||||
} else {
|
||||
sInfo("snapshotReceiverCreate cannot create receiver");
|
||||
sError("vgId:%d, cannot create snapshot receiver", pSyncNode->vgId);
|
||||
}
|
||||
|
||||
return pReceiver;
|
||||
|
@ -368,60 +403,54 @@ SSyncSnapshotReceiver *snapshotReceiverCreate(SSyncNode *pSyncNode, SRaftId from
|
|||
|
||||
void snapshotReceiverDestroy(SSyncSnapshotReceiver *pReceiver) {
|
||||
if (pReceiver != NULL) {
|
||||
// close writer
|
||||
if (pReceiver->pWriter != NULL) {
|
||||
int32_t ret =
|
||||
pReceiver->pSyncNode->pFsm->FpSnapshotStopWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter, false);
|
||||
ASSERT(ret == 0);
|
||||
pReceiver->pWriter = NULL;
|
||||
}
|
||||
|
||||
// free receiver
|
||||
taosMemoryFree(pReceiver);
|
||||
}
|
||||
}
|
||||
|
||||
bool snapshotReceiverIsStart(SSyncSnapshotReceiver *pReceiver) { return pReceiver->start; }
|
||||
|
||||
// begin receive snapshot msg (current term, seq begin)
|
||||
// static do start by privateTerm, pBeginMsg
|
||||
// receive first snapshot data
|
||||
// write first block data
|
||||
static void snapshotReceiverDoStart(SSyncSnapshotReceiver *pReceiver, SyncTerm privateTerm,
|
||||
SyncSnapshotSend *pBeginMsg) {
|
||||
// update state
|
||||
pReceiver->term = pReceiver->pSyncNode->pRaftStore->currentTerm;
|
||||
pReceiver->privateTerm = privateTerm;
|
||||
pReceiver->ack = SYNC_SNAPSHOT_SEQ_BEGIN;
|
||||
pReceiver->fromId = pBeginMsg->srcId;
|
||||
pReceiver->start = true;
|
||||
|
||||
// update snapshot
|
||||
pReceiver->snapshot.lastApplyIndex = pBeginMsg->lastIndex;
|
||||
pReceiver->snapshot.lastApplyTerm = pBeginMsg->lastTerm;
|
||||
pReceiver->snapshot.lastConfigIndex = pBeginMsg->lastConfigIndex;
|
||||
|
||||
// write data
|
||||
ASSERT(pReceiver->pWriter == NULL);
|
||||
int32_t ret = pReceiver->pSyncNode->pFsm->FpSnapshotStartWrite(pReceiver->pSyncNode->pFsm, &(pReceiver->pWriter));
|
||||
ASSERT(ret == 0);
|
||||
|
||||
// event log
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver start");
|
||||
syncNodeEventLog(pReceiver->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
// if receiver receive msg from seq = SYNC_SNAPSHOT_SEQ_BEGIN, start receiver
|
||||
// if already start, force close, start again
|
||||
void snapshotReceiverStart(SSyncSnapshotReceiver *pReceiver, SyncTerm privateTerm, SyncSnapshotSend *pBeginMsg) {
|
||||
if (!snapshotReceiverIsStart(pReceiver)) {
|
||||
// start
|
||||
snapshotReceiverDoStart(pReceiver, privateTerm, pBeginMsg);
|
||||
pReceiver->start = true;
|
||||
|
||||
} else {
|
||||
// already start
|
||||
sInfo("snapshot recv, receiver already start");
|
||||
|
||||
// force stop
|
||||
static void snapshotReceiverForceStop(SSyncSnapshotReceiver *pReceiver) {
|
||||
// force close, abandon incomplete data
|
||||
int32_t ret =
|
||||
pReceiver->pSyncNode->pFsm->FpSnapshotStopWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter, false);
|
||||
ASSERT(ret == 0);
|
||||
pReceiver->pWriter = NULL;
|
||||
|
||||
// start again
|
||||
snapshotReceiverDoStart(pReceiver, privateTerm, pBeginMsg);
|
||||
pReceiver->start = true;
|
||||
}
|
||||
|
||||
if (gRaftDetailLog) {
|
||||
char *s = snapshotReceiver2Str(pReceiver);
|
||||
sInfo("snapshotReceiverStart %s", s);
|
||||
taosMemoryFree(s);
|
||||
}
|
||||
}
|
||||
|
||||
void snapshotReceiverStop(SSyncSnapshotReceiver *pReceiver, bool apply) {
|
||||
if (pReceiver->pWriter != NULL) {
|
||||
int32_t ret =
|
||||
pReceiver->pSyncNode->pFsm->FpSnapshotStopWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter, false);
|
||||
|
@ -431,14 +460,106 @@ void snapshotReceiverStop(SSyncSnapshotReceiver *pReceiver, bool apply) {
|
|||
|
||||
pReceiver->start = false;
|
||||
|
||||
if (apply) {
|
||||
// ++(pReceiver->privateTerm);
|
||||
// event log
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver force stop");
|
||||
syncNodeEventLog(pReceiver->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
// if receiver receive msg from seq = SYNC_SNAPSHOT_SEQ_BEGIN, start receiver
|
||||
// if already start, force close, start again
|
||||
void snapshotReceiverStart(SSyncSnapshotReceiver *pReceiver, SyncTerm privateTerm, SyncSnapshotSend *pBeginMsg) {
|
||||
if (!snapshotReceiverIsStart(pReceiver)) {
|
||||
// first start
|
||||
snapshotReceiverDoStart(pReceiver, privateTerm, pBeginMsg);
|
||||
|
||||
} else {
|
||||
// already start
|
||||
sInfo("vgId:%d, snapshot recv, receiver already start", pReceiver->pSyncNode->vgId);
|
||||
|
||||
// force close, abandon incomplete data
|
||||
snapshotReceiverForceStop(pReceiver);
|
||||
|
||||
// start again
|
||||
snapshotReceiverDoStart(pReceiver, privateTerm, pBeginMsg);
|
||||
}
|
||||
}
|
||||
|
||||
void snapshotReceiverStop(SSyncSnapshotReceiver *pReceiver) {
|
||||
if (pReceiver->pWriter != NULL) {
|
||||
int32_t ret =
|
||||
pReceiver->pSyncNode->pFsm->FpSnapshotStopWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter, false);
|
||||
ASSERT(ret == 0);
|
||||
pReceiver->pWriter = NULL;
|
||||
}
|
||||
|
||||
if (gRaftDetailLog) {
|
||||
char *s = snapshotReceiver2Str(pReceiver);
|
||||
sInfo("snapshotReceiverStop %s", s);
|
||||
taosMemoryFree(s);
|
||||
pReceiver->start = false;
|
||||
|
||||
// event log
|
||||
do {
|
||||
SSnapshot snapshot;
|
||||
pReceiver->pSyncNode->pFsm->FpGetSnapshotInfo(pReceiver->pSyncNode->pFsm, &snapshot);
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver stop");
|
||||
syncNodeEventLog(pReceiver->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
static void snapshotReceiverFinish(SSyncSnapshotReceiver *pReceiver, SyncSnapshotSend *pMsg) {
|
||||
ASSERT(pMsg->seq == SYNC_SNAPSHOT_SEQ_END);
|
||||
|
||||
if (pReceiver->pWriter != NULL) {
|
||||
int32_t code = 0;
|
||||
if (pMsg->dataLen > 0) {
|
||||
code = pReceiver->pSyncNode->pFsm->FpSnapshotDoWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter, pMsg->data,
|
||||
pMsg->dataLen);
|
||||
ASSERT(code == 0);
|
||||
}
|
||||
|
||||
code = pReceiver->pSyncNode->pFsm->FpSnapshotStopWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter, true);
|
||||
ASSERT(code == 0);
|
||||
pReceiver->pWriter = NULL;
|
||||
}
|
||||
|
||||
pReceiver->ack = SYNC_SNAPSHOT_SEQ_END;
|
||||
|
||||
// update commit index
|
||||
if (pReceiver->snapshot.lastApplyIndex > pReceiver->pSyncNode->commitIndex) {
|
||||
pReceiver->pSyncNode->commitIndex = pReceiver->snapshot.lastApplyIndex;
|
||||
}
|
||||
|
||||
// reset wal
|
||||
pReceiver->pSyncNode->pLogStore->syncLogRestoreFromSnapshot(pReceiver->pSyncNode->pLogStore, pMsg->lastIndex);
|
||||
|
||||
// event log
|
||||
do {
|
||||
SSnapshot snapshot;
|
||||
pReceiver->pSyncNode->pFsm->FpGetSnapshotInfo(pReceiver->pSyncNode->pFsm, &snapshot);
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver got last data, finish, apply snapshot");
|
||||
syncNodeEventLog(pReceiver->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
static void snapshotReceiverGotData(SSyncSnapshotReceiver *pReceiver, SyncSnapshotSend *pMsg) {
|
||||
ASSERT(pMsg->seq == pReceiver->ack + 1);
|
||||
|
||||
if (pReceiver->pWriter != NULL) {
|
||||
if (pMsg->dataLen > 0) {
|
||||
int32_t code = pReceiver->pSyncNode->pFsm->FpSnapshotDoWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter,
|
||||
pMsg->data, pMsg->dataLen);
|
||||
ASSERT(code == 0);
|
||||
}
|
||||
pReceiver->ack = pMsg->seq;
|
||||
|
||||
// event log
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver receiving");
|
||||
syncNodeEventLog(pReceiver->pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
}
|
||||
}
|
||||
|
||||
|
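The follower-side handling in syncNodeOnSnapshotSendCb() below reduces to a dispatch on pMsg->seq. A simplified restatement (term/state checks and the ack reply are omitted; the static helpers are the ones introduced above):

// Simplified restatement of the follower-side dispatch; not a drop-in replacement.
static void receiverDispatch(SSyncSnapshotReceiver *pReceiver, SyncSnapshotSend *pMsg) {
  if (pMsg->seq == SYNC_SNAPSHOT_SEQ_BEGIN) {
    snapshotReceiverStart(pReceiver, pMsg->privateTerm, pMsg);  // (re)opens the FSM writer
  } else if (pMsg->seq == SYNC_SNAPSHOT_SEQ_END) {
    snapshotReceiverFinish(pReceiver, pMsg);   // write last block, apply, reset the WAL
    snapshotReceiverStop(pReceiver);
  } else if (pMsg->seq == SYNC_SNAPSHOT_SEQ_FORCE_CLOSE) {
    snapshotReceiverForceStop(pReceiver);      // abandon incomplete data, no ack is sent
  } else if (pMsg->seq == pReceiver->ack + 1) {
    snapshotReceiverGotData(pReceiver, pMsg);  // middle block: write and advance ack
  }
}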
@ -520,33 +641,20 @@ int32_t syncNodeOnSnapshotSendCb(SSyncNode *pSyncNode, SyncSnapshotSend *pMsg) {
|
|||
// get receiver
|
||||
SSyncSnapshotReceiver *pReceiver = pSyncNode->pNewNodeReceiver;
|
||||
bool needRsp = false;
|
||||
int32_t writeCode = 0;
|
||||
|
||||
// state, term, seq/ack
|
||||
if (pSyncNode->state == TAOS_SYNC_STATE_FOLLOWER) {
|
||||
if (pMsg->term == pSyncNode->pRaftStore->currentTerm) {
|
||||
if (pMsg->seq == SYNC_SNAPSHOT_SEQ_BEGIN) {
|
||||
// begin
|
||||
// begin, no data
|
||||
snapshotReceiverStart(pReceiver, pMsg->privateTerm, pMsg);
|
||||
pReceiver->ack = pMsg->seq;
|
||||
needRsp = true;
|
||||
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver begin");
|
||||
syncNodeEventLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
|
||||
} else if (pMsg->seq == SYNC_SNAPSHOT_SEQ_END) {
|
||||
// end, finish FSM
|
||||
writeCode = pSyncNode->pFsm->FpSnapshotDoWrite(pSyncNode->pFsm, pReceiver->pWriter, pMsg->data, pMsg->dataLen);
|
||||
ASSERT(writeCode == 0);
|
||||
|
||||
pSyncNode->pFsm->FpSnapshotStopWrite(pSyncNode->pFsm, pReceiver->pWriter, true);
|
||||
if (pReceiver->snapshot.lastApplyIndex > pReceiver->pSyncNode->commitIndex) {
|
||||
pReceiver->pSyncNode->commitIndex = pReceiver->snapshot.lastApplyIndex;
|
||||
}
|
||||
|
||||
// pSyncNode->pLogStore->syncLogSetBeginIndex(pSyncNode->pLogStore, pMsg->lastIndex + 1);
|
||||
pSyncNode->pLogStore->syncLogRestoreFromSnapshot(pSyncNode->pLogStore, pMsg->lastIndex);
|
||||
snapshotReceiverFinish(pReceiver, pMsg);
|
||||
snapshotReceiverStop(pReceiver);
|
||||
needRsp = true;
|
||||
|
||||
// maybe update lastconfig
|
||||
if (pMsg->lastConfigIndex >= SYNC_INDEX_BEGIN) {
|
||||
|
@ -561,89 +669,74 @@ int32_t syncNodeOnSnapshotSendCb(SSyncNode *pSyncNode, SyncSnapshotSend *pMsg) {
|
|||
syncNodeDoConfigChange(pSyncNode, &newSyncCfg, pMsg->lastConfigIndex);
|
||||
}
|
||||
|
||||
SSnapshot snapshot;
|
||||
pSyncNode->pFsm->FpGetSnapshotInfo(pSyncNode->pFsm, &snapshot);
|
||||
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver finish, apply snapshot");
|
||||
syncNodeEventLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
|
||||
pReceiver->pWriter = NULL;
|
||||
snapshotReceiverStop(pReceiver, true);
|
||||
pReceiver->ack = pMsg->seq;
|
||||
needRsp = true;
|
||||
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver stop");
|
||||
syncNodeEventLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
|
||||
} else if (pMsg->seq == SYNC_SNAPSHOT_SEQ_FORCE_CLOSE) {
|
||||
pSyncNode->pFsm->FpSnapshotStopWrite(pSyncNode->pFsm, pReceiver->pWriter, false);
|
||||
snapshotReceiverStop(pReceiver, false);
|
||||
// force close
|
||||
snapshotReceiverForceStop(pReceiver);
|
||||
needRsp = false;
|
||||
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver force close");
|
||||
syncNodeEventLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
|
||||
} else if (pMsg->seq > SYNC_SNAPSHOT_SEQ_BEGIN && pMsg->seq < SYNC_SNAPSHOT_SEQ_END) {
|
||||
// transfering
|
||||
if (pMsg->seq == pReceiver->ack + 1) {
|
||||
writeCode =
|
||||
pSyncNode->pFsm->FpSnapshotDoWrite(pSyncNode->pFsm, pReceiver->pWriter, pMsg->data, pMsg->dataLen);
|
||||
ASSERT(writeCode == 0);
|
||||
pReceiver->ack = pMsg->seq;
|
||||
snapshotReceiverGotData(pReceiver, pMsg);
|
||||
}
|
||||
needRsp = true;
|
||||
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver receiving");
|
||||
syncNodeEventLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
|
||||
} else {
|
||||
ASSERT(0);
|
||||
}
|
||||
|
||||
// send ack
|
||||
if (needRsp) {
|
||||
// build msg
|
||||
SyncSnapshotRsp *pRspMsg = syncSnapshotRspBuild(pSyncNode->vgId);
|
||||
pRspMsg->srcId = pSyncNode->myRaftId;
|
||||
pRspMsg->destId = pMsg->srcId;
|
||||
pRspMsg->term = pSyncNode->pRaftStore->currentTerm;
|
||||
pRspMsg->lastIndex = pMsg->lastIndex;
|
||||
pRspMsg->lastTerm = pMsg->lastTerm;
|
||||
pRspMsg->ack = pReceiver->ack;
|
||||
pRspMsg->code = writeCode;
|
||||
pRspMsg->privateTerm = pReceiver->privateTerm;
|
||||
pRspMsg->ack = pReceiver->ack; // receiver maybe already closed
|
||||
pRspMsg->code = 0;
|
||||
pRspMsg->privateTerm = pReceiver->privateTerm; // receiver maybe already closed
|
||||
|
||||
// send msg
|
||||
SRpcMsg rpcMsg;
|
||||
syncSnapshotRsp2RpcMsg(pRspMsg, &rpcMsg);
|
||||
syncNodeSendMsgById(&(pRspMsg->destId), pSyncNode, &rpcMsg);
|
||||
|
||||
syncSnapshotRspDestroy(pRspMsg);
|
||||
}
|
||||
} else {
|
||||
// error log
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver term not equal");
|
||||
syncNodeErrorLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
}
|
||||
} else {
|
||||
syncNodeLog2("syncNodeOnSnapshotSendCb not follower", pSyncNode);
|
||||
// error log
|
||||
do {
|
||||
char *eventLog = snapshotReceiver2SimpleStr(pReceiver, "snapshot receiver not follower");
|
||||
syncNodeErrorLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void snapshotSenderUpdateProgress(SSyncSnapshotSender *pSender, SyncSnapshotRsp *pMsg) {
|
||||
ASSERT(pMsg->ack == pSender->seq);
|
||||
pSender->ack = pMsg->ack;
|
||||
++(pSender->seq);
|
||||
}
|
||||
|
||||
// sender receives ack, set seq = ack + 1, send msg from seq
|
||||
// if ack == SYNC_SNAPSHOT_SEQ_END, stop sender
|
||||
int32_t syncNodeOnSnapshotRspCb(SSyncNode *pSyncNode, SyncSnapshotRsp *pMsg) {
|
||||
// if already drop replica, do not process
|
||||
if (!syncNodeInRaftGroup(pSyncNode, &(pMsg->srcId)) && pSyncNode->state == TAOS_SYNC_STATE_LEADER) {
|
||||
sInfo("recv SyncSnapshotRsp maybe replica already dropped");
|
||||
return 0;
|
||||
sError("vgId:%d, recv sync-snapshot-rsp, maybe replica already dropped", pSyncNode->vgId);
|
||||
return -1;
|
||||
}
|
||||
|
||||
// get sender
|
||||
|
@ -655,27 +748,49 @@ int32_t syncNodeOnSnapshotRspCb(SSyncNode *pSyncNode, SyncSnapshotRsp *pMsg) {
|
|||
if (pMsg->term == pSyncNode->pRaftStore->currentTerm) {
|
||||
// receiver ack is finish, close sender
|
||||
if (pMsg->ack == SYNC_SNAPSHOT_SEQ_END) {
|
||||
pSender->finish = true;
|
||||
snapshotSenderStop(pSender);
|
||||
snapshotSenderStop(pSender, true);
|
||||
return 0;
|
||||
}
|
||||
|
||||
// send next msg
|
||||
if (pMsg->ack == pSender->seq) {
|
||||
// update sender ack
|
||||
pSender->ack = pMsg->ack;
|
||||
(pSender->seq)++;
|
||||
snapshotSenderUpdateProgress(pSender, pMsg);
|
||||
snapshotSend(pSender);
|
||||
|
||||
} else if (pMsg->ack == pSender->seq - 1) {
|
||||
snapshotReSend(pSender);
|
||||
|
||||
} else {
|
||||
ASSERT(0);
|
||||
}
|
||||
do {
|
||||
char logBuf[64];
|
||||
snprintf(logBuf, sizeof(logBuf), "error ack, recv ack:%d, my seq:%d", pMsg->ack, pSender->seq);
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, logBuf);
|
||||
syncNodeErrorLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
|
||||
return -1;
|
||||
}
|
||||
} else {
|
||||
syncNodeLog2("syncNodeOnSnapshotRspCb not leader", pSyncNode);
|
||||
// error log
|
||||
do {
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender term not equal");
|
||||
syncNodeErrorLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
|
||||
return -1;
|
||||
}
|
||||
} else {
|
||||
// error log
|
||||
do {
|
||||
char *eventLog = snapshotSender2SimpleStr(pSender, "snapshot sender not leader");
|
||||
syncNodeErrorLog(pSyncNode, eventLog);
|
||||
taosMemoryFree(eventLog);
|
||||
} while (0);
|
||||
|
||||
return -1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
|
|
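The leader-side ack protocol spelled out in the comments above (seq = ack + 1, stop on SYNC_SNAPSHOT_SEQ_END) can be summarized as follows; this is a sketch of the decision in syncNodeOnSnapshotRspCb(), not a replacement for it:

// Sketch of the ack handling on the leader; the real code also checks term/state, logs, and replies.
static int32_t handleSnapshotAck(SSyncSnapshotSender *pSender, SyncSnapshotRsp *pMsg) {
  if (pMsg->ack == SYNC_SNAPSHOT_SEQ_END) {
    snapshotSenderStop(pSender, true);            // receiver applied the snapshot
    return 0;
  }
  if (pMsg->ack == pSender->seq) {                // expected ack: advance and send next block
    snapshotSenderUpdateProgress(pSender, pMsg);  // ack = pMsg->ack, ++seq
    return snapshotSend(pSender);
  }
  if (pMsg->ack == pSender->seq - 1) {            // duplicate/late ack: resend current block
    return snapshotReSend(pSender);
  }
  return -1;                                      // out-of-window ack: error
}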
@ -127,7 +127,7 @@ cJSON *voteGranted2Json(SVotesGranted *pVotesGranted) {
|
|||
|
||||
char *voteGranted2Str(SVotesGranted *pVotesGranted) {
|
||||
cJSON *pJson = voteGranted2Json(pVotesGranted);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
@ -256,7 +256,7 @@ cJSON *votesRespond2Json(SVotesRespond *pVotesRespond) {
|
|||
|
||||
char *votesRespond2Str(SVotesRespond *pVotesRespond) {
|
||||
cJSON *pJson = votesRespond2Json(pVotesRespond);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
|
|
@ -17,6 +17,7 @@ add_executable(syncVotesRespondTest "")
|
|||
add_executable(syncIndexMgrTest "")
|
||||
add_executable(syncLogStoreTest "")
|
||||
add_executable(syncEntryTest "")
|
||||
add_executable(syncEntryCacheTest "")
|
||||
add_executable(syncRequestVoteTest "")
|
||||
add_executable(syncRequestVoteReplyTest "")
|
||||
add_executable(syncAppendEntriesTest "")
|
||||
|
@ -129,6 +130,10 @@ target_sources(syncEntryTest
|
|||
PRIVATE
|
||||
"syncEntryTest.cpp"
|
||||
)
|
||||
target_sources(syncEntryCacheTest
|
||||
PRIVATE
|
||||
"syncEntryCacheTest.cpp"
|
||||
)
|
||||
target_sources(syncRequestVoteTest
|
||||
PRIVATE
|
||||
"syncRequestVoteTest.cpp"
|
||||
|
@ -362,6 +367,11 @@ target_include_directories(syncEntryTest
|
|||
"${TD_SOURCE_DIR}/include/libs/sync"
|
||||
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
|
||||
)
|
||||
target_include_directories(syncEntryCacheTest
|
||||
PUBLIC
|
||||
"${TD_SOURCE_DIR}/include/libs/sync"
|
||||
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
|
||||
)
|
||||
target_include_directories(syncRequestVoteTest
|
||||
PUBLIC
|
||||
"${TD_SOURCE_DIR}/include/libs/sync"
|
||||
|
@ -610,6 +620,10 @@ target_link_libraries(syncEntryTest
|
|||
sync
|
||||
gtest_main
|
||||
)
|
||||
target_link_libraries(syncEntryCacheTest
|
||||
sync
|
||||
gtest_main
|
||||
)
|
||||
target_link_libraries(syncRequestVoteTest
|
||||
sync
|
||||
gtest_main
|
||||
|
|
|
@ -0,0 +1,162 @@
|
|||
#include <stdio.h>
|
||||
#include "syncEnv.h"
|
||||
#include "syncIO.h"
|
||||
#include "syncInt.h"
|
||||
#include "syncRaftLog.h"
|
||||
#include "syncRaftStore.h"
|
||||
#include "syncUtil.h"
|
||||
|
||||
void logTest() {
|
||||
sTrace("--- sync log test: trace");
|
||||
sDebug("--- sync log test: debug");
|
||||
sInfo("--- sync log test: info");
|
||||
sWarn("--- sync log test: warn");
|
||||
sError("--- sync log test: error");
|
||||
sFatal("--- sync log test: fatal");
|
||||
}
|
||||
|
||||
SSyncRaftEntry* createEntry(int i) {
|
||||
int32_t dataLen = 20;
|
||||
SSyncRaftEntry* pEntry = syncEntryBuild(dataLen);
|
||||
assert(pEntry != NULL);
|
||||
pEntry->msgType = 88;
|
||||
pEntry->originalRpcType = 99;
|
||||
pEntry->seqNum = 3;
|
||||
pEntry->isWeak = true;
|
||||
pEntry->term = 100 + i;
|
||||
pEntry->index = i;
|
||||
snprintf(pEntry->data, dataLen, "value%d", i);
|
||||
|
||||
return pEntry;
|
||||
}
|
||||
|
||||
SSyncNode* createFakeNode() {
|
||||
SSyncNode* pSyncNode = (SSyncNode*)taosMemoryMalloc(sizeof(SSyncNode));
|
||||
ASSERT(pSyncNode != NULL);
|
||||
memset(pSyncNode, 0, sizeof(SSyncNode));
|
||||
|
||||
return pSyncNode;
|
||||
}
|
||||
|
||||
SRaftEntryCache* createCache(int maxCount) {
|
||||
SSyncNode* pSyncNode = createFakeNode();
|
||||
ASSERT(pSyncNode != NULL);
|
||||
|
||||
SRaftEntryCache* pCache = raftCacheCreate(pSyncNode, maxCount);
|
||||
ASSERT(pCache != NULL);
|
||||
|
||||
return pCache;
|
||||
}
|
||||
|
||||
void test1() {
|
||||
int32_t code = 0;
|
||||
SRaftEntryCache* pCache = createCache(5);
|
||||
for (int i = 0; i < 5; ++i) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
code = raftCachePutEntry(pCache, pEntry);
|
||||
ASSERT(code == 1);
|
||||
syncEntryDestory(pEntry);
|
||||
}
|
||||
raftCacheLog2((char*)"==test1 write 5 entries==", pCache);
|
||||
|
||||
SyncIndex index;
|
||||
index = 1;
|
||||
code = raftCacheDelEntry(pCache, index);
|
||||
ASSERT(code == 0);
|
||||
index = 3;
|
||||
code = raftCacheDelEntry(pCache, index);
|
||||
ASSERT(code == 0);
|
||||
raftCacheLog2((char*)"==test1 delete 1,3==", pCache);
|
||||
|
||||
code = raftCacheClear(pCache);
|
||||
ASSERT(code == 0);
|
||||
raftCacheLog2((char*)"==clear all==", pCache);
|
||||
}
|
||||
|
||||
void test2() {
|
||||
int32_t code = 0;
|
||||
SRaftEntryCache* pCache = createCache(5);
|
||||
for (int i = 0; i < 5; ++i) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
code = raftCachePutEntry(pCache, pEntry);
|
||||
ASSERT(code == 1);
|
||||
syncEntryDestory(pEntry);
|
||||
}
|
||||
raftCacheLog2((char*)"==test2 write 5 entries==", pCache);
|
||||
|
||||
SyncIndex index;
|
||||
index = 1;
|
||||
SSyncRaftEntry* pEntry;
|
||||
code = raftCacheGetEntry(pCache, index, &pEntry);
|
||||
ASSERT(code == 0);
|
||||
syncEntryDestory(pEntry);
|
||||
syncEntryLog2((char*)"==test2 get entry 1==", pEntry);
|
||||
|
||||
index = 2;
|
||||
code = raftCacheGetEntryP(pCache, index, &pEntry);
|
||||
ASSERT(code == 0);
|
||||
syncEntryLog2((char*)"==test2 get entry pointer 2==", pEntry);
|
||||
|
||||
// not found
|
||||
index = 8;
|
||||
code = raftCacheGetEntry(pCache, index, &pEntry);
|
||||
ASSERT(code == -1 && terrno == TSDB_CODE_WAL_LOG_NOT_EXIST);
|
||||
sTrace("==test2 get entry 8 not found==");
|
||||
|
||||
// not found
|
||||
index = 9;
|
||||
code = raftCacheGetEntryP(pCache, index, &pEntry);
|
||||
ASSERT(code == -1 && terrno == TSDB_CODE_WAL_LOG_NOT_EXIST);
|
||||
sTrace("==test2 get entry pointer 9 not found==");
|
||||
}
|
||||
|
||||
void test3() {
|
||||
int32_t code = 0;
|
||||
SRaftEntryCache* pCache = createCache(5);
|
||||
for (int i = 0; i < 5; ++i) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
code = raftCachePutEntry(pCache, pEntry);
|
||||
ASSERT(code == 1);
|
||||
syncEntryDestory(pEntry);
|
||||
}
|
||||
for (int i = 6; i < 10; ++i) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
code = raftCachePutEntry(pCache, pEntry);
|
||||
ASSERT(code == 0);
|
||||
syncEntryDestory(pEntry);
|
||||
}
|
||||
raftCacheLog2((char*)"==test3 write 10 entries, max count is 5==", pCache);
|
||||
}
|
||||
|
||||
void test4() {
|
||||
int32_t code = 0;
|
||||
SRaftEntryCache* pCache = createCache(5);
|
||||
for (int i = 0; i < 5; ++i) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
code = raftCachePutEntry(pCache, pEntry);
|
||||
ASSERT(code == 1);
|
||||
syncEntryDestory(pEntry);
|
||||
}
|
||||
raftCacheLog2((char*)"==test4 write 5 entries==", pCache);
|
||||
|
||||
SyncIndex index;
|
||||
index = 3;
|
||||
SSyncRaftEntry* pEntry;
|
||||
code = raftCacheGetAndDel(pCache, index, &pEntry);
|
||||
ASSERT(code == 0);
|
||||
syncEntryLog2((char*)"==test4 get-and-del entry 3==", pEntry);
|
||||
raftCacheLog2((char*)"==test4 after get-and-del entry 3==", pCache);
|
||||
}
|
||||
|
||||
int main(int argc, char** argv) {
|
||||
gRaftDetailLog = true;
|
||||
tsAsyncLog = 0;
|
||||
sDebugFlag = DEBUG_TRACE + DEBUG_SCREEN + DEBUG_FILE + DEBUG_DEBUG;
|
||||
|
||||
test1();
|
||||
test2();
|
||||
test3();
|
||||
test4();
|
||||
|
||||
return 0;
|
||||
}
|
|
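The new syncEntryCacheTest.cpp above exercises a bounded, index-keyed entry cache: puts succeed until the configured max count is reached (the tests expect 1 on success and 0 once the cache is full), entries can be fetched by index either as a copy (`raftCacheGetEntry`) or as a pointer (`raftCacheGetEntryP`), and a miss is reported with `TSDB_CODE_WAL_LOG_NOT_EXIST`. The sketch below is a toy version of that shape built on a plain array with simplified entries; it only illustrates the contract the tests rely on, not the real cache in the sync module.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_MAX 5

typedef struct { int64_t index; int64_t term; char data[20]; } Entry;

typedef struct {
    Entry entries[CACHE_MAX];
    bool  used[CACHE_MAX];
    int   count;
} EntryCache;

/* Returns 1 on success, 0 if the cache is already full (the tests' expectation). */
static int cachePut(EntryCache *c, const Entry *e) {
    if (c->count >= CACHE_MAX) return 0;
    for (int i = 0; i < CACHE_MAX; ++i) {
        if (!c->used[i]) { c->entries[i] = *e; c->used[i] = true; c->count++; return 1; }
    }
    return 0;
}

/* Returns 0 and copies the entry out, or -1 when the index is not cached. */
static int cacheGet(const EntryCache *c, int64_t index, Entry *out) {
    for (int i = 0; i < CACHE_MAX; ++i) {
        if (c->used[i] && c->entries[i].index == index) { *out = c->entries[i]; return 0; }
    }
    return -1; /* the real code also sets terrno = TSDB_CODE_WAL_LOG_NOT_EXIST */
}

static int cacheDel(EntryCache *c, int64_t index) {
    for (int i = 0; i < CACHE_MAX; ++i) {
        if (c->used[i] && c->entries[i].index == index) { c->used[i] = false; c->count--; return 0; }
    }
    return -1;
}

int main(void) {
    EntryCache c = {0};
    for (int i = 0; i < 7; ++i) {
        Entry e = { .index = i, .term = 100 + i };
        snprintf(e.data, sizeof(e.data), "value%d", i);
        printf("put %d -> %d\n", i, cachePut(&c, &e));   /* last two puts report 0: cache full */
    }
    Entry out;
    printf("get 3 -> %d\n", cacheGet(&c, 3, &out));
    printf("del 3 -> %d, get 3 -> %d\n", cacheDel(&c, 3), cacheGet(&c, 3, &out));
    return 0;
}
```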
@ -113,7 +113,7 @@ void test2() {
|
|||
pLogStore = logStoreCreate(pSyncNode);
|
||||
assert(pLogStore);
|
||||
pSyncNode->pLogStore = pLogStore;
|
||||
//pLogStore->syncLogSetBeginIndex(pLogStore, 5);
|
||||
// pLogStore->syncLogSetBeginIndex(pLogStore, 5);
|
||||
pLogStore->syncLogRestoreFromSnapshot(pLogStore, 4);
|
||||
logStoreLog2((char*)"\n\n\ntest2 ----- ", pLogStore);
|
||||
|
||||
|
@ -229,7 +229,7 @@ void test4() {
|
|||
assert(pLogStore);
|
||||
pSyncNode->pLogStore = pLogStore;
|
||||
logStoreLog2((char*)"\n\n\ntest4 ----- ", pLogStore);
|
||||
//pLogStore->syncLogSetBeginIndex(pLogStore, 5);
|
||||
// pLogStore->syncLogSetBeginIndex(pLogStore, 5);
|
||||
pLogStore->syncLogRestoreFromSnapshot(pLogStore, 4);
|
||||
|
||||
for (int i = 5; i <= 9; ++i) {
|
||||
|
@ -291,7 +291,7 @@ void test5() {
|
|||
assert(pLogStore);
|
||||
pSyncNode->pLogStore = pLogStore;
|
||||
logStoreLog2((char*)"\n\n\ntest5 ----- ", pLogStore);
|
||||
//pLogStore->syncLogSetBeginIndex(pLogStore, 5);
|
||||
// pLogStore->syncLogSetBeginIndex(pLogStore, 5);
|
||||
pLogStore->syncLogRestoreFromSnapshot(pLogStore, 4);
|
||||
|
||||
for (int i = 5; i <= 9; ++i) {
|
||||
|
@ -419,15 +419,12 @@ void test6() {
|
|||
logStoreDestory(pLogStore);
|
||||
cleanup();
|
||||
|
||||
|
||||
|
||||
// restart
|
||||
init();
|
||||
pLogStore = logStoreCreate(pSyncNode);
|
||||
assert(pLogStore);
|
||||
pSyncNode->pLogStore = pLogStore;
|
||||
|
||||
|
||||
do {
|
||||
SyncIndex firstVer = walGetFirstVer(pWal);
|
||||
SyncIndex lastVer = walGetLastVer(pWal);
|
||||
|
@ -461,13 +458,13 @@ int main(int argc, char** argv) {
|
|||
}
|
||||
sTrace("gAssert : %d", gAssert);
|
||||
|
||||
/*
|
||||
/*
|
||||
test1();
|
||||
test2();
|
||||
test3();
|
||||
test4();
|
||||
test5();
|
||||
*/
|
||||
*/
|
||||
test6();
|
||||
|
||||
return 0;
|
||||
|
|
|
@ -312,7 +312,7 @@ void test5() {
|
|||
pSyncNode->pLogStore = pLogStore;
|
||||
logStoreLog2((char*)"\n\n\ntest5 ----- ", pLogStore);
|
||||
|
||||
//pSyncNode->pLogStore->syncLogSetBeginIndex(pSyncNode->pLogStore, 6);
|
||||
// pSyncNode->pLogStore->syncLogSetBeginIndex(pSyncNode->pLogStore, 6);
|
||||
pLogStore->syncLogRestoreFromSnapshot(pSyncNode->pLogStore, 5);
|
||||
for (int i = 6; i <= 10; ++i) {
|
||||
int32_t dataLen = 10;
|
||||
|
|
|
@ -229,7 +229,7 @@ typedef struct {
|
|||
|
||||
SAsyncPool* transCreateAsyncPool(uv_loop_t* loop, int sz, void* arg, AsyncCB cb);
|
||||
void transDestroyAsyncPool(SAsyncPool* pool);
|
||||
int transSendAsync(SAsyncPool* pool, queue* mq);
|
||||
int transAsyncSend(SAsyncPool* pool, queue* mq);
|
||||
|
||||
#define TRANS_DESTROY_ASYNC_POOL_MSG(pool, msgType, freeFunc) \
|
||||
do { \
|
||||
|
|
|
@ -52,7 +52,7 @@ typedef struct {
|
|||
char user[TSDB_UNI_LEN]; // meter ID
|
||||
|
||||
void (*cfp)(void* parent, SRpcMsg*, SEpSet*);
|
||||
bool (*retry)(int32_t code);
|
||||
bool (*retry)(int32_t code, tmsg_t msgType);
|
||||
int index;
|
||||
|
||||
void* parent;
|
||||
|
|
|
@ -164,11 +164,6 @@ void rpcSetDefaultAddr(void* thandle, const char* ip, const char* fqdn) {
|
|||
transSetDefaultAddr(thandle, ip, fqdn);
|
||||
}
|
||||
|
||||
// void rpcSetMsgTraceId(SRpcMsg* pMsg, STraceId uid) {
|
||||
// SRpcHandleInfo* pInfo = &pMsg->info;
|
||||
// pInfo->traceId = uid;
|
||||
//}
|
||||
|
||||
int32_t rpcInit() {
|
||||
// impl later
|
||||
return 0;
|
||||
|
|
|
@ -967,7 +967,7 @@ void cliSendQuit(SCliThrd* thrd) {
|
|||
// cli can stop gracefully
|
||||
SCliMsg* msg = taosMemoryCalloc(1, sizeof(SCliMsg));
|
||||
msg->type = Quit;
|
||||
transSendAsync(thrd->asyncPool, &msg->q);
|
||||
transAsyncSend(thrd->asyncPool, &msg->q);
|
||||
}
|
||||
void cliWalkCb(uv_handle_t* handle, void* arg) {
|
||||
if (!uv_is_closing(handle)) {
|
||||
|
@ -1030,7 +1030,7 @@ int cliAppCb(SCliConn* pConn, STransMsg* pResp, SCliMsg* pMsg) {
|
|||
*/
|
||||
STransConnCtx* pCtx = pMsg->ctx;
|
||||
int32_t code = pResp->code;
|
||||
if (pTransInst->retry != NULL && pTransInst->retry(code)) {
|
||||
if (pTransInst->retry != NULL && pTransInst->retry(code, pResp->msgType - 1)) {
|
||||
pMsg->sent = 0;
|
||||
pCtx->retryCnt += 1;
|
||||
if (code == TSDB_CODE_RPC_NETWORK_UNAVAIL) {
|
||||
|
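The transport change above widens the retry hook from `bool (*retry)(int32_t code)` to `bool (*retry)(int32_t code, tmsg_t msgType)`, and the client now calls it as `pTransInst->retry(code, pResp->msgType - 1)`; the `- 1` suggests request and response types are paired adjacent values, though that reading is inferred from the call site rather than documented here. Below is a hedged sketch of what such a predicate might look like on the caller side; the error codes and message-type constants are placeholders, not the real `tmsg_t` values.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint16_t tmsg_t;

/* Placeholder codes and message types, for illustration only. */
enum { RPC_NETWORK_UNAVAIL = 0x0102, RPC_BROKEN_LINK = 0x0103 };
enum { MSG_QUERY = 10, MSG_SUBMIT = 20 };

/* A retry policy that looks at both the error code and the kind of request:
 * transient network errors are retried, but only for message types that are
 * safe to re-send. */
static bool shouldRetry(int32_t code, tmsg_t msgType) {
    bool transient  = (code == RPC_NETWORK_UNAVAIL || code == RPC_BROKEN_LINK);
    bool idempotent = (msgType == MSG_QUERY);   /* re-sending a submit might duplicate data */
    return transient && idempotent;
}

/* The instance carries the hook much like the struct in the hunk above does. */
typedef struct {
    bool (*retry)(int32_t code, tmsg_t msgType);
} Inst;

int main(void) {
    Inst inst = { .retry = shouldRetry };
    printf("query  + net-unavail: %d\n", inst.retry(RPC_NETWORK_UNAVAIL, MSG_QUERY));
    printf("submit + net-unavail: %d\n", inst.retry(RPC_NETWORK_UNAVAIL, MSG_SUBMIT));
    return 0;
}
```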
@ -1138,7 +1138,7 @@ void transReleaseCliHandle(void* handle) {
|
|||
cmsg->msg = tmsg;
|
||||
cmsg->type = Release;
|
||||
|
||||
transSendAsync(pThrd->asyncPool, &cmsg->q);
|
||||
transAsyncSend(pThrd->asyncPool, &cmsg->q);
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -1171,7 +1171,7 @@ void transSendRequest(void* shandle, const SEpSet* pEpSet, STransMsg* pReq, STra
|
|||
STraceId* trace = &pReq->info.traceId;
|
||||
tGTrace("%s send request at thread:%08" PRId64 ", dst: %s:%d, app:%p", transLabel(pTransInst), pThrd->pid,
|
||||
EPSET_GET_INUSE_IP(&pCtx->epSet), EPSET_GET_INUSE_PORT(&pCtx->epSet), pReq->info.ahandle);
|
||||
ASSERT(transSendAsync(pThrd->asyncPool, &(cliMsg->q)) == 0);
|
||||
ASSERT(transAsyncSend(pThrd->asyncPool, &(cliMsg->q)) == 0);
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -1205,7 +1205,7 @@ void transSendRecv(void* shandle, const SEpSet* pEpSet, STransMsg* pReq, STransM
|
|||
tGTrace("%s send request at thread:%08" PRId64 ", dst: %s:%d, app:%p", transLabel(pTransInst), pThrd->pid,
|
||||
EPSET_GET_INUSE_IP(&pCtx->epSet), EPSET_GET_INUSE_PORT(&pCtx->epSet), pReq->info.ahandle);
|
||||
|
||||
transSendAsync(pThrd->asyncPool, &(cliMsg->q));
|
||||
transAsyncSend(pThrd->asyncPool, &(cliMsg->q));
|
||||
tsem_wait(sem);
|
||||
tsem_destroy(sem);
|
||||
taosMemoryFree(sem);
|
||||
|
@ -1234,7 +1234,7 @@ void transSetDefaultAddr(void* shandle, const char* ip, const char* fqdn) {
|
|||
SCliThrd* thrd = ((SCliObj*)pTransInst->tcphandle)->pThreadObj[i];
|
||||
tDebug("%s update epset at thread:%08" PRId64 "", pTransInst->label, thrd->pid);
|
||||
|
||||
transSendAsync(thrd->asyncPool, &(cliMsg->q));
|
||||
transAsyncSend(thrd->asyncPool, &(cliMsg->q));
|
||||
}
|
||||
}
|
||||
#endif
|
||||
|
|
|
@ -202,7 +202,7 @@ void transDestroyAsyncPool(SAsyncPool* pool) {
|
|||
taosMemoryFree(pool->asyncs);
|
||||
taosMemoryFree(pool);
|
||||
}
|
||||
int transSendAsync(SAsyncPool* pool, queue* q) {
|
||||
int transAsyncSend(SAsyncPool* pool, queue* q) {
|
||||
int idx = pool->index;
|
||||
idx = idx % pool->nAsync;
|
||||
// no need mutex here
|
||||
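`transSendAsync` is renamed to `transAsyncSend` at every call site, but the dispatch idea is unchanged: pick the next async handle in the pool round-robin, queue the message under it, and wake that handle's event loop (presumably via libuv's `uv_async_send`, which is not visible in this hunk). The sketch below only models the round-robin selection with a stand-in pool; the real per-handle queues and libuv wiring are omitted.

```c
#include <stdio.h>

#define N_ASYNC 4

typedef struct {
    int index;                 /* rotating cursor, mirrors pool->index */
    int nAsync;                /* number of async handles, mirrors pool->nAsync */
    int queued[N_ASYNC];       /* stand-in for the per-handle message queues */
} AsyncPool;

/* Pick the next handle round-robin and enqueue one message on it. */
static int asyncSend(AsyncPool *pool) {
    int idx = pool->index % pool->nAsync;  /* same modulo step as the C code above */
    pool->index++;
    pool->queued[idx]++;
    printf("queued on async handle %d (now %d pending)\n", idx, pool->queued[idx]);
    return 0;
}

int main(void) {
    AsyncPool pool = { .index = 0, .nAsync = N_ASYNC };
    for (int i = 0; i < 6; ++i) asyncSend(&pool);  /* spreads work 0,1,2,3,0,1 */
    return 0;
}
```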
@ -999,7 +999,7 @@ void sendQuitToWorkThrd(SWorkThrd* pThrd) {
|
|||
SSvrMsg* msg = taosMemoryCalloc(1, sizeof(SSvrMsg));
|
||||
msg->type = Quit;
|
||||
tDebug("server send quit msg to work thread");
|
||||
transSendAsync(pThrd->asyncPool, &msg->q);
|
||||
transAsyncSend(pThrd->asyncPool, &msg->q);
|
||||
}
|
||||
|
||||
void transCloseServer(void* arg) {
|
||||
|
@ -1070,7 +1070,7 @@ void transReleaseSrvHandle(void* handle) {
|
|||
m->type = Release;
|
||||
|
||||
tTrace("%s conn %p start to release", transLabel(pThrd->pTransInst), exh->handle);
|
||||
transSendAsync(pThrd->asyncPool, &m->q);
|
||||
transAsyncSend(pThrd->asyncPool, &m->q);
|
||||
transReleaseExHandle(refMgt, refId);
|
||||
return;
|
||||
_return1:
|
||||
|
@ -1099,7 +1099,7 @@ void transSendResponse(const STransMsg* msg) {
|
|||
|
||||
STraceId* trace = (STraceId*)&msg->info.traceId;
|
||||
tGTrace("conn %p start to send resp (1/2)", exh->handle);
|
||||
transSendAsync(pThrd->asyncPool, &m->q);
|
||||
transAsyncSend(pThrd->asyncPool, &m->q);
|
||||
transReleaseExHandle(refMgt, refId);
|
||||
return;
|
||||
_return1:
|
||||
|
@ -1128,7 +1128,7 @@ void transRegisterMsg(const STransMsg* msg) {
|
|||
m->type = Register;
|
||||
|
||||
tTrace("%s conn %p start to register brokenlink callback", transLabel(pThrd->pTransInst), exh->handle);
|
||||
transSendAsync(pThrd->asyncPool, &m->q);
|
||||
transAsyncSend(pThrd->asyncPool, &m->q);
|
||||
transReleaseExHandle(refMgt, refId);
|
||||
return;
|
||||
|
||||
|
|
|
@ -157,14 +157,14 @@ int32_t taosRenameFile(const char *oldName, const char *newName) {
|
|||
#ifdef WINDOWS
|
||||
bool code = MoveFileEx(oldName, newName, MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED);
|
||||
if (!code) {
|
||||
printf("failed to rename file %s to %s, reason:%s", oldName, newName, strerror(errno));
|
||||
printf("failed to rename file %s to %s, reason:%s\n", oldName, newName, strerror(errno));
|
||||
}
|
||||
|
||||
return !code;
|
||||
#else
|
||||
int32_t code = rename(oldName, newName);
|
||||
if (code < 0) {
|
||||
printf("failed to rename file %s to %s, reason:%s", oldName, newName, strerror(errno));
|
||||
printf("failed to rename file %s to %s, reason:%s\n", oldName, newName, strerror(errno));
|
||||
}
|
||||
|
||||
return code;
|
||||
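The only change in this hunk is the missing newline in the error printf, but the function is a nice illustration of normalizing return conventions: Win32's `MoveFileEx` reports success as a non-zero `BOOL`, POSIX `rename` reports success as 0, and the wrapper maps both onto 0-on-success. A minimal sketch of that normalization is below; the POSIX branch is runnable anywhere, the Windows branch assumes a non-Unicode build as the original code does.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#ifdef _WIN32
#include <windows.h>
#endif

/* Returns 0 on success, non-zero on failure, on both platforms. */
static int renameFile(const char *oldName, const char *newName) {
#ifdef _WIN32
    BOOL ok = MoveFileEx(oldName, newName, MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED);
    if (!ok) fprintf(stderr, "failed to rename file %s to %s\n", oldName, newName);
    return !ok;           /* non-zero BOOL success becomes 0 */
#else
    int code = rename(oldName, newName);
    if (code < 0) fprintf(stderr, "failed to rename file %s to %s, reason:%s\n",
                          oldName, newName, strerror(errno));
    return code;          /* already 0 on success */
#endif
}

int main(void) {
    /* hypothetical paths; the call simply fails if the source file is missing */
    printf("renameFile returned %d\n", renameFile("old.tmp", "new.tmp"));
    return 0;
}
```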
@ -18,6 +18,10 @@
|
|||
#include "os.h"
|
||||
|
||||
#if defined(WINDOWS)
|
||||
BOOL WINAPI CtrlHandler(DWORD fdwCtrlType) {
|
||||
printf("\n" TAOS_CONSOLE_PROMPT_HEADER);
|
||||
return TRUE;
|
||||
}
|
||||
#elif defined(_TD_DARWIN_64)
|
||||
#else
|
||||
#include <dlfcn.h>
|
||||
|
@ -124,7 +128,7 @@ int taosSetConsoleEcho(bool on) {
|
|||
|
||||
void taosSetTerminalMode() {
|
||||
#if defined(WINDOWS)
|
||||
// assert(0);
|
||||
SetConsoleCtrlHandler(CtrlHandler, TRUE);
|
||||
|
||||
#else
|
||||
struct termios newtio;
|
||||
|
@ -158,7 +162,6 @@ void taosSetTerminalMode() {
|
|||
|
||||
int32_t taosGetOldTerminalMode() {
|
||||
#if defined(WINDOWS)
|
||||
// assert(0);
|
||||
#else
|
||||
/* Make sure stdin is a terminal. */
|
||||
if (!isatty(STDIN_FILENO)) {
|
||||
|
@ -176,7 +179,7 @@ int32_t taosGetOldTerminalMode() {
|
|||
|
||||
void taosResetTerminalMode() {
|
||||
#if defined(WINDOWS)
|
||||
// assert(0);
|
||||
SetConsoleCtrlHandler(CtrlHandler, FALSE);
|
||||
#else
|
||||
if (tcsetattr(0, TCSANOW, &oldtio) != 0) {
|
||||
fprintf(stderr, "Fail to reset the terminal properties!\n");
|
||||
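On Windows the shell now installs a console control handler when entering terminal mode (`SetConsoleCtrlHandler(CtrlHandler, TRUE)`) and removes it again on reset, so Ctrl-C reprints the prompt instead of killing the process. A small stand-alone sketch of that register/unregister pairing follows; `TAOS_CONSOLE_PROMPT_HEADER` is replaced by a plain prompt string, and unlike the patch's handler (which returns TRUE for every event) this sketch only swallows Ctrl-C and Ctrl-Break.

```c
/* Windows-only sketch: compile with a Windows toolchain. */
#include <stdio.h>
#include <windows.h>

#define PROMPT "taos> "   /* stands in for TAOS_CONSOLE_PROMPT_HEADER */

/* Swallow Ctrl-C / Ctrl-Break and just reprint the prompt. */
static BOOL WINAPI ctrlHandler(DWORD fdwCtrlType) {
    if (fdwCtrlType == CTRL_C_EVENT || fdwCtrlType == CTRL_BREAK_EVENT) {
        printf("\n" PROMPT);
        return TRUE;       /* handled: the process keeps running */
    }
    return FALSE;          /* let the default handler deal with close/logoff/shutdown */
}

static void enterInteractiveMode(void) { SetConsoleCtrlHandler(ctrlHandler, TRUE); }
static void leaveInteractiveMode(void) { SetConsoleCtrlHandler(ctrlHandler, FALSE); }

int main(void) {
    enterInteractiveMode();
    printf(PROMPT);
    getchar();             /* press Ctrl-C to see the prompt reprinted, then Enter to exit */
    leaveInteractiveMode();
    return 0;
}
```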
@ -52,7 +52,7 @@ fi
|
|||
|
||||
docker run \
|
||||
-v $REP_MOUNT_PARAM \
|
||||
--rm --ulimit core=-1 taos_test:v1.0 sh -c "cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_TOOLS=true;make -j $THREAD_COUNT"
|
||||
--rm --ulimit core=-1 taos_test:v1.0 sh -c "cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true;make -j $THREAD_COUNT"
|
||||
|
||||
ret=$?
|
||||
exit $ret
|
||||
|
|
|
@ -637,5 +637,13 @@ class TDCom:
|
|||
column_value_str = ", ".join(str(v) for v in column_value_list)
|
||||
insert_sql = f'insert into {dbname}.{tbname} values ({column_value_str});'
|
||||
tsql.execute(insert_sql)
|
||||
|
||||
def getOneRow(self, location, containElm):
|
||||
res_list = list()
|
||||
if 0 <= location < tdSql.queryRows:
|
||||
for row in tdSql.queryResult:
|
||||
if row[location] == containElm:
|
||||
res_list.append(row)
|
||||
return res_list
|
||||
else:
|
||||
tdLog.exit(f"getOneRow out of range: row_index={location} row_count={self.query_row}")
|
||||
tdCom = TDCom()
|
||||
|
|
|
@ -1,5 +1,76 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
|
||||
# basic type
|
||||
TAOS_DATA_TYPE = [
|
||||
"INT", "BIGINT", "SMALLINT", "TINYINT", "INT UNSIGNED", "BIGINT UNSIGNED", "SMALLINT UNSIGNED", "TINYINT UNSIGNED",
|
||||
"FLOAT", "DOUBLE",
|
||||
"BOOL",
|
||||
"BINARY", "NCHAR", "VARCHAR",
|
||||
"TIMESTAMP",
|
||||
# "MEDIUMBLOB", "BLOB", # add in 3.x
|
||||
# "DECIMAL", "NUMERIC", # add in 3.x
|
||||
"JSON", # only for tag
|
||||
]
|
||||
|
||||
TAOS_NUM_TYPE = [
|
||||
"INT", "BIGINT", "SMALLINT", "TINYINT", "INT UNSIGNED", "BIGINT UNSIGNED", "SMALLINT UNSIGNED", "TINYINT UNSIGNED", "FLOAT", "DOUBLE",
|
||||
# "DECIMAL", "NUMERIC", # add in 3.x
|
||||
]
|
||||
TAOS_CHAR_TYPE = [
|
||||
"BINARY", "NCHAR", "VARCHAR",
|
||||
]
|
||||
TAOS_BOOL_TYPE = ["BOOL",]
|
||||
TAOS_TS_TYPE = ["TIMESTAMP",]
|
||||
TAOS_BIN_TYPE = [
|
||||
"MEDIUMBLOB", "BLOB", # add in 3.x
|
||||
]
|
||||
|
||||
TAOS_TIME_INIT = ["b", "u", "a", "s", "m", "h", "d", "w", "n", "y"]
|
||||
TAOS_PRECISION = ["ms", "us", "ns"]
|
||||
PRECISION_DEFAULT = "ms"
|
||||
PRECISION = PRECISION_DEFAULT
|
||||
|
||||
TAOS_KEYWORDS = [
|
||||
"ABORT", "CREATE", "IGNORE", "NULL", "STAR",
|
||||
"ACCOUNT", "CTIME", "IMMEDIATE", "OF", "STATE",
|
||||
"ACCOUNTS", "DATABASE", "IMPORT", "OFFSET", "STATEMENT",
|
||||
"ADD", "DATABASES", "IN", "OR", "STATE_WINDOW",
|
||||
"AFTER", "DAYS", "INITIALLY", "ORDER", "STORAGE",
|
||||
"ALL", "DBS", "INSERT", "PARTITIONS", "STREAM",
|
||||
"ALTER", "DEFERRED", "INSTEAD", "PASS", "STREAMS",
|
||||
"AND", "DELIMITERS", "INT", "PLUS", "STRING",
|
||||
"AS", "DESC", "INTEGER", "PPS", "SYNCDB",
|
||||
"ASC", "DESCRIBE", "INTERVAL", "PRECISION", "TABLE",
|
||||
"ATTACH", "DETACH", "INTO", "PREV", "TABLES",
|
||||
"BEFORE", "DISTINCT", "IS", "PRIVILEGE", "TAG",
|
||||
"BEGIN", "DIVIDE", "ISNULL", "QTIME", "TAGS",
|
||||
"BETWEEN", "DNODE", "JOIN", "QUERIES", "TBNAME",
|
||||
"BIGINT", "DNODES", "KEEP", "QUERY", "TIMES",
|
||||
"BINARY", "DOT", "KEY", "QUORUM", "TIMESTAMP",
|
||||
"BITAND", "DOUBLE", "KILL", "RAISE", "TINYINT",
|
||||
"BITNOT", "DROP", "LE", "REM", "TOPIC",
|
||||
"BITOR", "EACH", "LIKE", "REPLACE", "TOPICS",
|
||||
"BLOCKS", "END", "LIMIT", "REPLICA", "TRIGGER",
|
||||
"BOOL", "EQ", "LINEAR", "RESET", "TSERIES",
|
||||
"BY", "EXISTS", "LOCAL", "RESTRICT", "UMINUS",
|
||||
"CACHE", "EXPLAIN", "LP", "ROW", "UNION",
|
||||
"CACHELAST", "FAIL", "LSHIFT", "RP", "UNSIGNED",
|
||||
"CASCADE", "FILE", "LT", "RSHIFT", "UPDATE",
|
||||
"CHANGE", "FILL", "MATCH", "SCORES", "UPLUS",
|
||||
"CLUSTER", "FLOAT", "MAXROWS", "SELECT", "USE",
|
||||
"COLON", "FOR", "MINROWS", "SEMI", "USER",
|
||||
"COLUMN", "FROM", "MINUS", "SESSION", "USERS",
|
||||
"COMMA", "FSYNC", "MNODES", "SET", "USING",
|
||||
"COMP", "GE", "MODIFY", "SHOW", "VALUES",
|
||||
"COMPACT", "GLOB", "MODULES", "SLASH", "VARIABLE",
|
||||
"CONCAT", "GRANTS", "NCHAR", "SLIDING", "VARIABLES",
|
||||
"CONFLICT", "GROUP", "NE", "SLIMIT", "VGROUPS",
|
||||
"CONNECTION", "GT", "NONE", "SMALLINT", "VIEW",
|
||||
"CONNECTIONS", "HAVING", "NOT", "SOFFSET", "VNODES",
|
||||
"CONNS", "ID", "NOTNULL", "STABLE", "WAL",
|
||||
"COPY", "IF", "NOW", "STABLES", "WHERE",
|
||||
]
|
||||
|
||||
# basic data type boundary
|
||||
TINYINT_MAX = 127
|
||||
TINYINT_MIN = -128
|
||||
|
@ -11,7 +82,7 @@ SMALLINT_MAX = 32767
|
|||
SMALLINT_MIN = -32768
|
||||
|
||||
SMALLINT_UN_MAX = 65535
|
||||
MALLINT_UN_MIN = 0
|
||||
SMALLINT_UN_MIN = 0
|
||||
|
||||
INT_MAX = 2147483647
|
||||
INT_MIN = -2147483648
|
||||
|
@ -33,8 +104,8 @@ DOUBLE_MIN = -1.7E+308
|
|||
|
||||
# schema boundary
|
||||
BINARY_LENGTH_MAX = 16374
|
||||
NCAHR_LENGTH_MAX_ = 4093
|
||||
DBNAME_LENGTH_MAX_ = 64
|
||||
NCAHR_LENGTH_MAX = 4093
|
||||
DBNAME_LENGTH_MAX = 64
|
||||
|
||||
STBNAME_LENGTH_MAX = 192
|
||||
STBNAME_LENGTH_MIN = 1
|
||||
|
@ -67,3 +138,31 @@ MNODE_SHM_SIZE_DEFAULT = 6292480
|
|||
VNODE_SHM_SIZE_MAX = 2147483647
|
||||
VNODE_SHM_SIZE_MIN = 6292480
|
||||
VNODE_SHM_SIZE_DEFAULT = 31458304
|
||||
|
||||
# time_init
|
||||
TIME_MS = 1
|
||||
TIME_US = TIME_MS/1000
|
||||
TIME_NS = TIME_US/1000
|
||||
|
||||
TIME_S = 1000 * TIME_MS
|
||||
TIME_M = 60 * TIME_S
|
||||
TIME_H = 60 * TIME_M
|
||||
TIME_D = 24 * TIME_H
|
||||
TIME_W = 7 * TIME_D
|
||||
TIME_N = 30 * TIME_D
|
||||
TIME_Y = 365 * TIME_D
|
||||
|
||||
|
||||
# session parameters
|
||||
INTERVAL_MIN = 1 * TIME_MS if PRECISION == PRECISION_DEFAULT else 1 * TIME_US
|
||||
|
||||
|
||||
# streams and related agg-function
|
||||
SMA_INDEX_FUNCTIONS = ["MIN", "MAX"]
|
||||
ROLLUP_FUNCTIONS = ["AVG", "SUM", "MIN", "MAX", "LAST", "FIRST"]
|
||||
SMA_WATMARK_MAXDELAY_INIT = ['a', "s", "m"]
|
||||
WATERMARK_MAX = 900000
|
||||
WATERMARK_MIN = 0
|
||||
|
||||
MAX_DELAY_MAX = 900000
|
||||
MAX_DELAY_MIN = 1
|
|
@ -21,6 +21,7 @@ import psutil
|
|||
import shutil
|
||||
import pandas as pd
|
||||
from util.log import *
|
||||
from util.constant import *
|
||||
|
||||
def _parse_datetime(timestr):
|
||||
try:
|
||||
|
@ -117,8 +118,7 @@ class TDSql:
|
|||
col_name_list = []
|
||||
col_type_list = []
|
||||
self.cursor.execute(sql)
|
||||
self.queryCols = self.cursor.description
|
||||
for query_col in self.queryCols:
|
||||
for query_col in self.cursor.description:
|
||||
col_name_list.append(query_col[0])
|
||||
col_type_list.append(query_col[1])
|
||||
except Exception as e:
|
||||
|
@ -301,6 +301,41 @@ class TDSql:
|
|||
args = (caller.filename, caller.lineno, self.sql, elm, expect_elm)
|
||||
tdLog.exit("%s(%d) failed: sql:%s, elm:%s == expect_elm:%s" % args)
|
||||
|
||||
def get_times(self, time_str, precision="ms"):
|
||||
caller = inspect.getframeinfo(inspect.stack()[1][0])
|
||||
if time_str[-1] not in TAOS_TIME_INIT:
|
||||
tdLog.exit(f"{caller.filename}({caller.lineno}) failed: {time_str} not a standard taos time init")
|
||||
if precision not in TAOS_PRECISION:
|
||||
tdLog.exit(f"{caller.filename}({caller.lineno}) failed: {precision} not a standard taos time precision")
|
||||
|
||||
if time_str[-1] == TAOS_TIME_INIT[0]:
|
||||
times = int(time_str[:-1]) * TIME_NS
|
||||
if time_str[-1] == TAOS_TIME_INIT[1]:
|
||||
times = int(time_str[:-1]) * TIME_US
|
||||
if time_str[-1] == TAOS_TIME_INIT[2]:
|
||||
times = int(time_str[:-1]) * TIME_MS
|
||||
if time_str[-1] == TAOS_TIME_INIT[3]:
|
||||
times = int(time_str[:-1]) * TIME_S
|
||||
if time_str[-1] == TAOS_TIME_INIT[4]:
|
||||
times = int(time_str[:-1]) * TIME_M
|
||||
if time_str[-1] == TAOS_TIME_INIT[5]:
|
||||
times = int(time_str[:-1]) * TIME_H
|
||||
if time_str[-1] == TAOS_TIME_INIT[6]:
|
||||
times = int(time_str[:-1]) * TIME_D
|
||||
if time_str[-1] == TAOS_TIME_INIT[7]:
|
||||
times = int(time_str[:-1]) * TIME_W
|
||||
if time_str[-1] == TAOS_TIME_INIT[8]:
|
||||
times = int(time_str[:-1]) * TIME_N
|
||||
if time_str[-1] == TAOS_TIME_INIT[9]:
|
||||
times = int(time_str[:-1]) * TIME_Y
|
||||
|
||||
if precision == "ms":
|
||||
return int(times)
|
||||
elif precision == "us":
|
||||
return int(times*1000)
|
||||
elif precision == "ns":
|
||||
return int(times*1000*1000)
|
||||
|
||||
def taosdStatus(self, state):
|
||||
tdLog.sleep(5)
|
||||
pstate = 0
|
||||
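The new `get_times` helper above turns a duration literal such as `10m` or `2h` into a timestamp delta in the requested precision (ms, us or ns), using the unit table added to util/constant.py. The same conversion is easy to express outside the Python test framework; the C sketch below is an illustration of the idea only, handles just the unit suffixes listed in the diff, and treats a month as 30 days and a year as 365 days exactly as the constants do.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Convert a duration literal like "10m" into milliseconds.
 * Suffixes follow the TAOS_TIME_INIT table: b=ns, u=us, a=ms, s, m, h, d, w, n(30d), y(365d). */
static int64_t durationToMs(const char *str, int *ok) {
    size_t  len  = strlen(str);
    char    unit = str[len - 1];
    int64_t val  = strtoll(str, NULL, 10);  /* parses the numeric prefix, stops at the suffix */
    int64_t ms   = 0;
    *ok = 1;
    switch (unit) {
        case 'b': ms = val / 1000000;             break;  /* nanoseconds, truncated to ms */
        case 'u': ms = val / 1000;                break;  /* microseconds, truncated to ms */
        case 'a': ms = val;                       break;  /* already milliseconds */
        case 's': ms = val * 1000LL;              break;
        case 'm': ms = val * 60LL * 1000;         break;
        case 'h': ms = val * 3600LL * 1000;       break;
        case 'd': ms = val * 86400LL * 1000;      break;
        case 'w': ms = val * 7LL * 86400 * 1000;  break;
        case 'n': ms = val * 30LL * 86400 * 1000; break;   /* "month" = 30 days */
        case 'y': ms = val * 365LL * 86400 * 1000; break;  /* "year"  = 365 days */
        default:  *ok = 0;                        break;
    }
    return ms;
}

/* Scale milliseconds into the target precision, mirroring the ms/us/ns branches. */
static int64_t msToPrecision(int64_t ms, const char *precision) {
    if (strcmp(precision, "us") == 0) return ms * 1000;
    if (strcmp(precision, "ns") == 0) return ms * 1000 * 1000;
    return ms;                          /* default "ms" */
}

int main(void) {
    int ok = 0;
    int64_t ms = durationToMs("10m", &ok);
    printf("10m -> %lld ms, %lld us (ok=%d)\n",
           (long long)msToPrecision(ms, "ms"), (long long)msToPrecision(ms, "us"), ok);
    return 0;
}
```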
@ -22,15 +22,19 @@
|
|||
|
||||
# ---- dnode
|
||||
./test.sh -f tsim/dnode/balance_replica1.sim
|
||||
#./test.sh -f tsim/dnode/balance_replica3.sim
|
||||
./test.sh -f tsim/dnode/create_dnode.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_mnode.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_qnode_snode.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_vnode_replica1.sim
|
||||
#./test.sh -f tsim/dnode/drop_dnode_has_vnode_replica3.sim
|
||||
#./test.sh -f tsim/dnode/drop_dnode_has_multi_vnode_replica1.sim
|
||||
#./test.sh -f tsim/dnode/drop_dnode_has_multi_vnode_replica3.sim
|
||||
#./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v1_leader.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_vnode_replica3.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_multi_vnode_replica1.sim
|
||||
./test.sh -f tsim/dnode/drop_dnode_has_multi_vnode_replica3.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica1.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v1_leader.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v1_follower.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v2.sim
|
||||
./test.sh -f tsim/dnode/redistribute_vgroup_replica3_v3.sim
|
||||
|
||||
# ---- insert
|
||||
./test.sh -f tsim/insert/basic0.sim
|
||||
|
@ -81,6 +85,7 @@
|
|||
./test.sh -f tsim/stream/basic1.sim
|
||||
./test.sh -f tsim/stream/basic2.sim
|
||||
./test.sh -f tsim/stream/distributeInterval0.sim
|
||||
# ./test.sh -f tsim/stream/distributeIntervalRetrive0.sim
|
||||
# ./test.sh -f tsim/stream/distributesession0.sim
|
||||
# ./test.sh -f tsim/stream/session0.sim
|
||||
./test.sh -f tsim/stream/session1.sim
|
||||
|
|
|
@ -11,15 +11,18 @@ if not "%2" == "" (
|
|||
)
|
||||
for /F "usebackq tokens=*" %%i in (!caseFile!) do (
|
||||
set line=%%i
|
||||
call :CheckSkipCase %%i
|
||||
if !skipCase! == false (
|
||||
if "!line:~,9!" == "./test.sh" (
|
||||
set /a a+=1
|
||||
echo !a! Processing %%i
|
||||
call :GetTimeSeconds !time!
|
||||
set time1=!_timeTemp!
|
||||
echo Start at !time!
|
||||
call !line:./test.sh=wtest.bat! > result_!a!.txt 2>error_!a!.txt
|
||||
call !line:./test.sh=wtest.bat! > result_!a!.txt 2>error_!a!.txt || set /a errorlevel=8
|
||||
if errorlevel 1 ( call :colorEcho 0c "failed" &echo. && set /a exitNum=8 && echo %%i >>failed.txt ) else ( call :colorEcho 0a "Success" &echo. )
|
||||
)
|
||||
)
|
||||
)
|
||||
exit !exitNum!
|
||||
|
||||
|
@ -56,3 +59,10 @@ for %%a in (%tt%) do (
|
|||
)
|
||||
set /a _timeTemp=(%hh%*60+%mm%)*60+%ss%
|
||||
goto :eof
|
||||
|
||||
:CheckSkipCase
|
||||
set skipCase=false
|
||||
if "%*" == "./test.sh -f tsim/query/scalarFunction.sim" ( set skipCase=true )
|
||||
if "%*" == "./test.sh -f tsim/stream/distributeInterval0.sim" ( set skipCase=true )
|
||||
if "%*" == "./test.sh -f tsim/sma/rsmaCreateInsertQuery.sim" ( set skipCase=true )
|
||||
:goto eof
|
|
@ -117,7 +117,6 @@ if $rows != 6 then
|
|||
return -1
|
||||
endi
|
||||
|
||||
return
|
||||
system sh/exec.sh -n dnode1 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode2 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode3 -s stop -x SIGINT
|
||||
|
|
|
@ -0,0 +1,362 @@
|
|||
system sh/stop_dnodes.sh
|
||||
system sh/deploy.sh -n dnode1 -i 1
|
||||
system sh/deploy.sh -n dnode2 -i 2
|
||||
system sh/deploy.sh -n dnode3 -i 3
|
||||
system sh/deploy.sh -n dnode4 -i 4
|
||||
system sh/deploy.sh -n dnode5 -i 5
|
||||
system sh/cfg.sh -n dnode1 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode2 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode3 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode4 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode5 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
|
||||
system sh/exec.sh -n dnode1 -s start
|
||||
system sh/exec.sh -n dnode2 -s start
|
||||
system sh/exec.sh -n dnode3 -s start
|
||||
system sh/exec.sh -n dnode4 -s start
|
||||
sql connect
|
||||
|
||||
print =============== step1 create dnode2
|
||||
sql create dnode $hostname port 7200
|
||||
sql create dnode $hostname port 7300
|
||||
sql create dnode $hostname port 7400
|
||||
sql create dnode $hostname port 7500
|
||||
|
||||
$x = 0
|
||||
step1:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 10 then
|
||||
print ====> dnode not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show dnodes
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35
|
||||
print ===> $data40 $data41 $data42 $data43 $data44 $data45
|
||||
if $rows != 5 then
|
||||
return -1
|
||||
endi
|
||||
if $data(1)[4] != ready then
|
||||
goto step1
|
||||
endi
|
||||
if $data(2)[4] != ready then
|
||||
goto step1
|
||||
endi
|
||||
if $data(3)[4] != ready then
|
||||
goto step1
|
||||
endi
|
||||
if $data(4)[4] != ready then
|
||||
goto step1
|
||||
endi
|
||||
|
||||
print =============== step2: create db
|
||||
sql create database d1 vgroups 4 replica 3
|
||||
|
||||
print =============== step32 wait vgroup2
|
||||
$x = 0
|
||||
step32:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step32
|
||||
endi
|
||||
|
||||
print =============== step33 wait vgroup3
|
||||
$x = 0
|
||||
step33:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(3)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(3)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(3)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step33
|
||||
endi
|
||||
|
||||
print =============== step34 wait vgroup4
|
||||
$x = 0
|
||||
step34:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(4)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step34
|
||||
endi
|
||||
|
||||
print =============== step35 wait vgroup5
|
||||
$x = 0
|
||||
step35:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(4)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step35
|
||||
endi
|
||||
|
||||
print =============== step36: create table
|
||||
sql use d1
|
||||
sql create table d1.st (ts timestamp, i int) tags (j int)
|
||||
sql create table d1.c1 using st tags(1)
|
||||
sql create table d1.c2 using st tags(1)
|
||||
sql create table d1.c3 using st tags(1)
|
||||
sql create table d1.c4 using st tags(1)
|
||||
sql create table d1.c5 using st tags(1)
|
||||
sql create table d1.c6 using st tags(1)
|
||||
sql show d1.tables
|
||||
if $rows != 6 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
print =============== step4: start dnode5
|
||||
system sh/exec.sh -n dnode5 -s start
|
||||
$x = 0
|
||||
step4:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 10 then
|
||||
print ====> dnode not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show dnodes
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35
|
||||
print ===> $data40 $data41 $data42 $data43 $data44 $data45
|
||||
if $rows != 5 then
|
||||
return -1
|
||||
endi
|
||||
if $data(1)[4] != ready then
|
||||
goto step4
|
||||
endi
|
||||
if $data(2)[4] != ready then
|
||||
goto step4
|
||||
endi
|
||||
if $data(3)[4] != ready then
|
||||
goto step4
|
||||
endi
|
||||
if $data(4)[4] != ready then
|
||||
goto step4
|
||||
endi
|
||||
if $data(5)[4] != ready then
|
||||
goto step4
|
||||
endi
|
||||
|
||||
print =============== step5: balance
|
||||
sql balance vgroup
|
||||
|
||||
print =============== step62 wait vgroup2
|
||||
$x = 0
|
||||
step62:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step62
|
||||
endi
|
||||
|
||||
print =============== step63 wait vgroup3
|
||||
$x = 0
|
||||
step63:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(3)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(3)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(3)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step63
|
||||
endi
|
||||
|
||||
print =============== step64 wait vgroup4
|
||||
$x = 0
|
||||
step64:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(4)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step64
|
||||
endi
|
||||
|
||||
print =============== step65 wait vgroup5
|
||||
$x = 0
|
||||
step65:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(4)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step65
|
||||
endi
|
||||
|
||||
|
||||
|
||||
print =============== step7: select data
|
||||
|
||||
sql show d1.tables
|
||||
print rows $rows
|
||||
if $rows != 6 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
system sh/exec.sh -n dnode1 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode2 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode3 -s stop -x SIGINT
|
|
@ -28,7 +28,7 @@ step1:
|
|||
sql show dnodes
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15
|
||||
if $rows != 3 then
|
||||
if $rows != 5 then
|
||||
return -1
|
||||
endi
|
||||
if $data(1)[4] != ready then
|
||||
|
@ -49,6 +49,128 @@ endi
|
|||
|
||||
print =============== step3 create database
|
||||
sql create database d1 vgroups 4 replica 3
|
||||
|
||||
print =============== step32 wait vgroup2
|
||||
$x = 0
|
||||
step32:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step32
|
||||
endi
|
||||
|
||||
print =============== step33 wait vgroup3
|
||||
$x = 0
|
||||
step33:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(3)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(3)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(3)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step33
|
||||
endi
|
||||
|
||||
print =============== step34 wait vgroup4
|
||||
$x = 0
|
||||
step34:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(4)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step34
|
||||
endi
|
||||
|
||||
print =============== step35 wait vgroup5
|
||||
$x = 0
|
||||
step35:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(4)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step35
|
||||
endi
|
||||
|
||||
print =============== step36: create table
|
||||
sql use d1
|
||||
sql create table d1.st (ts timestamp, i int) tags (j int)
|
||||
sql create table d1.c1 using st tags(1)
|
||||
|
@ -57,21 +179,6 @@ if $rows != 1 then
|
|||
return -1
|
||||
endi
|
||||
|
||||
sql show d1.vgroups
|
||||
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4]
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[3] != 2 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[5] != 3 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[7] != 4 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
print =============== step4: drop dnode 2
|
||||
system sh/exec.sh -n dnode5 -s start
|
||||
$x = 0
|
||||
|
@ -85,7 +192,7 @@ step4:
|
|||
sql show dnodes
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15
|
||||
if $rows != 3 then
|
||||
if $rows != 5 then
|
||||
return -1
|
||||
endi
|
||||
if $data(1)[4] != ready then
|
||||
|
@ -115,21 +222,133 @@ if $rows != 4 then
|
|||
return -1
|
||||
endi
|
||||
|
||||
print show d1.vgroups
|
||||
print =============== step62 wait vgroup2
|
||||
$x = 0
|
||||
step62:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4]
|
||||
if $rows != 1 then
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step62
|
||||
endi
|
||||
|
||||
print =============== step63 wait vgroup3
|
||||
$x = 0
|
||||
step63:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(3)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(3)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(3)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step63
|
||||
endi
|
||||
|
||||
print =============== step6: select data
|
||||
print =============== step64 wait vgroup4
|
||||
$x = 0
|
||||
step64:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(4)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step64
|
||||
endi
|
||||
|
||||
print =============== step35 wait vgroup5
|
||||
$x = 0
|
||||
step65:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15 $data16 $data17 $data18 $data19
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25 $data26 $data27 $data28 $data29
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35 $data36 $data37 $data38 $data39
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(4)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(4)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step65
|
||||
endi
|
||||
|
||||
print =============== step7: select data
|
||||
print ===> $data00 $data01
|
||||
sql show d1.tables
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
return
|
||||
system sh/exec.sh -n dnode1 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode2 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode3 -s stop -x SIGINT
|
||||
|
|
|
@ -49,6 +49,50 @@ endi
|
|||
|
||||
print =============== step3 create database
|
||||
sql create database d1 vgroups 1 replica 3
|
||||
$leaderExist = 0
|
||||
$leaderVnode = 0
|
||||
$follower1 = 0
|
||||
$follower2 = 0
|
||||
|
||||
$x = 0
|
||||
step3:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[4] == leader then
|
||||
$leaderExist = 1
|
||||
$leaderVnode = 2
|
||||
$follower1 = 3
|
||||
$follower2 = 4
|
||||
endi
|
||||
if $data(2)[6] == leader then
|
||||
$leaderExist = 1
|
||||
$leaderVnode = 3
|
||||
$follower1 = 2
|
||||
$follower2 = 4
|
||||
endi
|
||||
if $data(2)[8] == leader then
|
||||
$leaderExist = 1
|
||||
$leaderVnode = 4
|
||||
$follower1 = 2
|
||||
$follower2 = 3
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step3
|
||||
endi
|
||||
|
||||
print leader $leaderVnode
|
||||
print follower1 $follower1
|
||||
print follower2 $follower2
|
||||
|
||||
sql use d1
|
||||
sql create table d1.st (ts timestamp, i int) tags (j int)
|
||||
sql create table d1.c1 using st tags(1)
|
||||
|
@ -72,13 +116,6 @@ if $data(2)[7] != 4 then
|
|||
return -1
|
||||
endi
|
||||
|
||||
system sh/exec.sh -n dnode4 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode3 -s stop -x SIGINT
|
||||
|
||||
|
||||
return
|
||||
|
||||
|
||||
print =============== step4: drop dnode 2
|
||||
system sh/exec.sh -n dnode5 -s start
|
||||
$x = 0
|
||||
|
@ -92,7 +129,7 @@ step4:
|
|||
sql show dnodes
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15
|
||||
if $rows != 3 then
|
||||
if $rows != 5 then
|
||||
return -1
|
||||
endi
|
||||
if $data(1)[4] != ready then
|
||||
|
@ -116,8 +153,11 @@ sql drop dnode 2
|
|||
|
||||
print show dnodes;
|
||||
sql show dnodes;
|
||||
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4]
|
||||
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4]
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35
|
||||
print ===> $data40 $data41 $data42 $data43 $data44 $data45
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
|
@ -128,9 +168,9 @@ print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4]
|
|||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
#if $data(2)[3] != 3 then
|
||||
# return -1
|
||||
#endi
|
||||
if $data(2)[3] != 5 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
print =============== step6: select data
|
||||
sql show d1.tables
|
||||
|
@ -138,7 +178,37 @@ if $rows != 1 then
|
|||
return -1
|
||||
endi
|
||||
|
||||
return
|
||||
$leaderExist = 0
|
||||
$leaderVnode = 0
|
||||
$follower1 = 0
|
||||
$follower2 = 0
|
||||
|
||||
$x = 0
|
||||
step6:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 60 then
|
||||
print ====> db not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
if $data(2)[4] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[6] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $data(2)[8] == leader then
|
||||
$leaderExist = 1
|
||||
endi
|
||||
if $leaderExist != 1 then
|
||||
goto step6
|
||||
endi
|
||||
|
||||
system sh/exec.sh -n dnode1 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode2 -s stop -x SIGINT
|
||||
system sh/exec.sh -n dnode3 -s stop -x SIGINT
|
||||
|
|
|
@ -0,0 +1,178 @@
|
|||
system sh/stop_dnodes.sh
|
||||
system sh/deploy.sh -n dnode1 -i 1
|
||||
system sh/deploy.sh -n dnode2 -i 2
|
||||
system sh/deploy.sh -n dnode3 -i 3
|
||||
system sh/deploy.sh -n dnode4 -i 4
|
||||
system sh/cfg.sh -n dnode1 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode2 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode3 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode4 -c transPullupInterval -v 1
|
||||
system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
|
||||
system sh/exec.sh -n dnode1 -s start
|
||||
system sh/exec.sh -n dnode2 -s start
|
||||
sql connect
|
||||
sql create user u1 pass 'taosdata'
|
||||
|
||||
print =============== step1 create dnode2
|
||||
sql create dnode $hostname port 7200
|
||||
sql create dnode $hostname port 7300
|
||||
sql create dnode $hostname port 7400
|
||||
|
||||
$x = 0
|
||||
step1:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 10 then
|
||||
print ====> dnode not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show dnodes
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(1)[4] != ready then
|
||||
goto step1
|
||||
endi
|
||||
if $data(2)[4] != ready then
|
||||
goto step1
|
||||
endi
|
||||
|
||||
print =============== step2: create db
|
||||
sql create database d1 vgroups 1 replica 1
|
||||
|
||||
system sh/exec.sh -n dnode3 -s start
|
||||
system sh/exec.sh -n dnode4 -s start
|
||||
$x = 0
|
||||
step2:
|
||||
$x = $x + 1
|
||||
sleep 1000
|
||||
if $x == 10 then
|
||||
print ====> dnode not ready!
|
||||
return -1
|
||||
endi
|
||||
sql show dnodes
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05
|
||||
print ===> $data10 $data11 $data12 $data13 $data14 $data15
|
||||
print ===> $data20 $data21 $data22 $data23 $data24 $data25
|
||||
print ===> $data30 $data31 $data32 $data33 $data34 $data35
|
||||
if $rows != 4 then
|
||||
return -1
|
||||
endi
|
||||
if $data(1)[4] != ready then
|
||||
goto step2
|
||||
endi
|
||||
if $data(2)[4] != ready then
|
||||
goto step2
|
||||
endi
|
||||
if $data(3)[4] != ready then
|
||||
goto step2
|
||||
endi
|
||||
if $data(4)[4] != ready then
|
||||
goto step2
|
||||
endi
|
||||
|
||||
print =============== step3: create database
|
||||
sql use d1
|
||||
sql create table d1.st (ts timestamp, i int) tags (j int)
|
||||
sql create table d1.c1 using st tags(1)
|
||||
sql show d1.tables
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
print =============== step40:
|
||||
print redistribute vgroup 2 dnode 3
|
||||
sql redistribute vgroup 2 dnode 3
|
||||
sql show d1.vgroups
|
||||
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
|
||||
|
||||
sql show d1.tables
|
||||
if $rows != 1 then
|
||||
return -1
|
||||
endi
|
||||
|
||||
print =============== step41:
print redistribute vgroup 2 dnode 4
sql redistribute vgroup 2 dnode 4
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step42:
print redistribute vgroup 2 dnode 2
sql redistribute vgroup 2 dnode 2
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step43:
print redistribute vgroup 2 dnode 3
sql redistribute vgroup 2 dnode 3
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step44:
print redistribute vgroup 2 dnode 4
sql redistribute vgroup 2 dnode 4
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step45:
print redistribute vgroup 2 dnode 2
sql redistribute vgroup 2 dnode 2
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step46:
print redistribute vgroup 2 dnode 3
sql redistribute vgroup 2 dnode 3
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step47:
print redistribute vgroup 2 dnode 2
sql redistribute vgroup 2 dnode 2
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode2 -s stop -x SIGINT
system sh/exec.sh -n dnode3 -s stop -x SIGINT
system sh/exec.sh -n dnode4 -s stop -x SIGINT

@ -65,7 +65,7 @@ sql_error redistribute vgroup 3 dnode 6 dnode 3 dnode 4
# vgroup does not exist
sql_error redistribute vgroup 3 dnode 5 dnode 3 dnode 4
# unchanged distribution
sql_error redistribute vgroup 2 dnode 2 dnode 3 dnode 4
# sql_error redistribute vgroup 2 dnode 2 dnode 3 dnode 4
# not enough vnodes
sql_error redistribute vgroup 2 dnode 1 dnode 3 dnode 4
# offline vnodes

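# Each `sql_error` line above asserts that the statement fails: redistributing a vgroup
# that does not exist, redistributing onto the same dnode set (no change), and targeting
# a dnode that cannot host another vnode are all expected to be rejected, so the test
# passes only when the command returns an error.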
@ -176,8 +176,6 @@ if $rows != 1 then
return -1
endi

return

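# The bare `return` above ends the test here, so the step32 block that follows is
# currently skipped; it is kept in the script, presumably for later re-enabling.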
print =============== step32:
print redistribute vgroup 2 dnode $leaderVnode dnode $follower1 dnode 5
sql redistribute vgroup 2 dnode $leaderVnode dnode $follower1 dnode 5

@ -0,0 +1,245 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/deploy.sh -n dnode2 -i 2
system sh/deploy.sh -n dnode3 -i 3
system sh/deploy.sh -n dnode4 -i 4
system sh/deploy.sh -n dnode5 -i 5
system sh/deploy.sh -n dnode6 -i 6
system sh/cfg.sh -n dnode1 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode2 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode3 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode4 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode5 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode6 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
system sh/exec.sh -n dnode1 -s start
system sh/exec.sh -n dnode2 -s start
system sh/exec.sh -n dnode3 -s start
system sh/exec.sh -n dnode4 -s start
sql connect
sql create user u1 pass 'taosdata'

print =============== step1 create dnodes
sql create dnode $hostname port 7200
sql create dnode $hostname port 7300
sql create dnode $hostname port 7400
sql create dnode $hostname port 7500
sql create dnode $hostname port 7600

$x = 0
step1:
$x = $x + 1
sleep 1000
if $x == 10 then
print ====> dnode not ready!
return -1
endi
sql show dnodes
print ===> $data00 $data01 $data02 $data03 $data04 $data05
print ===> $data10 $data11 $data12 $data13 $data14 $data15
print ===> $data20 $data21 $data22 $data23 $data24 $data25
print ===> $data30 $data31 $data32 $data33 $data34 $data35
print ===> $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 6 then
return -1
endi
if $data(1)[4] != ready then
goto step1
endi
if $data(2)[4] != ready then
goto step1
endi
if $data(3)[4] != ready then
goto step1
endi
if $data(4)[4] != ready then
goto step1
endi

print =============== step2: create db
sql create database d1 vgroups 1 replica 3

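# With `replica 3`, the single vgroup of d1 gets three vnodes placed on three different
# dnodes (dnode1 is excluded because supportVnodes is set to 0 above), and one of the
# replicas must be elected leader before the database becomes usable; the step3 loop
# further down waits for that election.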
system sh/exec.sh -n dnode5 -s start
system sh/exec.sh -n dnode6 -s start
$x = 0
step2:
$x = $x + 1
sleep 1000
if $x == 10 then
print ====> dnode not ready!
return -1
endi
sql show dnodes
print ===> $data00 $data01 $data02 $data03 $data04 $data05
print ===> $data10 $data11 $data12 $data13 $data14 $data15
print ===> $data20 $data21 $data22 $data23 $data24 $data25
print ===> $data30 $data31 $data32 $data33 $data34 $data35
print ===> $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 6 then
return -1
endi
if $data(1)[4] != ready then
goto step2
endi
if $data(2)[4] != ready then
goto step2
endi
if $data(3)[4] != ready then
goto step2
endi
if $data(4)[4] != ready then
goto step2
endi
if $data(5)[4] != ready then
goto step2
endi
if $data(6)[4] != ready then
goto step2
endi

print =============== step3: create database
$leaderExist = 0
$leaderVnode = 0
$follower1 = 0
$follower2 = 0

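# The loop below reads `show d1.vgroups` and checks columns 4, 6 and 8 of the single
# result row, which in this layout hold the role (leader or follower) of the replicas
# on dnode 2, dnode 3 and dnode 4 respectively; whichever slot reports `leader` decides
# $leaderVnode, and the other two dnodes are recorded as followers. The loop re-polls
# for up to 60 seconds until a leader has been elected.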
$x = 0
step3:
$x = $x + 1
sleep 1000
if $x == 60 then
print ====> db not ready!
return -1
endi
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
if $rows != 1 then
return -1
endi
if $data(2)[4] == leader then
$leaderExist = 1
$leaderVnode = 2
$follower1 = 3
$follower2 = 4
endi
if $data(2)[6] == leader then
$leaderExist = 1
$leaderVnode = 3
$follower1 = 2
$follower2 = 4
endi
if $data(2)[8] == leader then
$leaderExist = 1
$leaderVnode = 4
$follower1 = 2
$follower2 = 3
endi
if $leaderExist != 1 then
goto step3
endi

print leader $leaderVnode
print follower1 $follower1
print follower2 $follower2

sql use d1
sql create table d1.st (ts timestamp, i int) tags (j int)
sql create table d1.c1 using st tags(1)
sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step40:
print redistribute vgroup 2 dnode $leaderVnode dnode 5 dnode 6
sql redistribute vgroup 2 dnode $leaderVnode dnode 5 dnode 6
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

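# Steps 41 to 47 below keep exercising `redistribute vgroup 2 dnode A dnode B dnode C`,
# cycling the three replicas across the original members ($leaderVnode, $follower1,
# $follower2) and the spare dnodes 5 and 6 while always naming three distinct dnodes;
# after every move, `show d1.tables` confirms the vgroup is still serving metadata.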
print =============== step41:
print redistribute vgroup 2 dnode 5 dnode $follower2 dnode $follower1
sql redistribute vgroup 2 dnode 5 dnode $follower2 dnode $follower1
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step42:
print redistribute vgroup 2 dnode $follower2 dnode 6 dnode $leaderVnode
sql redistribute vgroup 2 dnode $follower2 dnode 6 dnode $leaderVnode
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step43:
print redistribute vgroup 2 dnode $follower1 dnode 5 dnode 6
sql redistribute vgroup 2 dnode $follower1 dnode 5 dnode 6
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step44:
print redistribute vgroup 2 dnode $follower2 dnode 5 dnode $leaderVnode
sql redistribute vgroup 2 dnode $follower2 dnode 5 dnode $leaderVnode
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step45:
print redistribute vgroup 2 dnode $leaderVnode dnode $follower1 dnode 6
sql redistribute vgroup 2 dnode $leaderVnode dnode $follower1 dnode 6
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step46:
print redistribute vgroup 2 dnode $follower1 dnode $follower2 dnode 5
sql redistribute vgroup 2 dnode $follower1 dnode $follower2 dnode 5
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step47:
print redistribute vgroup 2 dnode $follower1 dnode 6 dnode $leaderVnode
sql redistribute vgroup 2 dnode $follower1 dnode 6 dnode $leaderVnode
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode2 -s stop -x SIGINT
system sh/exec.sh -n dnode3 -s stop -x SIGINT
system sh/exec.sh -n dnode4 -s stop -x SIGINT
system sh/exec.sh -n dnode5 -s stop -x SIGINT

@ -0,0 +1,263 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/deploy.sh -n dnode2 -i 2
system sh/deploy.sh -n dnode3 -i 3
system sh/deploy.sh -n dnode4 -i 4
system sh/deploy.sh -n dnode5 -i 5
system sh/deploy.sh -n dnode6 -i 6
system sh/deploy.sh -n dnode7 -i 7
system sh/deploy.sh -n dnode8 -i 8
system sh/cfg.sh -n dnode1 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode2 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode3 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode4 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode5 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode6 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode7 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode8 -c transPullupInterval -v 1
system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
system sh/exec.sh -n dnode1 -s start
system sh/exec.sh -n dnode2 -s start
system sh/exec.sh -n dnode3 -s start
system sh/exec.sh -n dnode4 -s start
sql connect
sql create user u1 pass 'taosdata'

print =============== step1 create dnodes
sql create dnode $hostname port 7200
sql create dnode $hostname port 7300
sql create dnode $hostname port 7400
sql create dnode $hostname port 7500
sql create dnode $hostname port 7600
sql create dnode $hostname port 7700
sql create dnode $hostname port 7800

$x = 0
step1:
$x = $x + 1
sleep 1000
if $x == 10 then
print ====> dnode not ready!
return -1
endi
sql show dnodes
print ===> $data00 $data01 $data02 $data03 $data04 $data05
print ===> $data10 $data11 $data12 $data13 $data14 $data15
print ===> $data20 $data21 $data22 $data23 $data24 $data25
print ===> $data30 $data31 $data32 $data33 $data34 $data35
print ===> $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 8 then
return -1
endi
if $data(1)[4] != ready then
goto step1
endi
if $data(2)[4] != ready then
goto step1
endi
if $data(3)[4] != ready then
goto step1
endi
if $data(4)[4] != ready then
goto step1
endi

print =============== step2: create db
sql create database d1 vgroups 1 replica 3

system sh/exec.sh -n dnode5 -s start
system sh/exec.sh -n dnode6 -s start
system sh/exec.sh -n dnode7 -s start
system sh/exec.sh -n dnode8 -s start
$x = 0
step2:
$x = $x + 1
sleep 1000
if $x == 10 then
print ====> dnode not ready!
return -1
endi
sql show dnodes
print ===> $data00 $data01 $data02 $data03 $data04 $data05
print ===> $data10 $data11 $data12 $data13 $data14 $data15
print ===> $data20 $data21 $data22 $data23 $data24 $data25
print ===> $data30 $data31 $data32 $data33 $data34 $data35
print ===> $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 8 then
return -1
endi
if $data(1)[4] != ready then
goto step2
endi
if $data(2)[4] != ready then
goto step2
endi
if $data(3)[4] != ready then
goto step2
endi
if $data(4)[4] != ready then
goto step2
endi
if $data(5)[4] != ready then
goto step2
endi
if $data(6)[4] != ready then
goto step2
endi
if $data(7)[4] != ready then
goto step2
endi
if $data(8)[4] != ready then
goto step2
endi

print =============== step3: create database
$leaderExist = 0
$leaderVnode = 0
$follower1 = 0
$follower2 = 0

$x = 0
step3:
$x = $x + 1
sleep 1000
if $x == 60 then
print ====> db not ready!
return -1
endi
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
if $rows != 1 then
return -1
endi
if $data(2)[4] == leader then
$leaderExist = 1
$leaderVnode = 2
$follower1 = 3
$follower2 = 4
endi
if $data(2)[6] == leader then
$leaderExist = 1
$leaderVnode = 3
$follower1 = 2
$follower2 = 4
endi
if $data(2)[8] == leader then
$leaderExist = 1
$leaderVnode = 4
$follower1 = 2
$follower2 = 3
endi
if $leaderExist != 1 then
goto step3
endi

print leader $leaderVnode
print follower1 $follower1
print follower2 $follower2

sql use d1
sql create table d1.st (ts timestamp, i int) tags (j int)
sql create table d1.c1 using st tags(1)
sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step40:
print redistribute vgroup 2 dnode 7 dnode 5 dnode 6
sql redistribute vgroup 2 dnode 7 dnode 5 dnode 6
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step41:
print redistribute vgroup 2 dnode $leaderVnode dnode $follower2 dnode $follower1
sql redistribute vgroup 2 dnode $leaderVnode dnode $follower2 dnode $follower1
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step42:
print redistribute vgroup 2 dnode 7 dnode 6 dnode 8
sql redistribute vgroup 2 dnode 7 dnode 6 dnode 8
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step43:
print redistribute vgroup 2 dnode $follower1 dnode $follower2 dnode 5
sql redistribute vgroup 2 dnode $follower1 dnode $follower2 dnode 5
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

return

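# The unconditional `return` above stops this test at step43, so steps 44 to 47 below do
# not run in this version of the script; they appear to be kept for future use.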
print =============== step44:
print redistribute vgroup 2 dnode 6 dnode 7 dnode $leaderVnode
sql redistribute vgroup 2 dnode 6 dnode 7 dnode $leaderVnode
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step45:
print redistribute vgroup 2 dnode $follower1 dnode $follower2 dnode 8
sql redistribute vgroup 2 dnode $follower1 dnode $follower2 dnode 8
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step46:
print redistribute vgroup 2 dnode $leaderVnode dnode 6 dnode 5
sql redistribute vgroup 2 dnode $leaderVnode dnode 6 dnode 5
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

print =============== step47:
print redistribute vgroup 2 dnode $follower1 dnode 7 dnode 8
sql redistribute vgroup 2 dnode $follower1 dnode 7 dnode 8
sql show d1.vgroups
print ===> $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09

sql show d1.tables
if $rows != 1 then
return -1
endi

system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode2 -s stop -x SIGINT
system sh/exec.sh -n dnode3 -s stop -x SIGINT
system sh/exec.sh -n dnode4 -s stop -x SIGINT
system sh/exec.sh -n dnode5 -s stop -x SIGINT

@ -41,7 +41,7 @@ sql create table ts1 using st tags(1,1,1);
sql create table ts2 using st tags(2,2,2);
sql create table ts3 using st tags(3,2,2);
sql create table ts4 using st tags(4,2,2);
sql create stream stream_t1 trigger at_once into streamtST1 as select _wstartts, count(*) c1, count(d) c2 , sum(a) c3 , max(b) c4, min(c) c5 from st interval(10s);
sql create stream stream_t1 trigger at_once watermark 1d into streamtST1 as select _wstartts, count(*) c1, count(d) c2 , sum(a) c3 , max(b) c4, min(c) c5 from st interval(10s);

sleep 1000

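# The change in this hunk adds `watermark 1d` to the stream definition: the watermark
# tells the stream how long to wait for late or out-of-order rows before a 10-second
# window result is treated as final, so a one-day watermark lets the stream tolerate
# data arriving up to a day behind the newest timestamp.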
@ -0,0 +1,291 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/deploy.sh -n dnode2 -i 2

system sh/exec.sh -n dnode1 -s start
sleep 50
sql connect

sql create dnode $hostname2 port 7200

system sh/exec.sh -n dnode2 -s start

print ===== step1
$x = 0
step1:
$x = $x + 1
sleep 1000
if $x == 10 then
print ====> dnode not ready!
return -1
endi
sql show dnodes
print ===> $data00 $data01 $data02 $data03 $data04 $data05
print ===> $data10 $data11 $data12 $data13 $data14 $data15
if $rows != 2 then
return -1
endi
if $data(1)[4] != ready then
goto step1
endi
if $data(2)[4] != ready then
goto step1
endi

print ===== step2

sql create database test vgroups 4;
sql use test;
sql create stable st(ts timestamp, a int, b int , c int, d double) tags(ta int,tb int,tc int);
sql create table ts1 using st tags(1,1,1);
sql create table ts2 using st tags(2,2,2);
sql create table ts3 using st tags(3,2,2);
sql create table ts4 using st tags(4,2,2);
sql create stream stream_t1 trigger at_once into streamtST1 as select _wstartts, count(*) c1, sum(a) c3 , max(b) c4, min(c) c5 from st interval(10s);

sleep 1000

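# The inserts below spread rows across the four child tables of `st` in the 4-vgroup
# `test` database. Each verification loop then re-reads the stream target table
# `streamtST1` once per second and compares the count(*) and sum(a) columns of the
# 10-second window rows against expected values; a mismatch retries via goto, and the
# test fails after 10 attempts.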
sql insert into ts1 values(1648791213001,1,12,3,1.0);
sql insert into ts2 values(1648791213001,1,12,3,1.0);
sql insert into ts1 values(1648791213002,NULL,NULL,NULL,NULL);
sql insert into ts2 values(1648791213002,NULL,NULL,NULL,NULL);

sql insert into ts1 values(1648791223002,2,2,3,1.1);
sql insert into ts1 values(1648791233003,3,2,3,2.1);
sql insert into ts2 values(1648791243004,4,2,43,73.1);

sql insert into ts1 values(1648791213002,24,22,23,4.1) (1648791243005,4,20,3,3.1);

sleep 1000

sql insert into ts3 values(1648791213001,12,12,13,14.1) (1648791243005,14,30,30,30.1);

$loop_count = 0
loop1:
sleep 1000
sql select * from streamtST1;

$loop_count = $loop_count + 1
if $loop_count == 10 then
return -1
endi

# row 0
if $data01 != 5 then
print =====data01=$data01
goto loop1
endi

if $data02 != 14 then
print =====data02=$data02
goto loop1
endi

# row 1
if $data11 != 1 then
print =====data11=$data11
goto loop1
endi

if $data12 != 2 then
print =====data12=$data12
goto loop1
endi

# row 2
if $data21 != 1 then
print =====data21=$data21
goto loop1
endi

if $data22 != 3 then
print =====data22=$data22
goto loop1
endi

# row 3
if $data31 != 3 then
print =====data31=$data31
goto loop1
endi

if $data32 != 22 then
print =====data32=$data32
goto loop1
endi

print loop1 over

sql insert into ts1 values(1648791223008,4,2,30,3.1) (1648791213009,4,2,3,3.1) (1648791233010,4,2,3,3.1) (1648791243011,4,2,3,3.1) (1648791243012,34,32,33,3.1);

$loop_count = 0
loop2:
sleep 1000
sql select * from streamtST1;

$loop_count = $loop_count + 1
if $loop_count == 10 then
return -1
endi

# row 0
if $data01 != 6 then
print =====data01=$data01
goto loop2
endi

if $data02 != 18 then
print =====data02=$data02
goto loop2
endi

# row 1
if $data11 != 2 then
print =====data11=$data11
goto loop2
endi

if $data12 != 6 then
print =====data12=$data12
goto loop2
endi

# row 2
if $data21 != 2 then
print =====data21=$data21
goto loop2
endi

if $data22 != 7 then
print =====data22=$data22
goto loop2
endi

# row 3
if $data31 != 5 then
print =====data31=$data31
goto loop2
endi

if $data32 != 60 then
print =====data32=$data32
goto loop2
endi

print loop2 over

sql insert into ts4 values(1648791223008,4,2,30,3.1) (1648791213009,4,2,3,3.1) (1648791233010,4,2,3,3.1);

$loop_count = 0
loop3:
sleep 1000
sql select * from streamtST1;

$loop_count = $loop_count + 1
if $loop_count == 10 then
return -1
endi

# row 0
if $data01 != 7 then
print =====data01=$data01
goto loop3
endi

if $data02 != 22 then
print =====data02=$data02
goto loop3
endi

# row 1
if $data11 != 3 then
print =====data11=$data11
goto loop3
endi

if $data12 != 10 then
print =====data12=$data12
goto loop3
endi

# row 2
if $data21 != 3 then
print =====data21=$data21
goto loop3
endi

if $data22 != 11 then
print =====data22=$data22
goto loop3
endi

# row 3
if $data31 != 5 then
print =====data31=$data31
goto loop3
endi

if $data32 != 60 then
print =====data32=$data32
goto loop3
endi

print loop3 over

$loop_count = 0
loop4:
sleep 1000
sql select * from streamtST1;

$loop_count = $loop_count + 1
if $loop_count == 10 then
return -1
endi

# row 0
if $data01 != 7 then
print =====data01=$data01
goto loop4
endi

if $data02 != 22 then
print =====data02=$data02
goto loop4
endi

# row 1
if $data11 != 3 then
print =====data11=$data11
goto loop4
endi

if $data12 != 10 then
print =====data12=$data12
goto loop4
endi

# row 2
if $data21 != 3 then
print =====data21=$data21
goto loop4
endi

if $data22 != 11 then
print =====data22=$data22
goto loop4
endi

# row 3
if $data31 != 5 then
print =====data31=$data31
goto loop4
endi

if $data32 != 60 then
print =====data32=$data32
goto loop4
endi

print loop4 over

system sh/stop_dnodes.sh