Merge branch '3.0' of github.com:taosdata/TDengine into TEST/3.0/TS-4421

Chris Zhai 2024-03-22 15:49:28 +08:00
commit 3a5a47b8f7
12 changed files with 70 additions and 47 deletions

View File

@ -49,7 +49,7 @@ TDengine ODBC driver supports two kinds of connections to TDengine cluster, nati
4.2 [Connection Type]: required field; choose "WebSocket"
4.3 [URL]: required field, the URL for the ODBC data source, for example, `http://localhost:6041` is the URL for a local TDengine cluster, `https://gw.cloud.taosdata.com?token=your_token` is the URL for a TDengine cloud service.
4.3 [URL]: required field, the URL for the ODBC data source, for example, `http://localhost:6041` is the URL for a local TDengine cluster.
4.4 [Database]: optional field, the default database to access

View File

@ -128,7 +128,7 @@ If you want to start your application in a container, you need to add the corres
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y wget
ENV TDENGINE_VERSION=3.0.0.0
RUN wget -c https://www.taosdata.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& cd TDengine-client-${TDENGINE_VERSION} \
&& ./install_client.sh \
@ -229,7 +229,7 @@ Here is the full Dockerfile:
```docker
FROM golang:1.17.6-buster as builder
ENV TDENGINE_VERSION=3.0.0.0
RUN wget -c https://www.taosdata.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& cd TDengine-client-${TDENGINE_VERSION} \
&& ./install_client.sh \
@ -245,7 +245,7 @@ RUN go build
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y wget
ENV TDENGINE_VERSION=3.0.0.0
RUN wget -c https://www.taosdata.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& cd TDengine-client-${TDENGINE_VERSION} \
&& ./install_client.sh \

View File

@ -8,7 +8,7 @@ description: This document describes how to deploy TDengine on Kubernetes.
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. This document first introduces, step by step, how to use YAML files to create a highly available TDengine cluster from scratch for production use, and then highlights common operations on TDengine in a Kubernetes environment.
To meet [high availability ](https://docs.taosdata.com/tdinternal/high-availability/)requirements, clusters need to meet the following requirements:
To meet [high availability](../../tdinternal/high-availability/) requirements, clusters need to meet the following requirements:
- 3 or more dnodes: TDengine does not allow multiple vnodes of the same vgroup to reside on the same dnode at the same time, so a database created with 3 replicas requires 3 or more dnodes
- 3 mnodes: the mnode is responsible for managing the entire TDengine cluster. By default a TDengine cluster has only one mnode; if the dnode hosting that mnode is dropped, the entire cluster becomes unavailable.

View File

@ -19,7 +19,7 @@ Users should not use taosdump to back up raw data, environment settings, hardwar
There are two ways to install taosdump:
- Install the official taosTools installer. Please download taosTools from the [Release History](https://docs.taosdata.com/releases/tools/) page, then install it.
- Install the official taosTools installer. Please download taosTools from the [Release History](../../releases/tools/) page, then install it.
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.

View File

@ -19,7 +19,7 @@ With TDengine ODBC driver, PowerBI can access time series data stored in TDengin
## Install Driver
Depending on your TDengine server version, download the appropriate version of the TDengine client package from the TDengine website [Download Link](https://docs.taosdata.com/get-started/package/), or from TDengine Explorer if you are using a local TDengine cluster. Install the TDengine client package on the same Windows machine where Power BI is running.
Depending on your TDengine server version, download the appropriate version of the TDengine client package from the TDengine website [Download Link](../../get-started/package/), or from TDengine Explorer if you are using a local TDengine cluster. Install the TDengine client package on the same Windows machine where Power BI is running.
### Configure Data Source
@ -38,7 +38,7 @@ To better use Power BI to analyze the data stored in TDengine, you need to under
1. Dimension: typically categorical (text) data that describes such information as device, collection point, and model. In the supertable template of TDengine, we use tag columns to store dimension information. You can use SQL like `select distinct tbname, tag1, tag2 from supertable` to get the dimensions.
2. Metric: quantitative (numeric) fields that can be aggregated, for example with SUM, AVERAGE, or MINIMUM. At a collection frequency of one record per second, a single year produces 31,536,000 records, far too many to import into Power BI efficiently. In TDengine, you can combine data partition queries and window partition queries with window-related pseudo columns to import downsampled data into Power BI. For more details, please refer to [TDengine Specialized Queries](https://docs.taosdata.com/taos-sql/distinguished/).
2. Metric: quantitative (numeric) fields that can be aggregated, for example with SUM, AVERAGE, or MINIMUM. At a collection frequency of one record per second, a single year produces 31,536,000 records, far too many to import into Power BI efficiently. In TDengine, you can combine data partition queries and window partition queries with window-related pseudo columns to import downsampled data into Power BI. For more details, please refer to [TDengine Specialized Queries](../../taos-sql/distinguished/).
- Window partition query: for example, a thermal meter collects one reading per second, but you need to query the average temperature every 10 minutes. You can use a window clause to get the downsampled data you need. The corresponding SQL looks like `select tbname, _wstart date, avg(temperature) temp from table interval(10m)`, in which `_wstart` is a pseudo column indicating the start time of a window, `10m` is the duration of the window, and `avg(temperature)` is the aggregate value inside the window.

View File

@ -184,8 +184,6 @@ TDengine supports the standard JDBC 3.0 interface for manipulating databases, bu
To facilitate historical data migration, we provide a plug-in for the data synchronization tool DataX, which can automatically write data into TDengine. Note that DataX's automatic data migration supports only the single-value data model.
For the specific usage of DataX and how to use DataX to write data to TDengine, please refer to [DataX-based TDengine Data Migration Tool](https://www.taosdata.com/engineering/16401.html).
After migrating via DataX, we found that starting multiple processes and migrating many metrics simultaneously can significantly improve the efficiency of migrating historical data. The following records of the migration process are provided as a reference for application migration.
| Number of DataX instances (concurrent processes) | Migration speed (records/second) |

View File

@ -6,7 +6,7 @@ description: This document provides download links for all released versions of
taosTools installation packages can be downloaded at the following links:
For other historical version installers, please visit [here](https://www.taosdata.com/all-downloads).
For other historical version installers, please visit [here](https://tdengine.com/downloads/historical/).
import Release from "/components/ReleaseV3";

View File

@ -537,15 +537,25 @@ static int32_t mndProcessArbCheckSyncTimer(SRpcMsg *pReq) {
bool member0IsTimeout = mndCheckArbMemberHbTimeout(&arbGroupDup, 0, nowMs);
bool member1IsTimeout = mndCheckArbMemberHbTimeout(&arbGroupDup, 1, nowMs);
int32_t currentAssignedDnodeId = arbGroupDup.assignedLeader.dnodeId;
SArbAssignedLeader* pAssignedLeader = &arbGroupDup.assignedLeader;
int32_t currentAssignedDnodeId = pAssignedLeader->dnodeId;
// 1. both of the two members are timeout => skip
// 1. has assigned && is sync => send req
if (currentAssignedDnodeId != 0 && arbGroupDup.isSync == true) {
(void)mndSendArbSetAssignedLeaderReq(pMnode, currentAssignedDnodeId, vgId, arbToken, term,
pAssignedLeader->token);
mInfo("vgId:%d, arb send set assigned leader to dnodeId:%d", vgId, currentAssignedDnodeId);
sdbRelease(pSdb, pArbGroup);
continue;
}
// 2. both of the two members are timeout => skip
if (member0IsTimeout && member1IsTimeout) {
sdbRelease(pSdb, pArbGroup);
continue;
}
// 2. no member is timeout => check sync
// 3. no member is timeout => check sync
if (member0IsTimeout == false && member1IsTimeout == false) {
// no assigned leader and not sync
if (currentAssignedDnodeId == 0 && !arbGroupDup.isSync) {
@ -556,7 +566,7 @@ static int32_t mndProcessArbCheckSyncTimer(SRpcMsg *pReq) {
continue;
}
// 3. one of the members is timeout => set assigned leader
// 4. one of the members is timeout => set assigned leader
int32_t candidateIndex = member0IsTimeout ? 1 : 0;
SArbGroupMember *pMember = &arbGroupDup.members[candidateIndex];
@ -581,26 +591,18 @@ static int32_t mndProcessArbCheckSyncTimer(SRpcMsg *pReq) {
continue;
}
// no assigned leader => write to sdb
if (currentAssignedDnodeId == 0) {
SArbGroup newGroup = {0};
mndArbGroupDupObj(&arbGroupDup, &newGroup);
mndArbGroupSetAssignedLeader(&newGroup, candidateIndex);
if (mndPullupArbUpdateGroup(pMnode, &newGroup) != 0) {
mError("vgId:%d, arb failed to pullup set assigned leader to dnodeId:%d, since %s", vgId, pMember->info.dnodeId,
terrstr());
sdbRelease(pSdb, pArbGroup);
return -1;
}
mInfo("vgId:%d, arb pull up set assigned leader to dnodeId:%d", vgId, pMember->info.dnodeId);
// is sync && no assigned leader => write to sdb
SArbGroup newGroup = {0};
mndArbGroupDupObj(&arbGroupDup, &newGroup);
mndArbGroupSetAssignedLeader(&newGroup, candidateIndex);
if (mndPullupArbUpdateGroup(pMnode, &newGroup) != 0) {
mError("vgId:%d, arb failed to pullup set assigned leader to dnodeId:%d, since %s", vgId, pMember->info.dnodeId,
terrstr());
sdbRelease(pSdb, pArbGroup);
continue;
return -1;
}
// isSync == true && dnodeId match => send request to dnode
(void)mndSendArbSetAssignedLeaderReq(pMnode, pMember->info.dnodeId, vgId, arbToken, term, pMember->state.token);
mInfo("vgId:%d, arb send set assigned leader to dnodeId:%d", vgId, pMember->info.dnodeId);
mInfo("vgId:%d, arb pull up set assigned leader to dnodeId:%d", vgId, pMember->info.dnodeId);
sdbRelease(pSdb, pArbGroup);
}
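
Taken together, the revised handler walks a four-case decision ladder for each vgroup. Below is a minimal sketch of that flow, assuming illustrative types and helper names rather than TDengine's actual API:

```c
#include <stdbool.h>

// Illustrative condensation of the decision ladder above; the struct and
// enum are hypothetical, not TDengine types.
typedef struct {
  bool member0Timeout;   // heartbeat timeout flag of member 0
  bool member1Timeout;   // heartbeat timeout flag of member 1
  bool isSync;           // replication between the two members is in sync
  int  assignedDnodeId;  // 0 means no assigned leader yet
} ArbState;

typedef enum { ARB_SEND_REQ, ARB_SKIP, ARB_CHECK_SYNC, ARB_WRITE_SDB } ArbAction;

static ArbAction arbDecide(const ArbState *s) {
  // 1. already assigned and in sync => (re)send the set-assigned-leader request
  if (s->assignedDnodeId != 0 && s->isSync) return ARB_SEND_REQ;
  // 2. both members timed out => nothing safe to do, skip this vgroup
  if (s->member0Timeout && s->member1Timeout) return ARB_SKIP;
  // 3. neither member timed out => only check/update the sync state
  if (!s->member0Timeout && !s->member1Timeout) return ARB_CHECK_SYNC;
  // 4. exactly one member timed out => persist the survivor as assigned
  //    leader through sdb, then notify it
  return ARB_WRITE_SDB;
}
```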
@ -1055,13 +1057,21 @@ static int32_t mndRetrieveArbGroups(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->isSync, false);
if (pGroup->assignedLeader.dnodeId != 0) {
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->assignedLeader.dnodeId, false);
char token[TSDB_ARB_TOKEN_SIZE + VARSTR_HEADER_SIZE] = {0};
STR_WITH_MAXSIZE_TO_VARSTR(token, pGroup->assignedLeader.token, TSDB_ARB_TOKEN_SIZE + VARSTR_HEADER_SIZE);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetVal(pColInfo, numOfRows, (const char *)token, false);
} else {
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetNULL(pColInfo, numOfRows);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetNULL(pColInfo, numOfRows);
}
taosThreadMutexUnlock(&pGroup->mutex);

View File

@ -397,11 +397,11 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
if (pCfg->precision < TSDB_MIN_PRECISION && pCfg->precision > TSDB_MAX_PRECISION) return -1;
if (pCfg->compression < TSDB_MIN_COMP_LEVEL || pCfg->compression > TSDB_MAX_COMP_LEVEL) return -1;
if (pCfg->replications < TSDB_MIN_DB_REPLICA || pCfg->replications > TSDB_MAX_DB_REPLICA) return -1;
#ifdef TD_ENTERPRISE
if ((pCfg->replications == 2) ^ (pCfg->withArbitrator == TSDB_MAX_DB_WITH_ARBITRATOR)) return -1;
#else
if (pCfg->replications != 1 && pCfg->replications != 3) return -1;
#endif
if (pCfg->strict < TSDB_DB_STRICT_OFF || pCfg->strict > TSDB_DB_STRICT_ON) return -1;
if (pCfg->schemaless < TSDB_DB_SCHEMALESS_OFF || pCfg->schemaless > TSDB_DB_SCHEMALESS_ON) return -1;
if (pCfg->cacheLast < TSDB_CACHE_MODEL_NONE || pCfg->cacheLast > TSDB_CACHE_MODEL_BOTH) return -1;
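
The `TD_ENTERPRISE` branch above ties `replications == 2` to `withArbitrator` with an XOR: the configuration is rejected whenever exactly one of the two conditions holds. A hedged sketch of the same rule (the constant's value is assumed for illustration):

```c
#include <stdbool.h>

// Hypothetical stand-in for TSDB_MAX_DB_WITH_ARBITRATOR; value assumed.
#define WITH_ARBITRATOR_ON 1

// Valid combinations: (2 replicas, arbitrator on) or (any other replica
// count, arbitrator off). The XOR in the original fires exactly when the
// two conditions disagree, which is the invalid case.
static bool arbCfgValid(int replications, int withArbitrator) {
  bool twoReplicas  = (replications == 2);
  bool arbitratorOn = (withArbitrator == WITH_ARBITRATOR_ON);
  return twoReplicas == arbitratorOn;
}
```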
@ -438,13 +438,14 @@ static int32_t mndCheckInChangeDbCfg(SMnode *pMnode, SDbCfg *pOldCfg, SDbCfg *pN
if (pNewCfg->daysToKeep0 < pNewCfg->daysPerFile) return -1;
if (pNewCfg->daysToKeep0 > pNewCfg->daysToKeep1) return -1;
if (pNewCfg->daysToKeep1 > pNewCfg->daysToKeep2) return -1;
if (pNewCfg->keepTimeOffset < TSDB_MIN_KEEP_TIME_OFFSET || pNewCfg->keepTimeOffset > TSDB_MAX_KEEP_TIME_OFFSET) return -1;
if (pNewCfg->keepTimeOffset < TSDB_MIN_KEEP_TIME_OFFSET || pNewCfg->keepTimeOffset > TSDB_MAX_KEEP_TIME_OFFSET)
return -1;
if (pNewCfg->walFsyncPeriod < TSDB_MIN_FSYNC_PERIOD || pNewCfg->walFsyncPeriod > TSDB_MAX_FSYNC_PERIOD) return -1;
if (pNewCfg->walLevel < TSDB_MIN_WAL_LEVEL || pNewCfg->walLevel > TSDB_MAX_WAL_LEVEL) return -1;
if (pNewCfg->cacheLast < TSDB_CACHE_MODEL_NONE || pNewCfg->cacheLast > TSDB_CACHE_MODEL_BOTH) return -1;
if (pNewCfg->cacheLastSize < TSDB_MIN_DB_CACHE_SIZE || pNewCfg->cacheLastSize > TSDB_MAX_DB_CACHE_SIZE) return -1;
if (pNewCfg->replications < TSDB_MIN_DB_REPLICA || pNewCfg->replications > TSDB_MAX_DB_REPLICA) return -1;
#ifdef TD_ENTERPRISE
if ((pNewCfg->replications == 2) ^ (pNewCfg->withArbitrator == TSDB_MAX_DB_WITH_ARBITRATOR)) return -1;
if (pNewCfg->replications == 2 && pNewCfg->withArbitrator == TSDB_MAX_DB_WITH_ARBITRATOR) {
if (pOldCfg->replications != 1 && pOldCfg->replications != 2) {
@ -452,9 +453,13 @@ static int32_t mndCheckInChangeDbCfg(SMnode *pMnode, SDbCfg *pOldCfg, SDbCfg *pN
return -1;
}
}
if (pNewCfg->replications != 2 && pOldCfg->replications == 2) {
terrno = TSDB_CODE_OPS_NOT_SUPPORT;
return -1;
}
#else
if (pNewCfg->replications != 1 && pNewCfg->replications != 3) return -1;
#endif
if (pNewCfg->sstTrigger < TSDB_MIN_STT_TRIGGER || pNewCfg->sstTrigger > TSDB_MAX_STT_TRIGGER) return -1;
if (pNewCfg->minRows < TSDB_MIN_MINROWS_FBLOCK || pNewCfg->minRows > TSDB_MAX_MINROWS_FBLOCK) return -1;
if (pNewCfg->maxRows < TSDB_MIN_MAXROWS_FBLOCK || pNewCfg->maxRows > TSDB_MAX_MAXROWS_FBLOCK) return -1;
@ -940,7 +945,7 @@ static int32_t mndSetDbCfgFromAlterDbReq(SDbObj *pDb, SAlterDbReq *pAlter) {
terrno = 0;
}
if (pAlter->withArbitrator != pDb->cfg.withArbitrator) {
if (pAlter->withArbitrator >= TSDB_MIN_DB_WITH_ARBITRATOR && pAlter->withArbitrator != pDb->cfg.withArbitrator) {
pDb->cfg.withArbitrator = pAlter->withArbitrator;
pDb->vgVersion++;
terrno = 0;

View File

@ -1299,7 +1299,7 @@ SNode* createAlterDatabaseOptions(SAstCreateContext* pCxt) {
pOptions->sstTrigger = -1;
pOptions->tablePrefix = -1;
pOptions->tableSuffix = -1;
pOptions->withArbitrator = TSDB_DEFAULT_DB_WITH_ARBITRATOR;
pOptions->withArbitrator = -1;
return (SNode*)pOptions;
}
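
This parser change cooperates with the `mndSetDbCfgFromAlterDbReq` guard shown earlier: `-1` now serves as an "option not specified" sentinel, so an ALTER DATABASE statement that omits the option no longer rewrites the stored value. A minimal sketch of the pattern, with illustrative names that mirror the diff only loosely:

```c
// Illustrative sketch of the "-1 means unspecified" sentinel pattern;
// the struct fields and constant are not TDengine's actual definitions.
#define OPT_UNSET (-1)

typedef struct { int withArbitrator; } AlterReq;             // parsed options
typedef struct { int withArbitrator; int vgVersion; } DbCfg;  // stored config

static void applyAlter(DbCfg *cfg, const AlterReq *req) {
  // Apply only when the user explicitly set the option AND it changes the
  // stored value; an option the statement never mentioned stays OPT_UNSET.
  if (req->withArbitrator != OPT_UNSET && req->withArbitrator != cfg->withArbitrator) {
    cfg->withArbitrator = req->withArbitrator;
    cfg->vgVersion++;  // bump version so vnodes pick up the change
  }
}
```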

View File

@ -91,6 +91,7 @@ int32_t syncNodeOnAppendEntriesReply(SSyncNode* ths, const SRpcMsg* pRpcMsg) {
taosThreadMutexLock(&ths->arbTokenMutex);
syncUtilGenerateArbToken(ths->myNodeInfo.nodeId, ths->vgId, ths->arbToken);
sInfo("vgId:%d, assigned leader to leader, arbToken:%s", ths->vgId, ths->arbToken);
taosThreadMutexUnlock(&ths->arbTokenMutex);
}
} else {

View File

@ -302,8 +302,16 @@ int32_t syncBecomeAssignedLeader(SSyncNode* ths, SRpcMsg* pRpcMsg) {
goto _OVER;
}
syncNodeBecomeAssignedLeader(ths);
if (syncNodeAppendNoop(ths) < 0) {
sError("vgId:%d, assigned leader failed to append noop entry since %s", ths->vgId, terrstr());
}
}
errcode = TSDB_CODE_SUCCESS;
} else {
sInfo("vgId:%d, skip to set assigned leader, token mismatch, local:%s, msg:%s", ths->vgId, ths->arbToken,
req.memberToken);
goto _OVER;
}
SVArbSetAssignedLeaderRsp rsp = {0};
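
The newly added `syncNodeAppendNoop` call follows standard Raft practice: a leader may only count replication toward commitment for entries of its own term, so appending an empty entry right after taking over is what eventually commits anything left from earlier terms. A toy sketch of the idea, with hypothetical types:

```c
// Toy model (hypothetical types, not TDengine's) of why a fresh leader
// appends a no-op: Raft commits an old-term entry only indirectly, once an
// entry of the leader's own term is replicated on a majority.
typedef struct { long term; int isNoop; } LogEntry;
typedef struct { long currentTerm; LogEntry log[64]; int logLen; } Node;

static void becomeAssignedLeader(Node *n) {
  n->currentTerm++;                       // leadership starts a new term
  LogEntry noop = { n->currentTerm, 1 };  // empty payload in the new term
  n->log[n->logLen++] = noop;             // replicated like any other entry
}
```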
@ -1056,6 +1064,7 @@ SSyncNode* syncNodeOpen(SSyncInfo* pSyncInfo, int32_t vnodeVersion) {
pSyncNode->arbTerm = -1;
taosThreadMutexInit(&pSyncNode->arbTokenMutex, NULL);
syncUtilGenerateArbToken(pSyncNode->myNodeInfo.nodeId, pSyncInfo->vgId, pSyncNode->arbToken);
sInfo("vgId:%d, arb token:%s", pSyncNode->vgId, pSyncNode->arbToken);
// init peersNum, peers, peersId
pSyncNode->peersNum = pSyncNode->raftCfg.cfg.totalReplicaNum - 1;