Merge remote-tracking branch 'upstream/develop' into odbc2
commit 79e7343495

@@ -0,0 +1,267 @@
[](https://travis-ci.org/taosdata/TDengine)
[](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
[](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[](https://bestpractices.coreinfrastructure.org/projects/4201)
[](https://snapcraft.io/tdengine)

[](https://www.taosdata.com)

简体中文 | [English](./README.md)

# What is TDengine?

TDengine is a big data platform designed and optimized by TAOS Data for IoT, Connected Vehicles, Industrial IoT, and IT operations and monitoring. Besides a time-series database core that is more than 10x faster than conventional databases, it provides caching, data subscription, stream computing, and other features to minimize the complexity of development and operations. All core code, including the clustering feature, is open source under the AGPL v3.0 license.

- 10x faster performance. With its innovative storage design, a single core can handle at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points, over ten times faster than existing general-purpose databases.

- 1/5 the hardware or cloud-service cost. Thanks to its performance, computing resources are less than 1/5 of a typical big data solution; with columnar storage and advanced compression, storage usage is less than 1/10 of a general-purpose database.

- A full-stack engine for time-series data. Database, message queue, cache, and stream computing are fused into one engine, so applications no longer need to integrate Kafka/Redis/HBase/Spark and the like, which greatly reduces development and maintenance costs.

- Powerful analytics. Whether the data is ten years or one second old, you query it simply by specifying a time range. Data can be aggregated over time or across devices, and ad-hoc queries can be issued at any time via Shell/Python/R/Matlab.

- Seamless integration with third-party tools. Telegraf, Grafana, EMQ X, Prometheus, Matlab, and R can be connected without a single line of code. MQTT, OPC, Hadoop, Spark, and others will be supported later, and BI tools will connect seamlessly as well.

- Zero operations cost, zero learning cost. Installation and clustering are done in seconds, with no sharding or partitioning needed and real-time backup. TDengine uses standard SQL, supports JDBC and RESTful, provides connectors for Python/Java/C/C++/Go/Node.js, and feels similar to MySQL, so there is nothing new to learn.

# Documentation

TDengine is an efficient platform to store, query, and analyze time-series big data, designed and optimized for IoT, Connected Vehicles, Industrial IoT, and operations monitoring. You can use it just like the relational database MySQL, but you are advised to read the documentation carefully before getting started, especially [Data Model](https://www.taosdata.com/cn/documentation/architecture) and [Data Modeling](https://www.taosdata.com/cn/documentation/model). Besides this documentation, you are welcome to [download the white paper](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf).
# Building

At the moment, the TDengine 2.0 server can only be installed and run on Linux; Windows, macOS, and other platforms will be supported later. The client can be installed and run on Windows or Linux, and an application on any OS can instead connect to the server taosd through the RESTful interface. Supported CPUs are X64/ARM64/MIPS64/Alpha64, with ARM32, RISC-V, and other architectures to follow. You can install from [source code](https://www.taosdata.com/cn/getting-started/#通过源码安装) or from [installation packages](https://www.taosdata.com/cn/getting-started/#通过安装包安装). This quick guide only covers installing from source.
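
For instance, once taosd is running, a client on any OS can submit SQL over HTTP without installing any driver. A minimal sketch, assuming the default REST port 6041 and the default root/taosdata credentials:

```bash
# Run a SQL statement through the RESTful interface
# (assumes defaults: REST service on port 6041, user root, password taosdata)
curl -u root:taosdata -d 'show databases;' http://localhost:6041/rest/sql
```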

## Install build tools

### Ubuntu 16.04 and above & Debian:

```bash
sudo apt-get install -y gcc cmake build-essential git
```

### Ubuntu 14.04:

```bash
sudo apt-get install -y gcc cmake3 build-essential git binutils-2.26
export PATH=/usr/lib/binutils-2.26/bin:$PATH
```

To compile and package the JDBC driver source, you also need Java JDK 8 or later and Apache Maven 2.7 or later installed.
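
A quick way to confirm the Java toolchain before building the driver (assuming java and mvn are already on your PATH):

```bash
java -version   # should report 1.8 (JDK 8) or later
mvn -version    # should report Maven 2.7 or later
```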

Install OpenJDK 8:

```bash
sudo apt-get install -y openjdk-8-jdk
```

Install Apache Maven:

```bash
sudo apt-get install -y maven
```

### CentOS 7:

```bash
sudo yum install -y gcc gcc-c++ make cmake git
```

Install OpenJDK 8:

```bash
sudo yum install -y java-1.8.0-openjdk
```

Install Apache Maven:

```bash
sudo yum install -y maven
```

### CentOS 8 & Fedora:

```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
```

Install OpenJDK 8:

```bash
sudo dnf install -y java-1.8.0-openjdk
```

Install Apache Maven:

```bash
sudo dnf install -y maven
```
## Get the source code

First, clone the source code from GitHub:

```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```

The Go connector and the Grafana plugin live in separate repositories. If you want to install them, run this command in the TDengine directory:

```bash
git submodule update --init --recursive
```
## Build TDengine

### On Linux:

```bash
mkdir debug && cd debug
cmake .. && cmake --build .
```

On the X86-64, X86, arm64, and arm32 platforms, the TDengine build script detects the machine architecture automatically. You can also set the CPUTYPE parameter manually to specify the CPU type, such as aarch64 or aarch32.

aarch64:

```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```

aarch32:

```bash
cmake .. -DCPUTYPE=aarch32 && cmake --build .
```
### On Windows:

If you use Visual Studio 2013:

Open cmd.exe and, when executing vcvarsall.bat, pass "x86_amd64" for a 64-bit OS or "x86" for a 32-bit OS.

```bash
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < x86_amd64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```

If you use Visual Studio 2019 or 2017:

Open cmd.exe and, when executing vcvarsall.bat, pass "x64" for a 64-bit OS or "x86" for a 32-bit OS.

```bash
mkdir debug && cd debug
"c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```

Alternatively, open the "Visual Studio < 2019 | 2017 >" folder in the Start menu, pick "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" according to your system, open the command prompt, and run:

```bash
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
### On Mac OS X:

Install Xcode command line tools and cmake. XCode 11.4+ is required on Catalina and Big Sur.

```bash
mkdir debug && cd debug
cmake .. && cmake --build .
```
# Installing

If you prefer not to install TDengine, you can run it straight from the shell (see "Quick Run" below). To install TDengine after it has been built:

```bash
make install
```

See [directory structure](https://www.taosdata.com/cn/documentation/administrator#directories) to learn more about the directories and files that are generated on your system.
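
For reference, a default Linux install lays its files out roughly as follows (the same paths the docker-compose files later in this commit bind-mount); a sketch, not an exhaustive list:

```bash
ls /etc/taos        # taos.cfg, the system configuration file
ls /var/lib/taos    # data files
ls /var/log/taos    # log files
ls /usr/local/taos  # binaries, connector libraries, examples
```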

After a successful installation, start the TDengine service in a terminal:

```bash
taosd
```

Then use the TDengine shell to connect to the service. In a terminal, enter:

```bash
taos
```

If the TDengine shell connects to the service successfully, it prints a welcome message and version information; otherwise, it prints an error message.
## Quick Run

After TDengine has been built, run the following in a terminal:

```bash
./build/bin/taosd -c test/cfg
```

In another terminal, connect to the server with the TDengine shell:

```bash
./build/bin/taos -c test/cfg
```

The "-c test/cfg" option specifies the directory that holds the system configuration file.
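
That directory contains a taos.cfg file with entries such as the first endpoint to connect to and the local FQDN; the cluster scripts later in this commit write exactly these keys. A minimal sketch, using a hypothetical directory mycfg and host name td01:

```bash
mkdir -p mycfg
cat > mycfg/taos.cfg <<EOF
firstEp td01:6030
fqdn td01
EOF
./build/bin/taosd -c mycfg
```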

# Try TDengine

In the TDengine shell, you can create and drop databases and tables, and insert into and query them, all through SQL commands:

```bash
create database demo;
use demo;
create table t (ts timestamp, speed int);
insert into t values ('2019-07-15 00:00:00', 10);
insert into t values ('2019-07-15 01:00:00', 20);
select * from t;
          ts          | speed |
===================================
 19-07-15 00:00:00.000|    10|
 19-07-15 01:00:00.000|    20|
Query OK, 2 row(s) in set (0.001700s)
```
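
The same statements can also be run non-interactively from the command line; a sketch, assuming your build's shell supports the -s option for executing a command and exiting:

```bash
./build/bin/taos -c test/cfg -s "select * from demo.t;"
```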
# Developing with TDengine

## Official connectors

TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, to help you develop applications quickly:

- Java
- C/C++
- Python
- Go
- RESTful API
- Node.js

## Third-party connectors

The TDengine community ecosystem also offers some very friendly third-party connectors; their source code is available at the links below.

- [Rust Connector](https://github.com/taosdata/TDengine/tree/master/tests/examples/rust)
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
# Running and adding test cases

TDengine's test framework and all of its test cases are fully open source.

Click [here](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) to learn how to run tests and how to add new test cases.

# Become a community contributor

Click [here](https://www.taosdata.com/cn/contributor/) to learn how to become a TDengine contributor.

# Join the technical discussion group

TDengine's official community group, the "IoT Big Data Group", is open to everyone; you are welcome to join the discussion. Search for the WeChat ID "tdengine" and add Little T as a friend to be invited into the group.

@@ -6,6 +6,8 @@
 [](https://www.taosdata.com)
 
+English | [简体中文](./README-CN.md)
+
 # What is TDengine?
 
 TDengine is an open-sourced big data platform under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html), designed and optimized for the Internet of Things (IoT), Connected Cars, Industrial IoT, and IT Infrastructure and Application Monitoring. Besides the 10x faster time-series database, it provides caching, stream computing, message queuing and other functionalities to reduce the complexity and cost of development and operation.
@@ -2223,7 +2223,7 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
       bool multiColOutput = taosArrayGetSize(pItem->pNode->pParam) > 1;
       setResultColName(name, pItem, cvtFunc.originFuncId, &pParamElem->pNode->colInfo, multiColOutput);
 
-      if (setExprInfoForFunctions(pCmd, pQueryInfo, pSchema, cvtFunc, name, colIndex + i, &index, finalResult) != 0) {
+      if (setExprInfoForFunctions(pCmd, pQueryInfo, pSchema, cvtFunc, name, colIndex++, &index, finalResult) != 0) {
         return TSDB_CODE_TSC_INVALID_SQL;
       }
     }
@@ -3092,18 +3092,26 @@ static int32_t doExtractColumnFilterInfo(SSqlCmd* pCmd, SQueryInfo* pQueryInfo,
   }
 
   int32_t retVal = TSDB_CODE_SUCCESS;
 
+  int32_t bufLen = 0;
+  if (IS_NUMERIC_TYPE(pRight->value.nType)) {
+    bufLen = 60;
+  } else {
+    bufLen = pRight->value.nLen + 1;
+  }
+
   if (pExpr->tokenId == TK_LE || pExpr->tokenId == TK_LT) {
     retVal = tVariantDump(&pRight->value, (char*)&pColumnFilter->upperBndd, colType, false);
 
   // TK_GT,TK_GE,TK_EQ,TK_NE are based on the pColumn->lowerBndd
   } else if (colType == TSDB_DATA_TYPE_BINARY) {
-    pColumnFilter->pz = (int64_t)calloc(1, pRight->value.nLen + TSDB_NCHAR_SIZE);
+    pColumnFilter->pz = (int64_t)calloc(1, bufLen * TSDB_NCHAR_SIZE);
     pColumnFilter->len = pRight->value.nLen;
     retVal = tVariantDump(&pRight->value, (char*)pColumnFilter->pz, colType, false);
 
   } else if (colType == TSDB_DATA_TYPE_NCHAR) {
     // pRight->value.nLen + 1 is larger than the actual nchar string length
-    pColumnFilter->pz = (int64_t)calloc(1, (pRight->value.nLen + 1) * TSDB_NCHAR_SIZE);
+    pColumnFilter->pz = (int64_t)calloc(1, bufLen * TSDB_NCHAR_SIZE);
     retVal = tVariantDump(&pRight->value, (char*)pColumnFilter->pz, colType, false);
     size_t len = twcslen((wchar_t*)pColumnFilter->pz);
     pColumnFilter->len = len * TSDB_NCHAR_SIZE;
@@ -22,7 +22,7 @@ extern "C" {
 
 int32_t tpInit();
 void tpCleanUp();
-void tpUpdateTs(int32_t *seq, void *pMsg);
+void tpUpdateTs(int32_t vgId, int64_t *seq, void *pMsg);
 
 #ifdef __cplusplus
 }
@@ -997,7 +997,7 @@ void rand_string(char *str, int size) {
   if (size > 0) {
     //--size;
     int n;
-    for (n = 0; n < size; n++) {
+    for (n = 0; n < size - 1; n++) {
       int key = rand_tinyint() % (int)(sizeof(charset) - 1);
       str[n] = charset[key];
     }
@@ -55,7 +55,7 @@ int32_t mnodeProcessAlterDbMsg(SMnodeMsg *pMsg);
 #ifndef _TOPIC
 int32_t tpInit() { return 0; }
 void tpCleanUp() {}
-void tpUpdateTs(int32_t *seq, void *pMsg) {}
+void tpUpdateTs(int32_t vgId, int64_t *seq, void *pMsg) {}
 #endif
 
 static void mnodeDestroyDb(SDbObj *pDb) {
@@ -334,7 +334,7 @@ static int64_t syncRetrieveWal(SSyncPeer *pPeer) {
       break;
     }
 
-    code = (int32_t)taosSendFile(pPeer->syncFd, sfd, NULL, size);
+    code = taosSendFile(pPeer->syncFd, sfd, NULL, size);
     close(sfd);
     if (code < 0) {
       sError("%s, failed to send wal:%s for retrieve since %s, code:0x%" PRIx64, pPeer->id, fname, strerror(errno), code);
@@ -213,9 +213,9 @@ void *tsdbGetTableTagVal(const void* pTable, int32_t colId, int16_t type, int16_
   char *val = tdGetKVRowValOfCol(((STable*)pTable)->tagVal, colId);
   assert(type == pCol->type && bytes == pCol->bytes);
 
-  if (val != NULL && IS_VAR_DATA_TYPE(type)) {
-    assert(varDataLen(val) < pCol->bytes);
-  }
+  // if (val != NULL && IS_VAR_DATA_TYPE(type)) {
+  //   assert(varDataLen(val) < pCol->bytes);
+  // }
 
   return val;
 }
@@ -152,14 +152,14 @@ static int32_t tsdbSyncSendMeta(SSyncH *pSynch) {
     return -1;
   }
 
-  int32_t writeLen = (int32_t)mf.info.size;
-  tsdbInfo("vgId:%d, metafile:%s will be sent, size:%d", REPO_ID(pRepo), mf.f.aname, writeLen);
+  int64_t writeLen = mf.info.size;
+  tsdbInfo("vgId:%d, metafile:%s will be sent, size:%" PRId64, REPO_ID(pRepo), mf.f.aname, writeLen);
 
-  int32_t ret = (int32_t)taosSendFile(pSynch->socketFd, TSDB_FILE_FD(&mf), 0, writeLen);
+  int64_t ret = taosSendFile(pSynch->socketFd, TSDB_FILE_FD(&mf), 0, writeLen);
   if (ret != writeLen) {
     terrno = TAOS_SYSTEM_ERROR(errno);
-    tsdbError("vgId:%d, failed to send metafile since %s, ret:%d writeLen:%d", REPO_ID(pRepo), tstrerror(terrno), ret,
-              writeLen);
+    tsdbError("vgId:%d, failed to send metafile since %s, ret:%" PRId64 " writeLen:%" PRId64, REPO_ID(pRepo),
+              tstrerror(terrno), ret, writeLen);
     tsdbCloseMFile(&mf);
     return -1;
   }
@@ -217,18 +217,18 @@ static int32_t tsdbSyncRecvMeta(SSyncH *pSynch) {
 
   tsdbInfo("vgId:%d, metafile:%s is created", REPO_ID(pRepo), mf.f.aname);
 
-  int32_t readLen = (int32_t)pSynch->pmf->info.size;
-  int32_t ret = taosCopyFds(pSynch->socketFd, TSDB_FILE_FD(&mf), readLen);
+  int64_t readLen = pSynch->pmf->info.size;
+  int64_t ret = taosCopyFds(pSynch->socketFd, TSDB_FILE_FD(&mf), readLen);
   if (ret != readLen) {
     terrno = TAOS_SYSTEM_ERROR(errno);
-    tsdbError("vgId:%d, failed to recv metafile since %s, ret:%d readLen:%d", REPO_ID(pRepo), tstrerror(terrno), ret,
-              readLen);
+    tsdbError("vgId:%d, failed to recv metafile since %s, ret:%" PRId64 " readLen:%" PRId64, REPO_ID(pRepo),
+              tstrerror(terrno), ret, readLen);
     tsdbCloseMFile(&mf);
     tsdbRemoveMFile(&mf);
     return -1;
   }
 
-  tsdbInfo("vgId:%d, metafile is received, size:%d", REPO_ID(pRepo), readLen);
+  tsdbInfo("vgId:%d, metafile is received, size:%" PRId64, REPO_ID(pRepo), readLen);
 
   mf.info = pSynch->pmf->info;
   tsdbCloseMFile(&mf);
@@ -463,12 +463,12 @@ static int32_t tsdbSyncRecvDFileSetArray(SSyncH *pSynch) {
       tsdbInfo("vgId:%d, file:%s will be received, osize:%" PRIu64 " rsize:%" PRIu64, REPO_ID(pRepo),
                pDFile->f.aname, pDFile->info.size, pRDFile->info.size);
 
-      int32_t writeLen = (int32_t)pRDFile->info.size;
-      int32_t ret = taosCopyFds(pSynch->socketFd, pDFile->fd, writeLen);
+      int64_t writeLen = pRDFile->info.size;
+      int64_t ret = taosCopyFds(pSynch->socketFd, pDFile->fd, writeLen);
       if (ret != writeLen) {
         terrno = TAOS_SYSTEM_ERROR(errno);
-        tsdbError("vgId:%d, failed to recv file:%s since %s, ret:%d writeLen:%d", REPO_ID(pRepo), pDFile->f.aname,
-                  tstrerror(terrno), ret, writeLen);
+        tsdbError("vgId:%d, failed to recv file:%s since %s, ret:%" PRId64 " writeLen:%" PRId64, REPO_ID(pRepo),
+                  pDFile->f.aname, tstrerror(terrno), ret, writeLen);
         tsdbCloseDFileSet(&fset);
         tsdbRemoveDFileSet(&fset);
         return -1;
@@ -476,7 +476,7 @@ static int32_t tsdbSyncRecvDFileSetArray(SSyncH *pSynch) {
 
       // Update new file info
       pDFile->info = pRDFile->info;
-      tsdbInfo("vgId:%d, file:%s is received, size:%d", REPO_ID(pRepo), pDFile->f.aname, writeLen);
+      tsdbInfo("vgId:%d, file:%s is received, size:%" PRId64, REPO_ID(pRepo), pDFile->f.aname, writeLen);
     }
 
     tsdbCloseDFileSet(&fset);
@@ -575,14 +575,14 @@ static int32_t tsdbSyncSendDFileSet(SSyncH *pSynch, SDFileSet *pSet) {
       return -1;
     }
 
-    int32_t writeLen = (int32_t)df.info.size;
-    tsdbInfo("vgId:%d, file:%s will be sent, size:%d", REPO_ID(pRepo), df.f.aname, writeLen);
+    int64_t writeLen = df.info.size;
+    tsdbInfo("vgId:%d, file:%s will be sent, size:%" PRId64, REPO_ID(pRepo), df.f.aname, writeLen);
 
-    int32_t ret = (int32_t)taosSendFile(pSynch->socketFd, TSDB_FILE_FD(&df), 0, writeLen);
+    int64_t ret = taosSendFile(pSynch->socketFd, TSDB_FILE_FD(&df), 0, writeLen);
     if (ret != writeLen) {
       terrno = TAOS_SYSTEM_ERROR(errno);
-      tsdbError("vgId:%d, failed to send file:%s since %s, ret:%d writeLen:%d", REPO_ID(pRepo), df.f.aname,
-                tstrerror(terrno), ret, writeLen);
+      tsdbError("vgId:%d, failed to send file:%s since %s, ret:%" PRId64 " writeLen:%" PRId64, REPO_ID(pRepo),
+                df.f.aname, tstrerror(terrno), ret, writeLen);
       tsdbCloseDFile(&df);
       return -1;
     }
@@ -28,7 +28,7 @@ int32_t taosReadn(SOCKET sock, char *buffer, int32_t len);
 int32_t taosWriteMsg(SOCKET fd, void *ptr, int32_t nbytes);
 int32_t taosReadMsg(SOCKET fd, void *ptr, int32_t nbytes);
 int32_t taosNonblockwrite(SOCKET fd, char *ptr, int32_t nbytes);
-int32_t taosCopyFds(SOCKET sfd, int32_t dfd, int64_t len);
+int64_t taosCopyFds(SOCKET sfd, int32_t dfd, int64_t len);
 int32_t taosSetNonblocking(SOCKET sock, int32_t on);
 
 SOCKET taosOpenUdpSocket(uint32_t localIp, uint16_t localPort);
@@ -465,36 +465,36 @@ void tinet_ntoa(char *ipstr, uint32_t ip) {
 #define COPY_SIZE 32768
 // sendfile shall be used
 
-int32_t taosCopyFds(SOCKET sfd, int32_t dfd, int64_t len) {
+int64_t taosCopyFds(SOCKET sfd, int32_t dfd, int64_t len) {
   int64_t leftLen;
-  int32_t readLen, writeLen;
+  int64_t readLen, writeLen;
   char temp[COPY_SIZE];
 
   leftLen = len;
 
   while (leftLen > 0) {
     if (leftLen < COPY_SIZE)
-      readLen = (int32_t)leftLen;
+      readLen = leftLen;
     else
       readLen = COPY_SIZE;  // 4K
 
-    int32_t retLen = taosReadMsg(sfd, temp, (int32_t)readLen);
+    int64_t retLen = taosReadMsg(sfd, temp, (int32_t)readLen);
     if (readLen != retLen) {
-      uError("read error, readLen:%d retLen:%d len:%" PRId64 " leftLen:%" PRId64 ", reason:%s", readLen, retLen, len,
-             leftLen, strerror(errno));
+      uError("read error, readLen:%" PRId64 " retLen:%" PRId64 " len:%" PRId64 " leftLen:%" PRId64 ", reason:%s",
+             readLen, retLen, len, leftLen, strerror(errno));
       return -1;
     }
 
-    writeLen = taosWriteMsg(dfd, temp, readLen);
+    writeLen = taosWriteMsg(dfd, temp, (int32_t)readLen);
 
     if (readLen != writeLen) {
-      uError("copy error, readLen:%d writeLen:%d len:%" PRId64 " leftLen:%" PRId64 ", reason:%s", readLen, writeLen,
-             len, leftLen, strerror(errno));
+      uError("copy error, readLen:%" PRId64 " writeLen:%" PRId64 " len:%" PRId64 " leftLen:%" PRId64 ", reason:%s",
+             readLen, writeLen, len, leftLen, strerror(errno));
      return -1;
    }
 
    leftLen -= readLen;
  }
 
-  return (int32_t)len;
+  return len;
 }
@@ -40,7 +40,7 @@ typedef struct {
   int32_t queuedWMsg;
   int32_t queuedRMsg;
   int32_t flowctrlLevel;
-  int32_t sequence;  // for topic
+  int64_t sequence;  // for topic
   int8_t status;
   int8_t role;
   int8_t accessState;
@@ -142,7 +142,7 @@ static int32_t vnodeProcessSubmitMsg(SVnodeObj *pVnode, void *pCont, SRspRet *pR
   vTrace("vgId:%d, submit msg is processed", pVnode->vgId);
 
   if (pVnode->dbType == TSDB_DB_TYPE_TOPIC && pVnode->role == TAOS_SYNC_ROLE_MASTER) {
-    tpUpdateTs(&pVnode->sequence, pCont);
+    tpUpdateTs(pVnode->vgId, &pVnode->sequence, pCont);
   }
 
   // save insert result into item
@@ -34,7 +34,7 @@ done
 function addTaoscfg {
   for i in {1..5}
   do
-    touch /data/node$i/cfg/taos.cfg
+    touch $DOCKER_DIR/node$i/cfg/taos.cfg
     echo 'firstEp tdnode1:6030' > $DOCKER_DIR/node$i/cfg/taos.cfg
     echo 'fqdn tdnode'$i >> $DOCKER_DIR/node$i/cfg/taos.cfg
     echo 'arbitrator tdnode1:6042' >> $DOCKER_DIR/node$i/cfg/taos.cfg
@@ -101,19 +101,19 @@ function clusterUp {
 
   if [ $NUM_OF_NODES -eq 2 ]; then
     echo "create 2 dnodes"
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION docker-compose up -d
+    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose up -d
   fi
 
   if [ $NUM_OF_NODES -eq 3 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION docker-compose -f docker-compose.yml -f node3.yml up -d
+    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml up -d
   fi
 
   if [ $NUM_OF_NODES -eq 4 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION docker-compose -f docker-compose.yml -f node3.yml -f node4.yml up -d
+    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml -f node4.yml up -d
   fi
 
   if [ $NUM_OF_NODES -eq 5 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION docker-compose -f docker-compose.yml -f node3.yml -f node4.yml -f node5.yml up -d
+    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml -f node4.yml -f node5.yml up -d
  fi
 
   echo "docker compose finish"
@@ -9,6 +9,7 @@ services:
         - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
         - EXTRACTDIR2=${DIR2}
+        - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode1'
     cap_add:
@@ -32,19 +33,19 @@ services:
     volumes:
       # bind data directory
       - type: bind
-        source: /data/node1/data
+        source: ${DATADIR}/node1/data
         target: /var/lib/taos
       # bind log directory
       - type: bind
-        source: /data/node1/log
+        source: ${DATADIR}/node1/log
         target: /var/log/taos
       # bind configuration
       - type: bind
-        source: /data/node1/cfg
+        source: ${DATADIR}/node1/cfg
         target: /etc/taos
       # bind core dump path
       - type: bind
-        source: /data/node1/core
+        source: ${DATADIR}/node1/core
         target: /coredump
       - type: bind
         source: /data
@@ -61,6 +62,7 @@ services:
       args:
         - PACKAGE=${PACKAGE}
         - EXTRACTDIR=${DIR}
+        - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode2'
     cap_add:
@@ -84,22 +86,22 @@ services:
     volumes:
       # bind data directory
       - type: bind
-        source: /data/node2/data
+        source: ${DATADIR}/node2/data
         target: /var/lib/taos
       # bind log directory
       - type: bind
-        source: /data/node2/log
+        source: ${DATADIR}/node2/log
         target: /var/log/taos
       # bind configuration
       - type: bind
-        source: /data/node2/cfg
+        source: ${DATADIR}/node2/cfg
         target: /etc/taos
       # bind core dump path
       - type: bind
-        source: /data/node2/core
+        source: ${DATADIR}/node2/core
         target: /coredump
       - type: bind
-        source: /data
+        source: ${DATADIR}
         target: /root
     hostname: tdnode2
     networks:
@@ -7,6 +7,7 @@ services:
       args:
         - PACKAGE=${PACKAGE}
         - EXTRACTDIR=${DIR}
+        - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode3'
     cap_add:
@@ -30,22 +31,22 @@ services:
     volumes:
       # bind data directory
       - type: bind
-        source: /data/node3/data
+        source: ${DATADIR}/node3/data
         target: /var/lib/taos
       # bind log directory
       - type: bind
-        source: /data/node3/log
+        source: ${DATADIR}/node3/log
         target: /var/log/taos
       # bind configuration
       - type: bind
-        source: /data/node3/cfg
+        source: ${DATADIR}/node3/cfg
         target: /etc/taos
       # bind core dump path
       - type: bind
-        source: /data/node3/core
+        source: ${DATADIR}/node3/core
         target: /coredump
       - type: bind
-        source: /data
+        source: ${DATADIR}
         target: /root
     hostname: tdnode3
     networks:
@@ -7,6 +7,7 @@ services:
       args:
         - PACKAGE=${PACKAGE}
         - EXTRACTDIR=${DIR}
+        - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode4'
     cap_add:
@@ -30,23 +31,23 @@ services:
     volumes:
       # bind data directory
       - type: bind
-        source: /data/node4/data
+        source: ${DATADIR}/node4/data
         target: /var/lib/taos
       # bind log directory
       - type: bind
-        source: /data/node4/log
+        source: ${DATADIR}/node4/log
         target: /var/log/taos
       # bind configuration
       - type: bind
-        source: /data/node4/cfg
+        source: ${DATADIR}/node4/cfg
         target: /etc/taos
       # bind core dump path
       - type: bind
-        source: /data/node4/core
+        source: ${DATADIR}/node4/core
         target: /coredump
       - type: bind
-        source: /data
+        source: ${DATADIR}
         target: /root
     hostname: tdnode4
     networks:
       taos_update_net:
@@ -7,6 +7,7 @@ services:
       args:
         - PACKAGE=${PACKAGE}
         - EXTRACTDIR=${DIR}
+        - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode5'
     cap_add:
@@ -30,22 +31,22 @@ services:
     volumes:
       # bind data directory
       - type: bind
-        source: /data/node5/data
+        source: ${DATADIR}/node5/data
         target: /var/lib/taos
      # bind log directory
       - type: bind
-        source: /data/node5/log
+        source: ${DATADIR}/node5/log
         target: /var/log/taos
       # bind configuration
       - type: bind
-        source: /data/node5/cfg
+        source: ${DATADIR}/node5/cfg
         target: /etc/taos
       # bind core dump path
       - type: bind
-        source: /data/node5/core
+        source: ${DATADIR}/node5/core
         target: /coredump
       - type: bind
-        source: /data
+        source: ${DATADIR}
         target: /root
     hostname: tdnode5
     networks:
@@ -28,18 +28,18 @@ class TDTestCase:
 
         print("==============step1")
         tdSql.execute(
-            "create table if not exists st (ts timestamp, tagtype int, name nchar(16)) tags(dev nchar(50))")
+            "create table if not exists st (ts timestamp, tagtype int, name nchar(16), col4 binary(16)) tags(dev nchar(50), tag2 binary(16))")
         tdSql.execute(
-            'CREATE TABLE if not exists dev_001 using st tags("dev_01")')
+            'CREATE TABLE if not exists dev_001 using st tags("dev_01", "tag_01")')
         tdSql.execute(
-            'CREATE TABLE if not exists dev_002 using st tags("dev_02")')
+            'CREATE TABLE if not exists dev_002 using st tags("dev_02", "tag_02")')
 
         print("==============step2")
 
         tdSql.execute(
-            """INSERT INTO dev_001(ts, tagtype, name) VALUES('2020-05-13 10:00:00.000', 1, 'first'),('2020-05-13 10:00:00.001', 2, 'second'),
-            ('2020-05-13 10:00:00.002', 3, 'third') dev_002 VALUES('2020-05-13 10:00:00.003', 1, 'first'), ('2020-05-13 10:00:00.004', 2, 'second'),
-            ('2020-05-13 10:00:00.005', 3, 'third')""")
+            """INSERT INTO dev_001 VALUES('2020-05-13 10:00:00.000', 1, 'first', 'binary1'),('2020-05-13 10:00:00.001', 2, 'second', 'binary2'),
+            ('2020-05-13 10:00:00.002', 3, 'third' , 'binary3') dev_002 VALUES('2020-05-13 10:00:00.003', 1, 'first', 'binary4'), ('2020-05-13 10:00:00.004', 2, 'second', 'binary5'),
+            ('2020-05-13 10:00:00.005', 3, 'third', 'binary6')""")
 
         # > for timestamp type
         tdSql.query("select * from db.st where ts > '2020-05-13 10:00:00.002'")
@@ -85,6 +85,12 @@ class TDTestCase:
         tdSql.query("select * from db.st where name = 'first'")
         tdSql.checkRows(2)
 
+        tdSql.query("select * from db.st where col4 = 1231231")
+        tdSql.checkRows(0)
+
+        tdSql.query("select * from db.st where name = 1231231")
+        tdSql.checkRows(0)
+
         # <> for timestamp type
         tdSql.query("select * from db.st where ts <> '2020-05-13 10:00:00.002'")
         # tdSql.checkRows(4)
@@ -105,6 +111,13 @@ class TDTestCase:
         tdSql.query("select * from db.st where name like '_econd'")
         tdSql.checkRows(2)
 
+        # for tag
+        tdSql.query("select * from db.st where dev=1")
+        tdSql.checkRows(0)
+
+        tdSql.query("select * from db.st where tag2=1")
+        tdSql.checkRows(0)
+
     def stop(self):
         tdSql.close()
         tdLog.success("%s successfully executed" % __file__)