Merge branch 'develop' into xiaoping/add_test_case

Ping Xiao 2021-02-20 09:25:33 +08:00
commit 1785c81555
73 changed files with 2843 additions and 1254 deletions

.gitignore

@ -79,3 +79,15 @@ tests/comparisonTest/opentsdb/opentsdbtest/.settings/
tests/examples/JDBC/JDBCDemo/.classpath
tests/examples/JDBC/JDBCDemo/.project
tests/examples/JDBC/JDBCDemo/.settings/
# Emacs
# -*- mode: gitignore; -*-
*~
\#*\#
/.emacs.desktop
/.emacs.desktop.lock
*.elc
auto-save-list
tramp
.\#*
TAGS

Jenkinsfile

@ -5,7 +5,7 @@ node {
git url: 'https://github.com/taosdata/TDengine.git'
}
def kipstage=0
def skipstage=0
def abortPreviousBuilds() {
def currentJobName = env.JOB_NAME
def currentBuildNumber = env.BUILD_NUMBER.toInteger()
@ -88,8 +88,9 @@ pipeline {
git checkout -qf FETCH_HEAD
'''
script{
skipstage=sh(script:"git --no-pager diff --name-only FETCH_HEAD develop|grep -v -E '.*md|//src//connector|Jenkinsfile|test-all.sh' || echo 0 ",returnStdout:true)
env.skipstage=sh(script:"cd ${WORKSPACE}.tes && git --no-pager diff --name-only FETCH_HEAD develop|grep -v -E '.*md|//src//connector|Jenkinsfile|test-all.sh' || echo 0 ",returnStdout:true)
}
println env.skipstage
sh'''
rm -rf ${WORKSPACE}.tes
'''
@ -101,7 +102,7 @@ pipeline {
when {
changeRequest()
expression {
skipstage != 0
env.skipstage != 0
}
}
parallel {


@ -43,7 +43,7 @@ CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAG
## <a class="anchor" id="create-table"></a>Create Tables
TDengine requires a separate table for each data collection point. As in a standard relational database, a table has a name and a schema, but it can additionally carry one or more tags. A table is created from a super table (STable) used as its template, with concrete values supplied for the tags. Taking the smart meters in Table 1 as an example, the table can be created with the following SQL statement:
```cmd
```mysql
CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);
```
Here d1001 is the table name and meters is the name of the super table, followed by the concrete value "Beijing.Chaoyang" for the tag Location and the concrete value 2 for the tag groupId. Although tag values must be specified when the table is created, they can be modified afterwards; see the [table management section of TAOS SQL](https://www.taosdata.com/cn/documentation/taos-sql#table) for the detailed rules.
@ -54,10 +54,12 @@ TDengine recommends using the globally unique ID of the data collection point as the table name (such as the device serial
**Automatic table creation**: In some scenarios the application does not know, at write time, whether the table for a data collection point already exists. It can then use the auto-create syntax in the write statement to create the table if it does not exist; if the table already exists, no new table is created. For example:
```cmd
```mysql
INSERT INTO d1001 USING METERS TAGS ("Beijing.Chaoyang", 2) VALUES (now, 10.2, 219, 0.32);
```
The SQL statement above inserts the record (now, 10.2, 219, 0.32) into table d1001. If table d1001 does not exist yet, it is created automatically using the super table meters as the template and is given the tag values "Beijing.Chaoyang" and 2. The sketch below shows both statements issued from the Python connector.
For the detailed syntax of automatic table creation, see the [auto-create table on insert](https://www.taosdata.com/cn/documentation/taos-sql#auto_create_table) section.
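As a minimal illustration (not part of the original document), the statements above can be issued through the Python connector that ships in this repository; the host, the credentials, and the `power` database below are assumptions:
```python
# Hypothetical usage sketch of the Python connector; host, credentials and
# database name are placeholders, not values from the document.
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")
cursor = conn.cursor()

# Create the table explicitly from the super table template ...
cursor.execute('CREATE TABLE IF NOT EXISTS d1001 USING meters TAGS ("Beijing.Chaoyang", 2)')

# ... or let the INSERT statement auto-create the table on demand.
cursor.execute('INSERT INTO d1002 USING meters TAGS ("Beijing.Chaoyang", 2) '
               'VALUES (now, 10.2, 219, 0.32)')

cursor.close()
conn.close()
```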
## Multi-Column Model vs. Single-Column Model


@ -24,6 +24,7 @@ TDengine provides a rich set of application development interfaces, including C/C++, Java,
* On systems where the TDengine server software is not installed, using any connector other than RESTful to access a TDengine database requires installing the client package of the matching version, so that the application driver (libtaos.so on Linux, taos.dll on Windows) is present on the system; otherwise an error about the missing library file occurs.
* Every API that executes SQL statements (for example `taos_query`, `taos_query_a`, and `taos_subscribe` in the C/C++ connector, and their counterparts in other languages) can execute only one SQL statement per call; if the argument actually contains multiple statements, the behavior is undefined.
* Users upgrading TDengine to version 2.0.8.0 must update the JDBC connection: taos-jdbcdriver must be upgraded to 2.0.12 or above.
* Whatever programming language the connector uses, for TDengine 2.0 and later it is recommended that each thread of a database application establish its own connection, or that a connection pool be built on a per-thread basis, so that the per-connection "USE statement" state cannot interfere across threads (queries and writes on a connection are themselves thread safe); a minimal per-thread sketch follows this list.
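A minimal per-thread connection sketch, assuming the Python connector in this repository; the host, credentials, and the `power` database are placeholders:
```python
# Each thread lazily opens and reuses its own connection, so the
# per-connection "USE <db>" state cannot leak between threads.
import threading
import taos

_local = threading.local()

def get_connection():
    if not hasattr(_local, "conn"):
        _local.conn = taos.connect(host="localhost", user="root", password="taosdata")
    return _local.conn

def worker(db_name):
    conn = get_connection()
    cursor = conn.cursor()
    cursor.execute("USE %s" % db_name)             # affects only this thread's connection
    cursor.execute("SELECT COUNT(*) FROM meters")
    print(cursor.fetchall())
    cursor.close()

threads = [threading.Thread(target=worker, args=("power",)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```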
## <a class="anchor" id="driver"></a>安装连接器驱动步骤
@ -266,7 +267,7 @@ C/C++的API类似于MySQL的C API。应用程序使用时需要包含TDengine
### 异步查询API
同步API之外TDengine还提供性能更高的异步调用API处理数据插入、查询操作。在软硬件环境相同的情况下异步API处理数据插入的速度比同步API快2\~4倍。异步API采用非阻塞式的调用方式在系统真正完成某个具体数据库操作前立即返回。调用的线程可以去处理其他工作从而可以提升整个应用的性能。异步API在网络延迟严重的情况下优点尤为突出。
同步API之外TDengine还提供性能更高的异步调用API处理数据插入、查询操作。在软硬件环境相同的情况下异步API处理数据插入的速度比同步API快24倍。异步API采用非阻塞式的调用方式在系统真正完成某个具体数据库操作前立即返回。调用的线程可以去处理其他工作从而可以提升整个应用的性能。异步API在网络延迟严重的情况下优点尤为突出。
异步API都需要应用提供相应的回调函数回调函数参数设置如下前两个参数都是一致的第三个参数依不同的API而定。第一个参数param是应用调用异步API时提供给系统的用于回调时应用能够找回具体操作的上下文依具体实现而定。第二个参数是SQL操作的结果集如果为空比如insert操作表示没有记录返回如果不为空比如select操作表示有记录返回。
@ -896,7 +897,7 @@ Node-example-raw.js
验证方法:
1. 新建安装验证目录,例如:\~/tdengine-test拷贝github上nodejsChecker.js源程序。下载地址https://github.com/taosdata/TDengine/tree/develop/tests/examples/nodejs/nodejsChecker.js
1. 新建安装验证目录,例如:`~/tdengine-test`拷贝github上nodejsChecker.js源程序。下载地址https://github.com/taosdata/TDengine/tree/develop/tests/examples/nodejs/nodejsChecker.js
2. 在命令中执行以下命令:


@ -102,7 +102,7 @@ taosd -C
- maxSQLLength: the maximum allowed length of a single SQL statement. Default: 65380 bytes.
- telemetryReporting: whether TDengine may collect and report basic usage information; 0 means not allowed, 1 means allowed. Default: 1.
- stream: whether continuous query (stream computing) is enabled; 0 means disabled, 1 means enabled. Default: 1.
- queryBufferSize: the amount of memory reserved for all concurrent queries. As a rule of thumb, multiply the expected maximum number of concurrent queries by the number of tables involved, and then by 170. The unit is MB (in versions before 2.0.15 the unit of this parameter was bytes). A worked example follows this list.
- ratioOfQueryCores: sets the maximum number of query threads. The minimum value 0 means a single query thread; the maximum value 2 means up to twice as many query threads as CPU cores. The default is 1, i.e. at most as many query threads as CPU cores. Fractional values are allowed; 0.5 means up to half as many query threads as CPU cores.
**Note:** for ports, TDengine uses 13 consecutive TCP and UDP port numbers starting from serverPort, and they must be opened in the firewall. With the default configuration this means opening the 13 ports from 6030 through 6042, for both TCP and UDP.
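Worked example for queryBufferSize, applying the multiplication rule exactly as stated above; the concurrency and table counts are assumptions, not values from the document:
```python
# Hypothetical sizing figures; only the multiplication rule comes from the document.
max_concurrent_queries = 4   # expected maximum number of concurrent queries
tables_involved = 2          # number of tables those queries touch
query_buffer_size = max_concurrent_queries * tables_involved * 170
print(query_buffer_size)     # 1360, in the parameter's unit (MB since 2.0.15)
```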


@ -152,6 +152,14 @@ By default TDengine timestamps have millisecond precision, but by changing the configuration parameter enableMic
```
Create a table using the specified super table as the template, supplying values for the tags.
- **Create a table from a super table template, specifying a subset of the tag columns**
```mysql
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
```
Create a table using the specified super table as the template, supplying values for only some of the tag columns. (Tag columns that are not specified are set to NULL.)
Note: this form is supported starting from version 2.0.17. In earlier versions, tag columns could not be named explicitly and values had to be given for all of them.
- **Create tables in batch**
```mysql
@ -267,6 +275,7 @@ By default TDengine timestamps have millisecond precision, but by changing the configuration parameter enableMic
```
## <a class="anchor" id="tags"></a>Tag Management in a Super Table (STable)
- **Add a tag**
```mysql
@ -305,7 +314,7 @@ By default TDengine timestamps have millisecond precision, but by changing the configuration parameter enableMic
- **Insert a record, mapping the values to specified columns**
```mysql
INSERT INTO tb_name (field1_name, ...) VALUES (field1_value, ...)
INSERT INTO tb_name (field1_name, ...) VALUES (field1_value1, ...);
```
Insert one record into table tb_name, mapping the values to the specified columns. Columns not listed in the SQL statement are automatically filled with NULL. The primary key (timestamp) cannot be NULL.
@ -339,25 +348,19 @@ By default TDengine timestamps have millisecond precision, but by changing the configuration parameter enableMic
1) If the timestamp is 0, the system automatically uses the current server time as the record's timestamp;
2) The oldest insertable record has a timestamp no earlier than the current server time minus the configured keep value (the number of days data is retained), and the newest insertable record has a timestamp no later than the current server time plus the configured days value (the time span, in days, covered by one data file). Both keep and days can be specified when the database is created; their defaults are 3650 days and 10 days respectively, as the sketch below illustrates.
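A small sketch, not from the original document, computing the writable timestamp window implied by the rule above under the stated defaults for keep and days:
```python
# keep and days use the documented defaults; a real database may use other values.
from datetime import datetime, timedelta

keep = 3650   # data retention, in days
days = 10     # time span covered by one data file, in days

now = datetime.now()
oldest_allowed = now - timedelta(days=keep)
newest_allowed = now + timedelta(days=days)
print("insertable timestamp range:", oldest_allowed, "to", newest_allowed)
```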
**Writing historical records**: use the IMPORT or the INSERT command; IMPORT has exactly the same syntax and semantics as INSERT.
## <a class="anchor" id="select"></a>Data Query
### Query Syntax:
- <a class="anchor" id="auto_create_table"></a>**Automatically create a table when inserting a record**
```mysql
SELECT select_expr [, select_expr ...]
FROM {tb_name_list}
[WHERE where_condition]
[INTERVAL (interval_val [, interval_offset])]
[FILL fill_val]
[SLIDING fill_val]
[GROUP BY col_list]
[ORDER BY col_list { DESC | ASC }]
[SLIMIT limit_val [, SOFFSET offset_val]]
[LIMIT limit_val [, OFFSET offset_val]]
[>> export_file]
INSERT INTO tb_name USING stb_name TAGS (tag_value1, ...) VALUES (field_value1, ...);
```
If the application does not know whether a table exists when writing data, it can use the auto-create syntax in the write statement to create the missing table; if the table already exists, no new table is created. Auto-creation requires a super table as the template and the tag values of the new table to be spelled out.
- **Automatically create a table when inserting a record, specifying a subset of the tag columns**
```mysql
INSERT INTO tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...) VALUES (field_value1, ...);
```
When a table is auto-created, values may be supplied for only some of the tag columns; tag columns that are not specified are set to NULL.
**Writing historical records**: use the IMPORT or the INSERT command; IMPORT has exactly the same syntax and semantics as INSERT.
Note: insert-type SQL statements are parsed in a streaming fashion, so the correct leading part of the SQL is executed before a later error is discovered. In the SQL below the insert statement is invalid, yet table d1001 is still created.
@ -385,6 +388,24 @@ taos> SHOW TABLES;
Query OK, 1 row(s) in set (0.001091s)
```
## <a class="anchor" id="select"></a>数据查询
### 查询语法:
```mysql
SELECT select_expr [, select_expr ...]
FROM {tb_name_list}
[WHERE where_condition]
[INTERVAL (interval_val [, interval_offset])]
[FILL fill_val]
[SLIDING fill_val]
[GROUP BY col_list]
[ORDER BY col_list { DESC | ASC }]
[SLIMIT limit_val [, SOFFSET offset_val]]
[LIMIT limit_val [, OFFSET offset_val]]
[>> export_file];
```
#### SELECT Clause
A select clause can be a union (UNION) of queries or a subquery (SUBQUERY) of another query.


@ -140,3 +140,20 @@ TDengine uniquely identifies a machine by its hostname; when data files are moved from machine A
- For version 2.0.7.0 and later: go to /var/lib/taos/dnode, fix the FQDN associated with the dnodeId in dnodeEps.json, and restart. Make sure this file is identical on every machine in the cluster.
- The storage formats of the 1.x and 2.x versions are incompatible; use the migration tool, or write your own application to export and re-import the data.
## 17. How to temporarily adjust the log level in the command-line program taos
To make debugging easier, the command-line program taos gained two logging-related commands starting from version 2.0.16:
```mysql
ALTER LOCAL flag_name flag_value;
```
This changes the log level of a specific module for the current command-line session only (if the taos command-line program is restarted, the setting must be applied again):
- flag_name can be one of: debugFlag, cDebugFlag, tmrDebugFlag, uDebugFlag, rpcDebugFlag;
- flag_value can be 131 (output error and warning logs), 135 (output error, warning, and debug logs), or 143 (output error, warning, debug, and trace logs).
```mysql
ALTER LOCAL RESETLOG;
```
This clears all log files generated by the client on the local machine.


@ -35,10 +35,11 @@ done
echo "verNumber=${verNumber}"
docker manifest create -a tdengine/tdengine:${verNumber} tdengine/tdengine-amd64:${verNumber} tdengine/tdengine-aarch64:${verNumber} tdengine/tdengine-aarch32:${verNumber}
#docker manifest create -a tdengine/tdengine:${verNumber} tdengine/tdengine-amd64:${verNumber} tdengine/tdengine-aarch64:${verNumber} tdengine/tdengine-aarch32:${verNumber}
docker manifest create -a tdengine/tdengine tdengine/tdengine-amd64:latest tdengine/tdengine-aarch64:latest tdengine/tdengine-aarch32:latest
docker login -u tdengine -p ${passWord} #replace the docker registry username and password
docker manifest push tdengine/tdengine:${verNumber}
docker manifest push tdengine/tdengine
# how set latest version ???


@ -9,6 +9,7 @@ Summary: tdengine from taosdata
Group: Application/Database
License: AGPL
URL: www.taosdata.com
AutoReqProv: no
#BuildRoot: %_topdir/BUILDROOT
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root


@ -6375,16 +6375,14 @@ int32_t doCheckForCreateFromStable(SSqlObj* pSql, SSqlInfo* pInfo) {
// get table meta from mnode
code = tNameExtractFullName(&pStableMetaInfo->name, pCreateTableInfo->tagdata.name);
SArray* pList = pCreateTableInfo->pTagVals;
SArray* pValList = pCreateTableInfo->pTagVals;
code = tscGetTableMeta(pSql, pStableMetaInfo);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
size_t size = taosArrayGetSize(pList);
if (tscGetNumOfTags(pStableMetaInfo->pTableMeta) != size) {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg5);
}
size_t valSize = taosArrayGetSize(pValList);
// too long tag values will return invalid sql, not be truncated automatically
SSchema *pTagSchema = tscGetTableTagSchema(pStableMetaInfo->pTableMeta);
@ -6395,10 +6393,40 @@ int32_t doCheckForCreateFromStable(SSqlObj* pSql, SSqlInfo* pInfo) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
SArray* pNameList = NULL;
size_t nameSize = 0;
int32_t schemaSize = tscGetNumOfTags(pStableMetaInfo->pTableMeta);
int32_t ret = TSDB_CODE_SUCCESS;
for (int32_t i = 0; i < size; ++i) {
SSchema* pSchema = &pTagSchema[i];
tVariantListItem* pItem = taosArrayGet(pList, i);
if (pCreateTableInfo->pTagNames) {
pNameList = pCreateTableInfo->pTagNames;
nameSize = taosArrayGetSize(pNameList);
if (valSize != nameSize) {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg5);
}
if (schemaSize < valSize) {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg5);
}
bool findColumnIndex = false;
for (int32_t i = 0; i < nameSize; ++i) {
SStrToken* sToken = taosArrayGet(pNameList, i);
if (TK_STRING == sToken->type) {
tscDequoteAndTrimToken(sToken);
}
tVariantListItem* pItem = taosArrayGet(pValList, i);
findColumnIndex = false;
// todo speedup by using hash list
for (int32_t t = 0; t < schemaSize; ++t) {
if (strncmp(sToken->z, pTagSchema[t].name, sToken->n) == 0 && strlen(pTagSchema[t].name) == sToken->n) {
SSchema* pSchema = &pTagSchema[t];
char tagVal[TSDB_MAX_TAGS_LEN];
if (pSchema->type == TSDB_DATA_TYPE_BINARY || pSchema->type == TSDB_DATA_TYPE_NCHAR) {
@ -6425,6 +6453,51 @@ int32_t doCheckForCreateFromStable(SSqlObj* pSql, SSqlInfo* pInfo) {
}
tdAddColToKVRow(&kvRowBuilder, pSchema->colId, pSchema->type, tagVal);
findColumnIndex = true;
break;
}
}
if (!findColumnIndex) {
return tscInvalidSQLErrMsg(pCmd->payload, "invalid tag name", sToken->z);
}
}
} else {
if (schemaSize != valSize) {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg5);
}
for (int32_t i = 0; i < valSize; ++i) {
SSchema* pSchema = &pTagSchema[i];
tVariantListItem* pItem = taosArrayGet(pValList, i);
char tagVal[TSDB_MAX_TAGS_LEN];
if (pSchema->type == TSDB_DATA_TYPE_BINARY || pSchema->type == TSDB_DATA_TYPE_NCHAR) {
if (pItem->pVar.nLen > pSchema->bytes) {
tdDestroyKVRowBuilder(&kvRowBuilder);
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg3);
}
}
ret = tVariantDump(&(pItem->pVar), tagVal, pSchema->type, true);
// check again after the convert since it may be converted from binary to nchar.
if (pSchema->type == TSDB_DATA_TYPE_BINARY || pSchema->type == TSDB_DATA_TYPE_NCHAR) {
int16_t len = varDataTLen(tagVal);
if (len > pSchema->bytes) {
tdDestroyKVRowBuilder(&kvRowBuilder);
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg3);
}
}
if (ret != TSDB_CODE_SUCCESS) {
tdDestroyKVRowBuilder(&kvRowBuilder);
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg4);
}
tdAddColToKVRow(&kvRowBuilder, pSchema->colId, pSchema->type, tagVal);
}
}
SKVRow row = tdGetKVRowFromBuilder(&kvRowBuilder);


@ -52,7 +52,9 @@ static bool validPassword(const char* passwd) {
static SSqlObj *taosConnectImpl(const char *ip, const char *user, const char *pass, const char *auth, const char *db,
uint16_t port, void (*fp)(void *, TAOS_RES *, int), void *param, TAOS **taos) {
taos_init();
if (taos_init()) {
return NULL;
}
if (!validUserName(user)) {
terrno = TSDB_CODE_TSC_INVALID_USER_LENGTH;


@ -47,6 +47,7 @@ void *tscRpcCache; // cache to keep rpc obj
int32_t tscNumOfThreads = 1; // num of rpc threads
static pthread_mutex_t rpcObjMutex; // mutex to protect open the rpc obj concurrently
static pthread_once_t tscinit = PTHREAD_ONCE_INIT;
static volatile int tscInitRes = 0;
void tscCheckDiskUsage(void *UNUSED_PARAM(para), void *UNUSED_PARAM(param)) {
taosGetDisk();
@ -137,7 +138,11 @@ void taos_init_imp(void) {
}
taosReadGlobalCfg();
taosCheckGlobalCfg();
if (taosCheckGlobalCfg()) {
tscInitRes = -1;
return;
}
taosInitNotes();
rpcInit();
@ -159,6 +164,7 @@ void taos_init_imp(void) {
tscQhandle = taosInitScheduler(queueSize, tscNumOfThreads, "tsc");
if (NULL == tscQhandle) {
tscError("failed to init scheduler");
tscInitRes = -1;
return;
}
@ -187,7 +193,7 @@ void taos_init_imp(void) {
tscDebug("client is initialized successfully");
}
void taos_init() { pthread_once(&tscinit, taos_init_imp); }
int taos_init() { pthread_once(&tscinit, taos_init_imp); return tscInitRes;}
// this function may be called by user or system, or by both simultaneously.
void taos_cleanup(void) {


@ -373,6 +373,23 @@ static void taosCheckDataDirCfg() {
}
}
static int32_t taosCheckTmpDir(void) {
if (strlen(tsTempDir) <= 0){
uError("tempDir is not set");
return -1;
}
DIR *dir = opendir(tsTempDir);
if (dir == NULL) {
uError("can not open tempDir:%s, error:%s", tsTempDir, strerror(errno));
return -1;
}
closedir(dir);
return 0;
}
static void doInitGlobalConfig(void) {
osInit();
srand(taosSafeRand());
@ -1488,6 +1505,11 @@ int32_t taosCheckGlobalCfg() {
}
taosCheckDataDirCfg();
if (taosCheckTmpDir()) {
return -1;
}
taosGetSystemInfo();
tsSetLocale();


@ -5,7 +5,7 @@ with open("README.md", "r") as fh:
setuptools.setup(
name="taos",
version="2.0.5",
version="2.0.6",
author="Taosdata Inc.",
author_email="support@taosdata.com",
description="TDengine python client package",


@ -3,12 +3,12 @@ from .connection import TDengineConnection
from .cursor import TDengineCursor
# Globals
apilevel = '2.0.3'
threadsafety = 0
paramstyle = 'pyformat'
__all__ = ['connection', 'cursor']
def connect(*args, **kwargs):
""" Function to return a TDengine connector object


@ -4,12 +4,15 @@ from .error import *
import math
import datetime
def _convert_millisecond_to_datetime(milli):
return datetime.datetime.fromtimestamp(milli / 1000.0)
def _convert_microsecond_to_datetime(micro):
return datetime.datetime.fromtimestamp(micro / 1000000.0)
def _crow_timestamp_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bool row to python row
"""
@ -18,74 +21,190 @@ def _crow_timestamp_to_python(data, num_of_rows, nbytes=None, micro=False):
_timestamp_converter = _convert_microsecond_to_datetime
if num_of_rows > 0:
return list(map(_timestamp_converter, ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
return list(map(_timestamp_converter, ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
else:
return list(map(_timestamp_converter, ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
return list(map(_timestamp_converter, ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
def _crow_bool_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bool row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_bool))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_bool))[
:abs(num_of_rows)]]
def _crow_tinyint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C tinyint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)]]
def _crow_tinyint_unsigned_to_python(
data,
num_of_rows,
nbytes=None,
micro=False):
"""Function to convert C tinyint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_TINYINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_TINYINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
def _crow_smallint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C smallint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_short))[:abs(num_of_rows)]]
return [
None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_short))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
def _crow_smallint_unsigned_to_python(
data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C smallint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_SMALLINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_SMALLINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
def _crow_int_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C int row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)]]
def _crow_int_unsigned_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C int row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_INT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_int))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_INT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_int))[
:abs(num_of_rows)]]
def _crow_bigint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bigint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]]
def _crow_bigint_unsigned_to_python(
data,
num_of_rows,
nbytes=None,
micro=False):
"""Function to convert C bigint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_BIGINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_long))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_BIGINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_long))[
:abs(num_of_rows)]]
def _crow_float_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C float row to python row
"""
if num_of_rows > 0:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)]]
else:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)]]
def _crow_double_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C double row to python row
"""
if num_of_rows > 0:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)]]
else:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)]]
def _crow_binary_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C binary row to python row
"""
assert(nbytes is not None)
if num_of_rows > 0:
return [ None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode('utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
return [None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode(
'utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
else:
return [ None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode('utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
return [None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode(
'utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
def _crow_nchar_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C nchar row to python row
@ -98,12 +217,14 @@ def _crow_nchar_to_python(data, num_of_rows, nbytes=None, micro=False):
tmpstr = ctypes.c_char_p(data)
res.append(tmpstr.value.decode())
else:
res.append( (ctypes.cast(data+nbytes*i, ctypes.POINTER(ctypes.c_wchar * (nbytes//4))))[0].value )
res.append((ctypes.cast(data + nbytes * i,
ctypes.POINTER(ctypes.c_wchar * (nbytes // 4))))[0].value)
except ValueError:
res.append(None)
return res
def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C binary row to python row
"""
@ -112,7 +233,11 @@ def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
if num_of_rows > 0:
for i in range(abs(num_of_rows)):
try:
rbyte=ctypes.cast(data+nbytes*i,ctypes.POINTER(ctypes.c_short))[:1].pop()
rbyte = ctypes.cast(
data + nbytes * i,
ctypes.POINTER(
ctypes.c_short))[
:1].pop()
tmpstr = ctypes.c_char_p(data + nbytes * i + 2)
res.append(tmpstr.value.decode()[0:rbyte])
except ValueError:
@ -120,13 +245,18 @@ def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
else:
for i in range(abs(num_of_rows)):
try:
rbyte=ctypes.cast(data+nbytes*i,ctypes.POINTER(ctypes.c_short))[:1].pop()
rbyte = ctypes.cast(
data + nbytes * i,
ctypes.POINTER(
ctypes.c_short))[
:1].pop()
tmpstr = ctypes.c_char_p(data + nbytes * i + 2)
res.append(tmpstr.value.decode()[0:rbyte])
except ValueError:
res.append(None)
return res
def _crow_nchar_to_python_block(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C nchar row to python row
"""
@ -142,11 +272,13 @@ def _crow_nchar_to_python_block(data, num_of_rows, nbytes=None, micro=False):
else:
for i in range(abs(num_of_rows)):
try:
res.append( (ctypes.cast(data+nbytes*i+2, ctypes.POINTER(ctypes.c_wchar * (nbytes//4))))[0].value )
res.append((ctypes.cast(data + nbytes * i + 2,
ctypes.POINTER(ctypes.c_wchar * (nbytes // 4))))[0].value)
except ValueError:
res.append(None)
return res
_CONVERT_FUNC = {
FieldType.C_BOOL: _crow_bool_to_python,
FieldType.C_TINYINT: _crow_tinyint_to_python,
@ -157,7 +289,11 @@ _CONVERT_FUNC = {
FieldType.C_DOUBLE: _crow_double_to_python,
FieldType.C_BINARY: _crow_binary_to_python,
FieldType.C_TIMESTAMP: _crow_timestamp_to_python,
FieldType.C_NCHAR : _crow_nchar_to_python
FieldType.C_NCHAR: _crow_nchar_to_python,
FieldType.C_TINYINT_UNSIGNED: _crow_tinyint_unsigned_to_python,
FieldType.C_SMALLINT_UNSIGNED: _crow_smallint_unsigned_to_python,
FieldType.C_INT_UNSIGNED: _crow_int_unsigned_to_python,
FieldType.C_BIGINT_UNSIGNED: _crow_bigint_unsigned_to_python
}
_CONVERT_FUNC_BLOCK = {
@ -170,16 +306,24 @@ _CONVERT_FUNC_BLOCK = {
FieldType.C_DOUBLE: _crow_double_to_python,
FieldType.C_BINARY: _crow_binary_to_python_block,
FieldType.C_TIMESTAMP: _crow_timestamp_to_python,
FieldType.C_NCHAR : _crow_nchar_to_python_block
FieldType.C_NCHAR: _crow_nchar_to_python_block,
FieldType.C_TINYINT_UNSIGNED: _crow_tinyint_unsigned_to_python,
FieldType.C_SMALLINT_UNSIGNED: _crow_smallint_unsigned_to_python,
FieldType.C_INT_UNSIGNED: _crow_int_unsigned_to_python,
FieldType.C_BIGINT_UNSIGNED: _crow_bigint_unsigned_to_python
}
# Corresponding TAOS_FIELD structure in C
class TaosField(ctypes.Structure):
_fields_ = [('name', ctypes.c_char * 65),
('type', ctypes.c_char),
('bytes', ctypes.c_short)]
# C interface class
class CTaosInterface(object):
libtaos = ctypes.CDLL('libtaos.so')
@ -216,7 +360,7 @@ class CTaosInterface(object):
except AttributeError:
raise AttributeError("config is expected as a str")
if config != None:
if config is not None:
CTaosInterface.libtaos.taos_options(3, self._config)
CTaosInterface.libtaos.taos_init()
@ -227,7 +371,13 @@ class CTaosInterface(object):
"""
return self._config
def connect(self, host=None, user="root", password="taosdata", db=None, port=0):
def connect(
self,
host=None,
user="root",
password="taosdata",
db=None,
port=0):
'''
Function to connect to server
@ -236,7 +386,7 @@ class CTaosInterface(object):
# host
try:
_host = ctypes.c_char_p(host.encode(
"utf-8")) if host != None else ctypes.c_char_p(None)
"utf-8")) if host is not None else ctypes.c_char_p(None)
except AttributeError:
raise AttributeError("host is expected as a str")
@ -255,7 +405,7 @@ class CTaosInterface(object):
# db
try:
_db = ctypes.c_char_p(
db.encode("utf-8")) if db != None else ctypes.c_char_p(None)
db.encode("utf-8")) if db is not None else ctypes.c_char_p(None)
except AttributeError:
raise AttributeError("db is expected as a str")
@ -268,7 +418,7 @@ class CTaosInterface(object):
connection = ctypes.c_void_p(CTaosInterface.libtaos.taos_connect(
_host, _user, _password, _db, _port))
if connection.value == None:
if connection.value is None:
print('connect to TDengine failed')
raise ConnectionError("connect to TDengine failed")
# sys.exit(1)
@ -293,7 +443,8 @@ class CTaosInterface(object):
@rtype: 0 on success and -1 on failure
'''
try:
return CTaosInterface.libtaos.taos_query(connection, ctypes.c_char_p(sql.encode('utf-8')))
return CTaosInterface.libtaos.taos_query(
connection, ctypes.c_char_p(sql.encode('utf-8')))
except AttributeError:
raise AttributeError("sql is expected as a string")
# finally:
@ -360,38 +511,53 @@ class CTaosInterface(object):
result, ctypes.byref(pblock))
if num_of_rows == 0:
return None, 0
isMicro = (CTaosInterface.libtaos.taos_result_precision(result) == FieldType.C_TIMESTAMP_MICRO)
isMicro = (CTaosInterface.libtaos.taos_result_precision(
result) == FieldType.C_TIMESTAMP_MICRO)
blocks = [None] * len(fields)
fieldL = CTaosInterface.libtaos.taos_fetch_lengths(result)
fieldLen = [ele for ele in ctypes.cast(fieldL, ctypes.POINTER(ctypes.c_int))[:len(fields)]]
fieldLen = [
ele for ele in ctypes.cast(
fieldL, ctypes.POINTER(
ctypes.c_int))[
:len(fields)]]
for i in range(len(fields)):
data = ctypes.cast(pblock, ctypes.POINTER(ctypes.c_void_p))[i]
if fields[i]['type'] not in _CONVERT_FUNC_BLOCK:
raise DatabaseError("Invalid data type returned from database")
blocks[i] = _CONVERT_FUNC_BLOCK[fields[i]['type']](data, num_of_rows, fieldLen[i], isMicro)
blocks[i] = _CONVERT_FUNC_BLOCK[fields[i]['type']](
data, num_of_rows, fieldLen[i], isMicro)
return blocks, abs(num_of_rows)
@staticmethod
def fetchRow(result, fields):
pblock = ctypes.c_void_p(0)
pblock = CTaosInterface.libtaos.taos_fetch_row(result)
if pblock:
num_of_rows = 1
isMicro = (CTaosInterface.libtaos.taos_result_precision(result) == FieldType.C_TIMESTAMP_MICRO)
isMicro = (CTaosInterface.libtaos.taos_result_precision(
result) == FieldType.C_TIMESTAMP_MICRO)
blocks = [None] * len(fields)
fieldL = CTaosInterface.libtaos.taos_fetch_lengths(result)
fieldLen = [ele for ele in ctypes.cast(fieldL, ctypes.POINTER(ctypes.c_int))[:len(fields)]]
fieldLen = [
ele for ele in ctypes.cast(
fieldL, ctypes.POINTER(
ctypes.c_int))[
:len(fields)]]
for i in range(len(fields)):
data = ctypes.cast(pblock, ctypes.POINTER(ctypes.c_void_p))[i]
if fields[i]['type'] not in _CONVERT_FUNC:
raise DatabaseError("Invalid data type returned from database")
raise DatabaseError(
"Invalid data type returned from database")
if data is None:
blocks[i] = [None]
else:
blocks[i] = _CONVERT_FUNC[fields[i]['type']](data, num_of_rows, fieldLen[i], isMicro)
blocks[i] = _CONVERT_FUNC[fields[i]['type']](
data, num_of_rows, fieldLen[i], isMicro)
else:
return None, 0
return blocks, abs(num_of_rows)
@staticmethod
def freeResult(result):
CTaosInterface.libtaos.taos_free_result(result)


@ -2,9 +2,11 @@ from .cursor import TDengineCursor
from .subscription import TDengineSubscription
from .cinterface import CTaosInterface
class TDengineConnection(object):
""" TDengine connection object
"""
def __init__(self, *args, **kwargs):
self._conn = None
self._host = None
@ -43,7 +45,12 @@ class TDengineConnection(object):
self._config = kwargs['config']
self._chandle = CTaosInterface(self._config)
self._conn = self._chandle.connect(self._host, self._user, self._password, self._database, self._port)
self._conn = self._chandle.connect(
self._host,
self._user,
self._password,
self._database,
self._port)
def close(self):
"""Close current connection.
@ -55,7 +62,8 @@ class TDengineConnection(object):
"""
if self._conn is None:
return None
sub = CTaosInterface.subscribe(self._conn, restart, topic, sql, interval)
sub = CTaosInterface.subscribe(
self._conn, restart, topic, sql, interval)
return TDengineSubscription(sub)
def cursor(self):
@ -80,6 +88,7 @@ class TDengineConnection(object):
"""
pass
if __name__ == "__main__":
conn = TDengineConnection(host='192.168.1.107')
conn.close()


@ -3,6 +3,7 @@
from .dbapi import *
class FieldType(object):
"""TDengine Field Types
"""
@ -18,13 +19,21 @@ class FieldType(object):
C_BINARY = 8
C_TIMESTAMP = 9
C_NCHAR = 10
C_TINYINT_UNSIGNED = 12
C_SMALLINT_UNSIGNED = 13
C_INT_UNSIGNED = 14
C_BIGINT_UNSIGNED = 15
# NULL value definition
# NOTE: These values should change according to C definition in tsdb.h
C_BOOL_NULL = 0x02
C_TINYINT_NULL = -128
C_TINYINT_UNSIGNED_NULL = 255
C_SMALLINT_NULL = -32768
C_SMALLINT_UNSIGNED_NULL = 65535
C_INT_NULL = -2147483648
C_INT_UNSIGNED_NULL = 4294967295
C_BIGINT_NULL = -9223372036854775808
C_BIGINT_UNSIGNED_NULL = 18446744073709551615
C_FLOAT_NULL = float('nan')
C_DOUBLE_NULL = float('nan')
C_BINARY_NULL = bytearray([int('0xff', 16)])


@ -148,6 +148,7 @@ class TDengineCursor(object):
"""Fetch the next row of a query result set, returning a single sequence, or None when no more data is available.
"""
pass
def fetchmany(self):
pass
@ -158,11 +159,26 @@ class TDengineCursor(object):
if (dataType.upper() == "TINYINT"):
if (self._description[col][1] == FieldType.C_TINYINT):
return True
if (dataType.upper() == "TINYINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_TINYINT_UNSIGNED):
return True
if (dataType.upper() == "SMALLINT"):
if (self._description[col][1] == FieldType.C_SMALLINT):
return True
if (dataType.upper() == "SMALLINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_SMALLINT_UNSIGNED):
return True
if (dataType.upper() == "INT"):
if (self._description[col][1] == FieldType.C_INT):
return True
if (dataType.upper() == "INT UNSIGNED"):
if (self._description[col][1] == FieldType.C_INT_UNSIGNED):
return True
if (dataType.upper() == "BIGINT"):
if (self._description[col][1] == FieldType.C_INT):
if (self._description[col][1] == FieldType.C_BIGINT):
return True
if (dataType.upper() == "BIGINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_BIGINT_UNSIGNED):
return True
if (dataType.upper() == "FLOAT"):
if (self._description[col][1] == FieldType.C_FLOAT):
@ -191,16 +207,20 @@ class TDengineCursor(object):
buffer = [[] for i in range(len(self._fields))]
self._rowcount = 0
while True:
block, num_of_fields = CTaosInterface.fetchRow(self._result, self._fields)
block, num_of_fields = CTaosInterface.fetchRow(
self._result, self._fields)
errno = CTaosInterface.libtaos.taos_errno(self._result)
if errno != 0:
raise ProgrammingError(CTaosInterface.errStr(self._result), errno)
raise ProgrammingError(
CTaosInterface.errStr(
self._result), errno)
if num_of_fields == 0:
break
self._rowcount += num_of_fields
for i in range(len(self._fields)):
buffer[i].extend(block[i])
return list(map(tuple, zip(*buffer)))
def fetchall(self):
if self._result is None or self._fields is None:
raise OperationalError("Invalid use of fetchall")
@ -208,16 +228,20 @@ class TDengineCursor(object):
buffer = [[] for i in range(len(self._fields))]
self._rowcount = 0
while True:
block, num_of_fields = CTaosInterface.fetchBlock(self._result, self._fields)
block, num_of_fields = CTaosInterface.fetchBlock(
self._result, self._fields)
errno = CTaosInterface.libtaos.taos_errno(self._result)
if errno != 0:
raise ProgrammingError(CTaosInterface.errStr(self._result), errno)
raise ProgrammingError(
CTaosInterface.errStr(
self._result), errno)
if num_of_fields == 0:
break
self._rowcount += num_of_fields
for i in range(len(self._fields)):
buffer[i].extend(block[i])
return list(map(tuple, zip(*buffer)))
def nextset(self):
"""
"""


@ -4,6 +4,7 @@
import time
import datetime
class DBAPITypeObject(object):
def __init__(self, *values):
self.values = values
@ -16,19 +17,24 @@ class DBAPITypeObject(object):
else:
return -1
Date = datetime.date
Time = datetime.time
Timestamp = datetime.datetime
def DataFromTicks(ticks):
return Date(*time.localtime(ticks)[:3])
def TimeFromTicks(ticks):
return Time(*time.localtime(ticks)[3:6])
def TimestampFromTicks(ticks):
return Timestamp(*time.localtime(ticks)[:6])
Binary = bytes
# STRING = DBAPITypeObject(*constants.FieldType.get_string_types())


@ -1,6 +1,7 @@
"""Python exceptions
"""
class Error(Exception):
def __init__(self, msg=None, errno=None):
self.msg = msg
@ -10,26 +11,31 @@ class Error(Exception):
def __str__(self):
return self._full_msg
class Warning(Exception):
"""Exception raised for important warnings like data truncations while inserting.
"""
pass
class InterfaceError(Error):
"""Exception raised for errors that are related to the database interface rather than the database itself.
"""
pass
class DatabaseError(Error):
"""Exception raised for errors that are related to the database.
"""
pass
class DataError(DatabaseError):
"""Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range.
"""
pass
class OperationalError(DatabaseError):
"""Exception raised for errors that are related to the database's operation and not necessarily under the control of the programmer
"""
@ -41,16 +47,19 @@ class IntegrityError(DatabaseError):
"""
pass
class InternalError(DatabaseError):
"""Exception raised when the database encounters an internal error.
"""
pass
class ProgrammingError(DatabaseError):
"""Exception raised for programming errors.
"""
pass
class NotSupportedError(DatabaseError):
"""Exception raised in case a method or database API was used which is not supported by the database,.
"""


@ -1,13 +1,14 @@
from .cinterface import CTaosInterface
from .error import *
class TDengineSubscription(object):
"""TDengine subscription object
"""
def __init__(self, sub):
self._sub = sub
def consume(self):
"""Consume rows of a subscription
"""
@ -18,14 +19,14 @@ class TDengineSubscription(object):
buffer = [[] for i in range(len(fields))]
while True:
block, num_of_fields = CTaosInterface.fetchBlock(result, fields)
if num_of_fields == 0: break
if num_of_fields == 0:
break
for i in range(len(fields)):
buffer[i].extend(block[i])
self.fields = fields
return list(map(tuple, zip(*buffer)))
def close(self, keepProgress=True):
"""Close the Subscription.
"""
@ -38,7 +39,11 @@ class TDengineSubscription(object):
if __name__ == '__main__':
from .connection import TDengineConnection
conn = TDengineConnection(host="127.0.0.1", user="root", password="taosdata", database="test")
conn = TDengineConnection(
host="127.0.0.1",
user="root",
password="taosdata",
database="test")
# Generate a cursor object to run SQL commands
sub = conn.subscribe(True, "test", "select * from meters;", 1000)


@ -5,7 +5,7 @@ with open("README.md", "r") as fh:
setuptools.setup(
name="taos",
version="2.0.4",
version="2.0.5",
author="Taosdata Inc.",
author_email="support@taosdata.com",
description="TDengine python client package",


@ -3,12 +3,12 @@ from .connection import TDengineConnection
from .cursor import TDengineCursor
# Globals
apilevel = '2.0.3'
threadsafety = 0
paramstyle = 'pyformat'
__all__ = ['connection', 'cursor']
def connect(*args, **kwargs):
""" Function to return a TDengine connector object


@ -4,12 +4,15 @@ from .error import *
import math
import datetime
def _convert_millisecond_to_datetime(milli):
return datetime.datetime.fromtimestamp(milli / 1000.0)
def _convert_microsecond_to_datetime(micro):
return datetime.datetime.fromtimestamp(micro / 1000000.0)
def _crow_timestamp_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bool row to python row
"""
@ -18,74 +21,190 @@ def _crow_timestamp_to_python(data, num_of_rows, nbytes=None, micro=False):
_timestamp_converter = _convert_microsecond_to_datetime
if num_of_rows > 0:
return list(map(_timestamp_converter, ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
return list(map(_timestamp_converter, ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
else:
return list(map(_timestamp_converter, ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
return list(map(_timestamp_converter, ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
def _crow_bool_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bool row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_bool))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_bool))[
:abs(num_of_rows)]]
def _crow_tinyint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C tinyint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)]]
def _crow_tinyint_unsigned_to_python(
data,
num_of_rows,
nbytes=None,
micro=False):
"""Function to convert C tinyint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_TINYINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_TINYINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
def _crow_smallint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C smallint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_short))[:abs(num_of_rows)]]
return [
None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_short))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
def _crow_smallint_unsigned_to_python(
data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C smallint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_SMALLINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_SMALLINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
def _crow_int_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C int row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)]]
def _crow_int_unsigned_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C int row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_INT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_int))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_INT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_int))[
:abs(num_of_rows)]]
def _crow_bigint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bigint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]]
def _crow_bigint_unsigned_to_python(
data,
num_of_rows,
nbytes=None,
micro=False):
"""Function to convert C bigint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_BIGINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_long))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_BIGINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_long))[
:abs(num_of_rows)]]
def _crow_float_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C float row to python row
"""
if num_of_rows > 0:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)]]
else:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)]]
def _crow_double_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C double row to python row
"""
if num_of_rows > 0:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)]]
else:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)]]
def _crow_binary_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C binary row to python row
"""
assert(nbytes is not None)
if num_of_rows > 0:
return [ None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode('utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
return [None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode(
'utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
else:
return [ None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode('utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
return [None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode(
'utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
def _crow_nchar_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C nchar row to python row
@ -98,12 +217,14 @@ def _crow_nchar_to_python(data, num_of_rows, nbytes=None, micro=False):
tmpstr = ctypes.c_char_p(data)
res.append(tmpstr.value.decode())
else:
res.append( (ctypes.cast(data+nbytes*i, ctypes.POINTER(ctypes.c_wchar * (nbytes//4))))[0].value )
res.append((ctypes.cast(data + nbytes * i,
ctypes.POINTER(ctypes.c_wchar * (nbytes // 4))))[0].value)
except ValueError:
res.append(None)
return res
def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C binary row to python row
"""
@ -112,7 +233,11 @@ def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
if num_of_rows > 0:
for i in range(abs(num_of_rows)):
try:
rbyte=ctypes.cast(data+nbytes*i,ctypes.POINTER(ctypes.c_short))[:1].pop()
rbyte = ctypes.cast(
data + nbytes * i,
ctypes.POINTER(
ctypes.c_short))[
:1].pop()
tmpstr = ctypes.c_char_p(data + nbytes * i + 2)
res.append(tmpstr.value.decode()[0:rbyte])
except ValueError:
@ -120,13 +245,18 @@ def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
else:
for i in range(abs(num_of_rows)):
try:
rbyte=ctypes.cast(data+nbytes*i,ctypes.POINTER(ctypes.c_short))[:1].pop()
rbyte = ctypes.cast(
data + nbytes * i,
ctypes.POINTER(
ctypes.c_short))[
:1].pop()
tmpstr = ctypes.c_char_p(data + nbytes * i + 2)
res.append(tmpstr.value.decode()[0:rbyte])
except ValueError:
res.append(None)
return res
def _crow_nchar_to_python_block(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C nchar row to python row
"""
@ -142,11 +272,13 @@ def _crow_nchar_to_python_block(data, num_of_rows, nbytes=None, micro=False):
else:
for i in range(abs(num_of_rows)):
try:
res.append( (ctypes.cast(data+nbytes*i+2, ctypes.POINTER(ctypes.c_wchar * (nbytes//4))))[0].value )
res.append((ctypes.cast(data + nbytes * i + 2,
ctypes.POINTER(ctypes.c_wchar * (nbytes // 4))))[0].value)
except ValueError:
res.append(None)
return res
_CONVERT_FUNC = {
FieldType.C_BOOL: _crow_bool_to_python,
FieldType.C_TINYINT: _crow_tinyint_to_python,
@ -157,7 +289,11 @@ _CONVERT_FUNC = {
FieldType.C_DOUBLE: _crow_double_to_python,
FieldType.C_BINARY: _crow_binary_to_python,
FieldType.C_TIMESTAMP: _crow_timestamp_to_python,
FieldType.C_NCHAR : _crow_nchar_to_python
FieldType.C_NCHAR: _crow_nchar_to_python,
FieldType.C_TINYINT_UNSIGNED: _crow_tinyint_unsigned_to_python,
FieldType.C_SMALLINT_UNSIGNED: _crow_smallint_unsigned_to_python,
FieldType.C_INT_UNSIGNED: _crow_int_unsigned_to_python,
FieldType.C_BIGINT_UNSIGNED: _crow_bigint_unsigned_to_python
}
_CONVERT_FUNC_BLOCK = {
@ -170,16 +306,24 @@ _CONVERT_FUNC_BLOCK = {
FieldType.C_DOUBLE: _crow_double_to_python,
FieldType.C_BINARY: _crow_binary_to_python_block,
FieldType.C_TIMESTAMP: _crow_timestamp_to_python,
FieldType.C_NCHAR : _crow_nchar_to_python_block
FieldType.C_NCHAR: _crow_nchar_to_python_block,
FieldType.C_TINYINT_UNSIGNED: _crow_tinyint_unsigned_to_python,
FieldType.C_SMALLINT_UNSIGNED: _crow_smallint_unsigned_to_python,
FieldType.C_INT_UNSIGNED: _crow_int_unsigned_to_python,
FieldType.C_BIGINT_UNSIGNED: _crow_bigint_unsigned_to_python
}
# Corresponding TAOS_FIELD structure in C
class TaosField(ctypes.Structure):
_fields_ = [('name', ctypes.c_char * 65),
('type', ctypes.c_char),
('bytes', ctypes.c_short)]
# C interface class
class CTaosInterface(object):
libtaos = ctypes.CDLL('libtaos.so')
@ -216,7 +360,7 @@ class CTaosInterface(object):
except AttributeError:
raise AttributeError("config is expected as a str")
if config != None:
if config is not None:
CTaosInterface.libtaos.taos_options(3, self._config)
CTaosInterface.libtaos.taos_init()
@ -227,7 +371,13 @@ class CTaosInterface(object):
"""
return self._config
def connect(self, host=None, user="root", password="taosdata", db=None, port=0):
def connect(
self,
host=None,
user="root",
password="taosdata",
db=None,
port=0):
'''
Function to connect to server
@ -236,7 +386,7 @@ class CTaosInterface(object):
# host
try:
_host = ctypes.c_char_p(host.encode(
"utf-8")) if host != None else ctypes.c_char_p(None)
"utf-8")) if host is not None else ctypes.c_char_p(None)
except AttributeError:
raise AttributeError("host is expected as a str")
@ -255,7 +405,7 @@ class CTaosInterface(object):
# db
try:
_db = ctypes.c_char_p(
db.encode("utf-8")) if db != None else ctypes.c_char_p(None)
db.encode("utf-8")) if db is not None else ctypes.c_char_p(None)
except AttributeError:
raise AttributeError("db is expected as a str")
@ -268,7 +418,7 @@ class CTaosInterface(object):
connection = ctypes.c_void_p(CTaosInterface.libtaos.taos_connect(
_host, _user, _password, _db, _port))
if connection.value == None:
if connection.value is None:
print('connect to TDengine failed')
raise ConnectionError("connect to TDengine failed")
# sys.exit(1)
@ -293,7 +443,8 @@ class CTaosInterface(object):
@rtype: 0 on success and -1 on failure
'''
try:
return CTaosInterface.libtaos.taos_query(connection, ctypes.c_char_p(sql.encode('utf-8')))
return CTaosInterface.libtaos.taos_query(
connection, ctypes.c_char_p(sql.encode('utf-8')))
except AttributeError:
raise AttributeError("sql is expected as a string")
# finally:
@ -360,35 +511,49 @@ class CTaosInterface(object):
result, ctypes.byref(pblock))
if num_of_rows == 0:
return None, 0
isMicro = (CTaosInterface.libtaos.taos_result_precision(result) == FieldType.C_TIMESTAMP_MICRO)
isMicro = (CTaosInterface.libtaos.taos_result_precision(
result) == FieldType.C_TIMESTAMP_MICRO)
blocks = [None] * len(fields)
fieldL = CTaosInterface.libtaos.taos_fetch_lengths(result)
fieldLen = [ele for ele in ctypes.cast(fieldL, ctypes.POINTER(ctypes.c_int))[:len(fields)]]
fieldLen = [
ele for ele in ctypes.cast(
fieldL, ctypes.POINTER(
ctypes.c_int))[
:len(fields)]]
for i in range(len(fields)):
data = ctypes.cast(pblock, ctypes.POINTER(ctypes.c_void_p))[i]
if fields[i]['type'] not in _CONVERT_FUNC_BLOCK:
raise DatabaseError("Invalid data type returned from database")
blocks[i] = _CONVERT_FUNC_BLOCK[fields[i]['type']](data, num_of_rows, fieldLen[i], isMicro)
blocks[i] = _CONVERT_FUNC_BLOCK[fields[i]['type']](
data, num_of_rows, fieldLen[i], isMicro)
return blocks, abs(num_of_rows)
@staticmethod
def fetchRow(result, fields):
pblock = ctypes.c_void_p(0)
pblock = CTaosInterface.libtaos.taos_fetch_row(result)
if pblock:
num_of_rows = 1
isMicro = (CTaosInterface.libtaos.taos_result_precision(result) == FieldType.C_TIMESTAMP_MICRO)
isMicro = (CTaosInterface.libtaos.taos_result_precision(
result) == FieldType.C_TIMESTAMP_MICRO)
blocks = [None] * len(fields)
fieldL = CTaosInterface.libtaos.taos_fetch_lengths(result)
fieldLen = [ele for ele in ctypes.cast(fieldL, ctypes.POINTER(ctypes.c_int))[:len(fields)]]
fieldLen = [
ele for ele in ctypes.cast(
fieldL, ctypes.POINTER(
ctypes.c_int))[
:len(fields)]]
for i in range(len(fields)):
data = ctypes.cast(pblock, ctypes.POINTER(ctypes.c_void_p))[i]
if fields[i]['type'] not in _CONVERT_FUNC:
raise DatabaseError("Invalid data type returned from database")
raise DatabaseError(
"Invalid data type returned from database")
if data is None:
blocks[i] = [None]
else:
blocks[i] = _CONVERT_FUNC[fields[i]['type']](data, num_of_rows, fieldLen[i], isMicro)
blocks[i] = _CONVERT_FUNC[fields[i]['type']](
data, num_of_rows, fieldLen[i], isMicro)
else:
return None, 0
return blocks, abs(num_of_rows)


@ -2,9 +2,11 @@ from .cursor import TDengineCursor
from .subscription import TDengineSubscription
from .cinterface import CTaosInterface
class TDengineConnection(object):
""" TDengine connection object
"""
def __init__(self, *args, **kwargs):
self._conn = None
self._host = None
@ -43,7 +45,12 @@ class TDengineConnection(object):
self._config = kwargs['config']
self._chandle = CTaosInterface(self._config)
self._conn = self._chandle.connect(self._host, self._user, self._password, self._database, self._port)
self._conn = self._chandle.connect(
self._host,
self._user,
self._password,
self._database,
self._port)
def close(self):
"""Close current connection.
@ -55,7 +62,8 @@ class TDengineConnection(object):
"""
if self._conn is None:
return None
sub = CTaosInterface.subscribe(self._conn, restart, topic, sql, interval)
sub = CTaosInterface.subscribe(
self._conn, restart, topic, sql, interval)
return TDengineSubscription(sub)
def cursor(self):
@ -80,6 +88,7 @@ class TDengineConnection(object):
"""
pass
if __name__ == "__main__":
conn = TDengineConnection(host='192.168.1.107')
conn.close()


@ -3,6 +3,7 @@
from .dbapi import *
class FieldType(object):
"""TDengine Field Types
"""
@ -18,13 +19,21 @@ class FieldType(object):
C_BINARY = 8
C_TIMESTAMP = 9
C_NCHAR = 10
C_TINYINT_UNSIGNED = 12
C_SMALLINT_UNSIGNED = 13
C_INT_UNSIGNED = 14
C_BIGINT_UNSIGNED = 15
# NULL value definition
# NOTE: These values should change according to C definition in tsdb.h
C_BOOL_NULL = 0x02
C_TINYINT_NULL = -128
C_TINYINT_UNSIGNED_NULL = 255
C_SMALLINT_NULL = -32768
C_SMALLINT_UNSIGNED_NULL = 65535
C_INT_NULL = -2147483648
C_INT_UNSIGNED_NULL = 4294967295
C_BIGINT_NULL = -9223372036854775808
C_BIGINT_UNSIGNED_NULL = 18446744073709551615
C_FLOAT_NULL = float('nan')
C_DOUBLE_NULL = float('nan')
C_BINARY_NULL = bytearray([int('0xff', 16)])
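A minimal, self-contained sketch of how these sentinels are meant to behave: a raw value equal to the type's NULL marker is surfaced as Python None (the helper name below is hypothetical; the converters later in cinterface.py do the real work):

```python
# Hypothetical helper mirroring the converters further down in cinterface.py.
C_INT_UNSIGNED_NULL = 4294967295  # same sentinel as in FieldType above

def uint_to_python(value):
    # A stored NULL arrives as the sentinel and is returned as None.
    return None if value == C_INT_UNSIGNED_NULL else value

assert uint_to_python(4294967295) is None
assert uint_to_python(42) == 42
```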

View File

@ -5,6 +5,7 @@ import threading
# querySeqNum = 0
class TDengineCursor(object):
"""Database cursor which is used to manage the context of a fetch operation.
@ -168,11 +169,26 @@ class TDengineCursor(object):
if (dataType.upper() == "TINYINT"):
if (self._description[col][1] == FieldType.C_TINYINT):
return True
if (dataType.upper() == "TINYINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_TINYINT_UNSIGNED):
return True
if (dataType.upper() == "SMALLINT"):
if (self._description[col][1] == FieldType.C_SMALLINT):
return True
if (dataType.upper() == "SMALLINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_SMALLINT_UNSIGNED):
return True
if (dataType.upper() == "INT"):
if (self._description[col][1] == FieldType.C_INT):
return True
if (dataType.upper() == "INT UNSIGNED"):
if (self._description[col][1] == FieldType.C_INT_UNSIGNED):
return True
if (dataType.upper() == "BIGINT"):
if (self._description[col][1] == FieldType.C_INT):
if (self._description[col][1] == FieldType.C_BIGINT):
return True
if (dataType.upper() == "BIGINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_BIGINT_UNSIGNED):
return True
if (dataType.upper() == "FLOAT"):
if (self._description[col][1] == FieldType.C_FLOAT):
@ -201,10 +217,13 @@ class TDengineCursor(object):
buffer = [[] for i in range(len(self._fields))]
self._rowcount = 0
while True:
block, num_of_fields = CTaosInterface.fetchRow(self._result, self._fields)
block, num_of_fields = CTaosInterface.fetchRow(
self._result, self._fields)
errno = CTaosInterface.libtaos.taos_errno(self._result)
if errno != 0:
raise ProgrammingError(CTaosInterface.errStr(self._result), errno)
raise ProgrammingError(
CTaosInterface.errStr(
self._result), errno)
if num_of_fields == 0:
break
self._rowcount += num_of_fields
@ -219,15 +238,20 @@ class TDengineCursor(object):
buffer = [[] for i in range(len(self._fields))]
self._rowcount = 0
while True:
block, num_of_fields = CTaosInterface.fetchBlock(self._result, self._fields)
block, num_of_fields = CTaosInterface.fetchBlock(
self._result, self._fields)
errno = CTaosInterface.libtaos.taos_errno(self._result)
if errno != 0:
raise ProgrammingError(CTaosInterface.errStr(self._result), errno)
if num_of_fields == 0: break
raise ProgrammingError(
CTaosInterface.errStr(
self._result), errno)
if num_of_fields == 0:
break
self._rowcount += num_of_fields
for i in range(len(self._fields)):
buffer[i].extend(block[i])
return list(map(tuple, zip(*buffer)))
def nextset(self):
"""
"""
@ -268,4 +292,3 @@ class TDengineCursor(object):
(ele['name'], ele['type'], None, None, None, None, False))
return self._result

View File

@ -4,6 +4,7 @@
import time
import datetime
class DBAPITypeObject(object):
def __init__(self, *values):
self.values = values
@ -16,19 +17,24 @@ class DBAPITypeObject(object):
else:
return -1
Date = datetime.date
Time = datetime.time
Timestamp = datetime.datetime
def DateFromTicks(ticks):
return Date(*time.localtime(ticks)[:3])
def TimeFromTicks(ticks):
return Time(*time.localtime(ticks)[3:6])
def TimestampFromTicks(ticks):
return Timestamp(*time.localtime(ticks)[:6])
Binary = bytes
# STRING = DBAPITypeObject(*constants.FieldType.get_string_types())

View File

@ -1,6 +1,7 @@
"""Python exceptions
"""
class Error(Exception):
def __init__(self, msg=None, errno=None):
self.msg = msg
@ -10,26 +11,31 @@ class Error(Exception):
def __str__(self):
return self._full_msg
class Warning(Exception):
"""Exception raised for important warnings like data truncations while inserting.
"""
pass
class InterfaceError(Error):
"""Exception raised for errors that are related to the database interface rather than the database itself.
"""
pass
class DatabaseError(Error):
"""Exception raised for errors that are related to the database.
"""
pass
class DataError(DatabaseError):
"""Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range.
"""
pass
class OperationalError(DatabaseError):
"""Exception raised for errors that are related to the database's operation and not necessarily under the control of the programmer
"""
@ -41,16 +47,19 @@ class IntegrityError(DatabaseError):
"""
pass
class InternalError(DatabaseError):
"""Exception raised when the database encounters an internal error.
"""
pass
class ProgrammingError(DatabaseError):
"""Exception raised for programming errors.
"""
pass
class NotSupportedError(DatabaseError):
"""Exception raised in case a method or database API was used which is not supported by the database,.
"""

View File

@ -1,13 +1,14 @@
from .cinterface import CTaosInterface
from .error import *
class TDengineSubscription(object):
"""TDengine subscription object
"""
def __init__(self, sub):
self._sub = sub
def consume(self):
"""Consume rows of a subscription
"""
@ -18,14 +19,14 @@ class TDengineSubscription(object):
buffer = [[] for i in range(len(fields))]
while True:
block, num_of_fields = CTaosInterface.fetchBlock(result, fields)
if num_of_fields == 0: break
if num_of_fields == 0:
break
for i in range(len(fields)):
buffer[i].extend(block[i])
self.fields = fields
return list(map(tuple, zip(*buffer)))
def close(self, keepProgress=True):
"""Close the Subscription.
"""
@ -38,7 +39,11 @@ class TDengineSubscription(object):
if __name__ == '__main__':
from .connection import TDengineConnection
conn = TDengineConnection(host="127.0.0.1", user="root", password="taosdata", database="test")
conn = TDengineConnection(
host="127.0.0.1",
user="root",
password="taosdata",
database="test")
# Generate a cursor object to run SQL commands
sub = conn.subscribe(True, "test", "select * from meters;", 1000)

View File

@ -5,7 +5,7 @@ with open("README.md", "r") as fh:
setuptools.setup(
name="taos",
version="2.0.4",
version="2.0.5",
author="Taosdata Inc.",
author_email="support@taosdata.com",
description="TDengine python client package",

View File

@ -3,12 +3,12 @@ from .connection import TDengineConnection
from .cursor import TDengineCursor
# Globals
apilevel = '2.0.4'
threadsafety = 0
paramstyle = 'pyformat'
__all__ = ['connection', 'cursor']
def connect(*args, **kwargs):
""" Function to return a TDengine connector object

View File

@ -4,12 +4,15 @@ from .error import *
import math
import datetime
def _convert_millisecond_to_datetime(milli):
return datetime.datetime.fromtimestamp(milli / 1000.0)
def _convert_microsecond_to_datetime(micro):
return datetime.datetime.fromtimestamp(micro / 1000000.0)
def _crow_timestamp_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bool row to python row
"""
@ -18,74 +21,190 @@ def _crow_timestamp_to_python(data, num_of_rows, nbytes=None, micro=False):
_timestamp_converter = _convert_microsecond_to_datetime
if num_of_rows > 0:
return list(map(_timestamp_converter, ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
return list(map(_timestamp_converter, ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
else:
return list(map(_timestamp_converter, ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
return list(map(_timestamp_converter, ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]))
def _crow_bool_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bool row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_bool))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_BOOL_NULL else bool(ele) for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_bool))[
:abs(num_of_rows)]]
def _crow_tinyint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C tinyint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_TINYINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_byte))[:abs(num_of_rows)]]
def _crow_tinyint_unsigned_to_python(
data,
num_of_rows,
nbytes=None,
micro=False):
"""Function to convert C tinyint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_TINYINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_TINYINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_byte))[
:abs(num_of_rows)]]
def _crow_smallint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C smallint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_short))[:abs(num_of_rows)]]
return [
None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_short))[:abs(num_of_rows)] ]
return [
None if ele == FieldType.C_SMALLINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
def _crow_smallint_unsigned_to_python(
data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C smallint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_SMALLINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_SMALLINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_short))[
:abs(num_of_rows)]]
def _crow_int_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C int row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_INT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_int))[:abs(num_of_rows)]]
def _crow_int_unsigned_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C int row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_INT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_int))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_INT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_int))[
:abs(num_of_rows)]]
def _crow_bigint_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C bigint row to python row
"""
if num_of_rows > 0:
return [ None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]]
else:
return [ None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)] ]
return [None if ele == FieldType.C_BIGINT_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_long))[:abs(num_of_rows)]]
def _crow_bigint_unsigned_to_python(
data,
num_of_rows,
nbytes=None,
micro=False):
"""Function to convert C bigint row to python row
"""
if num_of_rows > 0:
return [
None if ele == FieldType.C_BIGINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_long))[
:abs(num_of_rows)]]
else:
return [
None if ele == FieldType.C_BIGINT_UNSIGNED_NULL else ele for ele in ctypes.cast(
data, ctypes.POINTER(
ctypes.c_long))[
:abs(num_of_rows)]]
def _crow_float_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C float row to python row
"""
if num_of_rows > 0:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)]]
else:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_float))[:abs(num_of_rows)]]
def _crow_double_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C double row to python row
"""
if num_of_rows > 0:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)]]
else:
return [ None if math.isnan(ele) else ele for ele in ctypes.cast(data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)] ]
return [None if math.isnan(ele) else ele for ele in ctypes.cast(
data, ctypes.POINTER(ctypes.c_double))[:abs(num_of_rows)]]
def _crow_binary_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C binary row to python row
"""
assert(nbytes is not None)
if num_of_rows > 0:
return [ None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode('utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
return [None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode(
'utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
else:
return [ None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode('utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
return [None if ele.value[0:1] == FieldType.C_BINARY_NULL else ele.value.decode(
'utf-8') for ele in (ctypes.cast(data, ctypes.POINTER(ctypes.c_char * nbytes)))[:abs(num_of_rows)]]
def _crow_nchar_to_python(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C nchar row to python row
@ -98,12 +217,14 @@ def _crow_nchar_to_python(data, num_of_rows, nbytes=None, micro=False):
tmpstr = ctypes.c_char_p(data)
res.append(tmpstr.value.decode())
else:
res.append( (ctypes.cast(data+nbytes*i, ctypes.POINTER(ctypes.c_wchar * (nbytes//4))))[0].value )
res.append((ctypes.cast(data + nbytes * i,
ctypes.POINTER(ctypes.c_wchar * (nbytes // 4))))[0].value)
except ValueError:
res.append(None)
return res
def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C binary row to python row
"""
@ -112,7 +233,11 @@ def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
if num_of_rows > 0:
for i in range(abs(num_of_rows)):
try:
rbyte=ctypes.cast(data+nbytes*i,ctypes.POINTER(ctypes.c_short))[:1].pop()
rbyte = ctypes.cast(
data + nbytes * i,
ctypes.POINTER(
ctypes.c_short))[
:1].pop()
tmpstr = ctypes.c_char_p(data + nbytes * i + 2)
res.append(tmpstr.value.decode()[0:rbyte])
except ValueError:
@ -120,13 +245,18 @@ def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, micro=False):
else:
for i in range(abs(num_of_rows)):
try:
rbyte=ctypes.cast(data+nbytes*i,ctypes.POINTER(ctypes.c_short))[:1].pop()
rbyte = ctypes.cast(
data + nbytes * i,
ctypes.POINTER(
ctypes.c_short))[
:1].pop()
tmpstr = ctypes.c_char_p(data + nbytes * i + 2)
res.append(tmpstr.value.decode()[0:rbyte])
except ValueError:
res.append(None)
return res
def _crow_nchar_to_python_block(data, num_of_rows, nbytes=None, micro=False):
"""Function to convert C nchar row to python row
"""
@ -142,11 +272,13 @@ def _crow_nchar_to_python_block(data, num_of_rows, nbytes=None, micro=False):
else:
for i in range(abs(num_of_rows)):
try:
res.append( (ctypes.cast(data+nbytes*i+2, ctypes.POINTER(ctypes.c_wchar * (nbytes//4))))[0].value )
res.append((ctypes.cast(data + nbytes * i + 2,
ctypes.POINTER(ctypes.c_wchar * (nbytes // 4))))[0].value)
except ValueError:
res.append(None)
return res
_CONVERT_FUNC = {
FieldType.C_BOOL: _crow_bool_to_python,
FieldType.C_TINYINT: _crow_tinyint_to_python,
@ -157,7 +289,11 @@ _CONVERT_FUNC = {
FieldType.C_DOUBLE: _crow_double_to_python,
FieldType.C_BINARY: _crow_binary_to_python,
FieldType.C_TIMESTAMP: _crow_timestamp_to_python,
FieldType.C_NCHAR : _crow_nchar_to_python
FieldType.C_NCHAR: _crow_nchar_to_python,
FieldType.C_TINYINT_UNSIGNED: _crow_tinyint_unsigned_to_python,
FieldType.C_SMALLINT_UNSIGNED: _crow_smallint_unsigned_to_python,
FieldType.C_INT_UNSIGNED: _crow_int_unsigned_to_python,
FieldType.C_BIGINT_UNSIGNED: _crow_bigint_unsigned_to_python
}
_CONVERT_FUNC_BLOCK = {
@ -170,16 +306,24 @@ _CONVERT_FUNC_BLOCK = {
FieldType.C_DOUBLE: _crow_double_to_python,
FieldType.C_BINARY: _crow_binary_to_python_block,
FieldType.C_TIMESTAMP: _crow_timestamp_to_python,
FieldType.C_NCHAR : _crow_nchar_to_python_block
FieldType.C_NCHAR: _crow_nchar_to_python_block,
FieldType.C_TINYINT_UNSIGNED: _crow_tinyint_unsigned_to_python,
FieldType.C_SMALLINT_UNSIGNED: _crow_smallint_unsigned_to_python,
FieldType.C_INT_UNSIGNED: _crow_int_unsigned_to_python,
FieldType.C_BIGINT_UNSIGNED: _crow_bigint_unsigned_to_python
}
# Corresponding TAOS_FIELD structure in C
class TaosField(ctypes.Structure):
_fields_ = [('name', ctypes.c_char * 65),
('type', ctypes.c_char),
('bytes', ctypes.c_short)]
# C interface class
class CTaosInterface(object):
libtaos = ctypes.CDLL('libtaos.dylib')
@ -216,7 +360,7 @@ class CTaosInterface(object):
except AttributeError:
raise AttributeError("config is expected as a str")
if config != None:
if config is not None:
CTaosInterface.libtaos.taos_options(3, self._config)
CTaosInterface.libtaos.taos_init()
@ -227,7 +371,13 @@ class CTaosInterface(object):
"""
return self._config
def connect(self, host=None, user="root", password="taosdata", db=None, port=0):
def connect(
self,
host=None,
user="root",
password="taosdata",
db=None,
port=0):
'''
Function to connect to server
@ -236,7 +386,7 @@ class CTaosInterface(object):
# host
try:
_host = ctypes.c_char_p(host.encode(
"utf-8")) if host != None else ctypes.c_char_p(None)
"utf-8")) if host is not None else ctypes.c_char_p(None)
except AttributeError:
raise AttributeError("host is expected as a str")
@ -255,7 +405,7 @@ class CTaosInterface(object):
# db
try:
_db = ctypes.c_char_p(
db.encode("utf-8")) if db != None else ctypes.c_char_p(None)
db.encode("utf-8")) if db is not None else ctypes.c_char_p(None)
except AttributeError:
raise AttributeError("db is expected as a str")
@ -268,7 +418,7 @@ class CTaosInterface(object):
connection = ctypes.c_void_p(CTaosInterface.libtaos.taos_connect(
_host, _user, _password, _db, _port))
if connection.value == None:
if connection.value is None:
print('connect to TDengine failed')
raise ConnectionError("connect to TDengine failed")
# sys.exit(1)
@ -293,7 +443,8 @@ class CTaosInterface(object):
@rtype: 0 on success and -1 on failure
'''
try:
return CTaosInterface.libtaos.taos_query(connection, ctypes.c_char_p(sql.encode('utf-8')))
return CTaosInterface.libtaos.taos_query(
connection, ctypes.c_char_p(sql.encode('utf-8')))
except AttributeError:
raise AttributeError("sql is expected as a string")
# finally:
@ -360,35 +511,49 @@ class CTaosInterface(object):
result, ctypes.byref(pblock))
if num_of_rows == 0:
return None, 0
isMicro = (CTaosInterface.libtaos.taos_result_precision(result) == FieldType.C_TIMESTAMP_MICRO)
isMicro = (CTaosInterface.libtaos.taos_result_precision(
result) == FieldType.C_TIMESTAMP_MICRO)
blocks = [None] * len(fields)
fieldL = CTaosInterface.libtaos.taos_fetch_lengths(result)
fieldLen = [ele for ele in ctypes.cast(fieldL, ctypes.POINTER(ctypes.c_int))[:len(fields)]]
fieldLen = [
ele for ele in ctypes.cast(
fieldL, ctypes.POINTER(
ctypes.c_int))[
:len(fields)]]
for i in range(len(fields)):
data = ctypes.cast(pblock, ctypes.POINTER(ctypes.c_void_p))[i]
if fields[i]['type'] not in _CONVERT_FUNC_BLOCK:
raise DatabaseError("Invalid data type returned from database")
blocks[i] = _CONVERT_FUNC_BLOCK[fields[i]['type']](data, num_of_rows, fieldLen[i], isMicro)
blocks[i] = _CONVERT_FUNC_BLOCK[fields[i]['type']](
data, num_of_rows, fieldLen[i], isMicro)
return blocks, abs(num_of_rows)
@staticmethod
def fetchRow(result, fields):
pblock = ctypes.c_void_p(0)
pblock = CTaosInterface.libtaos.taos_fetch_row(result)
if pblock:
num_of_rows = 1
isMicro = (CTaosInterface.libtaos.taos_result_precision(result) == FieldType.C_TIMESTAMP_MICRO)
isMicro = (CTaosInterface.libtaos.taos_result_precision(
result) == FieldType.C_TIMESTAMP_MICRO)
blocks = [None] * len(fields)
fieldL = CTaosInterface.libtaos.taos_fetch_lengths(result)
fieldLen = [ele for ele in ctypes.cast(fieldL, ctypes.POINTER(ctypes.c_int))[:len(fields)]]
fieldLen = [
ele for ele in ctypes.cast(
fieldL, ctypes.POINTER(
ctypes.c_int))[
:len(fields)]]
for i in range(len(fields)):
data = ctypes.cast(pblock, ctypes.POINTER(ctypes.c_void_p))[i]
if fields[i]['type'] not in _CONVERT_FUNC:
raise DatabaseError("Invalid data type returned from database")
raise DatabaseError(
"Invalid data type returned from database")
if data is None:
blocks[i] = [None]
else:
blocks[i] = _CONVERT_FUNC[fields[i]['type']](data, num_of_rows, fieldLen[i], isMicro)
blocks[i] = _CONVERT_FUNC[fields[i]['type']](
data, num_of_rows, fieldLen[i], isMicro)
else:
return None, 0
return blocks, abs(num_of_rows)

View File

@ -2,9 +2,11 @@ from .cursor import TDengineCursor
from .subscription import TDengineSubscription
from .cinterface import CTaosInterface
class TDengineConnection(object):
""" TDengine connection object
"""
def __init__(self, *args, **kwargs):
self._conn = None
self._host = None
@ -43,7 +45,12 @@ class TDengineConnection(object):
self._config = kwargs['config']
self._chandle = CTaosInterface(self._config)
self._conn = self._chandle.connect(self._host, self._user, self._password, self._database, self._port)
self._conn = self._chandle.connect(
self._host,
self._user,
self._password,
self._database,
self._port)
def close(self):
"""Close current connection.
@ -55,7 +62,8 @@ class TDengineConnection(object):
"""
if self._conn is None:
return None
sub = CTaosInterface.subscribe(self._conn, restart, topic, sql, interval)
sub = CTaosInterface.subscribe(
self._conn, restart, topic, sql, interval)
return TDengineSubscription(sub)
def cursor(self):
@ -80,6 +88,7 @@ class TDengineConnection(object):
"""
pass
if __name__ == "__main__":
conn = TDengineConnection(host='192.168.1.107')
conn.close()
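For context, a minimal end-to-end use of the reworked connect() call, assuming the package is importable as taos and a server is reachable with these hypothetical credentials:

```python
import taos

# Hypothetical connection parameters; adjust to the local deployment.
conn = taos.connect(host="127.0.0.1", user="root", password="taosdata", port=0)
cursor = conn.cursor()
cursor.execute("show databases")
print(cursor.fetchall())
conn.close()
```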

View File

@ -3,6 +3,7 @@
from .dbapi import *
class FieldType(object):
"""TDengine Field Types
"""
@ -18,13 +19,21 @@ class FieldType(object):
C_BINARY = 8
C_TIMESTAMP = 9
C_NCHAR = 10
C_TINYINT_UNSIGNED = 12
C_SMALLINT_UNSIGNED = 13
C_INT_UNSIGNED = 14
C_BIGINT_UNSIGNED = 15
# NULL value definition
# NOTE: These values must be kept in sync with the C definitions in tsdb.h
C_BOOL_NULL = 0x02
C_TINYINT_NULL = -128
C_TINYINT_UNSIGNED_NULL = 255
C_SMALLINT_NULL = -32768
C_SMALLINT_UNSIGNED_NULL = 65535
C_INT_NULL = -2147483648
C_INT_UNSIGNED_NULL = 4294967295
C_BIGINT_NULL = -9223372036854775808
C_BIGINT_UNSIGNED_NULL = 18446744073709551615
C_FLOAT_NULL = float('nan')
C_DOUBLE_NULL = float('nan')
C_BINARY_NULL = bytearray([int('0xff', 16)])

View File

@ -5,6 +5,7 @@ import threading
# querySeqNum = 0
class TDengineCursor(object):
"""Database cursor which is used to manage the context of a fetch operation.
@ -168,11 +169,26 @@ class TDengineCursor(object):
if (dataType.upper() == "TINYINT"):
if (self._description[col][1] == FieldType.C_TINYINT):
return True
if (dataType.upper() == "TINYINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_TINYINT_UNSIGNED):
return True
if (dataType.upper() == "SMALLINT"):
if (self._description[col][1] == FieldType.C_SMALLINT):
return True
if (dataType.upper() == "SMALLINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_SMALLINT_UNSIGNED):
return True
if (dataType.upper() == "INT"):
if (self._description[col][1] == FieldType.C_INT):
return True
if (dataType.upper() == "INT UNSIGNED"):
if (self._description[col][1] == FieldType.C_INT_UNSIGNED):
return True
if (dataType.upper() == "BIGINT"):
if (self._description[col][1] == FieldType.C_INT):
if (self._description[col][1] == FieldType.C_BIGINT):
return True
if (dataType.upper() == "BIGINT UNSIGNED"):
if (self._description[col][1] == FieldType.C_BIGINT_UNSIGNED):
return True
if (dataType.upper() == "FLOAT"):
if (self._description[col][1] == FieldType.C_FLOAT):
@ -201,10 +217,13 @@ class TDengineCursor(object):
buffer = [[] for i in range(len(self._fields))]
self._rowcount = 0
while True:
block, num_of_fields = CTaosInterface.fetchRow(self._result, self._fields)
block, num_of_fields = CTaosInterface.fetchRow(
self._result, self._fields)
errno = CTaosInterface.libtaos.taos_errno(self._result)
if errno != 0:
raise ProgrammingError(CTaosInterface.errStr(self._result), errno)
raise ProgrammingError(
CTaosInterface.errStr(
self._result), errno)
if num_of_fields == 0:
break
self._rowcount += num_of_fields
@ -219,15 +238,20 @@ class TDengineCursor(object):
buffer = [[] for i in range(len(self._fields))]
self._rowcount = 0
while True:
block, num_of_fields = CTaosInterface.fetchBlock(self._result, self._fields)
block, num_of_fields = CTaosInterface.fetchBlock(
self._result, self._fields)
errno = CTaosInterface.libtaos.taos_errno(self._result)
if errno != 0:
raise ProgrammingError(CTaosInterface.errStr(self._result), errno)
if num_of_fields == 0: break
raise ProgrammingError(
CTaosInterface.errStr(
self._result), errno)
if num_of_fields == 0:
break
self._rowcount += num_of_fields
for i in range(len(self._fields)):
buffer[i].extend(block[i])
return list(map(tuple, zip(*buffer)))
def nextset(self):
"""
"""
@ -268,4 +292,3 @@ class TDengineCursor(object):
(ele['name'], ele['type'], None, None, None, None, False))
return self._result
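A short sketch of how the description tuples built above look from the caller's side (database and table names are hypothetical):

```python
import taos

conn = taos.connect(host="127.0.0.1", user="root", password="taosdata")
cur = conn.cursor()
cur.execute("select ts, speed from test.tb limit 5")   # hypothetical database/table
for name, type_code, *_ in cur.description:            # 7-tuples: (name, type, None, None, None, None, False)
    print(name, type_code)
print(cur.fetchall())
conn.close()
```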

View File

@ -4,6 +4,7 @@
import time
import datetime
class DBAPITypeObject(object):
def __init__(self, *values):
self.values = values
@ -16,19 +17,24 @@ class DBAPITypeObject(object):
else:
return -1
Date = datetime.date
Time = datetime.time
Timestamp = datetime.datetime
def DateFromTicks(ticks):
return Date(*time.localtime(ticks)[:3])
def TimeFromTicks(ticks):
return Time(*time.localtime(ticks)[3:6])
def TimestampFromTicks(ticks):
return Timestamp(*time.localtime(ticks)[:6])
Binary = bytes
# STRING = DBAPITypeObject(*constants.FieldType.get_string_types())

View File

@ -1,6 +1,7 @@
"""Python exceptions
"""
class Error(Exception):
def __init__(self, msg=None, errno=None):
self.msg = msg
@ -10,26 +11,31 @@ class Error(Exception):
def __str__(self):
return self._full_msg
class Warning(Exception):
"""Exception raised for important warnings like data truncations while inserting.
"""
pass
class InterfaceError(Error):
"""Exception raised for errors that are related to the database interface rather than the database itself.
"""
pass
class DatabaseError(Error):
"""Exception raised for errors that are related to the database.
"""
pass
class DataError(DatabaseError):
"""Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range.
"""
pass
class OperationalError(DatabaseError):
"""Exception raised for errors that are related to the database's operation and not necessarily under the control of the programmer
"""
@ -41,16 +47,19 @@ class IntegrityError(DatabaseError):
"""
pass
class InternalError(DatabaseError):
"""Exception raised when the database encounters an internal error.
"""
pass
class ProgrammingError(DatabaseError):
"""Exception raised for programming errors.
"""
pass
class NotSupportedError(DatabaseError):
"""Exception raised in case a method or database API was used which is not supported by the database,.
"""

View File

@ -1,13 +1,14 @@
from .cinterface import CTaosInterface
from .error import *
class TDengineSubscription(object):
"""TDengine subscription object
"""
def __init__(self, sub):
self._sub = sub
def consume(self):
"""Consume rows of a subscription
"""
@ -18,14 +19,14 @@ class TDengineSubscription(object):
buffer = [[] for i in range(len(fields))]
while True:
block, num_of_fields = CTaosInterface.fetchBlock(result, fields)
if num_of_fields == 0: break
if num_of_fields == 0:
break
for i in range(len(fields)):
buffer[i].extend(block[i])
self.fields = fields
return list(map(tuple, zip(*buffer)))
def close(self, keepProgress=True):
"""Close the Subscription.
"""
@ -38,7 +39,11 @@ class TDengineSubscription(object):
if __name__ == '__main__':
from .connection import TDengineConnection
conn = TDengineConnection(host="127.0.0.1", user="root", password="taosdata", database="test")
conn = TDengineConnection(
host="127.0.0.1",
user="root",
password="taosdata",
database="test")
# Generate a cursor object to run SQL commands
sub = conn.subscribe(True, "test", "select * from meters;", 1000)
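A consume loop sketched around the subscribe() call shown above (database name and topic are hypothetical):

```python
import time
import taos

conn = taos.connect(host="127.0.0.1", user="root", password="taosdata", database="test")
sub = conn.subscribe(True, "demo_topic", "select * from meters;", 1000)
for _ in range(3):
    rows = sub.consume()          # list of tuples, as returned by consume() above
    print(len(rows), "new rows")
    time.sleep(1)
sub.close(True)                   # keepProgress=True keeps the subscription offset
conn.close()
```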

View File

@ -3,7 +3,6 @@ from .connection import TDengineConnection
from .cursor import TDengineCursor
# Globals
apilevel = '2.0.3'
threadsafety = 0
paramstyle = 'pyformat'

View File

@ -3,7 +3,6 @@ from .connection import TDengineConnection
from .cursor import TDengineCursor
# Globals
apilevel = '2.0.3'
threadsafety = 0
paramstyle = 'pyformat'
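threadsafety = 0 is the most conservative PEP 249 level, so a common pattern is to give every thread its own connection rather than sharing one; a minimal sketch with hypothetical credentials:

```python
import threading
import taos

def worker(n):
    # Each thread opens, uses and closes its own connection.
    conn = taos.connect(host="127.0.0.1", user="root", password="taosdata")
    cur = conn.cursor()
    cur.execute("show databases")
    print("thread", n, "saw", len(cur.fetchall()), "databases")
    conn.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```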

View File

@ -68,7 +68,7 @@ typedef struct taosField {
#define DLL_EXPORT
#endif
DLL_EXPORT void taos_init();
DLL_EXPORT int taos_init();
DLL_EXPORT void taos_cleanup(void);
DLL_EXPORT int taos_options(TSDB_OPTION option, const void *arg, ...);
DLL_EXPORT TAOS *taos_connect(const char *ip, const char *user, const char *pass, const char *db, uint16_t port);

View File

@ -122,8 +122,8 @@
#define TK_UNSIGNED 103
#define TK_TAGS 104
#define TK_USING 105
#define TK_AS 106
#define TK_COMMA 107
#define TK_COMMA 106
#define TK_AS 107
#define TK_NULL 108
#define TK_SELECT 109
#define TK_UNION 110
@ -228,6 +228,7 @@
#define TK_VALUES 209
#define TK_SPACE 300
#define TK_COMMENT 301
#define TK_ILLEGAL 302

View File

@ -76,7 +76,11 @@ TAOS *shellInit(SShellArguments *args) {
args->user = TSDB_DEFAULT_USER;
}
taos_init();
if (taos_init()) {
printf("failed to init taos\n");
fflush(stdout);
return NULL;
}
// Connect to the database.
TAOS *con = NULL;

View File

@ -110,7 +110,10 @@ int main(int argc, char* argv[]) {
}
if (args.netTestRole && args.netTestRole[0] != 0) {
taos_init();
if (taos_init()) {
printf("Failed to init taos");
exit(EXIT_FAILURE);
}
taosNetTest(args.netTestRole, args.host, args.port, args.pktLen);
exit(0);
}

View File

@ -711,7 +711,11 @@ int main(int argc, char *argv[]) {
fprintf(fp, "###################################################################\n\n");
fprintf(fp, "| WRecords | Records/Second | Requests/Second | WLatency(ms) |\n");
taos_init();
if (taos_init()) {
fprintf(stderr, "Failed to init taos\n");
return 1;
}
TAOS *taos = taos_connect(ip_addr, user, pass, NULL, port);
if (taos == NULL) {
fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL));

View File

@ -1971,7 +1971,11 @@ static int createSuperTable(TAOS * taos, char* dbName, SSuperTable* superTbls,
static int createDatabases() {
TAOS * taos = NULL;
int ret = 0;
taos_init();
if (taos_init()) {
fprintf(stderr, "Failed to init taos\n");
exit(-1);
}
taos = taos_connect(g_Dbs.host, g_Dbs.user, g_Dbs.password, NULL, g_Dbs.port);
if (taos == NULL) {
fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL));
@ -4496,7 +4500,11 @@ void *subQueryProcess(void *sarg) {
int queryTestProcess() {
TAOS * taos = NULL;
taos_init();
if (taos_init()) {
fprintf(stderr, "Failed to init taos\n");
exit(-1);
}
taos = taos_connect(g_queryInfo.host, g_queryInfo.user, g_queryInfo.password, NULL, g_queryInfo.port);
if (taos == NULL) {
fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL));
@ -4772,7 +4780,11 @@ int subscribeTestProcess() {
}
TAOS * taos = NULL;
taos_init();
if (taos_init()) {
fprintf(stderr, "Failed to init taos\n");
exit(-1);
}
taos = taos_connect(g_queryInfo.host, g_queryInfo.user, g_queryInfo.password, g_queryInfo.dbName, g_queryInfo.port);
if (taos == NULL) {
fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL));

View File

@ -103,7 +103,9 @@ int32_t monInitSystem() {
}
int32_t monStartSystem() {
taos_init();
if (taos_init()) {
return -1;
}
tsMonitor.start = 1;
monExecuteSQLFp = monExecuteSQL;
monInfo("monitor module start");

View File

@ -76,6 +76,7 @@ typedef struct SQuerySQL {
typedef struct SCreatedTableInfo {
SStrToken name; // table name token
SStrToken stableName; // super table name token , for using clause
SArray *pTagNames; // tag names, set when the table is created via a super table USING clause
SArray *pTagVals; // create by using super table, tag value
char *fullname; // table full name
STagData tagdata; // true tag data, super table full name is in STagData
@ -246,7 +247,7 @@ SCreateTableSQL *tSetCreateSqlElems(SArray *pCols, SArray *pTags, SQuerySQL *pSe
void tSqlExprNodeDestroy(tSQLExpr *pExpr);
SAlterTableInfo * tAlterTableSqlElems(SStrToken *pTableName, SArray *pCols, SArray *pVals, int32_t type, int16_t tableTable);
SCreatedTableInfo createNewChildTableInfo(SStrToken *pTableName, SArray *pTagVals, SStrToken *pToken, SStrToken* igExists);
SCreatedTableInfo createNewChildTableInfo(SStrToken *pTableName, SArray *pTagNames, SArray *pTagVals, SStrToken *pToken, SStrToken* igExists);
void destroyAllSelectClause(SSubclauseInfo *pSql);
void doDestroyQuerySql(SQuerySQL *pSql);

View File

@ -356,9 +356,20 @@ create_stable_args(A) ::= ifnotexists(U) ids(V) cpxName(Z) LP columnlist(X) RP T
create_from_stable(A) ::= ifnotexists(U) ids(V) cpxName(Z) USING ids(X) cpxName(F) TAGS LP tagitemlist(Y) RP. {
X.n += F.n;
V.n += Z.n;
A = createNewChildTableInfo(&X, Y, &V, &U);
A = createNewChildTableInfo(&X, NULL, Y, &V, &U);
}
create_from_stable(A) ::= ifnotexists(U) ids(V) cpxName(Z) USING ids(X) cpxName(F) LP tagNamelist(P) RP TAGS LP tagitemlist(Y) RP. {
X.n += F.n;
V.n += Z.n;
A = createNewChildTableInfo(&X, P, Y, &V, &U);
}
%type tagNamelist{SArray*}
%destructor tagNamelist {taosArrayDestroy($$);}
tagNamelist(A) ::= tagNamelist(X) COMMA ids(Y). {taosArrayPush(X, &Y); A = X; }
tagNamelist(A) ::= ids(X). {A = taosArrayInit(4, sizeof(SStrToken)); taosArrayPush(A, &X);}
// create stream
// create table table_name as select count(*) from super_table_name interval(time)
create_table_args(A) ::= ifnotexists(U) ids(V) cpxName(Z) AS select(S). {
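The new tagNamelist rules let a child table set only a subset of the super table's tags; a sketch of the resulting SQL issued through the Python connector, assuming a super table st2 with tags (id, t1, t2, t3) as in the sim test later in this commit:

```python
import taos

conn = taos.connect(host="127.0.0.1", user="root", password="taosdata", database="testdb")
cur = conn.cursor()
# Only id and t1 are named, so t2 and t3 default to NULL.
cur.execute("create table tb_demo using st2 (id, t1) tags (1, 2)")
conn.close()
```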

View File

@ -2574,6 +2574,10 @@ static void bottom_function(SQLFunctionCtx *pCtx) {
STopBotInfo *pRes = getTopBotOutputInfo(pCtx);
if ((void *)pRes->res[0] != (void *)((char *)pRes + sizeof(STopBotInfo) + POINTER_BYTES * pCtx->param[0].i64)) {
buildTopBotStruct(pRes, pCtx);
}
for (int32_t i = 0; i < pCtx->size; ++i) {
char *data = GET_INPUT_DATA(pCtx, i);
TSKEY ts = GET_TS_DATA(pCtx, i);
@ -2608,6 +2612,11 @@ static void bottom_function_f(SQLFunctionCtx *pCtx, int32_t index) {
}
STopBotInfo *pRes = getTopBotOutputInfo(pCtx);
if ((void *)pRes->res[0] != (void *)((char *)pRes + sizeof(STopBotInfo) + POINTER_BYTES * pCtx->param[0].i64)) {
buildTopBotStruct(pRes, pCtx);
}
SET_VAL(pCtx, 1, 1);
do_bottom_function_add(pRes, (int32_t)pCtx->param[0].i64, pData, ts, pCtx->inputType, &pCtx->tagInfo, NULL, 0);

View File

@ -497,6 +497,7 @@ static void freeVariant(void *pItem) {
void freeCreateTableInfo(void* p) {
SCreatedTableInfo* pInfo = (SCreatedTableInfo*) p;
taosArrayDestroy(pInfo->pTagNames);
taosArrayDestroyEx(pInfo->pTagVals, freeVariant);
tfree(pInfo->fullname);
tfree(pInfo->tagdata.data);
@ -574,11 +575,12 @@ SCreateTableSQL *tSetCreateSqlElems(SArray *pCols, SArray *pTags, SQuerySQL *pSe
return pCreate;
}
SCreatedTableInfo createNewChildTableInfo(SStrToken *pTableName, SArray *pTagVals, SStrToken *pToken, SStrToken* igExists) {
SCreatedTableInfo createNewChildTableInfo(SStrToken *pTableName, SArray *pTagNames, SArray *pTagVals, SStrToken *pToken, SStrToken* igExists) {
SCreatedTableInfo info;
memset(&info, 0, sizeof(SCreatedTableInfo));
info.name = *pToken;
info.pTagNames = pTagNames;
info.pTagVals = pTagVals;
info.stableName = *pTableName;
info.igExist = (igExists->n > 0)? 1:0;

File diff suppressed because it is too large

View File

@ -50,7 +50,8 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
STsdbMeta *pMeta = pRepo->tsdbMeta;
STable * super = NULL;
STable * table = NULL;
int newSuper = 0;
bool newSuper = false;
bool superChanged = false;
int tid = pCfg->tableId.tid;
STable * pTable = NULL;
@ -85,7 +86,7 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
if (pCfg->type == TSDB_CHILD_TABLE) {
super = tsdbGetTableByUid(pMeta, pCfg->superUid);
if (super == NULL) { // super table not exists, try to create it
newSuper = 1;
newSuper = true;
super = tsdbCreateTableFromCfg(pCfg, true);
if (super == NULL) goto _err;
} else {
@ -93,6 +94,17 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
terrno = TSDB_CODE_TDB_IVD_CREATE_TABLE_INFO;
goto _err;
}
if (schemaVersion(pCfg->tagSchema) > schemaVersion(super->tagSchema)) {
// tag schema out of date, need to update super table tag version
STSchema *pOldSchema = super->tagSchema;
TSDB_WLOCK_TABLE(super);
super->tagSchema = tdDupSchema(pCfg->tagSchema);
TSDB_WUNLOCK_TABLE(super);
tdFreeSchema(pOldSchema);
superChanged = true;
}
}
}
@ -117,7 +129,7 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
// TODO: refactor duplicate codes
int tlen = 0;
void *pBuf = NULL;
if (newSuper) {
if (newSuper || superChanged) {
tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, super);
pBuf = tsdbAllocBytes(pRepo, tlen);
if (pBuf == NULL) goto _err;

View File

@ -1,6 +1,8 @@
CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
PROJECT(TDengine)
ADD_DEFINITIONS(-DWAL_CHECKSUM_WHOLE)
INCLUDE_DIRECTORIES(inc)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR}/src SRC)

View File

@ -111,6 +111,28 @@ void walRemoveAllOldFiles(void *handle) {
pthread_mutex_unlock(&pWal->mutex);
}
#if defined(WAL_CHECKSUM_WHOLE)
static void walUpdateChecksum(SWalHead *pHead) {
pHead->sver = 1;
pHead->cksum = 0;
pHead->cksum = taosCalcChecksum(0, (uint8_t *)pHead, sizeof(*pHead) + pHead->len);
}
static int walValidateChecksum(SWalHead *pHead) {
if (pHead->sver == 0) { // for compatible with wal before sver 1
return taosCheckChecksumWhole((uint8_t *)pHead, sizeof(*pHead));
} else if (pHead->sver == 1) {
uint32_t cksum = pHead->cksum;
pHead->cksum = 0;
return taosCheckChecksum((uint8_t *)pHead, sizeof(*pHead) + pHead->len, cksum);
}
return 0;
}
#endif
int32_t walWrite(void *handle, SWalHead *pHead) {
if (handle == NULL) return -1;
@ -123,7 +145,13 @@ int32_t walWrite(void *handle, SWalHead *pHead) {
if (pHead->version <= pWal->version) return 0;
pHead->signature = WAL_SIGNATURE;
#if defined(WAL_CHECKSUM_WHOLE)
walUpdateChecksum(pHead);
#else
pHead->sver = 0;
taosCalcChecksumAppend(0, (uint8_t *)pHead, sizeof(SWalHead));
#endif
int32_t contLen = pHead->len + sizeof(SWalHead);
pthread_mutex_lock(&pWal->mutex);
@ -246,16 +274,40 @@ static int32_t walSkipCorruptedRecord(SWal *pWal, SWalHead *pHead, int64_t tfd,
continue;
}
if (taosCheckChecksumWhole((uint8_t *)pHead, sizeof(SWalHead))) {
#if defined(WAL_CHECKSUM_WHOLE)
if (pHead->sver == 0 && walValidateChecksum(pHead)) {
wInfo("vgId:%d, wal head cksum check passed, offset:%" PRId64, pWal->vgId, pos);
*offset = pos;
return TSDB_CODE_SUCCESS;
}
if (pHead->sver == 1) {
if (tfRead(tfd, pHead->cont, pHead->len) < pHead->len) {
wError("vgId:%d, read to end of corrupted wal file, offset:%" PRId64, pWal->vgId, pos);
return TSDB_CODE_WAL_FILE_CORRUPTED;
}
if (walValidateChecksum(pHead)) {
wInfo("vgId:%d, wal whole cksum check passed, offset:%" PRId64, pWal->vgId, pos);
*offset = pos;
return TSDB_CODE_SUCCESS;
}
}
#else
if (taosCheckChecksumWhole((uint8_t *)pHead, sizeof(SWalHead))) {
wInfo("vgId:%d, wal head cksum check passed, offset:%" PRId64, pWal->vgId, pos);
*offset = pos;
return TSDB_CODE_SUCCESS;
}
#endif
}
return TSDB_CODE_WAL_FILE_CORRUPTED;
}
static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, char *name, int64_t fileId) {
int32_t size = WAL_MAX_SIZE;
void * buffer = tmalloc(size);
@ -293,6 +345,51 @@ static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, ch
break;
}
#if defined(WAL_CHECKSUM_WHOLE)
if (pHead->sver == 0 && !walValidateChecksum(pHead)) {
wError("vgId:%d, file:%s, wal head cksum is messed up, hver:%" PRIu64 " len:%d offset:%" PRId64, pWal->vgId, name,
pHead->version, pHead->len, offset);
code = walSkipCorruptedRecord(pWal, pHead, tfd, &offset);
if (code != TSDB_CODE_SUCCESS) {
walFtruncate(pWal, tfd, offset);
break;
}
}
if (pHead->len < 0 || pHead->len > size - sizeof(SWalHead)) {
wError("vgId:%d, file:%s, wal head len out of range, hver:%" PRIu64 " len:%d offset:%" PRId64, pWal->vgId, name,
pHead->version, pHead->len, offset);
code = walSkipCorruptedRecord(pWal, pHead, tfd, &offset);
if (code != TSDB_CODE_SUCCESS) {
walFtruncate(pWal, tfd, offset);
break;
}
}
ret = (int32_t)tfRead(tfd, pHead->cont, pHead->len);
if (ret < 0) {
wError("vgId:%d, file:%s, failed to read wal body since %s", pWal->vgId, name, strerror(errno));
code = TAOS_SYSTEM_ERROR(errno);
break;
}
if (ret < pHead->len) {
wError("vgId:%d, file:%s, failed to read wal body, ret:%d len:%d", pWal->vgId, name, ret, pHead->len);
offset += sizeof(SWalHead);
continue;
}
if (pHead->sver == 1 && !walValidateChecksum(pHead)) {
wError("vgId:%d, file:%s, wal whole cksum is messed up, hver:%" PRIu64 " len:%d offset:%" PRId64, pWal->vgId, name,
pHead->version, pHead->len, offset);
code = walSkipCorruptedRecord(pWal, pHead, tfd, &offset);
if (code != TSDB_CODE_SUCCESS) {
walFtruncate(pWal, tfd, offset);
break;
}
}
#else
if (!taosCheckChecksumWhole((uint8_t *)pHead, sizeof(SWalHead))) {
wError("vgId:%d, file:%s, wal head cksum is messed up, hver:%" PRIu64 " len:%d offset:%" PRId64, pWal->vgId, name,
pHead->version, pHead->len, offset);
@ -326,6 +423,7 @@ static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, ch
continue;
}
#endif
offset = offset + sizeof(SWalHead) + pHead->len;
wTrace("vgId:%d, restore wal, fileId:%" PRId64 " hver:%" PRIu64 " wver:%" PRIu64 " len:%d", pWal->vgId,

7
tests/Jenkinsfile vendored
View File

@ -109,6 +109,13 @@ pipeline {
java --class-path=../../../../src/connector/jdbc/target:$JAVA_HOME/jre/lib/ext -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cp -rf ${WKC}/tests/examples/nodejs ${JENKINS_HOME}/workspace/
cd ${JENKINS_HOME}/workspace/nodejs
node nodejsChecker.js host=localhost
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${JENKINS_HOME}/workspace/C#NET/src/CheckC#

View File

@ -62,7 +62,10 @@ int main(int argc, char *argv[]) {
}
// init TAOS
taos_init();
if (taos_init()) {
exit(1);
}
TAOS *taos = taos_connect(argv[1], "root", "taosdata", NULL, 0);
if (taos == NULL) {
printf("failed to connect to server, reason:%s\n", "null taos"/*taos_errstr(taos)*/);

View File

@ -23,7 +23,10 @@ int main(int argc, char *argv[])
}
// init TAOS
taos_init();
if (taos_init()) {
printf("failed to init taos\n");
exit(1);
}
taos = taos_connect(argv[1], "root", "taosdata", NULL, 0);
if (taos == NULL) {

View File

@ -55,7 +55,10 @@ int main(int argc, char *argv[])
}
// init TAOS
taos_init();
if (taos_init()) {
printf("failed to init taos\n");
exit(1);
}
strcpy(db_name, argv[2]);
strcpy(tbl_name, argv[3]);

View File

@ -217,7 +217,10 @@ int main(int argc, char *argv[]) {
}
// init TAOS
taos_init();
if (taos_init()) {
printf("failed to init taos\n");
exit(1);
}
TAOS* taos = taos_connect(host, user, passwd, "", 0);
if (taos == NULL) {

View File

@ -354,10 +354,11 @@ class ThreadCoordinator:
# end, and maybe signal them to stop
if isinstance(err, CrashGenError): # our own transition failure
Logging.info("State transition error")
# TODO: saw an error here once, let's print out stack info for err?
traceback.print_stack()
transitionFailed = True
self._te = None # Not running any more
self._execStats.registerFailure("State transition error")
self._execStats.registerFailure("State transition error: {}".format(err))
else:
raise
# return transitionFailed # Why did we have this??!!
@ -882,8 +883,12 @@ class StateMechine:
self._stateWeights = [1, 2, 10, 40]
def init(self, dbc: DbConn): # late initailization, don't save the dbConn
try:
self._curState = self._findCurrentState(dbc) # starting state
Logging.debug("Found Starting State: {}".format(self._curState))
except taos.error.ProgrammingError as err:
Logging.error("Failed to initialized state machine, cannot find current state: {}".format(err))
traceback.print_stack()
raise # re-throw
# TODO: seems no longer used, remove?
def getCurrentState(self):
@ -951,6 +956,8 @@ class StateMechine:
# We transition the system to a new state by examining the current state itself
def transition(self, tasks, dbc: DbConn):
global gSvcMgr
if (len(tasks) == 0): # before 1st step, or otherwise empty
Logging.debug("[STT] Starting State: {}".format(self._curState))
return # do nothing
@ -2370,7 +2377,7 @@ class MainExec:
'-n',
'--dynamic-db-table-names',
action='store_true',
help='Use non-fixed names for dbs/tables, useful for multi-instance executions (default: false)')
help='Use non-fixed names for dbs/tables, for -b, useful for multi-instance executions (default: false)')
parser.add_argument(
'-o',
'--num-dnodes',

View File

@ -15,6 +15,7 @@ from util.log import *
from .misc import Logging, CrashGenError, Helper, Dice
import os
import datetime
import traceback
# from .service_manager import TdeInstance
class DbConn:
@ -349,6 +350,7 @@ class DbConnNative(DbConn):
def execute(self, sql):
if (not self.isOpen):
traceback.print_stack()
raise CrashGenError(
"Cannot exec SQL unless db connection is open", CrashGenError.DB_CONNECTION_NOT_OPEN)
Logging.debug("[SQL] Executing SQL: {}".format(sql))
@ -361,6 +363,7 @@ class DbConnNative(DbConn):
def query(self, sql): # return rows affected
if (not self.isOpen):
traceback.print_stack()
raise CrashGenError(
"Cannot query database until connection is open, restarting?", CrashGenError.DB_CONNECTION_NOT_OPEN)
Logging.debug("[SQL] Executing SQL: {}".format(sql))

View File

@ -0,0 +1,56 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
ret = tdSql.execute(
'create table tb (ts timestamp, speed int unsigned)')
insertRows = 10
tdLog.info("insert %d rows" % (insertRows))
for i in range(0, insertRows):
ret = tdSql.execute(
'insert into tb values (now + %dm, %d)' %
(i, i))
tdLog.info("insert earlier data")
tdSql.execute('insert into tb values (now - 5m , 10)')
tdSql.execute('insert into tb values (now - 6m , 10)')
tdSql.execute('insert into tb values (now - 7m , 10)')
tdSql.execute('insert into tb values (now - 8m , 4294967294)')
tdSql.error('insert into tb values (now - 9m, -1)')
tdSql.error('insert into tb values (now - 9m, 4294967295)')
tdSql.query("select * from tb")
tdSql.checkRows(insertRows + 4)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
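The same unsigned-column behaviour, sketched directly against the connector (database name is hypothetical; 4294967295 is reserved as the NULL sentinel, so 4294967294 is the largest storable value):

```python
import taos

conn = taos.connect(host="127.0.0.1", user="root", password="taosdata")
cur = conn.cursor()
cur.execute("create database if not exists u_demo")
cur.execute("create table if not exists u_demo.tb (ts timestamp, speed int unsigned)")
cur.execute("insert into u_demo.tb values (now, 4294967294)")   # accepted
cur.execute("select * from u_demo.tb")
print(cur.fetchall())
conn.close()
```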

View File

@ -16,10 +16,10 @@ python3 ./test.py -f insert/nchar.py
python3 ./test.py -f insert/nchar-unicode.py
python3 ./test.py -f insert/multi.py
python3 ./test.py -f insert/randomNullCommit.py
#python3 insert/retentionpolicy.py
python3 insert/retentionpolicy.py
python3 ./test.py -f insert/alterTableAndInsert.py
python3 ./test.py -f insert/insertIntoTwoTables.py
#python3 ./test.py -f insert/before_1970.py
python3 ./test.py -f insert/before_1970.py
python3 bug2265.py
#table

View File

@ -23,4 +23,11 @@ python3 ./test.py -f functions/function_sum.py -r 1
python3 ./test.py -f functions/function_top.py -r 1
python3 ./test.py -f functions/function_twa.py -r 1
python3 ./test.py -f functions/function_twa_test2.py
python3 ./test.py -f functions/function_stddev_td2555.pyhao
python3 ./test.py -f functions/function_stddev_td2555.py
python3 ./test.py -f insert/metadataUpdate.py
python3 ./test.py -f tools/taosdemoTest2.py
python3 ./test.py -f query/last_cache.py
python3 ./test.py -f query/last_row_cache.py
python3 ./test.py -f account/account_create.py
python3 ./test.py -f alter/alter_table.py
python3 ./test.py -f query/queryGroupbySort.py

View File

@ -0,0 +1,162 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/cfg.sh -n dnode1 -c walLevel -v 0
system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2
system sh/exec.sh -n dnode1 -s start
sleep 100
sql connect
print ======================== dnode1 start
$db = testdb
sql create database $db
sql use $db
sql create stable st2 (ts timestamp, f1 int) tags (id int, t1 int, t2 nchar(4), t3 double)
sql insert into tb1 using st2 (id, t1) tags(1,2) values (now, 1)
sql select id,t1,t2,t3 from tb1
if $rows != 1 then
return -1
endi
if $data00 != 1 then
return -1
endi
if $data01 != 2 then
return -1
endi
if $data02 != NULL then
return -1
endi
if $data03 != NULL then
return -1
endi
sql create table tb2 using st2 (t2,t3) tags ("12",22.0)
sql select id,t1,t2,t3 from tb2;
if $rows != 1 then
return -1
endi
if $data00 != NULL then
return -1
endi
if $data01 != NULL then
return -1
endi
if $data02 != 12 then
return -1
endi
if $data03 != 22.000000000 then
return -1
endi
sql create table tb3 using st2 tags (1,2,"3",33.0);
sql select id,t1,t2,t3 from tb3;
if $rows != 1 then
return -1
endi
if $data00 != 1 then
return -1
endi
if $data01 != 2 then
return -1
endi
if $data02 != 3 then
return -1
endi
if $data03 != 33.000000000 then
return -1
endi
sql insert into tb4 using st2 tags(1,2,"33",44.0) values (now, 1);
sql select id,t1,t2,t3 from tb4;
if $rows != 1 then
return -1
endi
if $data00 != 1 then
return -1
endi
if $data01 != 2 then
return -1
endi
if $data02 != 33 then
return -1
endi
if $data03 != 44.000000000 then
return -1
endi
sql_error create table tb5 using st2() tags (3,3,"3",33.0);
sql_error create table tb6 using st2 (id,t1) tags (3,3,"3",33.0);
sql_error create table tb7 using st2 (id,t1) tags (3);
sql_error create table tb8 using st2 (ide) tags (3);
sql_error create table tb9 using st2 (id);
sql_error create table tb10 using st2 (id t1) tags (1,1);
sql_error create table tb10 using st2 (id,,t1) tags (1,1,1);
sql_error create table tb11 using st2 (id,t1,) tags (1,1,1);
sql create table tb12 using st2 (t1,id) tags (2,1);
sql select id,t1,t2,t3 from tb12;
if $rows != 1 then
return -1
endi
if $data00 != 1 then
return -1
endi
if $data01 != 2 then
return -1
endi
if $data02 != NULL then
return -1
endi
if $data03 != NULL then
return -1
endi
sql create table tb13 using st2 ("t1",'id') tags (2,1);
sql select id,t1,t2,t3 from tb13;
if $rows != 1 then
return -1
endi
if $data00 != 1 then
return -1
endi
if $data01 != 2 then
return -1
endi
if $data02 != NULL then
return -1
endi
if $data03 != NULL then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT

View File

@ -72,4 +72,3 @@ cd ../../../debug; make
./test.sh -f unique/cluster/cache.sim
./test.sh -f unique/cluster/vgroup100.sim
./test.sh -f unique/column/replica3.sim

View File

@ -36,3 +36,11 @@
./test.sh -f general/stable/show.sim
./test.sh -f general/stable/values.sim
./test.sh -f general/stable/vnode3.sim
./test.sh -f unique/column/replica3.sim
./test.sh -f issue/TD-2713.sim
./test.sh -f general/parser/select_distinct_tag.sim
./test.sh -f unique/mnode/mgmt30.sim
./test.sh -f issue/TD-2677.sim
./test.sh -f issue/TD-2680.sim
./test.sh -f unique/dnode/lossdata.sim

View File

@ -81,7 +81,9 @@ char *simParseHostName(char *varName) {
}
bool simSystemInit() {
taos_init();
if (taos_init()) {
return false;
}
taosGetFqdn(simHostName);
simInitsimCmdList();
memset(simScriptList, 0, sizeof(SScript *) * MAX_MAIN_SCRIPT_NUM);