[TD-13408]<test>: move tests in for 3.0 (#10598)

* restore .gitmodules
* Revert "[TD-13408]<test>: move tests out" — this reverts commit f80a4ca49ff37431bc0bc0dafb5ccf5858b00beb.
* revert f80a4ca49ff37431bc0bc0dafb5ccf5858b00beb
* migrate file changes from the stand-alone repo to TDengine for 3.0
* remove the tests repository from Jenkinsfile2

Co-authored-by: tangfangzhi <fztang@taosdata.com>
This commit is contained in:
parent 8f108bfb83
commit 43feda0595
.gitmodules

```diff
@@ -10,10 +10,3 @@
 [submodule "deps/TSZ"]
 	path = deps/TSZ
 	url = https://github.com/taosdata/TSZ.git
-[submodule "tests"]
-	path = tests
-	url = https://github.com/taosdata/tests
-	branch = 3.0
-[submodule "examples/rust"]
-	path = examples/rust
-	url = https://github.com/songtianyi/tdengine-rust-bindings.git
```
Jenkinsfile2 (31 changed lines)
```diff
@@ -74,37 +74,9 @@ def pre_test(){
     git pull >/dev/null
     git fetch origin +refs/pull/${CHANGE_ID}/merge
     git checkout -qf FETCH_HEAD
-    git submodule update --init --recursive --remote
+    git submodule update --init --recursive
     '''
-    script {
-      if (env.CHANGE_TARGET == 'master') {
-        sh'''
-        cd ${WKCT}
-        git checkout master
-        '''
-      }
-      else if(env.CHANGE_TARGET == '2.0'){
-        sh '''
-        cd ${WKCT}
-        git checkout 2.0
-        '''
-      }
-      else if(env.CHANGE_TARGET == '3.0'){
-        sh '''
-        cd ${WKCT}
-        git checkout 3.0
-        '''
-      }
-      else{
-        sh '''
-        cd ${WKCT}
-        git checkout develop
-        '''
-      }
-    }
-    sh'''
-    cd ${WKCT}
-    git pull >/dev/null
+    sh'''
     cd ${WKC}
     export TZ=Asia/Harbin
     date
```
```diff
@@ -123,7 +95,6 @@ pipeline {
   environment{
       WK = '/var/lib/jenkins/workspace/TDinternal'
       WKC= '/var/lib/jenkins/workspace/TDengine'
-      WKCT= '/var/lib/jenkins/workspace/TDengine/tests'
   }
   stages {
       stage('pre_build'){
```
Deleted submodule pointer:

```diff
@@ -1 +0,0 @@
-Subproject commit 1c8924dc668e6aa848214c2fc54e3ace3f5bf8df
```

tests

```diff
@@ -1 +0,0 @@
-Subproject commit 904e6f0e152e8fe61edfe0a0a9ae497cfde2a72c
```
```diff
@@ -0,0 +1,4 @@
+#ADD_SUBDIRECTORY(examples/c)
+ADD_SUBDIRECTORY(tsim)
+ADD_SUBDIRECTORY(test/c)
+#ADD_SUBDIRECTORY(comparisonTest/tdengine)
```
@@ -0,0 +1,243 @@

### Prepare development environment

1. sudo apt install build-essential cmake net-tools python-pip python-setuptools python3-pip python3-setuptools valgrind psmisc curl

2. git clone <https://github.com/taosdata/TDengine>; cd TDengine

3. mkdir debug; cd debug; cmake ..; make; sudo make install

4. pip install ../src/connector/python; pip3 install ../src/connector/python

5. pip install numpy; pip3 install numpy (numpy is required only if you need to run querySort.py)

> Note: Both Python2 and Python3 are currently supported by the Python test
> framework. Since Python2 has not been officially supported by the Python
> Software Foundation since January 1, 2020, new test cases should be
> guaranteed to run correctly on Python3.
>
> For Python2, please stay compatible where that is possible without
> additional burden.
>
> If you use a newer Linux distribution such as Ubuntu 20.04, which no longer
> includes Python2, please do not install Python2-related packages.
>
> <https://nakedsecurity.sophos.com/2020/01/03/python-is-dead-long-live-python/>
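Per the note above, new test cases should target Python3. A minimal guard at the top of a case can enforce this; this is an illustrative sketch, not part of the framework:

```python
import sys

# New test cases should target Python3 (Python2 reached end of life on
# January 1, 2020). Fail fast when run under an older interpreter.
MIN_VERSION = (3, 0)

if sys.version_info < MIN_VERSION:
    raise RuntimeError("This test case requires Python %d.%d or later" % MIN_VERSION)
```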
### How to run Python test suite

1. cd \<TDengine\>/tests/pytest

2. ./smoketest.sh \# for smoke test

3. ./smoketest.sh -g \# for memory leak detection test with valgrind

4. ./fulltest.sh \# for full test

> Note1: TDengine daemon's configuration and data files are stored in the
> \<TDengine\>/sim directory. As a historical design, this is the same
> location the TSIM scripts use. So if a TSIM script has run with sudo
> privileges, the directory is owned by root afterwards and a Python script
> cannot write to it as a normal user. You need to remove the directory
> completely before running the Python test cases. We should consider using
> two different locations for TSIM and the Python scripts.

> Note2: if you need to debug a crash with a core dump, manually edit
> smoketest.sh or fulltest.sh to add "ulimit -c unlimited" before the script
> line. You can then look for the core file in \<TDengine\>/tests/pytest
> after the program crashes.
### How to add a new test case

**1. TSIM test cases:**

TSIM is the testing framework that has been used internally. It is still used, as a legacy system, to run the test cases we developed in the past. We are turning to Python for new test cases and are gradually abandoning TSIM.

**2. Python test cases:**

**2.1 Please refer to \<TDengine\>/tests/pytest/insert/basic.py to add a new
test case.** The new test case must implement 3 functions: self.init() and
self.stop() can simply copy the contents of insert/basic.py, and the test
logic is implemented in self.run(). You can refer to the code in the util
directory for more information.

**2.2 Edit smoketest.sh to add the path and filename of the new test case.**

Note: The Python test framework may continue to be improved in the future,
hopefully to provide more functionality and ease of writing test cases. The
method of writing test cases described above may also be affected by such
changes.
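The three-function structure described in 2.1 can be sketched as follows. This is a self-contained illustration only: in the real framework, init() receives a TDengine connection and the tdLog/tdSql helpers come from the util package, so the stub logger and placeholder check here are stand-ins, not framework code.

```python
class TDLogStub:
    """Stand-in for util.log.tdLog, which prints timestamped, colored output."""
    def info(self, msg):
        print("INFO  %s" % msg)

    def success(self, msg):
        print("OK    %s" % msg)

tdLog = TDLogStub()

class TDTestCase:
    def init(self, conn):
        # The real framework passes a TDengine connection object here.
        tdLog.info("start to execute test case")
        self.conn = conn

    def run(self):
        # Test logic goes here; a real case would use tdSql helpers such as
        # tdSql.execute / tdSql.query / tdSql.checkRows against TDengine.
        result = 1 + 1          # placeholder check standing in for SQL checks
        assert result == 2

    def stop(self):
        self.conn = None
        tdLog.success("test case successfully executed")

case = TDTestCase()
case.init(conn=object())
case.run()
case.stop()
```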
**2.3 What test.py does in detail:**

test.py is the entry program for test case execution and monitoring.

test.py supports the following options:

\-f --file, specifies the test case file name to be executed
\-p --path, specifies the deployment path
\-m --master, specifies the master server IP for cluster deployment
\-c --cluster, tests cluster functionality
\-s --stop, terminates all running nodes
\-g --valgrind, loads valgrind for the memory leak detection test
\-h --help, displays help
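As an illustration of how an entry script can map these options, here is a minimal sketch using Python's getopt module. The option names mirror the list above, but the handler bodies and the settings dict are hypothetical, not the actual test.py implementation.

```python
import getopt

def parse_args(argv):
    """Parse test.py-style options into a settings dict (illustrative only)."""
    opts, _ = getopt.getopt(
        argv, "f:p:m:csgh",
        ["file=", "path=", "master=", "cluster", "stop", "valgrind", "help"])
    settings = {"file": None, "path": None, "master": None,
                "cluster": False, "stop": False, "valgrind": False}
    for key, value in opts:
        if key in ("-f", "--file"):
            settings["file"] = value          # test case file to execute
        elif key in ("-p", "--path"):
            settings["path"] = value          # deployment path
        elif key in ("-m", "--master"):
            settings["master"] = value        # master server IP for a cluster
        elif key in ("-c", "--cluster"):
            settings["cluster"] = True        # exercise cluster functionality
        elif key in ("-s", "--stop"):
            settings["stop"] = True           # terminate all running nodes
        elif key in ("-g", "--valgrind"):
            settings["valgrind"] = True       # run under valgrind
        elif key in ("-h", "--help"):
            print("usage: test.py -f <case> [-p path] [-m ip] [-c] [-s] [-g]")
    return settings

settings = parse_args(["-f", "insert/basic.py", "-g"])
```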
**2.4 What util/log.py does in detail:**

log.py is quite simple; the main thing it provides is printing output in
different colors as needed. success() should be called on successful test
case execution and prints green text. exit() prints red text and exits the
program; it should be called on test failure.

**util/log.py**

```python
...

def info(self, info):
    print("%s %s" % (datetime.datetime.now(), info))

def sleep(self, sec):
    print("%s sleep %d seconds" % (datetime.datetime.now(), sec))
    time.sleep(sec)

def debug(self, err):
    print("\033[1;36m%s %s\033[0m" % (datetime.datetime.now(), err))

def success(self, info):
    print("\033[1;32m%s %s\033[0m" % (datetime.datetime.now(), info))

def notice(self, err):
    print("\033[1;33m%s %s\033[0m" % (datetime.datetime.now(), err))

def exit(self, err):
    print("\033[1;31m%s %s\033[0m" % (datetime.datetime.now(), err))
    sys.exit(1)

def printNoPrefix(self, info):
    print("\033[1;36m%s\033[0m" % info)

...
```
**2.5 What util/sql.py does in detail:**

sql.py is mainly used to execute SQL statements that manipulate the database.
Its code is excerpted and commented as follows:

**util/sql.py**

```python
# prepare() is mainly used to set up the environment of tables and data for
# testing, creating the database db used by the tests. Do not call prepare()
# if you need to test database management commands themselves.
def prepare(self):
    tdLog.info("prepare database:db")
    self.cursor.execute('reset query cache')
    self.cursor.execute('drop database if exists db')
    self.cursor.execute('create database db')
    self.cursor.execute('use db')
    ...

# query() is mainly used to execute select statements with valid syntax
def query(self, sql):
    ...

# error() is mainly used to execute select statements with invalid syntax;
# the resulting error is caught and treated as the expected behavior. If no
# error is raised, the test fails.
def error(self, sql):
    ...

# checkRows() checks the number of rows returned after calling
# query(select ...)
def checkRows(self, expectRows):
    ...

# checkData() checks the result data returned after calling query(select ...);
# the test fails if the data does not meet the expectation
def checkData(self, row, col, data):
    ...

# getData() returns the result data after calling query(select ...)
def getData(self, row, col):
    ...

# execute() executes a sql statement and returns the number of affected rows
def execute(self, sql):
    ...

# executeTimes() executes the same sql statement multiple times
def executeTimes(self, sql, times):
    ...

# checkAffectedRows() checks whether the number of affected rows is as expected
def checkAffectedRows(self, expectAffectedRows):
    ...
```
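To make the check-helper pattern above concrete, here is a self-contained sketch of checkRows/checkData-style helpers. It substitutes Python's built-in sqlite3 for TDengine purely so the example can run anywhere; the real tdSql helpers operate on a TDengine cursor with the same shape of API.

```python
import sqlite3

class SqlChecker:
    """Minimal tdSql-style helper over sqlite3 (illustration only)."""
    def __init__(self, cursor):
        self.cursor = cursor
        self.rows = []

    def query(self, sql):
        # Run a select and cache the result set for subsequent checks.
        self.cursor.execute(sql)
        self.rows = self.cursor.fetchall()
        return len(self.rows)

    def checkRows(self, expect_rows):
        # Fail the test if the row count does not match the expectation.
        assert len(self.rows) == expect_rows, \
            "expected %d rows, got %d" % (expect_rows, len(self.rows))

    def checkData(self, row, col, data):
        # Fail the test if one cell of the cached result does not match.
        assert self.rows[row][col] == data, \
            "row %d col %d: expected %r, got %r" % (row, col, data, self.rows[row][col])

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table t (ts integer, v integer)")
cur.executemany("insert into t values (?, ?)", [(1, 10), (2, 20)])

checker = SqlChecker(cur)
checker.query("select * from t order by ts")
checker.checkRows(2)
checker.checkData(1, 1, 20)
```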
### CI submission adoption principles

- Every commit / PR must compile. Currently warnings are treated as errors, so warnings must also be resolved.

- Test cases that already exist must pass.

- Because CI is very important for supporting the build and automated test procedure, a test case must be tested manually before it is added, with as many iterations as needed to ensure that it provides stable and reliable results once added.

> Note: In the future, according to requirements and test development
> progress, stress testing, performance testing, code style checks, and other
> features will be added on top of functional testing.
@@ -0,0 +1,328 @@

```groovy
def pre_test(){
    sh '''
    sudo rmtaos||echo 'no taosd installed'
    '''
    sh '''
    cd ${WKC}
    git reset --hard
    git checkout $BRANCH_NAME
    git pull
    git submodule update
    cd ${WK}
    git reset --hard
    git checkout $BRANCH_NAME
    git pull
    export TZ=Asia/Harbin
    date
    rm -rf ${WK}/debug
    mkdir debug
    cd debug
    cmake .. > /dev/null
    make > /dev/null
    make install > /dev/null
    pip3 install ${WKC}/src/connector/python
    '''
    return 1
}
def pre_test_p(){
    sh '''
    sudo rmtaos||echo 'no taosd installed'
    '''
    sh '''
    cd ${WKC}
    git reset --hard
    git checkout $BRANCH_NAME
    git pull
    git submodule update
    cd ${WK}
    git reset --hard
    git checkout $BRANCH_NAME
    git pull
    export TZ=Asia/Harbin
    date
    rm -rf ${WK}/debug
    mkdir debug
    cd debug
    cmake .. > /dev/null
    make > /dev/null
    make install > /dev/null
    pip3 install ${WKC}/src/connector/python
    '''
    return 1
}
pipeline {
  agent none
  environment{
      WK = '/data/lib/jenkins/workspace/TDinternal'
      WKC= '/data/lib/jenkins/workspace/TDinternal/community'
  }
  stages {
      stage('Parallel test stage') {
        parallel {
          stage('pytest') {
            agent{label 'slad1'}
            steps {
              pre_test_p()
              sh '''
              cd ${WKC}/tests
              find pytest -name '*'sql|xargs rm -rf
              ./test-all.sh pytest
              date'''
            }
          }
          stage('test_b1') {
            agent{label 'slad2'}
            steps {
              pre_test()
              sh '''
              cd ${WKC}/tests
              ./test-all.sh b1
              date'''
            }
          }
          stage('test_crash_gen') {
            agent{label "slad3"}
            steps {
              pre_test()
              sh '''
              cd ${WKC}/tests/pytest
              '''
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/pytest
                  ./crash_gen.sh -a -p -t 4 -s 2000
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/pytest
                  rm -rf /var/lib/taos/*
                  rm -rf /var/log/taos/*
                  ./handle_crash_gen_val_log.sh
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/pytest
                  rm -rf /var/lib/taos/*
                  rm -rf /var/log/taos/*
                  ./handle_taosd_val_log.sh
                  '''
              }
              sh'''
              nohup taosd >/dev/null &
              sleep 10
              '''
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/gotest
                  bash batchtest.sh
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/examples/python/PYTHONConnectorChecker
                  python3 PythonChecker.py
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/examples/JDBC/JDBCDemo/
                  mvn clean package >/dev/null
                  java -jar target/JdbcRestfulDemo-jar-with-dependencies.jar
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cp -rf ${WKC}/tests/examples/nodejs ${JENKINS_HOME}/workspace/
                  cd ${JENKINS_HOME}/workspace/nodejs
                  node nodejsChecker.js host=localhost
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${JENKINS_HOME}/workspace/C#NET/src/CheckC#
                  dotnet run
                  '''
              }
              sh '''
              pkill -9 taosd || echo 1
              cd ${WKC}/tests
              ./test-all.sh b2
              date
              '''
              sh '''
              cd ${WKC}/tests
              ./test-all.sh full unit
              date'''
            }
          }
          stage('test_valgrind') {
            agent{label "slad4"}
            steps {
              pre_test()
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/pytest
                  nohup taosd >/dev/null &
                  sleep 10
                  python3 concurrent_inquiry.py -c 1
                  '''
              }
              sh '''
              cd ${WKC}/tests
              ./test-all.sh full jdbc
              date'''
              sh '''
              cd ${WKC}/tests/pytest
              ./valgrind-test.sh 2>&1 > mem-error-out.log
              ./handle_val_log.sh
              date
              cd ${WKC}/tests
              ./test-all.sh b3
              date'''
              sh '''
              date
              cd ${WKC}/tests
              ./test-all.sh full example
              date'''
            }
          }
          stage('arm64_build'){
            agent{label 'arm64'}
            steps{
                sh '''
                cd ${WK}
                git fetch
                git checkout develop
                git pull
                cd ${WKC}
                git fetch
                git checkout develop
                git pull
                git submodule update
                cd ${WKC}/packaging
                ./release.sh -v cluster -c aarch64 -n 2.0.0.0 -m 2.0.0.0
                '''
            }
          }
          stage('arm32_build'){
            agent{label 'arm32'}
            steps{
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh '''
                    cd ${WK}
                    git fetch
                    git checkout develop
                    git pull
                    cd ${WKC}
                    git fetch
                    git checkout develop
                    git pull
                    git submodule update
                    cd ${WKC}/packaging
                    ./release.sh -v cluster -c aarch32 -n 2.0.0.0 -m 2.0.0.0
                    '''
                }
            }
          }
        }
      }
  }
  post {
    success {
      emailext (
        subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' SUCCESS",
        body: """<!DOCTYPE html>
        <html>
        <head><meta charset="UTF-8"></head>
        <body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
        <table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
          <tr>
            <td><br />
            <b><font color="#0B610B"><font size="6">Build information</font></font></b>
            <hr size="2" width="100%" align="center" /></td>
          </tr>
          <tr>
            <td>
              <ul>
                <div style="font-size:18px">
                  <li>Build name >> branch: ${env.BRANCH_NAME}</li>
                  <li>Build result: <span style="color:green"> Successful </span></li>
                  <li>Build number: ${BUILD_NUMBER}</li>
                  <li>Triggered by: ${env.CHANGE_AUTHOR}</li>
                  <li>Commit message: ${env.CHANGE_TITLE}</li>
                  <li>Build URL: <a href=${BUILD_URL}>${BUILD_URL}</a></li>
                  <li>Build log: <a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
                </div>
              </ul>
            </td>
          </tr>
        </table>
        </body>
        </html>""",
        to: "yqliu@taosdata.com,pxiao@taosdata.com",
        from: "support@taosdata.com"
      )
    }
    failure {
      emailext (
        subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' FAIL",
        body: """<!DOCTYPE html>
        <html>
        <head><meta charset="UTF-8"></head>
        <body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
        <table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
          <tr>
            <td><br />
            <b><font color="#0B610B"><font size="6">Build information</font></font></b>
            <hr size="2" width="100%" align="center" /></td>
          </tr>
          <tr>
            <td>
              <ul>
                <div style="font-size:18px">
                  <li>Build name >> branch: ${env.BRANCH_NAME}</li>
                  <li>Build result: <span style="color:red"> Failure </span></li>
                  <li>Build number: ${BUILD_NUMBER}</li>
                  <li>Triggered by: ${env.CHANGE_AUTHOR}</li>
                  <li>Commit message: ${env.CHANGE_TITLE}</li>
                  <li>Build URL: <a href=${BUILD_URL}>${BUILD_URL}</a></li>
                  <li>Build log: <a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
                </div>
              </ul>
            </td>
          </tr>
        </table>
        </body>
        </html>""",
        to: "yqliu@taosdata.com,pxiao@taosdata.com",
        from: "support@taosdata.com"
      )
    }
  }
}
```
@@ -0,0 +1,336 @@

```groovy
def pre_test(){
    sh '''
    sudo rmtaos||echo 'no taosd installed'
    '''
    sh '''
    cd ${WKC}
    git reset --hard
    git checkout $BRANCH_NAME
    git pull
    git submodule update
    cd ${WK}
    git reset --hard
    git checkout $BRANCH_NAME
    git pull
    export TZ=Asia/Harbin
    date
    rm -rf ${WK}/debug
    mkdir debug
    cd debug
    cmake .. > /dev/null
    make > /dev/null
    make install > /dev/null
    pip3 install ${WKC}/src/connector/python/ || echo 0
    '''
    return 1
}
def pre_test_p(){
    sh '''
    sudo rmtaos||echo 'no taosd installed'
    '''
    sh '''
    cd ${WKC}
    git reset --hard
    git checkout $BRANCH_NAME
    git pull
    git submodule update
    cd ${WK}
    git reset --hard
    git checkout $BRANCH_NAME
    git pull
    export TZ=Asia/Harbin
    date
    rm -rf ${WK}/debug
    mkdir debug
    cd debug
    cmake .. > /dev/null
    make > /dev/null
    make install > /dev/null
    pip3 install ${WKC}/src/connector/python/ || echo 0
    '''
    return 1
}
pipeline {
  agent none
  environment{
      WK = '/data/lib/jenkins/workspace/TDinternal'
      WKC= '/data/lib/jenkins/workspace/TDinternal/community'
  }
  stages {
      stage('Parallel test stage') {
        parallel {
          stage('pytest') {
            agent{label 'slam1'}
            steps {
              pre_test_p()
              sh '''
              cd ${WKC}/tests
              find pytest -name '*'sql|xargs rm -rf
              ./test-all.sh pytest
              date'''
            }
          }
          stage('test_b1') {
            agent{label 'slam2'}
            steps {
              pre_test()
              sh '''
              cd ${WKC}/tests
              ./test-all.sh b1
              date'''
            }
          }
          stage('test_crash_gen') {
            agent{label "slam3"}
            steps {
              pre_test()
              sh '''
              cd ${WKC}/tests/pytest
              '''
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/pytest
                  ./crash_gen.sh -a -p -t 4 -s 2000
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/pytest
                  rm -rf /var/lib/taos/*
                  rm -rf /var/log/taos/*
                  ./handle_crash_gen_val_log.sh
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/pytest
                  rm -rf /var/lib/taos/*
                  rm -rf /var/log/taos/*
                  ./handle_taosd_val_log.sh
                  '''
              }
              sh'''
              nohup taosd >/dev/null &
              sleep 10
              '''
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/gotest
                  bash batchtest.sh
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/examples/python/PYTHONConnectorChecker
                  python3 PythonChecker.py
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/examples/JDBC/JDBCDemo/
                  mvn clean package assembly:single -DskipTests >/dev/null
                  java -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/src/connector/jdbc
                  mvn clean package -Dmaven.test.skip=true >/dev/null
                  cd ${WKC}/tests/examples/JDBC/JDBCDemo/
                  java --class-path=../../../../src/connector/jdbc/target:$JAVA_HOME/jre/lib/ext -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cp -rf ${WKC}/tests/examples/nodejs ${JENKINS_HOME}/workspace/
                  cd ${JENKINS_HOME}/workspace/nodejs
                  node nodejsChecker.js host=localhost
                  '''
              }
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${JENKINS_HOME}/workspace/C#NET/src/CheckC#
                  dotnet run
                  '''
              }
              sh '''
              pkill -9 taosd || echo 1
              cd ${WKC}/tests
              ./test-all.sh b2
              date
              '''
              sh '''
              cd ${WKC}/tests
              ./test-all.sh full unit
              date'''
            }
          }
          stage('test_valgrind') {
            agent{label "slam4"}
            steps {
              pre_test()
              catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                  sh '''
                  cd ${WKC}/tests/pytest
                  nohup taosd >/dev/null &
                  sleep 10
                  python3 concurrent_inquiry.py -c 1
                  '''
              }
              sh '''
              cd ${WKC}/tests
              ./test-all.sh full jdbc
              date'''
              sh '''
              cd ${WKC}/tests/pytest
              ./valgrind-test.sh 2>&1 > mem-error-out.log
              ./handle_val_log.sh
              date
              cd ${WKC}/tests
              ./test-all.sh b3
              date'''
              sh '''
              date
              cd ${WKC}/tests
              ./test-all.sh full example
              date'''
            }
          }
          stage('arm64_build'){
            agent{label 'arm64'}
            steps{
                sh '''
                cd ${WK}
                git fetch
                git checkout develop
                git pull
                cd ${WKC}
                git fetch
                git checkout develop
                git pull
                git submodule update
                cd ${WKC}/packaging
                ./release.sh -v cluster -c aarch64 -n 2.0.0.0 -m 2.0.0.0
                '''
            }
          }
          stage('arm32_build'){
            agent{label 'arm32'}
            steps{
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh '''
                    cd ${WK}
                    git fetch
                    git checkout develop
                    git pull
                    cd ${WKC}
                    git fetch
                    git checkout develop
                    git pull
                    git submodule update
                    cd ${WKC}/packaging
                    ./release.sh -v cluster -c aarch32 -n 2.0.0.0 -m 2.0.0.0
                    '''
                }
            }
          }
        }
      }
  }
  post {
    success {
      emailext (
        subject: "SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
        body: '''<!DOCTYPE html>
        <html>
        <head><meta charset="UTF-8"></head>
        <body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
        <table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
          <tr>
            <td><br />
            <b><font color="#0B610B"><font size="6">Build information</font></font></b>
            <hr size="2" width="100%" align="center" /></td>
          </tr>
          <tr>
            <td>
              <ul>
                <div style="font-size:18px">
                  <li>Build name >> branch: ${PROJECT_NAME}</li>
                  <li>Build result: <span style="color:green"> Successful </span></li>
                  <li>Build number: ${BUILD_NUMBER}</li>
                  <li>Triggered by: ${CAUSE}</li>
                  <li>Change summary: ${CHANGES}</li>
                  <li>Build URL: <a href=${BUILD_URL}>${BUILD_URL}</a></li>
                  <li>Build log: <a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
                  <li>Change set: ${JELLY_SCRIPT}</li>
                </div>
              </ul>
            </td>
          </tr>
        </table>
        </body>
        </html>''',
        to: "yqliu@taosdata.com,pxiao@taosdata.com",
        from: "support@taosdata.com"
      )
    }
    failure {
      emailext (
        subject: "FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
        body: '''<!DOCTYPE html>
        <html>
        <head><meta charset="UTF-8"></head>
        <body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
        <table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
          <tr>
            <td><br />
            <b><font color="#0B610B"><font size="6">Build information</font></font></b>
            <hr size="2" width="100%" align="center" /></td>
          </tr>
          <tr>
            <td>
              <ul>
                <div style="font-size:18px">
                  <li>Build name >> branch: ${PROJECT_NAME}</li>
                  <li>Build result: <span style="color:red"> Failure </span></li>
                  <li>Build number: ${BUILD_NUMBER}</li>
                  <li>Triggered by: ${CAUSE}</li>
                  <li>Change summary: ${CHANGES}</li>
                  <li>Build URL: <a href=${BUILD_URL}>${BUILD_URL}</a></li>
                  <li>Build log: <a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
                  <li>Change set: ${JELLY_SCRIPT}</li>
                </div>
              </ul>
            </td>
          </tr>
        </table>
        </body>
        </html>''',
        to: "yqliu@taosdata.com,pxiao@taosdata.com",
        from: "support@taosdata.com"
      )
    }
  }
}
```
@@ -0,0 +1,200 @@

```groovy
import hudson.model.Result
import hudson.model.*;
import jenkins.model.CauseOfInterruption
node {
}

def skipbuild = 0
def win_stop = 0

def abortPreviousBuilds() {
  def currentJobName = env.JOB_NAME
  def currentBuildNumber = env.BUILD_NUMBER.toInteger()
  def jobs = Jenkins.instance.getItemByFullName(currentJobName)
  def builds = jobs.getBuilds()

  for (build in builds) {
    if (!build.isBuilding()) {
      continue;
    }
    if (currentBuildNumber == build.getNumber().toInteger()) {
      continue;
    }
    build.doKill()    // doTerm(), doKill(), doTerm()
  }
}
// abort previous build
abortPreviousBuilds()
def abort_previous(){
  def buildNumber = env.BUILD_NUMBER as int
  if (buildNumber > 1) milestone(buildNumber - 1)
  milestone(buildNumber)
}
def pre_test(){
    sh'hostname'
    sh '''
    sudo rmtaos || echo "taosd has not installed"
    '''
    sh '''
    killall -9 taosd ||echo "no taosd running"
    killall -9 gdb || echo "no gdb running"
    killall -9 python3.8 || echo "no python program running"
    cd ${WKC}
    '''
    script {
      if (env.CHANGE_TARGET == 'master') {
        sh '''
        cd ${WKC}
        git checkout master
        '''
      }
      else if(env.CHANGE_TARGET == '2.0'){
        sh '''
        cd ${WKC}
        git checkout 2.0
        '''
      }
      else if(env.CHANGE_TARGET == '3.0'){
        sh '''
        cd ${WKC}
        git checkout 3.0
        '''
      }
      else{
        sh '''
        cd ${WKC}
        git checkout develop
        '''
      }
    }
    sh'''
    cd ${WKC}
    git pull >/dev/null
    git fetch origin +refs/pull/${CHANGE_ID}/merge
    git checkout -qf FETCH_HEAD
    export TZ=Asia/Harbin
    date
    rm -rf debug
    mkdir debug
    cd debug
    cmake .. > /dev/null
    make -j4 > /dev/null
    '''
    return 1
}

pipeline {
  agent none
  options { skipDefaultCheckout() }
  environment{
      WK = '/var/lib/jenkins/workspace/TDinternal'
      WKC= '/var/lib/jenkins/workspace/TDengine'
  }
  stages {
      stage('pre_build'){
          agent{label 'slave3_0'}
          options { skipDefaultCheckout() }
          when {
              changeRequest()
          }
          steps {
            script{
              abort_previous()
              abortPreviousBuilds()
            }
            timeout(time: 45, unit: 'MINUTES'){
              pre_test()
              sh'''
              cd ${WKC}/tests
              ./test-all.sh b1fq
              '''
              sh'''
              cd ${WKC}/debug
              ctest
              '''
            }
          }
      }
  }
  post {
    success {
      emailext (
        subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' SUCCESS",
        body: """<!DOCTYPE html>
        <html>
        <head><meta charset="UTF-8"></head>
        <body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
        <table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
          <tr>
            <td><br />
            <b><font color="#0B610B"><font size="6">Build information</font></font></b>
            <hr size="2" width="100%" align="center" /></td>
          </tr>
          <tr>
            <td>
              <ul>
                <div style="font-size:18px">
                  <li>Build name >> branch: ${env.BRANCH_NAME}</li>
                  <li>Build result: <span style="color:green"> Successful </span></li>
                  <li>Build number: ${BUILD_NUMBER}</li>
                  <li>Triggered by: ${env.CHANGE_AUTHOR}</li>
                  <li>Commit message: ${env.CHANGE_TITLE}</li>
                  <li>Build URL: <a href=${BUILD_URL}>${BUILD_URL}</a></li>
                  <li>Build log: <a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
                </div>
              </ul>
            </td>
          </tr>
        </table>
        </body>
        </html>""",
        to: "${env.CHANGE_AUTHOR_EMAIL}",
        from: "support@taosdata.com"
      )
    }
    failure {
      emailext (
        subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' FAIL",
        body: """<!DOCTYPE html>
        <html>
        <head><meta charset="UTF-8"></head>
        <body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
        <table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
          <tr>
            <td><br />
            <b><font color="#0B610B"><font size="6">Build information</font></font></b>
            <hr size="2" width="100%" align="center" /></td>
          </tr>
          <tr>
            <td>
              <ul>
                <div style="font-size:18px">
                  <li>Build name >> branch: ${env.BRANCH_NAME}</li>
                  <li>Build result: <span style="color:red"> Failure </span></li>
                  <li>Build number: ${BUILD_NUMBER}</li>
                  <li>Triggered by: ${env.CHANGE_AUTHOR}</li>
                  <li>Commit message: ${env.CHANGE_TITLE}</li>
                  <li>Build URL: <a href=${BUILD_URL}>${BUILD_URL}</a></li>
                  <li>Build log: <a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
                </div>
              </ul>
            </td>
          </tr>
        </table>
        </body>
        </html>""",
        to: "${env.CHANGE_AUTHOR_EMAIL}",
        from: "support@taosdata.com"
      )
    }
  }
}
```
@ -0,0 +1,176 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
import subprocess
import time
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import datetime


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""  # fall back to "" so run() can detect a missing taosd
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        tdSql.prepare()
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        tdSql.execute("create database timezone")
        tdSql.execute("use timezone")
        tdSql.execute("create stable st (ts timestamp, id int) tags (index int)")

        tdSql.execute("insert into tb0 using st tags (1) values ('2021-07-01 00:00:00.000',0)")
        tdSql.query("select ts from tb0")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb1 using st tags (1) values ('2021-07-01T00:00:00.000+07:50',1)")
        tdSql.query("select ts from tb1")
        tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")

        tdSql.execute("insert into tb2 using st tags (1) values ('2021-07-01T00:00:00.000+08:00',2)")
        tdSql.query("select ts from tb2")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb3 using st tags (1) values ('2021-07-01T00:00:00.000Z',3)")
        tdSql.query("select ts from tb3")
        tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")

        tdSql.execute("insert into tb4 using st tags (1) values ('2021-07-01 00:00:00.000+07:50',4)")
        tdSql.query("select ts from tb4")
        tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")

        tdSql.execute("insert into tb5 using st tags (1) values ('2021-07-01 00:00:00.000Z',5)")
        tdSql.query("select ts from tb5")
        tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")

        tdSql.execute("insert into tb6 using st tags (1) values ('2021-07-01T00:00:00.000+0800',6)")
        tdSql.query("select ts from tb6")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb7 using st tags (1) values ('2021-07-01 00:00:00.000+0800',7)")
        tdSql.query("select ts from tb7")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb8 using st tags (1) values ('2021-07-0100:00:00.000',8)")
        tdSql.query("select ts from tb8")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb9 using st tags (1) values ('2021-07-0100:00:00.000+0800',9)")
        tdSql.query("select ts from tb9")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb10 using st tags (1) values ('2021-07-0100:00:00.000+08:00',10)")
        tdSql.query("select ts from tb10")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")

        tdSql.execute("insert into tb11 using st tags (1) values ('2021-07-0100:00:00.000+07:00',11)")
        tdSql.query("select ts from tb11")
        tdSql.checkData(0, 0, "2021-07-01 01:00:00.000")

        tdSql.execute("insert into tb12 using st tags (1) values ('2021-07-0100:00:00.000+0700',12)")
        tdSql.query("select ts from tb12")
        tdSql.checkData(0, 0, "2021-07-01 01:00:00.000")

        tdSql.execute("insert into tb13 using st tags (1) values ('2021-07-0100:00:00.000+07:12',13)")
        tdSql.query("select ts from tb13")
        tdSql.checkData(0, 0, "2021-07-01 00:48:00.000")

        tdSql.execute("insert into tb14 using st tags (1) values ('2021-07-0100:00:00.000+712',14)")
        tdSql.query("select ts from tb14")
        tdSql.checkData(0, 0, "2021-06-28 08:58:00.000")

        tdSql.execute("insert into tb15 using st tags (1) values ('2021-07-0100:00:00.000Z',15)")
        tdSql.query("select ts from tb15")
        tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")

        tdSql.execute("insert into tb16 using st tags (1) values ('2021-7-1 00:00:00.000Z',16)")
        tdSql.query("select ts from tb16")
        tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")

        tdSql.execute("insert into tb17 using st tags (1) values ('2021-07-0100:00:00.000+0750',17)")
        tdSql.query("select ts from tb17")
        tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")

        tdSql.execute("insert into tb18 using st tags (1) values ('2021-07-0100:00:00.000+0752',18)")
        tdSql.query("select ts from tb18")
        tdSql.checkData(0, 0, "2021-07-01 00:08:00.000")

        tdSql.execute("insert into tb19 using st tags (1) values ('2021-07-0100:00:00.000+075',19)")
        tdSql.query("select ts from tb19")
        tdSql.checkData(0, 0, "2021-07-01 00:55:00.000")

        tdSql.execute("insert into tb20 using st tags (1) values ('2021-07-0100:00:00.000+75',20)")
        tdSql.query("select ts from tb20")
        tdSql.checkData(0, 0, "2021-06-28 05:00:00.000")

        tdSql.execute("insert into tb21 using st tags (1) values ('2021-7-1 1:1:1.234+075',21)")
        tdSql.query("select ts from tb21")
        tdSql.checkData(0, 0, "2021-07-01 01:56:01.234")

        tdSql.execute("insert into tb22 using st tags (1) values ('2021-7-1T1:1:1.234+075',22)")
        tdSql.query("select ts from tb22")
        tdSql.checkData(0, 0, "2021-07-01 01:56:01.234")

        tdSql.execute("insert into tb23 using st tags (1) values ('2021-7-131:1:1.234+075',22)")
        tdSql.query("select ts from tb23")
        tdSql.checkData(0, 0, "2021-07-13 01:56:01.234")

        tdSql.error("insert into tberror using st tags (1) values ('20210701 00:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('2021070100:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('202171 00:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('2021 07 01 00:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('2021 -07-0100:00:00.000+0800',0)")
        tdSql.error("insert into tberror using st tags (1) values ('2021-7-11:1:1.234+075',0)")

        os.system("rm -rf ./TimeZone/*.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
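The offset arithmetic the cases above expect (for example, `+07:50` shifting a timestamp by ten minutes when the server runs in UTC+8) can be cross-checked independently with Python's standard `datetime`. This is a standalone sketch, not part of the test suite, and it assumes the server timezone is UTC+8:

```python
from datetime import datetime, timezone, timedelta

# assumed server timezone for these tests: UTC+8 (Asia/Shanghai)
CST = timezone(timedelta(hours=8))

def to_cst(ts: str) -> str:
    """Parse an RFC 3339 timestamp and render it in UTC+8 with millisecond precision."""
    dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")
    return dt.astimezone(CST).strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]

print(to_cst("2021-07-01T00:00:00.000+07:50"))  # 2021-07-01 00:10:00.000
print(to_cst("2021-07-01T00:00:00.000Z"))       # 2021-07-01 08:00:00.000
```

This matches the expected values checked for `tb1` and `tb3` above; `%z` accepts both `+HH:MM` offsets and `Z` on Python 3.7 and later.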
@ -0,0 +1,174 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import datetime


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def checkCommunity(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            return False
        else:
            return True

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""  # fall back to "" if taosdump is not found
        for root, dirs, files in os.walk(projPath):
            if ("taosdump" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        # clear envs
        tdSql.execute("create database ZoneTime precision 'us'")
        tdSql.execute("use ZoneTime")
        tdSql.execute("create stable st (ts timestamp, id int, val float) tags (tag1 timestamp, tag2 int)")

        # standard cases for timestamps

        tdSql.execute("insert into tb1 using st tags (\"2021-07-01 00:00:00.000\", 2) values (\"2021-07-01 00:00:00.000\", 1, 1.0)")
        case1 = (tdSql.getResult("select * from tb1"))
        print(case1)
        if case1 == [(datetime.datetime(2021, 7, 1, 0, 0), 1, 1.0)]:
            print("check pass!")
        else:
            print("check failed about timestamp '2021-07-01 00:00:00.000'!")

        # RFC 3339: "T" may be replaced by " "

        tdSql.execute("insert into tb2 using st tags (\"2021-07-01T00:00:00.000+07:50\", 2) values (\"2021-07-01T00:00:00.000+07:50\", 2, 2.0)")
        case2 = (tdSql.getResult("select * from tb2"))
        print(case2)
        if case2 == [(datetime.datetime(2021, 7, 1, 0, 10), 2, 2.0)]:
            print("check pass!")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000+07:50'!")

        tdSql.execute("insert into tb3 using st tags (\"2021-07-01T00:00:00.000+08:00\", 3) values (\"2021-07-01T00:00:00.000+08:00\", 3, 3.0)")
        case3 = (tdSql.getResult("select * from tb3"))
        print(case3)
        if case3 == [(datetime.datetime(2021, 7, 1, 0, 0), 3, 3.0)]:
            print("check pass!")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000+08:00'!")

        tdSql.execute("insert into tb4 using st tags (\"2021-07-01T00:00:00.000Z\", 4) values (\"2021-07-01T00:00:00.000Z\", 4, 4.0)")
        case4 = (tdSql.getResult("select * from tb4"))
        print(case4)
        if case4 == [(datetime.datetime(2021, 7, 1, 8, 0), 4, 4.0)]:
            print("check pass!")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000Z'!")

        tdSql.execute("insert into tb5 using st tags (\"2021-07-01 00:00:00.000+07:50\", 5) values (\"2021-07-01 00:00:00.000+07:50\", 5, 5.0)")
        case5 = (tdSql.getResult("select * from tb5"))
        print(case5)
        if case5 == [(datetime.datetime(2021, 7, 1, 0, 10), 5, 5.0)]:
            print("check pass!")
        else:
            print("check failed about timestamp '2021-07-01 00:00:00.000+07:50'!")

        tdSql.execute("insert into tb6 using st tags (\"2021-07-01 00:00:00.000Z\", 6) values (\"2021-07-01 00:00:00.000Z\", 6, 6.0)")
        case6 = (tdSql.getResult("select * from tb6"))
        print(case6)
        if case6 == [(datetime.datetime(2021, 7, 1, 8, 0), 6, 6.0)]:
            print("check pass!")
        else:
            print("check failed about timestamp '2021-07-01 00:00:00.000Z'!")

        # ISO 8601 timestamp format: the date and time parts must be separated by "T"

        tdSql.execute("insert into tb7 using st tags (\"2021-07-01T00:00:00.000+0800\", 7) values (\"2021-07-01T00:00:00.000+0800\", 7, 7.0)")
        case7 = (tdSql.getResult("select * from tb7"))
        print(case7)
        if case7 == [(datetime.datetime(2021, 7, 1, 0, 0), 7, 7.0)]:
            print("check pass!")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000+0800'!")

        tdSql.execute("insert into tb8 using st tags (\"2021-07-01T00:00:00.000+08\", 8) values (\"2021-07-01T00:00:00.000+08\", 8, 8.0)")
        case8 = (tdSql.getResult("select * from tb8"))
        print(case8)
        if case8 == [(datetime.datetime(2021, 7, 1, 0, 0), 8, 8.0)]:
            print("check pass!")
        else:
            print("check failed about timestamp '2021-07-01T00:00:00.000+08'!")

        # non-standard cases for timestamps

        tdSql.execute("insert into tb9 using st tags (\"2021-07-01 00:00:00.000+0800\", 9) values (\"2021-07-01 00:00:00.000+0800\", 9, 9.0)")
        case9 = (tdSql.getResult("select * from tb9"))
        print(case9)

        tdSql.execute("insert into tb10 using st tags (\"2021-07-0100:00:00.000\", 10) values (\"2021-07-0100:00:00.000\", 10, 10.0)")
        case10 = (tdSql.getResult("select * from tb10"))
        print(case10)

        tdSql.execute("insert into tb11 using st tags (\"2021-07-0100:00:00.000+0800\", 11) values (\"2021-07-0100:00:00.000+0800\", 11, 11.0)")
        case11 = (tdSql.getResult("select * from tb11"))
        print(case11)

        tdSql.execute("insert into tb12 using st tags (\"2021-07-0100:00:00.000+08:00\", 12) values (\"2021-07-0100:00:00.000+08:00\", 12, 12.0)")
        case12 = (tdSql.getResult("select * from tb12"))
        print(case12)

        tdSql.execute("insert into tb13 using st tags (\"2021-07-0100:00:00.000Z\", 13) values (\"2021-07-0100:00:00.000Z\", 13, 13.0)")
        case13 = (tdSql.getResult("select * from tb13"))
        print(case13)

        tdSql.execute("insert into tb14 using st tags (\"2021-07-0100:00:00.000Z\", 14) values (\"2021-07-0100:00:00.000Z\", 14, 14.0)")
        case14 = (tdSql.getResult("select * from tb14"))
        print(case14)

        tdSql.execute("insert into tb15 using st tags (\"2021-07-0100:00:00.000+08\", 15) values (\"2021-07-0100:00:00.000+08\", 15, 15.0)")
        case15 = (tdSql.getResult("select * from tb15"))
        print(case15)

        tdSql.execute("insert into tb16 using st tags (\"2021-07-0100:00:00.000+07:50\", 16) values (\"2021-07-0100:00:00.000+07:50\", 16, 16.0)")
        case16 = (tdSql.getResult("select * from tb16"))
        print(case16)

        os.system("rm -rf *.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,53 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.query("show users")
        rows = tdSql.queryRows

        tdSql.execute("create user test PASS 'test' ")
        tdSql.query("show users")
        tdSql.checkRows(rows + 1)

        tdSql.error("create user tdenginetdenginetdengine PASS 'test' ")

        tdSql.error("create user tdenginet PASS '1234512345123456' ")

        try:
            tdSql.execute("create account a&cc PASS 'pass123'")
        except Exception as e:
            print("create account a&cc PASS 'pass123' failed as expected")
            return

        tdLog.exit("creating an account with an invalid name should have failed.")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,52 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        print("==========step1")
        print("drop built-in account")
        try:
            tdSql.execute("drop account root")
        except Exception as e:
            if len(e.args) > 0 and 'no rights' != e.args[0]:
                tdLog.exit(e)

        print("==========step2")
        print("drop built-in user")
        try:
            tdSql.execute("drop user root")
        except Exception as e:
            if len(e.args) > 0 and 'no rights' != e.args[0]:
                tdLog.exit(e)
            return

        tdLog.exit("dropping the built-in root user should have failed.")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,67 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import random
import string
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *

class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def genColList(self):
        '''
        generate column list
        '''
        col_list = list()
        for i in range(1, 18):
            col_list.append(f'c{i}')
        return col_list

    def genIncreaseValue(self, input_value):
        '''
        append ', 1' just before the closing parenthesis on every loop
        '''
        value_list = list(input_value)
        value_list.insert(-1, ", 1")
        return ''.join(value_list)

    def insertAlter(self):
        '''
        regression test: after each alter-and-insert, executing
        'select * from <tbname>;' used to make taosd core dump
        '''
        tbname = ''.join(random.choice(string.ascii_lowercase) for i in range(7))
        input_value = '(now, 1)'
        tdSql.execute(f'create table {tbname} (ts timestamp, c0 int);')
        tdSql.execute(f'insert into {tbname} values {input_value};')
        for col in self.genColList():
            input_value = self.genIncreaseValue(input_value)
            tdSql.execute(f'alter table {tbname} add column {col} int;')
            tdSql.execute(f'insert into {tbname} values {input_value};')
        tdSql.query(f'select * from {tbname};')
        tdSql.checkRows(18)

    def run(self):
        tdSql.prepare()
        self.insertAlter()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)

tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
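The way `genIncreaseValue` grows the row literal in step with the added columns can be sketched in isolation; `gen_increase_value` below is a hypothetical standalone mirror of the test's helper, not part of the test file:

```python
def gen_increase_value(input_value: str) -> str:
    # insert ", 1" just before the closing parenthesis,
    # e.g. "(now, 1)" -> "(now, 1, 1)"
    value_list = list(input_value)
    value_list.insert(-1, ", 1")
    return ''.join(value_list)

value = "(now, 1)"
for _ in range(3):  # three added columns -> three extra values
    value = gen_increase_value(value)
print(value)  # (now, 1, 1, 1, 1)
```

Each `alter table ... add column` adds one int column, so after 17 iterations the literal carries 18 values, matching the 18 rows and columns checked above.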
@ -0,0 +1,85 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.execute("drop database if exists db")
        tdSql.execute("create database if not exists db keep 36500")
        tdSql.execute("use db")

        tdLog.printNoPrefix("==========step1:create table && insert data")
        tdSql.execute("create table stbtag (ts timestamp, c1 int) TAGS(t1 int)")
        tdSql.execute("create table tag1 using stbtag tags(1)")

        tdLog.printNoPrefix("==========step2:alter stb to add tags, then create a new child table")
        tdSql.execute("alter table stbtag add tag t2 int")
        tdSql.execute("alter table stbtag add tag t3 tinyint")
        tdSql.execute("alter table stbtag add tag t4 smallint")
        tdSql.execute("alter table stbtag add tag t5 bigint")
        tdSql.execute("alter table stbtag add tag t6 float")
        tdSql.execute("alter table stbtag add tag t7 double")
        tdSql.execute("alter table stbtag add tag t8 bool")
        tdSql.execute("alter table stbtag add tag t9 binary(10)")
        tdSql.execute("alter table stbtag add tag t10 nchar(10)")

        tdSql.execute("create table tag2 using stbtag tags(2, 22, 23, 24, 25, 26.1, 27.1, 1, 'binary9', 'nchar10')")
        tdSql.query("select tbname, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10 from stbtag")
        tdSql.checkData(1, 0, "tag2")
        tdSql.checkData(1, 1, 2)
        tdSql.checkData(1, 2, 22)
        tdSql.checkData(1, 3, 23)
        tdSql.checkData(1, 4, 24)
        tdSql.checkData(1, 5, 25)
        tdSql.checkData(1, 6, 26.1)
        tdSql.checkData(1, 7, 27.1)
        tdSql.checkData(1, 8, 1)
        tdSql.checkData(1, 9, "binary9")
        tdSql.checkData(1, 10, "nchar10")

        tdLog.printNoPrefix("==========step3:alter stb to drop tags, then create a new child table")
        tdSql.execute("alter table stbtag drop tag t2")
        tdSql.execute("alter table stbtag drop tag t3")
        tdSql.execute("alter table stbtag drop tag t4")
        tdSql.execute("alter table stbtag drop tag t5")
        tdSql.execute("alter table stbtag drop tag t6")
        tdSql.execute("alter table stbtag drop tag t7")
        tdSql.execute("alter table stbtag drop tag t8")
        tdSql.execute("alter table stbtag drop tag t9")
        tdSql.execute("alter table stbtag drop tag t10")

        tdSql.execute("create table tag3 using stbtag tags(3)")
        tdSql.query("select * from stbtag where tbname like 'tag3' ")
        tdSql.checkCols(3)
        tdSql.query("select tbname, t1 from stbtag where tbname like 'tag3' ")
        tdSql.checkData(0, 1, 3)

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,73 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.execute("drop database if exists db")
        tdSql.execute("create database if not exists db keep 36500")
        tdSql.execute("use db")

        tdLog.printNoPrefix("==========step1:create table && insert data")
        # timestamp list (server timezone UTC+8):
        #   0 -> "1970-01-01 08:00:00" | -28800000 -> "1970-01-01 00:00:00" | -946800000000 -> "1940-01-01 00:00:00"
        #   -631180800000 -> "1950-01-01 00:00:00"
        ts1 = 0
        ts2 = -28800000
        ts3 = -946800000000
        ts4 = "1950-01-01 00:00:00"
        tdSql.execute(
            "create table stb2ts (ts timestamp, ts1 timestamp, ts2 timestamp, c1 int, ts3 timestamp) TAGS(t1 int)"
        )
        tdSql.execute("create table t2ts1 using stb2ts tags(1)")

        tdSql.execute(f"insert into t2ts1 values ({ts1}, {ts1}, {ts1}, 1, {ts1})")
        tdSql.execute(f"insert into t2ts1 values ({ts2}, {ts2}, {ts2}, 2, {ts2})")
        tdSql.execute(f"insert into t2ts1 values ({ts3}, {ts3}, {ts3}, 4, {ts3})")
        tdSql.execute(f"insert into t2ts1 values ('{ts4}', '{ts4}', '{ts4}', 3, '{ts4}')")

        tdLog.printNoPrefix("==========step2:check inserted data")
        tdSql.query("select * from stb2ts where ts1=0 and ts2='1970-01-01 08:00:00' ")
        tdSql.checkRows(1)
        tdSql.checkData(0, 4, '1970-01-01 08:00:00')

        tdSql.query("select * from stb2ts where ts1=-28800000 and ts2='1970-01-01 00:00:00' ")
        tdSql.checkRows(1)
        tdSql.checkData(0, 4, '1970-01-01 00:00:00')

        tdSql.query("select * from stb2ts where ts1=-946800000000 and ts2='1940-01-01 00:00:00' ")
        tdSql.checkRows(1)
        tdSql.checkData(0, 4, '1940-01-01 00:00:00')

        tdSql.query("select * from stb2ts where ts1=-631180800000 and ts2='1950-01-01 00:00:00' ")
        tdSql.checkRows(1)
        tdSql.checkData(0, 4, '1950-01-01 00:00:00')

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
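The millisecond-epoch values listed in the comment above can be cross-checked with a small standalone sketch (assuming, as the test does, a server timezone of UTC+8):

```python
from datetime import datetime, timezone, timedelta

CST = timezone(timedelta(hours=8))  # assumed server timezone (UTC+8)

def ms_to_cst(ms: int) -> str:
    # negative values are simply offsets before the 1970-01-01 UTC epoch
    return datetime.fromtimestamp(ms / 1000, tz=CST).strftime("%Y-%m-%d %H:%M:%S")

print(ms_to_cst(0))              # 1970-01-01 08:00:00
print(ms_to_cst(-28800000))      # 1970-01-01 00:00:00
print(ms_to_cst(-946800000000))  # 1940-01-01 00:00:00
```

Passing an explicit `tz` to `fromtimestamp` keeps the conversion pure arithmetic, so negative timestamps work on every platform.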
@ -0,0 +1,109 @@
|
|||
###################################################################
|
||||
# Copyright (c) 2016 by TAOS Technologies, Inc.
|
||||
# All rights reserved.
|
||||
#
|
||||
# This file is proprietary and confidential to TAOS Technologies.
|
||||
# No part of this file may be reproduced, stored, transmitted,
|
||||
# disclosed or used in any form or by any means other than as
|
||||
# expressly provided by the written permission from Jianhui Tao
|
||||
#
|
||||
###################################################################
|
||||
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
import sys
|
||||
from util.log import *
|
||||
from util.cases import *
|
||||
from util.sql import *
|
||||
from util.dnodes import tdDnodes
|
||||
from datetime import datetime


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        tdSql.prepare()
        tdSql.query('show databases')
        tdSql.checkData(0, 15, 0)
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # write 5M rows into db across 500 tables, then restart taosd
        # to force the data to be flushed to disk
        os.system("%staosdemo -f tools/taosdemoAllTest/insert_5M_rows.json -y " % binPath)
        tdDnodes.stop(1)
        tdDnodes.start(1)
        tdSql.execute('use db')

        # prepare to query last_row() on 500 tables
        tableName = []
        for i in range(500):
            tableName.append(f"stb_{i}")
        tdSql.execute('use db')

        slow = 0  # counts rounds where lastRow on is slower
        for i in range(5):
            # switch lastRow off and check
            tdSql.execute('alter database db cachelast 0')
            tdSql.query('show databases')
            tdSql.checkData(0, 15, 0)

            # run last_row(*) query 500 times
            lastRow_Off_start = datetime.now()
            for j in range(500):
                tdSql.execute(f'SELECT LAST_ROW(*) FROM {tableName[j]}')
            lastRow_Off_end = datetime.now()

            tdLog.debug(f'time used:{lastRow_Off_end - lastRow_Off_start}')

            # switch lastRow on and check
            tdSql.execute('alter database db cachelast 1')
            tdSql.query('show databases')
            tdSql.checkData(0, 15, 1)

            # run last_row(*) query 500 times
            tdSql.execute('use db')
            lastRow_On_start = datetime.now()
            for j in range(500):
                tdSql.execute(f'SELECT LAST_ROW(*) FROM {tableName[j]}')
            lastRow_On_end = datetime.now()

            tdLog.debug(f'time used:{lastRow_On_end - lastRow_On_start}')

            # check which run took longer; cachelast on should be faster
            if (lastRow_Off_end - lastRow_Off_start > lastRow_On_end - lastRow_On_start):
                pass
            else:
                slow += 1
            tdLog.debug(slow)
        if slow > 1:  # tolerance for the first (warm-up) round
            tdLog.exit('lastRow hot alter failed')

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
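The cachelast test above boils down to a timing harness: run the same batch of queries under two settings and compare elapsed wall-clock time. A minimal standalone sketch of that pattern, with a placeholder `run_query` callable instead of the tdSql framework:

```python
from datetime import datetime, timedelta

def time_queries(run_query, queries):
    """Run all queries sequentially and return the elapsed wall-clock time."""
    start = datetime.now()
    for q in queries:
        run_query(q)
    return datetime.now() - start

# Compare two configurations the way the test does: time the same batch
# of 500 LAST_ROW queries and inspect which run took longer.
executed = []
queries = [f"SELECT LAST_ROW(*) FROM stb_{i}" for i in range(500)]
elapsed = time_queries(executed.append, queries)
```

Wall-clock timing like this is noisy, which is why the test repeats the comparison five times and tolerates one slow round.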
@@ -0,0 +1,91 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

# TODO: after TD-4518 and TD-4510 are resolved, add the exception test cases for those situations

import sys
from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()

        # check string input exceptions for alter
        tdSql.error("alter database db keep '10'")
        tdSql.error('alter database db keep "10"')
        tdSql.error("alter database db keep '\t'")
        tdSql.error("alter database db keep \'\t\'")
        tdSql.error('alter database db keep "a"')
        tdSql.error('alter database db keep "1.4"')
        tdSql.error("alter database db blocks '10'")
        tdSql.error('alter database db comp "0"')
        tdSql.execute('drop database if exists db')

        # check string input exceptions for create
        tdSql.error("create database db comp '0'")
        tdSql.error('create database db comp "1"')
        tdSql.error("create database db comp '\t'")
        tdSql.error("alter database db keep \'\t\'")
        tdSql.error('create database db comp "a"')
        tdSql.error('create database db comp "1.4"')
        tdSql.error("create database db blocks '10'")
        tdSql.error('create database db keep "3650"')
        tdSql.error('create database db fsync "3650"')
        tdSql.execute('create database db precision "us"')
        tdSql.query('show databases')
        tdSql.checkData(0, 16, 'us')
        tdSql.execute('drop database if exists db')

        # check float input exceptions for create
        tdSql.error("create database db fsync 7.3")
        tdSql.error("create database db fsync 0.0")
        tdSql.error("create database db fsync -5.32")
        tdSql.error('create database db comp 7.2')
        tdSql.error("create database db blocks 5.87")
        tdSql.error('create database db keep 15.4')

        # check float input exceptions for alter
        tdSql.execute('create database db')
        tdSql.error('alter database db blocks 5.9')
        tdSql.error('alter database db blocks -4.7')
        tdSql.error('alter database db blocks 0.0')
        tdSql.error('alter database db keep 15.4')
        tdSql.error('alter database db comp 2.67')

        # check additional exception params for alter keep
        tdSql.error('alter database db keep 365001')
        tdSql.error('alter database db keep 364999,365000,365001')
        tdSql.error('alter database db keep -10')
        tdSql.error('alter database db keep 5')
        tdSql.error('alter database db keep ')
        tdSql.error('alter database db keep 40,a,60')
        tdSql.error('alter database db keep ,,60,')
        tdSql.error('alter database db keep \t')
        tdSql.execute('alter database db keep \t50')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '50,50,50')

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,54 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import random
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import tdDnodes


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()

        flagList = ["debugflag", "cdebugflag", "tmrDebugFlag", "uDebugFlag", "rpcDebugFlag"]

        for flag in flagList:
            tdSql.execute("alter local %s 131" % flag)
            tdSql.execute("alter local %s 135" % flag)
            tdSql.execute("alter local %s 143" % flag)
            randomFlag = random.randint(100, 250)
            if randomFlag != 131 and randomFlag != 135 and randomFlag != 143:
                tdSql.error("alter local %s %d" % (flag, randomFlag))

        tdSql.query("show dnodes")
        dnodeId = tdSql.getData(0, 0)

        for flag in flagList:
            tdSql.execute("alter dnode %d %s 131" % (dnodeId, flag))
            tdSql.execute("alter dnode %d %s 135" % (dnodeId, flag))
            tdSql.execute("alter dnode %d %s 143" % (dnodeId, flag))

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
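The flag test above accepts exactly three log levels and rejects everything else drawn from [100, 250]. The accept/reject rule it exercises reduces to a set-membership check; treating {131, 135, 143} as the complete set of accepted values is an assumption taken from the test, not from the server source:

```python
VALID_DEBUG_FLAGS = {131, 135, 143}  # the only values the test expects to succeed

def is_valid_debug_flag(value):
    """Return True if `value` is one of the accepted debug-flag levels."""
    return value in VALID_DEBUG_FLAGS

# Every other value in the range the test samples from should be rejected.
rejected = [v for v in range(100, 251) if not is_valid_debug_flag(v)]
```

Note the test guards its random draw with the same three inequalities before asserting an error, which is exactly this membership check spelled out.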
@@ -0,0 +1,208 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import sys
from util.log import *
from util.cases import *
from util.sql import *
import time


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def alterKeepCommunity(self):
        tdLog.notice('running Keep Test, Community Version')
        tdLog.notice('running parameter test for keep during create')
        # test the keep parameter during create
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '3650')
        tdSql.execute('drop database db')

        tdSql.execute('create database db keep 100')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '100')
        tdSql.execute('drop database db')

        tdSql.error('create database db keep ')
        tdSql.error('create database db keep 0')
        tdSql.error('create database db keep 10,20')
        tdSql.error('create database db keep 10,20,30')
        tdSql.error('create database db keep 20,30,40,50')

        # test the keep parameter during alter
        tdSql.execute('create database db')
        tdLog.notice('running parameter test for keep during alter')

        tdSql.execute('alter database db keep 100')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '100')

        tdSql.error('alter database db keep ')
        tdSql.error('alter database db keep 0')
        tdSql.error('alter database db keep 10,20')
        tdSql.error('alter database db keep 10,20,30')
        tdSql.error('alter database db keep 20,30,40,50')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '100')

    def alterKeepEnterprise(self):
        tdLog.notice('running Keep Test, Enterprise Version')
        # test the keep parameter during create
        tdLog.notice('running parameter test for keep during create')

        tdSql.query('show databases')
        tdSql.checkData(0, 7, '3650,3650,3650')
        tdSql.execute('drop database db')

        tdSql.execute('create database db keep 100')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '100,100,100')
        tdSql.execute('drop database db')

        tdSql.execute('create database db keep 20, 30')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '20,30,30')
        tdSql.execute('drop database db')

        tdSql.execute('create database db keep 30,40,50')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '30,40,50')
        tdSql.execute('drop database db')

        tdSql.error('create database db keep ')
        tdSql.error('create database db keep 20,30,40,50')
        tdSql.error('create database db keep 0')
        tdSql.error('create database db keep 100,50')
        tdSql.error('create database db keep 100,40,50')
        tdSql.error('create database db keep 20,100,50')
        tdSql.error('create database db keep 50,60,20')

        # test the keep parameter during alter
        tdSql.execute('create database db')
        tdLog.notice('running parameter test for keep during alter')

        tdSql.execute('alter database db keep 10')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '10,10,10')

        tdSql.execute('alter database db keep 20,30')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '20,30,30')

        tdSql.execute('alter database db keep 100,200,300')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '100,200,300')

        tdSql.error('alter database db keep ')
        tdSql.error('alter database db keep 20,30,40,50')
        tdSql.error('alter database db keep 0')
        tdSql.error('alter database db keep 100,50')
        tdSql.error('alter database db keep 100,40,50')
        tdSql.error('alter database db keep 20,100,50')
        tdSql.error('alter database db keep 50,60,20')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '100,200,300')

    def run(self):
        tdSql.prepare()
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            tdLog.debug('running enterprise test')
            self.alterKeepEnterprise()
        else:
            tdLog.debug('running community test')
            self.alterKeepCommunity()

        tdSql.prepare()

        ## preset the keep
        tdSql.prepare()

        tdLog.notice('testing if alter will cause any error')
        tdSql.execute('create table tb (ts timestamp, speed int)')
        tdSql.execute('alter database db keep 10,10,10')
        tdSql.execute('insert into tb values (now, 10)')
        tdSql.execute('insert into tb values (now + 10m, 10)')
        tdSql.query('select * from tb')
        tdSql.checkRows(2)

        # after altering from small to large, check that the alter is functioning
        # test if a change through test.py is consistent with a change from the taos client
        # test case for TD-4459 and TD-4445
        tdLog.notice('testing keep will be altered changing from small to big')
        tdSql.execute('alter database db keep 40,40,40')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '40,40,40')
        tdSql.error('insert into tb values (now-60d, 10)')
        tdSql.execute('insert into tb values (now-30d, 10)')
        tdSql.query('select * from tb')
        tdSql.checkRows(3)

        rowNum = 3
        for i in range(30):
            rowNum += 1
            tdSql.execute('alter database db keep 20,20,20')
            tdSql.execute('alter database db keep 40,40,40')
            tdSql.query('show databases')
            tdSql.checkData(0, 7, '40,40,40')
            tdSql.error('insert into tb values (now-60d, 10)')
            tdSql.execute('insert into tb values (now-30d, 10)')
            tdSql.query('select * from tb')
            tdSql.checkRows(rowNum)

        tdLog.notice('testing keep will be altered changing from big to small')
        tdSql.execute('alter database db keep 10,10,10')
        tdSql.query('show databases')
        tdSql.checkData(0, 7, '10,10,10')
        tdSql.error('insert into tb values (now-15d, 10)')
        tdSql.query('select * from tb')
        tdSql.checkRows(2)

        rowNum = 2
        tdLog.notice('testing keep will be altered if sudden change from small to big')
        for i in range(30):
            tdSql.execute('alter database db keep 14,14,14')
            tdSql.execute('alter database db keep 16,16,16')
            tdSql.execute('insert into tb values (now-15d, 10)')
            tdSql.query('select * from tb')
            rowNum += 1
            tdSql.checkRows(rowNum)

        tdLog.notice('testing keep will be altered if sudden change from big to small')
        tdSql.execute('alter database db keep 16,16,16')
        tdSql.execute('alter database db keep 14,14,14')
        tdSql.error('insert into tb values (now-15d, 10)')
        tdSql.query('select * from tb')
        tdSql.checkRows(2)

        tdLog.notice('testing data will show up again when keep is being changed to large value')
        tdSql.execute('alter database db keep 40,40,40')
        tdSql.query('select * from tb')
        tdSql.checkRows(63)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
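The keep tests above exercise a consistent accept/reject pattern: 1 to 3 values, each within range, expanded to a triple by repeating the last value, and required to be non-decreasing (so `20,30` becomes `20,30,30` while `100,50` is rejected). A minimal validator sketching that rule set, with the bounds inferred from the error cases in the tests rather than from the server source:

```python
def normalize_keep(values, min_keep=1, max_keep=365000):
    """Expand a 1-3 element keep list to a (keep0, keep1, keep2) triple,
    or raise ValueError, mirroring the accept/reject pattern the tests
    above exercise. The bounds are assumptions taken from those tests."""
    if not 1 <= len(values) <= 3:
        raise ValueError("keep takes 1 to 3 values")
    # missing trailing values default to the last explicit one
    full = list(values) + [values[-1]] * (3 - len(values))
    if any(not (min_keep <= v <= max_keep) for v in full):
        raise ValueError("keep value out of range")
    if sorted(full) != full:
        raise ValueError("keep values must be non-decreasing")
    return tuple(full)
```

For example, `normalize_keep([20, 30])` yields `(20, 30, 30)`, matching the `'20,30,30'` string the enterprise test checks in `show databases`.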
@@ -0,0 +1,129 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self):
        tdLog.debug("start to execute %s" % __file__)
        tdLog.info("prepare cluster")
        tdDnodes.stopAll()
        tdDnodes.deploy(1)
        tdDnodes.start(1)

        self.conn = taos.connect(config=tdDnodes.getSimCfgPath())
        tdSql.init(self.conn.cursor())
        tdSql.execute('reset query cache')
        tdSql.execute('create dnode 192.168.0.2')
        tdDnodes.deploy(2)
        tdDnodes.start(2)

        self.conn = taos.connect(config=tdDnodes.getSimCfgPath())
        tdSql.init(self.conn.cursor())
        tdSql.execute('reset query cache')
        tdSql.execute('create dnode 192.168.0.3')
        tdDnodes.deploy(3)
        tdDnodes.start(3)

    def run(self):
        tdSql.execute('create database db replica 3 days 7')
        tdSql.execute('use db')
        for tid in range(1, 11):
            tdSql.execute('create table tb%d(ts timestamp, i int)' % tid)
        tdLog.sleep(10)

        tdLog.info("================= step1")
        startTime = 1520000010000
        for rid in range(1, 11):
            for tid in range(1, 11):
                tdSql.execute(
                    'insert into tb%d values(%d, %d)' %
                    (tid, startTime, rid))
            startTime += 1
        tdSql.query('select * from tb1')
        tdSql.checkRows(10)
        tdLog.sleep(5)

        tdLog.info("================= step2")
        tdSql.execute('alter database db replica 2')
        tdLog.sleep(10)

        tdLog.info("================= step3")
        for rid in range(1, 11):
            for tid in range(1, 11):
                tdSql.execute(
                    'insert into tb%d values(%d, %d)' %
                    (tid, startTime, rid))
            startTime += 1
        tdSql.query('select * from tb1')
        tdSql.checkRows(20)
        tdLog.sleep(5)

        tdLog.info("================= step4")
        tdSql.execute('alter database db replica 1')
        tdLog.sleep(10)

        tdLog.info("================= step5")
        for rid in range(1, 11):
            for tid in range(1, 11):
                tdSql.execute(
                    'insert into tb%d values(%d, %d)' %
                    (tid, startTime, rid))
            startTime += 1
        tdSql.query('select * from tb1')
        tdSql.checkRows(30)
        tdLog.sleep(5)

        tdLog.info("================= step6")
        tdSql.execute('alter database db replica 2')
        tdLog.sleep(10)

        tdLog.info("================= step7")
        for rid in range(1, 11):
            for tid in range(1, 11):
                tdSql.execute(
                    'insert into tb%d values(%d, %d)' %
                    (tid, startTime, rid))
            startTime += 1
        tdSql.query('select * from tb1')
        tdSql.checkRows(40)
        tdLog.sleep(5)

        tdLog.info("================= step8")
        tdSql.execute('alter database db replica 3')
        tdLog.sleep(10)

        tdLog.info("================= step9")
        for rid in range(1, 11):
            for tid in range(1, 11):
                tdSql.execute(
                    'insert into tb%d values(%d, %d)' %
                    (tid, startTime, rid))
            startTime += 1
        tdSql.query('select * from tb1')
        tdSql.checkRows(50)
        tdLog.sleep(5)

    def stop(self):
        tdSql.close()
        self.conn.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addCluster(__file__, TDTestCase())
@@ -0,0 +1,142 @@
# -*- coding: utf-8 -*-

import sys
import time
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)
        self.types = [
            "int",
            "bigint",
            "float",
            "double",
            "smallint",
            "tinyint",
            "int unsigned",
            "bigint unsigned",
            "smallint unsigned",
            "tinyint unsigned",
            "binary(10)",
            "nchar(10)",
            "timestamp"]
        self.rowNum = 300
        self.ts = 1537146000000
        self.step = 1000
        self.sqlHead = "select count(*), count(c1) "
        self.sqlTail = " from stb"

    def addColumnAndCount(self):
        for colIdx in range(len(self.types)):
            tdSql.execute(
                "alter table stb add column c%d %s" %
                (colIdx + 2, self.types[colIdx]))
            self.sqlHead = self.sqlHead + ",count(c%d) " % (colIdx + 2)
            tdSql.query(self.sqlHead + self.sqlTail)

            # count non-NULL values in each column
            tdSql.checkData(0, 0, self.rowNum * (colIdx + 1))
            tdSql.checkData(0, 1, self.rowNum * (colIdx + 1))
            for i in range(2, colIdx + 2):
                print("check1: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 2))

            # insert more rows
            for k in range(self.rowNum):
                self.ts += self.step
                sql = "insert into tb values (%d, %d" % (self.ts, colIdx + 2)
                for j in range(colIdx + 1):
                    sql += ", %d" % (colIdx + 2)
                sql += ")"
                tdSql.execute(sql)

            # count non-NULL values in each column
            tdSql.query(self.sqlHead + self.sqlTail)
            tdSql.checkData(0, 0, self.rowNum * (colIdx + 2))
            tdSql.checkData(0, 1, self.rowNum * (colIdx + 2))
            for i in range(2, colIdx + 2):
                print("check2: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))

    def dropColumnAndCount(self):
        tdSql.query(self.sqlHead + self.sqlTail)
        res = []
        for i in range(len(self.types)):
            res.append(tdSql.getData(0, i + 2))

        print(res)

        for colIdx in range(len(self.types), 0, -1):
            tdSql.execute("alter table stb drop column c%d" % (colIdx + 2))
            # self.sqlHead = self.sqlHead + ",count(c%d) " % (colIdx + 2)
            tdSql.query(self.sqlHead + self.sqlTail)

            # count non-NULL values in each column
            tdSql.checkData(0, 0, self.rowNum * (colIdx + 1))
            tdSql.checkData(0, 1, self.rowNum * (colIdx + 1))
            for i in range(2, colIdx + 2):
                print("check1: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 2))

            # insert more rows
            for k in range(self.rowNum):
                self.ts += self.step
                sql = "insert into tb values (%d, %d" % (self.ts, colIdx + 2)
                for j in range(colIdx + 1):
                    sql += ", %d" % (colIdx + 2)
                sql += ")"
                tdSql.execute(sql)

            # count non-NULL values in each column
            tdSql.query(self.sqlHead + self.sqlTail)
            tdSql.checkData(0, 0, self.rowNum * (colIdx + 2))
            tdSql.checkData(0, 1, self.rowNum * (colIdx + 2))
            for i in range(2, colIdx + 2):
                print("check2: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))

    def run(self):
        # Setup params
        db = "db"

        # Create db
        tdSql.execute("drop database if exists %s" % (db))
        tdSql.execute("reset query cache")
        tdSql.execute("create database %s maxrows 200 maxtables 4" % (db))
        tdSql.execute("use %s" % (db))

        # Create a table with one column of int type and insert 300 rows
        tdLog.info("Create stb and tb")
        tdSql.execute("create table stb (ts timestamp, c1 int) tags (tg1 int)")
        tdSql.execute("create table tb using stb tags (0)")
        tdLog.info("Insert %d rows into tb" % (self.rowNum))
        for k in range(1, self.rowNum + 1):
            self.ts += self.step
            tdSql.execute("insert into tb values (%d, 1)" % (self.ts))

        # Alter stb to add columns one at a time, then query tb to see if
        # all added columns are NULL
        self.addColumnAndCount()
        tdDnodes.stop(1)
        time.sleep(5)
        tdDnodes.start(1)
        time.sleep(5)
        tdSql.query(self.sqlHead + self.sqlTail)
        for i in range(2, len(self.types) + 2):
            tdSql.checkData(0, i, self.rowNum * (len(self.types) + 2 - i))

        self.dropColumnAndCount()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


# tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
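The row-insert loops in the add/drop column tests build their INSERT statement by incremental string concatenation: one timestamp, one leading value, then one repeat of the same value per added column. The same construction can be factored into a small helper; the names here are illustrative, not part of the test framework:

```python
def build_insert_sql(table, ts, fill_value, n_extra_cols):
    """Build 'insert into <table> values (<ts>, <v>, <v>, ...)' with one
    leading value plus n_extra_cols repeats, mirroring the concatenation
    loop used by addColumnAndCount above."""
    parts = [str(ts), str(fill_value)] + [str(fill_value)] * n_extra_cols
    return "insert into %s values (%s)" % (table, ", ".join(parts))

sql = build_insert_sql("tb", 1537146001000, 3, 1)
```

With `fill_value = colIdx + 2` and `n_extra_cols = colIdx + 1` this reproduces the statements the loop emits, which makes the later `count(c<N>)` arithmetic in the checks easier to follow: each batch writes `colIdx + 2` non-NULL values per row.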
@@ -0,0 +1,161 @@
# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)
        self.types = [
            "int",
            "bigint",
            "float",
            "double",
            "smallint",
            "tinyint",
            "int unsigned",
            "bigint unsigned",
            "smallint unsigned",
            "tinyint unsigned",
            "binary(10)",
            "nchar(10)",
            "timestamp"]
        self.rowNum = 300
        self.ts = 1537146000000
        self.step = 1000
        self.sqlHead = "select count(*), count(c1) "
        self.sqlTail = " from tb"

    def addColumnAndCount(self):
        for colIdx in range(len(self.types)):
            tdSql.execute(
                "alter table tb add column c%d %s" %
                (colIdx + 2, self.types[colIdx]))
            self.sqlHead = self.sqlHead + ",count(c%d) " % (colIdx + 2)
            tdSql.query(self.sqlHead + self.sqlTail)

            # count non-NULL values in each column
            tdSql.checkData(0, 0, self.rowNum * (colIdx + 1))
            tdSql.checkData(0, 1, self.rowNum * (colIdx + 1))
            for i in range(2, colIdx + 2):
                print("check1: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 2))

            # insert more rows
            for k in range(self.rowNum):
                self.ts += self.step
                sql = "insert into tb values (%d, %d" % (self.ts, colIdx + 2)
                for j in range(colIdx + 1):
                    sql += ", %d" % (colIdx + 2)
                sql += ")"
                tdSql.execute(sql)

            # count non-NULL values in each column
            tdSql.query(self.sqlHead + self.sqlTail)
            tdSql.checkData(0, 0, self.rowNum * (colIdx + 2))
            tdSql.checkData(0, 1, self.rowNum * (colIdx + 2))
            for i in range(2, colIdx + 2):
                print("check2: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))

    def dropColumnAndCount(self):
        tdSql.query(self.sqlHead + self.sqlTail)
        res = []
        for i in range(len(self.types)):
            res.append(tdSql.getData(0, i + 2))

        print(res)

        for colIdx in range(len(self.types), 0, -1):
            tdSql.execute("alter table tb drop column c%d" % (colIdx + 2))
            # self.sqlHead = self.sqlHead + ",count(c%d) " % (colIdx + 2)
            tdSql.query(self.sqlHead + self.sqlTail)

            # count non-NULL values in each column
            tdSql.checkData(0, 0, self.rowNum * (colIdx + 1))
            tdSql.checkData(0, 1, self.rowNum * (colIdx + 1))
            for i in range(2, colIdx + 2):
                print("check1: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 2))

            # insert more rows
            for k in range(self.rowNum):
                self.ts += self.step
                sql = "insert into tb values (%d, %d" % (self.ts, colIdx + 2)
                for j in range(colIdx + 1):
                    sql += ", %d" % (colIdx + 2)
                sql += ")"
                tdSql.execute(sql)

            # count non-NULL values in each column
            tdSql.query(self.sqlHead + self.sqlTail)
            tdSql.checkData(0, 0, self.rowNum * (colIdx + 2))
            tdSql.checkData(0, 1, self.rowNum * (colIdx + 2))
            for i in range(2, colIdx + 2):
                print("check2: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))

    def alter_table_255_times(self):  # add case for TD-6207
        for i in range(255):
            tdLog.info("alter table st add column cb%d int" % i)
            tdSql.execute("alter table st add column cb%d int" % i)
            tdSql.execute("insert into t0 (ts,c1) values(now,1)")
            tdSql.execute("reset query cache")
            tdSql.query("select * from st")
        tdSql.execute("create table mt(ts timestamp, i int)")
        tdSql.execute("insert into mt values(now,11)")
        tdSql.query("select * from mt")
        tdDnodes.stop(1)
        tdDnodes.start(1)
        tdSql.query("describe db.st")

    def run(self):
        # Setup params
        db = "db"

        # Create db
        tdSql.execute("drop database if exists %s" % (db))
        tdSql.execute("reset query cache")
        tdSql.execute("create database %s maxrows 200" % (db))
        tdSql.execute("use %s" % (db))

        # Create a table with one column of int type and insert 300 rows
        tdLog.info("create table tb")
        tdSql.execute("create table tb (ts timestamp, c1 int)")
        tdLog.info("Insert %d rows into tb" % (self.rowNum))
        for k in range(1, self.rowNum + 1):
            self.ts += self.step
            tdSql.execute("insert into tb values (%d, 1)" % (self.ts))

        # Alter tb to add columns one at a time, then query tb to see if
        # all added columns are NULL
        self.addColumnAndCount()
        tdDnodes.stop(1)
        tdDnodes.start(1)
        tdSql.query(self.sqlHead + self.sqlTail)
        size = len(self.types) + 2
        for i in range(2, size):
            tdSql.checkData(0, i, self.rowNum * (size - i))

        tdSql.execute("create table st(ts timestamp, c1 int) tags(t1 float,t2 int,t3 double)")
        tdSql.execute("create table t0 using st tags(null,1,2.3)")
        tdSql.execute("alter table t0 set tag t1=2.1")

        tdSql.query("show tables")
        tdSql.checkRows(2)
        self.alter_table_255_times()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,91 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

    def run(self):
        tdSql.prepare()

        print("==============Case 1: add column, restart taosd, drop the same column then add it back")
        tdSql.execute(
            "create table st(ts timestamp, speed int) tags(loc nchar(20))")
        tdSql.execute(
            "insert into t1 using st tags('beijing') values(now, 1)")
        tdSql.execute(
            "alter table st add column tbcol binary(20)")

        # restart taosd
        tdDnodes.forcestop(1)
        tdDnodes.start(1)

        tdSql.execute(
            "alter table st drop column tbcol")
        tdSql.execute(
            "alter table st add column tbcol binary(20)")

        tdSql.query("select * from st")
        tdSql.checkRows(1)

        print("==============Case 2: keep adding columns, restart taosd")
        tdSql.execute(
            "create table dt(ts timestamp, tbcol1 tinyint) tags(tgcol1 tinyint)")
        tdSql.execute(
            "alter table dt add column tbcol2 int")
        tdSql.execute(
            "alter table dt add column tbcol3 smallint")
        tdSql.execute(
            "alter table dt add column tbcol4 bigint")
        tdSql.execute(
            "alter table dt add column tbcol5 float")
        tdSql.execute(
            "alter table dt add column tbcol6 double")
        tdSql.execute(
            "alter table dt add column tbcol7 bool")
        tdSql.execute(
            "alter table dt add column tbcol8 nchar(20)")
        tdSql.execute(
            "alter table dt add column tbcol9 binary(20)")
        tdSql.execute(
            "alter table dt add column tbcol10 tinyint unsigned")
        tdSql.execute(
            "alter table dt add column tbcol11 int unsigned")
        tdSql.execute(
            "alter table dt add column tbcol12 smallint unsigned")
        tdSql.execute(
            "alter table dt add column tbcol13 bigint unsigned")

        # restart taosd
        tdDnodes.forcestop(1)
        tdDnodes.start(1)

        tdSql.query("select * from dt")
        tdSql.checkRows(0)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,71 @@
# -*- coding: utf-8 -*-

import random
import string
import subprocess
import sys
from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdLog.debug("check database")
        tdSql.prepare()

        # check the default value of the update option
        sql = "create database if not exists db"
        tdSql.execute(sql)
        tdSql.query('show databases')
        tdSql.checkRows(1)
        tdSql.checkData(0, 16, 0)

        # check the update value after altering it
        sql = "alter database db update 1"
        tdSql.execute(sql)
        tdSql.query('show databases')
        tdSql.checkRows(1)
        tdSql.checkData(0, 16, 1)

        sql = "alter database db update 0"
        tdSql.execute(sql)
        tdSql.query('show databases')
        tdSql.checkRows(1)
        tdSql.checkData(0, 16, 0)

        # out-of-range values are rejected
        sql = "alter database db update -1"
        tdSql.error(sql)

        sql = "alter database db update 100"
        tdSql.error(sql)

        tdSql.query('show databases')
        tdSql.checkRows(1)
        tdSql.checkData(0, 16, 0)

        tdSql.execute('drop database db')
        tdSql.error('create database db update 100')
        tdSql.error('create database db update -1')

        tdSql.execute('create database db update 1')

        tdSql.query('show databases')
        tdSql.checkRows(1)
        tdSql.checkData(0, 16, 1)

        tdSql.execute('drop database db')

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,77 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()
        tdSql.execute(
            'create table st (ts timestamp, v1 int, v2 int, v3 int, v4 int, v5 int) tags (t int)')

        totalTables = 100
        batchSize = 500
        totalBatch = 60

        tdLog.info(
            "create %d tables, insert %d rows per table" %
            (totalTables, batchSize * totalBatch))

        for t in range(0, totalTables):
            tdSql.execute('create table t%d using st tags(%d)' % (t, t))
            # 2019-06-10 00:00:00
            beginTs = 1560096000000
            interval = 10000
            for r in range(0, totalBatch):
                sql = 'insert into t%d values ' % (t)
                for b in range(0, batchSize):
                    ts = beginTs + (r * batchSize + b) * interval
                    sql += '(%d, 1, 2, 3, 4, 5)' % (ts)
                tdSql.execute(sql)

        tdLog.info("insert data finished")
        tdSql.execute('alter table st add column v6 int')
        tdLog.sleep(5)
        tdLog.info("alter table finished")

        tdSql.query("select count(*) from t50")
        tdSql.checkData(0, 0, int(batchSize * totalBatch))

        tdLog.info("insert")
        tdSql.execute(
            "insert into t50 values ('2019-06-13 07:59:55.000', 1, 2, 3, 4, 5, 6)")

        tdLog.info("import")
        tdSql.execute(
            "import into t50 values ('2019-06-13 07:59:55.000', 1, 2, 3, 4, 5, 6)")

        tdLog.info("query")
        tdSql.query("select count(*) from t50")
        tdSql.checkData(0, 0, batchSize * totalBatch + 1)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,85 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import taos

if __name__ == "__main__":

    logSql = True
    deployPath = ""
    testCluster = False
    valgrind = 0

    print("start to execute %s" % __file__)
    tdDnodes.init(deployPath)
    tdDnodes.setTestCluster(testCluster)
    tdDnodes.setValgrind(valgrind)

    tdDnodes.stopAll()
    tdDnodes.addSimExtraCfg("maxSQLLength", "1048576")
    tdDnodes.deploy(1)
    tdDnodes.start(1)
    host = '127.0.0.1'

    tdLog.info("Procedures for tdengine deployed in %s" % (host))

    tdCases.logSql(logSql)
    conn = taos.connect(
        host,
        config=tdDnodes.getSimCfgPath())

    tdSql.init(conn.cursor(), True)

    print("==========step1")
    print("create table")
    tdSql.execute("create database db")
    tdSql.execute("use db")
    tdSql.execute("create table t1 (ts timestamp, c1 int, c2 int, c3 int)")

    print("==========step2")
    print("insert a batch close to maxSQLLength")
    data = 'insert into t1 values'
    ts = 1604298064000
    i = 0
    while len(data) < (1024 * 1024) and i < 32767 - 1:
        data += '(%s,%d,%d,%d)' % (ts + i, i % 1000, i % 1000, i % 1000)
        i += 1
    tdSql.execute(data)

    print("==========step3")
    print("insert a batch with more than 32767 rows")
    data = 'insert into t1 values'  # rebuild the statement from scratch
    i = 0
    while len(data) < (1024 * 1024) and i < 32767:
        data += '(%s,%d,%d,%d)' % (ts + i, i % 1000, i % 1000, i % 1000)
        i += 1
    tdSql.error(data)

    print("==========step4")
    print("insert a statement larger than maxSQLLength")
    tdSql.execute("create table t2 (ts timestamp, c1 binary(50))")
    data = 'insert into t2 values'
    i = 0
    while len(data) < (1024 * 1024) and i < 32767 - 1:
        data += "(%s,'%s')" % (ts + i, 'a' * 50)
        i += 1
    tdSql.error(data)

    tdSql.close()
    tdLog.success("%s successfully executed" % __file__)
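The batching loops above cap a multi-row INSERT at both a byte budget (maxSQLLength) and a per-statement row limit. A minimal standalone sketch of that logic, with no TDengine dependency (the function name and defaults are illustrative):

```python
def build_insert_batch(table, rows, max_sql_len=1024 * 1024, max_rows=32766):
    """Append (ts, value) tuples to one INSERT statement until either the
    byte budget or the row cap would be exceeded; return (sql, rows_used)."""
    sql = 'insert into %s values' % table
    used = 0
    for ts, value in rows:
        fragment = '(%d,%d)' % (ts, value)
        if used >= max_rows or len(sql) + len(fragment) > max_sql_len:
            break
        sql += fragment
        used += 1
    return sql, used

rows = [(1604298064000 + i, i % 1000) for i in range(100000)]
sql, used = build_insert_batch('t1', rows, max_sql_len=4096)
```

Checking the limit before appending, rather than after, guarantees the finished statement never exceeds the budget.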
@ -0,0 +1,54 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()

        tdSql.query('select database()')
        tdSql.checkData(0, 0, "db")

        tdSql.execute("alter database db comp 2")
        tdSql.query("show databases")
        tdSql.checkData(0, 14, 2)

        tdSql.execute("alter database db keep 365,365,365")
        tdSql.query("show databases")
        tdSql.checkData(0, 7, "365,365,365")

        tdSql.error("alter database db quorum 2")

        tdSql.execute("alter database db blocks 100")
        tdSql.query("show databases")
        tdSql.checkData(0, 9, 100)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,88 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.pathFinding import *
from util.dnodes import tdDnodes
from datetime import datetime
import subprocess
import time

## TODO: this is now automatic, but not sure if this will run through jenkins
class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)
        tdFindPath.init(__file__)

    def run(self):
        tdSql.prepare()
        binPath = tdFindPath.getTaosdemoPath()
        TDenginePath = tdFindPath.getTDenginePath()

        ## change system time to 2020/10/20
        os.system('sudo timedatectl set-ntp off')
        tdLog.sleep(10)
        os.system('sudo timedatectl set-time 2020-10-20')

        # run taosdemo to insert data, one row per second from 2020/10/11 to 2020/10/20;
        # 11 data files should be generated
        # vnode at TDinternal/community/sim/dnode1/data/vnode
        try:
            os.system(f"{binPath}taosdemo -f tools/taosdemoAllTest/manual_change_time_1_1_A.json")
            commandArray = ['ls', '-l', f'{TDenginePath}/sim/dnode1/data/vnode/vnode2/tsdb/data']
            result = subprocess.run(commandArray, stdout=subprocess.PIPE).stdout.decode('utf-8')
        except BaseException:
            os.system('sudo timedatectl set-ntp on')
            tdLog.sleep(10)
            raise  # re-raise so the unbound `result` is never used below
        if result.count('data') != 11:
            os.system('sudo timedatectl set-ntp on')
            tdLog.sleep(10)
            tdLog.exit('wrong number of files')
        else:
            tdLog.debug("data file number correct")

        # move 5 days ahead to 2020/10/25; the 4 oldest files should be
        # removed during the new write, leaving 7 data files
        try:
            os.system('sudo timedatectl set-time 2020-10-25')
            os.system(f"{binPath}taosdemo -f tools/taosdemoAllTest/manual_change_time_1_1_B.json")
        except BaseException:
            os.system('sudo timedatectl set-ntp on')
            tdLog.sleep(10)
            raise
        os.system('sudo timedatectl set-ntp on')
        tdLog.sleep(10)
        commandArray = ['ls', '-l', f'{TDenginePath}/sim/dnode1/data/vnode/vnode2/tsdb/data']
        result = subprocess.run(commandArray, stdout=subprocess.PIPE).stdout.decode('utf-8')
        print(result.count('data'))
        if result.count('data') != 7:
            tdLog.exit('wrong number of files')
        else:
            tdLog.debug("data file number correct")
        tdSql.query('select first(ts) from stb_0')
        tdSql.checkData(0, 0, datetime(2020, 10, 14, 8, 0, 0, 0))  # check the first remaining row
        os.system('sudo timedatectl set-ntp on')
        tdLog.sleep(10)

    def stop(self):
        os.system('sudo timedatectl set-ntp on')
        tdLog.sleep(10)
        tdSql.close()
        tdLog.success("alter block manual check finish")


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,100 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import tdDnodes
from util.pathFinding import *
from datetime import datetime
import subprocess

## TODO: this is now automatic, but not sure if this will run through jenkins
class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)
        tdFindPath.init(__file__)

    def run(self):
        tdSql.prepare()
        binPath = tdFindPath.getTaosdemoPath()
        TDenginePath = tdFindPath.getTDenginePath()

        ## change system time to 2020/10/20
        os.system('sudo timedatectl set-ntp off')
        tdLog.sleep(10)
        os.system('sudo timedatectl set-time 2020-10-20')

        # run taosdemo to insert data, one row per second from 2020/10/11 to 2020/10/20;
        # 11 data files should be generated
        # vnode at TDinternal/community/sim/dnode1/data/vnode
        try:
            os.system(f"{binPath}taosdemo -f tools/taosdemoAllTest/manual_change_time_1_1_A.json")
            commandArray = ['ls', '-l', f'{TDenginePath}/sim/dnode1/data/vnode/vnode2/tsdb/data']
            result = subprocess.run(commandArray, stdout=subprocess.PIPE).stdout.decode('utf-8')
        except BaseException:
            os.system('sudo timedatectl set-ntp on')
            tdLog.sleep(10)
            raise  # re-raise so the unbound `result` is never used below

        if result.count('data') != 11:
            os.system('sudo timedatectl set-ntp on')
            tdLog.sleep(10)
            tdLog.exit('wrong number of files')
        else:
            tdLog.debug("data file number correct")

        try:
            tdSql.query('select first(ts) from stb_0')  # check the first row in the database
            tdSql.checkData(0, 0, datetime(2020, 10, 11, 0, 0, 0, 0))
        except BaseException:
            os.system('sudo timedatectl set-ntp on')
            tdLog.sleep(10)
            raise

        # move 5 days ahead to 2020/10/25 and restart taosd;
        # the 4 oldest data files should be removed from tsdb/data,
        # leaving 7 data files
        # vnode at TDinternal/community/sim/dnode1/data/vnode
        try:
            os.system('sudo timedatectl set-time 2020-10-25')
            tdDnodes.stop(1)
            tdDnodes.start(1)
            tdSql.query('select first(ts) from stb_0')
            tdSql.checkData(0, 0, datetime(2020, 10, 14, 8, 0, 0, 0))  # check the first remaining row
        except BaseException:
            os.system('sudo timedatectl set-ntp on')
            tdLog.sleep(10)
            raise

        os.system('sudo timedatectl set-ntp on')
        tdLog.sleep(10)
        commandArray = ['ls', '-l', f'{TDenginePath}/sim/dnode1/data/vnode/vnode2/tsdb/data']
        result = subprocess.run(commandArray, stdout=subprocess.PIPE).stdout.decode('utf-8')
        print(result.count('data'))
        if result.count('data') != 7:
            tdLog.exit('wrong number of files')
        else:
            tdLog.debug("data file number correct")
        os.system('sudo timedatectl set-ntp on')
        tdLog.sleep(10)

    def stop(self):
        os.system('sudo timedatectl set-ntp on')
        tdSql.close()
        tdLog.success("alter block manual check finish")


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,83 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import time
from util.log import *
from util.cases import *
from util.sql import *

from datetime import timedelta


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()

        ret = tdSql.query('select database()')
        tdSql.checkData(0, 0, "db")

        ret = tdSql.query('select server_status()')
        tdSql.checkData(0, 0, 1)

        ret = tdSql.query('select server_status() as result')
        tdSql.checkData(0, 0, 1)

        time.sleep(1)

        ret = tdSql.query('show dnodes')

        dnodeId = tdSql.getData(0, 0)
        dnodeEndpoint = tdSql.getData(0, 1)

        ret = tdSql.execute('alter dnode "%s" debugFlag 135' % dnodeId)
        tdLog.info('alter dnode "%s" debugFlag 135 -> ret: %d' % (dnodeId, ret))

        time.sleep(1)

        ret = tdSql.query('show mnodes')
        tdSql.checkRows(1)
        tdSql.checkData(0, 2, "master")

        role_time = tdSql.getData(0, 3)
        create_time = tdSql.getData(0, 4)
        time_delta = timedelta(milliseconds=100)

        if create_time - time_delta < role_time < create_time + time_delta:
            tdLog.info("role_time {} and create_time {} are within the expected range".format(role_time, create_time))
        else:
            tdLog.exit("role_time {} and create_time {} are outside the expected range".format(role_time, create_time))

        ret = tdSql.query('show vgroups')
        tdSql.checkRows(0)

        tdSql.execute('create stable st (ts timestamp, f int) tags(t int)')
        tdSql.execute('create table ct1 using st tags(1)')
        tdSql.execute('create table ct2 using st tags(2)')

        time.sleep(3)

        ret = tdSql.query('show vnodes "{}"'.format(dnodeEndpoint))
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 2)
        tdSql.checkData(0, 1, "master")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
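The mnode check above accepts `role_time` only if it falls within ±100 ms of `create_time`. That tolerance-window comparison is a reusable pattern; a minimal standalone sketch (function name is illustrative):

```python
from datetime import datetime, timedelta

def within_window(a, b, tol=timedelta(milliseconds=100)):
    """True when the two timestamps differ by less than `tol`."""
    return abs(a - b) < tol

t0 = datetime(2020, 10, 20, 12, 0, 0)
close = within_window(t0, t0 + timedelta(milliseconds=50))   # True
far = within_window(t0, t0 + timedelta(milliseconds=250))    # False
```

Using `abs()` on the timedelta makes the check symmetric, so the argument order does not matter.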
@ -0,0 +1,48 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import inspect
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import tdDnodes


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()

        tdDnodes.stop(1)
        sql = "use db"

        try:
            tdSql.execute(sql)
        except Exception as e:
            expectError = 'Unable to establish connection'
            if expectError in str(e):
                pass
            else:
                caller = inspect.getframeinfo(inspect.stack()[1][0])
                tdLog.exit("%s(%d) failed: sql:%s, expected error did not occur" % (caller.filename, caller.lineno, sql))

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,55 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading


class TwoClients:
    def initConnection(self):
        self.host = "127.0.0.1"
        self.user = "root"
        self.password = "taosdata"
        self.config = "/home/chr/taosdata/TDengine/sim/dnode1/cfg"

    def newCloseCon(self, times):
        # open `times` connections, then close them all
        newConList = []
        for _ in range(times):
            newConList.append(taos.connect(self.host, self.user, self.password, self.config))
        for con in newConList:
            con.close()

    def run(self):
        tdDnodes.init("")
        tdDnodes.setTestCluster(False)
        tdDnodes.setValgrind(False)

        tdDnodes.stopAll()
        tdDnodes.deploy(1)
        tdDnodes.start(1)

        # repeatedly open and close connections from many threads
        for m in range(1, 101):
            t = threading.Thread(target=self.newCloseCon, args=(10,))
            t.start()


clients = TwoClients()
clients.initConnection()
clients.run()
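The original `newCloseCon` loop reuses `times` as both the argument and the loop variable, so by the time the close loop runs the count has been clobbered. The corrected open/close pattern can be sketched with a stub connection factory standing in for `taos.connect` (all names here are illustrative):

```python
import threading

class StubConn:
    """Stands in for a taos connection; only close() is needed here."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def open_close_connections(connect_fn, times):
    """Open `times` connections, then close every one of them.
    The iteration variable is kept distinct from `times`."""
    conns = [connect_fn() for _ in range(times)]
    for conn in conns:
        conn.close()
    return len(conns)

# exercise the helper from several threads, as the test does
threads = [threading.Thread(target=open_close_connections, args=(StubConn, 10))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Collecting the connections in a list before closing makes the count independent of any loop-variable reuse.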
@ -0,0 +1,96 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos


class TwoClients:
    def initConnection(self):
        self.host = "127.0.0.1"
        self.user = "root"
        self.password = "taosdata"
        self.config = "/home/xp/git/TDengine/sim/dnode1/cfg"

    def run(self):
        tdDnodes.init("")
        tdDnodes.setTestCluster(False)
        tdDnodes.setValgrind(False)

        tdDnodes.stopAll()
        tdDnodes.deploy(1)
        tdDnodes.start(1)

        # the first client creates a stable and inserts data
        conn1 = taos.connect(self.host, self.user, self.password, self.config)
        cursor1 = conn1.cursor()
        cursor1.execute("drop database if exists db")
        cursor1.execute("create database db")
        cursor1.execute("use db")
        cursor1.execute("create table tb (ts timestamp, id int) tags(loc nchar(30))")
        cursor1.execute("insert into t0 using tb tags('beijing') values(now, 1)")

        # the second client alters the table created by the first client
        conn2 = taos.connect(self.host, self.user, self.password, self.config)
        cursor2 = conn2.cursor()
        cursor2.execute("use db")
        cursor2.execute("alter table tb add column name nchar(30)")

        # the first client should not be able to use the stale metadata
        tdSql.init(cursor1, True)
        tdSql.error("insert into t0 values(now, 2)")

        # the first client should be able to insert data with the updated metadata
        tdSql.execute("insert into t0 values(now, 2, 'test')")
        tdSql.query("select * from tb")
        tdSql.checkRows(2)

        # the second client drops and recreates the table
        cursor2.execute("drop table t0")
        cursor2.execute("create table t0 using tb tags('beijing')")

        tdSql.execute("insert into t0 values(now, 2, 'test')")
        tdSql.query("select * from tb")
        tdSql.checkRows(1)

        # an error is expected when two clients drop the same column
        cursor2.execute("alter table tb drop column name")
        tdSql.error("alter table tb drop column name")

        cursor2.execute("alter table tb add column speed int")
        tdSql.error("alter table tb add column speed int")

        tdSql.execute("alter table tb add column size int")
        tdSql.query("describe tb")
        tdSql.checkRows(5)
        tdSql.checkData(0, 0, "ts")
        tdSql.checkData(1, 0, "id")
        tdSql.checkData(2, 0, "speed")
        tdSql.checkData(3, 0, "size")
        tdSql.checkData(4, 0, "loc")

        cursor1.close()
        cursor2.close()
        conn1.close()
        conn2.close()


clients = TwoClients()
clients.initConnection()
clients.run()
@ -0,0 +1,55 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from util.log import *
from util.cases import *
from util.sql import *
from math import floor


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()

        sql = "select server_version()"
        ret = tdSql.query(sql)
        version = floor(float(tdSql.getData(0, 0)[0:3]))
        expectedVersion = 2

        if version == expectedVersion:
            tdLog.info("sql:%s, row:%d col:%d data:%d == expect" % (sql, 0, 0, version))
        else:
            tdLog.exit("sql:%s, row:%d col:%d data:%d != expect:%d" % (sql, 0, 0, version, expectedVersion))

        sql = "select client_version()"
        ret = tdSql.query(sql)
        version = floor(float(tdSql.getData(0, 0)[0:3]))
        expectedVersion = 2
        if version == expectedVersion:
            tdLog.info("sql:%s, row:%d col:%d data:%d == expect" % (sql, 0, 0, version))
        else:
            tdLog.exit("sql:%s, row:%d col:%d data:%d != expect:%d" % (sql, 0, 0, version, expectedVersion))

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
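The test above extracts the major version by slicing the first three characters of the version string and flooring the float. Splitting on the dot is an equivalent but more robust approach for version strings with multi-digit components; a standalone sketch with no TDengine dependency (function names are illustrative):

```python
from math import floor

def major_version(version_string):
    """Return the leading major version number of a dotted version string."""
    return int(version_string.split('.')[0])

def major_version_slice(version_string):
    """The slice-based form used in the test, for comparison."""
    return floor(float(version_string[0:3]))
```

For a string like "2.0.20.12" both forms agree; the split-based form also handles a hypothetical "10.3.0", where a three-character slice would be ambiguous.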
@ -0,0 +1,9 @@
execute:
cd TDengine/tests/pytest && python3 ./test.py -f cluster/TD-3693/multClient.py && python3 cluster/TD-3693/multQuery.py

1. Use the test cluster with three nodes: fc1, fct2, and fct4.
2. Use taosdemo to create two databases, db1 and db2, each with one replica, and insert some data.
3. db1 is on the mnode master (fct2); db2 is on the mnode slave (fct4).
4. taosdemo was modified into a multi-threaded query tool, renamed taosdemoMul; use it to run continuous multi-threaded queries against db2, opening multiple connections.
5. With the step-4 queries running in the background, run create-table, insert, and query operations on db2 at the same time; repeat the query loop 10 times with a 91-second interval between rounds.
6. Then check the taosd log to see whether the "send auth msg to mnodes" problem described above still occurs.
@ -0,0 +1,88 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "192.168.1.104",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 4,
    "thread_count_create_tbl": 4,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 10,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "db1",
            "drop": "yes",
            "replica": 1,
            "days": 10,
            "cache": 50,
            "blocks": 8,
            "precision": "ms",
            "keep": 3650,
            "minRows": 100,
            "maxRows": 4096,
            "comp": 2,
            "walLevel": 1,
            "cachelast": 0,
            "quorum": 1,
            "fsync": 3000,
            "update": 0
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists": "no",
            "childtable_count": 10,
            "childtable_prefix": "stb00_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 10,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 10000,
            "childtable_limit": 0,
            "childtable_offset": 0,
            "multi_thread_write_one_tbl": "no",
            "interlace_rows": 0,
            "insert_interval": 0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1,
            "start_timestamp": "2020-10-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./sample.csv",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 10}, {"type": "BINARY", "len": 16, "count": 3}, {"type": "BINARY", "len": 32, "count": 6}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}]
        },
        {
            "name": "stb1",
            "child_table_exists": "no",
            "childtable_count": 20,
            "childtable_prefix": "stb01_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 10,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 20000,
            "childtable_limit": 0,
            "childtable_offset": 0,
            "multi_thread_write_one_tbl": "no",
            "interlace_rows": 0,
            "insert_interval": 0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1,
            "start_timestamp": "2020-10-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./sample.csv",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 10}, {"type": "BINARY", "len": 16, "count": 3}, {"type": "BINARY", "len": 32, "count": 6}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}]
        }]
    }]
}
@@ -0,0 +1,88 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "192.168.1.104",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 4,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 10,
  "num_of_records_per_req": 1000,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db2",
      "drop": "yes",
      "replica": 1,
      "days": 10,
      "cache": 50,
      "blocks": 8,
      "precision": "ms",
      "keep": 3650,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "cachelast": 0,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 10,
      "childtable_prefix": "stb00_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 10,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 10000,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "multi_thread_write_one_tbl": "no",
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 10}, {"type": "BINARY", "len": 16, "count": 3}, {"type": "BINARY", "len": 32, "count": 6}],
      "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}]
    },
    {
      "name": "stb1",
      "child_table_exists": "no",
      "childtable_count": 20,
      "childtable_prefix": "stb01_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 10,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 20000,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "multi_thread_write_one_tbl": "no",
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 10}, {"type": "BINARY", "len": 16, "count": 3}, {"type": "BINARY", "len": 32, "count": 6}],
      "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}]
    }]
  }]
}
@@ -0,0 +1,74 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)
        self.rowNum = 100000
        self.ts = 1537146000000

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # insert data into the cluster's db
        os.system("%staosdemo -f cluster/TD-3693/insert1Data.json -y " % binPath)
        # open and close multiple connections while querying data
        os.system("%staosdemo -f cluster/TD-3693/insert2Data.json -y " % binPath)
        os.system("nohup %staosdemoMul -f cluster/TD-3693/queryCount.json -y & " % binPath)

        # delete useless files
        os.system("rm -rf ./insert_res.txt")
        os.system("rm -rf ./querySystemInfo*")
        os.system("rm -rf cluster/TD-3693/multClient.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,72 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading


class TwoClients:
    def initConnection(self):
        self.host = "fct4"
        self.user = "root"
        self.password = "taosdata"
        self.config = "/etc/taos/"
        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        # query data from the cluster's db
        conn = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config)
        cur = conn.cursor()
        tdSql.init(cur, True)
        tdSql.execute("use db2")
        cur.execute("select count (tbname) from stb0")
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count (tbname) from stb1")
        tdSql.checkData(0, 0, 20)
        tdSql.query("select count(*) from stb00_0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 100000)
        tdSql.query("select count(*) from stb01_0")
        tdSql.checkData(0, 0, 20000)
        tdSql.query("select count(*) from stb1")
        tdSql.checkData(0, 0, 400000)
        tdSql.execute("drop table if exists squerytest")
        tdSql.execute("drop table if exists querytest")
        tdSql.execute('''create stable squerytest(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double, col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        tdSql.execute("create table querytest using squerytest tags('beijing')")
        tdSql.execute("insert into querytest(ts) values(%d)" % (self.ts - 1))
        for i in range(self.rowNum):
            tdSql.execute("insert into querytest values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)" % (self.ts + i, i + 1, 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
        for j in range(10):
            tdSql.execute("use db2")
            tdSql.query("select count(*),last(*) from querytest group by col1")
            tdSql.checkRows(10)
            tdSql.checkData(0, 0, 1)
            tdSql.checkData(1, 2, 2)
            tdSql.checkData(1, 3, 1)
            sleep(88)
        tdSql.execute("drop table if exists squerytest")
        tdSql.execute("drop table if exists querytest")


clients = TwoClients()
clients.initConnection()
clients.run()
@@ -0,0 +1,15 @@
{
  "filetype": "query",
  "cfgdir": "/etc/taos",
  "host": "192.168.1.104",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "confirm_parameter_prompt": "no",
  "databases": "db2",
  "query_times": 1000000,
  "specified_table_query": {
    "query_interval": 1,
    "concurrent": 100,
    "sqls": [{"sql": "select count(*) from db.stb0", "result": ""}]
  }
}
@@ -0,0 +1,57 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
import time


class ClusterTestcase:

    ## test case 32 ##
    def run(self):
        nodes = Nodes()
        nodes.addConfigs("maxVgroupsPerDb", "10")
        nodes.addConfigs("maxTablesPerVnode", "1000")
        nodes.restartAllTaosd()

        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        ctest.createSTable(1)
        ctest.run()
        tdSql.init(ctest.conn.cursor(), False)

        tdSql.execute("use %s" % ctest.dbName)
        tdSql.query("show vgroups")
        dnodes = []
        for i in range(10):
            dnodes.append(int(tdSql.getData(i, 4)))

        s = set(dnodes)
        if len(s) < 3:
            tdLog.exit("cluster is not balanced")

        tdLog.info("cluster is balanced")

        nodes.removeConfigs("maxVgroupsPerDb", "10")
        nodes.removeConfigs("maxTablesPerVnode", "1000")
        nodes.restartAllTaosd()

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,47 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random


class ClusterTestcase:

    ## test case 1, 33 ##
    def run(self):
        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)

        ctest.connectDB()
        tdSql.init(ctest.conn.cursor(), False)

        ## Test case 1 ##
        tdLog.info("Test case 1 repeat %d times" % ctest.repeat)
        for i in range(ctest.repeat):
            tdLog.info("Start Round %d" % (i + 1))
            replica = random.randint(1, 3)
            ctest.createSTable(replica)
            ctest.run()
            tdLog.sleep(10)
            tdSql.query("select count(*) from %s.%s" % (ctest.dbName, ctest.stbName))
            tdSql.checkData(0, 0, ctest.numberOfRecords * ctest.numberOfTables)
            tdLog.info("Round %d completed" % (i + 1))

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,51 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random


class ClusterTestcase:

    ## test case 7 ##
    def run(self):
        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        tdSql.init(ctest.conn.cursor(), False)

        tdSql.execute("use %s" % ctest.dbName)
        tdSql.query("show vgroups")
        for i in range(10):
            tdSql.checkData(i, 5, "master")

        tdSql.execute("alter database %s replica 2" % ctest.dbName)
        tdLog.sleep(30)
        tdSql.query("show vgroups")
        for i in range(10):
            tdSql.checkData(i, 5, "master")
            tdSql.checkData(i, 7, "slave")

        tdSql.execute("alter database %s replica 3" % ctest.dbName)
        tdLog.sleep(30)
        tdSql.query("show vgroups")
        for i in range(10):
            tdSql.checkData(i, 5, "master")
            tdSql.checkData(i, 7, "slave")
            tdSql.checkData(i, 9, "slave")


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,214 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

from fabric import Connection
import random
import time
import logging


class Node:
    def __init__(self, index, username, hostIP, hostName, password, homeDir):
        self.index = index
        self.username = username
        self.hostIP = hostIP
        self.hostName = hostName
        self.homeDir = homeDir
        self.corePath = '/coredump'
        self.conn = Connection("{}@{}".format(username, hostName), connect_kwargs={"password": "{}".format(password)})

    def buildTaosd(self):
        # Note: fabric's Connection.cd() is a context manager; it has no effect
        # unless used in a "with" block, so each cd wraps its commands below.
        try:
            with self.conn.cd("/root/TDinternal/community"):
                self.conn.run("git checkout develop")
                self.conn.run("git pull")
            with self.conn.cd("/root/TDinternal"):
                self.conn.run("git checkout develop")
                self.conn.run("git pull")
            with self.conn.cd("/root/TDinternal/debug"):
                self.conn.run("cmake ..")
                self.conn.run("make")
                self.conn.run("make install")
        except Exception as e:
            print("Build Taosd error for node %d " % self.index)
            logging.exception(e)

    def startTaosd(self):
        try:
            self.conn.run("sudo systemctl start taosd")
        except Exception as e:
            print("Start Taosd error for node %d " % self.index)
            logging.exception(e)

    def stopTaosd(self):
        try:
            self.conn.run("sudo systemctl stop taosd")
        except Exception as e:
            print("Stop Taosd error for node %d " % self.index)
            logging.exception(e)

    def restartTaosd(self):
        try:
            self.conn.run("sudo systemctl restart taosd")
        except Exception as e:
            print("Restart Taosd error for node %d " % self.index)
            logging.exception(e)

    def removeTaosd(self):
        try:
            self.conn.run("rmtaos")
        except Exception as e:
            print("remove taosd error for node %d " % self.index)
            logging.exception(e)

    def forceStopOneTaosd(self):
        try:
            self.conn.run("kill -9 $(ps -ax|grep taosd|awk '{print $1}')")
        except Exception as e:
            print("kill taosd error on node%d " % self.index)
            logging.exception(e)

    def startOneTaosd(self):
        try:
            self.conn.run("nohup taosd -c /etc/taos/ > /dev/null 2>&1 &")
        except Exception as e:
            print("start taosd error on node%d " % self.index)
            logging.exception(e)

    def installTaosd(self, packagePath):
        self.conn.put(packagePath, self.homeDir)
        with self.conn.cd(self.homeDir):
            self.conn.run("tar -zxf $(basename '%s')" % packagePath)
            with self.conn.cd("TDengine-enterprise-server"):
                self.conn.run("yes|./install.sh")

    def configTaosd(self, taosConfigKey, taosConfigValue):
        self.conn.run("sudo echo '%s %s' >> %s" % (taosConfigKey, taosConfigValue, "/etc/taos/taos.cfg"))

    def removeTaosConfig(self, taosConfigKey, taosConfigValue):
        self.conn.run("sudo sed -in-place -e '/%s %s/d' %s" % (taosConfigKey, taosConfigValue, "/etc/taos/taos.cfg"))

    def configHosts(self, ip, name):
        self.conn.run("echo '%s %s' >> %s" % (ip, name, '/etc/hosts'))

    def removeData(self):
        try:
            self.conn.run("sudo rm -rf /var/lib/taos/*")
        except Exception as e:
            print("remove taosd data error for node %d " % self.index)
            logging.exception(e)

    def removeLog(self):
        try:
            self.conn.run("sudo rm -rf /var/log/taos/*")
        except Exception as e:
            print("remove taosd log error for node %d " % self.index)
            logging.exception(e)

    def removeDataForMnode(self):
        try:
            self.conn.run("sudo rm -rf /var/lib/taos/*")
        except Exception as e:
            print("remove mnode data error for node %d " % self.index)
            logging.exception(e)

    def removeDataForVnode(self, id):
        try:
            self.conn.run("sudo rm -rf /var/lib/taos/vnode%d/*.data" % id)
        except Exception as e:
            print("remove vnode data error for node %d " % self.index)
            logging.exception(e)

    def detectCoredumpFile(self):
        try:
            result = self.conn.run("find /coredump -name 'core_*' ", hide=True)
            output = result.stdout
            print("output: %s" % output)
            return output
        except Exception as e:
            print("find coredump file error on node %d " % self.index)
            logging.exception(e)


class Nodes:
    def __init__(self):
        self.tdnodes = []
        self.tdnodes.append(Node(0, 'root', '192.168.17.194', 'taosdata', 'r', '/root/'))
        # self.tdnodes.append(Node(1, 'root', '52.250.48.222', 'node2', 'a', '/root/'))
        # self.tdnodes.append(Node(2, 'root', '51.141.167.23', 'node3', 'a', '/root/'))
        # self.tdnodes.append(Node(3, 'root', '52.247.207.173', 'node4', 'a', '/root/'))
        # self.tdnodes.append(Node(4, 'root', '51.141.166.100', 'node5', 'a', '/root/'))

    def stopOneNode(self, index):
        self.tdnodes[index].stopTaosd()
        self.tdnodes[index].forceStopOneTaosd()

    def startOneNode(self, index):
        self.tdnodes[index].startOneTaosd()

    def detectCoredumpFile(self, index):
        return self.tdnodes[index].detectCoredumpFile()

    def stopAllTaosd(self):
        for i in range(len(self.tdnodes)):
            self.tdnodes[i].stopTaosd()

    def startAllTaosd(self):
        for i in range(len(self.tdnodes)):
            self.tdnodes[i].startTaosd()

    def restartAllTaosd(self):
        for i in range(len(self.tdnodes)):
            self.tdnodes[i].restartTaosd()

    def addConfigs(self, configKey, configValue):
        for i in range(len(self.tdnodes)):
            self.tdnodes[i].configTaosd(configKey, configValue)

    def removeConfigs(self, configKey, configValue):
        for i in range(len(self.tdnodes)):
            self.tdnodes[i].removeTaosConfig(configKey, configValue)

    def removeAllDataFiles(self):
        for i in range(len(self.tdnodes)):
            self.tdnodes[i].removeData()


class Test:
    def __init__(self):
        self.nodes = Nodes()

    # kill taosd randomly every 10 mins
    def randomlyKillDnode(self):
        loop = 0
        while True:
            index = random.randint(0, 4)
            print("loop: %d, kill taosd on node%d" % (loop, index))
            self.nodes.stopOneNode(index)
            time.sleep(60)
            self.nodes.startOneNode(index)
            time.sleep(600)
            loop = loop + 1

    def detectCoredump(self):
        loop = 0
        while True:
            for i in range(len(self.nodes.tdnodes)):
                result = self.nodes.detectCoredumpFile(i)
                print("core file path is %s" % result)
                if result and not result.isspace():
                    self.nodes.stopAllTaosd()
            print("sleep for 10 mins")
            time.sleep(600)


test = Test()
test.detectCoredump()
@@ -0,0 +1,53 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random


class ClusterTestcase:

    ## test case 20, 21, 22 ##
    def run(self):
        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        ctest.createSTable(3)
        ctest.run()
        tdSql.init(ctest.conn.cursor(), False)

        nodes.node2.stopTaosd()
        tdSql.execute("use %s" % ctest.dbName)
        tdSql.query("show vgroups")
        vnodeID = tdSql.getData(0, 0)
        nodes.node2.removeDataForVnode(vnodeID)
        nodes.node2.startTaosd()

        # Wait for the vnode file to recover
        for i in range(10):
            tdSql.query("select count(*) from t0")
            tdLog.sleep(10)

        for i in range(10):
            tdSql.query("select count(*) from t0")
            tdSql.checkData(0, 0, 1000)

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,47 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random


class ClusterTestcase:

    ## cover test case 5 ##
    def run(self):
        # cluster environment set up
        nodes = Nodes()
        nodes.addConfigs("maxVgroupsPerDb", "10")
        nodes.addConfigs("maxTablesPerVnode", "1000")
        nodes.restartAllTaosd()

        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        ctest.createSTable(1)
        ctest.run()

        tdSql.init(ctest.conn.cursor(), False)
        tdSql.execute("use %s" % ctest.dbName)
        tdSql.error("create table tt1 using %s tags(1)" % ctest.stbName)

        nodes.removeConfigs("maxVgroupsPerDb", "10")
        nodes.removeConfigs("maxTablesPerVnode", "1000")
        nodes.restartAllTaosd()

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,75 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random


class ClusterTestcase:

    ## test case 7, 10 ##
    def run(self):
        # cluster environment set up
        tdLog.info("Test case 7, 10")

        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        tdSql.init(ctest.conn.cursor(), False)

        nodes.node1.stopTaosd()
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(0, 4, "offline")
        tdSql.checkData(1, 4, "ready")
        tdSql.checkData(2, 4, "ready")

        nodes.node1.startTaosd()
        # re-query so the checks run against fresh dnode status
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(0, 4, "ready")
        tdSql.checkData(1, 4, "ready")
        tdSql.checkData(2, 4, "ready")

        nodes.node2.stopTaosd()
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(0, 4, "ready")
        tdSql.checkData(1, 4, "offline")
        tdSql.checkData(2, 4, "ready")

        nodes.node2.startTaosd()
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(0, 4, "ready")
        tdSql.checkData(1, 4, "ready")
        tdSql.checkData(2, 4, "ready")

        nodes.node3.stopTaosd()
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(0, 4, "ready")
        tdSql.checkData(1, 4, "ready")
        tdSql.checkData(2, 4, "offline")

        nodes.node3.startTaosd()
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(0, 4, "ready")
        tdSql.checkData(1, 4, "ready")
        tdSql.checkData(2, 4, "ready")

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,54 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random


class ClusterTestcase:

    ## cover test case 6, 8, 9, 11 ##
    def run(self):
        # cluster environment set up
        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        tdSql.init(ctest.conn.cursor(), False)

        nodes.addConfigs("offlineThreshold", "10")
        nodes.removeAllDataFiles()
        nodes.restartAllTaosd()
        nodes.node3.stopTaosd()

        tdLog.sleep(10)
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(2, 4, "offline")

        tdLog.sleep(60)
        # re-query so the checks run against fresh dnode status
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(2, 4, "dropping")

        tdLog.sleep(300)
        tdSql.query("show dnodes")
        tdSql.checkRows(2)

        nodes.removeConfigs("offlineThreshold", "10")
        nodes.restartAllTaosd()

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,65 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random


class ClusterTestcase:

    ## test case 28, 29, 30, 31 ##
    def run(self):
        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        ctest.createSTable(3)
        ctest.run()
        tdSql.init(ctest.conn.cursor(), False)

        tdSql.execute("use %s" % ctest.dbName)

        nodes.node2.stopTaosd()
        for i in range(100):
            tdSql.execute("drop table t%d" % i)

        nodes.node2.startTaosd()
        tdSql.query("show tables")
        tdSql.checkRows(9900)

        nodes.node2.stopTaosd()
        for i in range(10):
            tdSql.execute("create table a%d using meters tags(2)" % i)

        nodes.node2.startTaosd()
        tdSql.query("show tables")
        tdSql.checkRows(9910)

        nodes.node2.stopTaosd()
        tdSql.execute("alter table meters add column col6 int")
        nodes.node2.startTaosd()

        nodes.node2.stopTaosd()
        tdSql.execute("drop database %s" % ctest.dbName)

        nodes.node2.startTaosd()
        tdSql.query("show databases")
        tdSql.checkRows(0)

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,54 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
import time


class ClusterTestcase:

    ## test case 32 ##
    def run(self):
        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        ctest.createSTable(1)
        ctest.run()
        tdSql.init(ctest.conn.cursor(), False)

        tdSql.execute("use %s" % ctest.dbName)
        totalTime = 0
        for i in range(10):
            startTime = time.time()
            tdSql.query("select * from %s" % ctest.stbName)
            totalTime += time.time() - startTime
        print("replica 1: average query time for %d records: %f seconds" % (ctest.numberOfTables * ctest.numberOfRecords, totalTime / 10))

        tdSql.execute("alter database %s replica 3" % ctest.dbName)
        tdLog.sleep(60)
        totalTime = 0
        for i in range(10):
            startTime = time.time()
            tdSql.query("select * from %s" % ctest.stbName)
            totalTime += time.time() - startTime
        print("replica 3: average query time for %d records: %f seconds" % (ctest.numberOfTables * ctest.numberOfRecords, totalTime / 10))

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,45 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random

class ClusterTestcase:

    ## test case 19 ##
    def run(self):

        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        tdSql.init(ctest.conn.cursor(), False)

        tdSql.query("show databases")
        count = tdSql.queryRows

        nodes.stopAllTaosd()
        nodes.node1.startTaosd()
        tdSql.error("show databases")

        nodes.node2.startTaosd()
        tdSql.error("show databases")

        nodes.node3.startTaosd()
        tdLog.sleep(10)
        tdSql.query("show databases")
        tdSql.checkRows(count)

ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,48 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random

class ClusterTestcase:

    ## test case 17, 18 ##
    def run(self):

        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        ctest.createSTable(1)
        ctest.run()
        tdSql.init(ctest.conn.cursor(), False)

        tdSql.query("show databases")
        count = tdSql.queryRows
        tdSql.execute("use %s" % ctest.dbName)
        tdSql.execute("alter database %s replica 3" % ctest.dbName)
        nodes.node2.stopTaosd()
        nodes.node3.stopTaosd()
        tdSql.error("show databases")

        nodes.node2.startTaosd()
        tdSql.error("show databases")

        nodes.node3.startTaosd()
        tdSql.query("show databases")
        tdSql.checkRows(count)

ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,50 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random

class ClusterTestcase:

    ## test case 24, 25, 26, 27 ##
    def run(self):

        nodes = Nodes()
        ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        ctest.createSTable(1)
        ctest.run()
        tdSql.init(ctest.conn.cursor(), False)

        tdSql.execute("use %s" % ctest.dbName)
        tdSql.execute("alter database %s replica 3" % ctest.dbName)

        for i in range(100):
            tdSql.execute("drop table t%d" % i)

        for i in range(100):
            tdSql.execute("create table a%d using meters tags(1)" % i)

        tdSql.execute("alter table meters add col col5 int")
        tdSql.execute("alter table meters drop col col5 int")
        tdSql.execute("drop database %s" % ctest.dbName)

        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)

ct = ClusterTestcase()
ct.run()
@@ -0,0 +1,12 @@
python3 basicTest.py
python3 bananceTest.py
python3 changeReplicaTest.py
python3 dataFileRecoveryTest.py
python3 fullDnodesTest.py
python3 killAndRestartDnodesTest.py
python3 offlineThresholdTest.py
python3 oneReplicaOfflineTest.py
python3 queryTimeTest.py
python3 stopAllDnodesTest.py
python3 stopTwoDnodesTest.py
python3 syncingTest.py
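The list above runs every script unconditionally, even after one fails. A minimal Python sketch of a stop-on-first-failure wrapper (script names are copied from the list; the injectable `runner` parameter is purely illustrative, not part of the suite):

```python
import subprocess

# First few scripts from the suite above.
SCRIPTS = ["basicTest.py", "bananceTest.py", "changeReplicaTest.py"]

def run_suite(scripts, runner=None):
    """Run scripts in order; return the first failing script name, or None.

    By default each script is launched with python3 and is assumed to exit
    non-zero on failure; a custom runner can be injected for testing.
    """
    for name in scripts:
        rc = runner(name) if runner else subprocess.call(["python3", name])
        if rc != 0:
            return name
    return None
```

This keeps the flat file usable as documentation of the suite while making a failed cluster test abort the remaining runs instead of masking the failure.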
@@ -0,0 +1,777 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-
import threading
import taos
import sys
import json
import time
import random
import requests
import argparse
import datetime
import string
from requests.auth import HTTPBasicAuth

func_list = ['avg', 'count', 'twa', 'sum', 'stddev', 'leastsquares', 'min',
             'max', 'first', 'last', 'top', 'bottom', 'percentile', 'apercentile',
             'last_row', 'diff', 'spread', 'distinct']
condition_list = [
    "where _c0 > now -10d ",
    'interval(10s)',
    'limit 10',
    'group by',
    'order by',
    'fill(null)'
]
where_list = ['_c0>now-10d', ' <50', 'like', ' is null', 'in']

class ConcurrentInquiry:
    # def __init__(self,ts=1500000001000,host='127.0.0.1',user='root',password='taosdata',dbname='test',
    #              stb_prefix='st',subtb_prefix='t',n_Therads=10,r_Therads=10,probabilities=0.05,loop=5,
    #              stableNum = 2,subtableNum = 1000,insertRows = 100):
    def __init__(self, ts, host, user, password, dbname,
                 stb_prefix, subtb_prefix, n_Therads, r_Therads, probabilities, loop,
                 stableNum, subtableNum, insertRows, mix_table, replay):
        self.n_numOfTherads = n_Therads
        self.r_numOfTherads = r_Therads
        self.ts = ts
        self.host = host
        self.user = user
        self.password = password
        self.dbname = dbname
        self.stb_prefix = stb_prefix
        self.subtb_prefix = subtb_prefix
        self.stb_list = []
        self.subtb_list = []
        self.stb_stru_list = []
        self.subtb_stru_list = []
        self.stb_tag_list = []
        self.subtb_tag_list = []
        self.probabilities = [1 - probabilities, probabilities]
        self.ifjoin = [1, 0]
        self.loop = loop
        self.stableNum = stableNum
        self.subtableNum = subtableNum
        self.insertRows = insertRows
        self.mix_table = mix_table
        self.max_ts = datetime.datetime.now()
        self.min_ts = datetime.datetime.now() - datetime.timedelta(days=5)
        self.replay = replay

    def SetThreadsNum(self, num):
        self.numOfTherads = num
    def ret_fcol(self, cl, sql):  # return the first column of the result
        cl.execute(sql)
        fcol_list = []
        for data in cl:
            fcol_list.append(data[0])
        return fcol_list

    def r_stb_list(self, cl):  # list all super tables
        sql = 'show ' + self.dbname + '.stables'
        self.stb_list = self.ret_fcol(cl, sql)

    def r_subtb_list(self, cl, stablename):  # pick 2 sub-tables from each super table
        sql = 'select tbname from ' + self.dbname + '.' + stablename + ' limit 2;'
        self.subtb_list += self.ret_fcol(cl, sql)

    def cal_struct(self, cl, tbname):  # inspect the table schema
        tb = []
        tag = []
        sql = 'describe ' + self.dbname + '.' + tbname + ';'
        cl.execute(sql)
        for data in cl:
            if data[3]:
                tag.append(data[0])
            else:
                tb.append(data[0])
        return tb, tag

    def r_stb_stru(self, cl):  # collect the schema of every super table
        for i in self.stb_list:
            tb, tag = self.cal_struct(cl, i)
            self.stb_stru_list.append(tb)
            self.stb_tag_list.append(tag)

    def r_subtb_stru(self, cl):  # collect the schema of every sub-table
        for i in self.subtb_list:
            tb, tag = self.cal_struct(cl, i)
            self.subtb_stru_list.append(tb)
            self.subtb_tag_list.append(tag)

    def get_timespan(self, cl):  # get the time span (first super table only)
        sql = 'select first(_c0),last(_c0) from ' + self.dbname + '.' + self.stb_list[0] + ';'
        print(sql)
        cl.execute(sql)
        for data in cl:
            self.max_ts = data[1]
            self.min_ts = data[0]

    def get_full(self):  # fetch all tables and their schemas
        host = self.host
        user = self.user
        password = self.password
        conn = taos.connect(
            host,
            user,
            password,
        )
        cl = conn.cursor()
        self.r_stb_list(cl)
        for i in self.stb_list:
            self.r_subtb_list(cl, i)
        self.r_stb_stru(cl)
        self.r_subtb_stru(cl)
        self.get_timespan(cl)
        cl.close()
        conn.close()

    # query conditions
    def con_where(self, tlist, col_list, tag_list):
        l = []
        for i in range(random.randint(0, len(tlist))):
            c = random.choice(where_list)
            if c == '_c0>now-10d':
                rdate = self.min_ts + (self.max_ts - self.min_ts) / 10 * random.randint(-11, 11)
                conlist = ' _c0 ' + random.choice(['<', '>', '>=', '<=', '<>']) + "'" + str(rdate) + "'"
                if self.random_pick():
                    l.append(conlist)
                else:
                    l.append(c)
            elif '<50' in c:
                conlist = ' ' + random.choice(tlist) + random.choice(['<', '>', '>=', '<=', '<>']) + str(random.randrange(-100, 100))
                l.append(conlist)
            elif 'is null' in c:
                conlist = ' ' + random.choice(tlist) + random.choice([' is null', ' is not null'])
                l.append(conlist)
            elif 'in' in c:
                in_list = []
                temp = []
                for i in range(random.randint(0, 100)):
                    temp.append(random.randint(-10000, 10000))
                temp = (str(i) for i in temp)
                in_list.append(temp)
                temp1 = []
                for i in range(random.randint(0, 100)):
                    temp1.append("'" + ''.join(random.sample(string.ascii_letters, random.randint(0, 10))) + "'")
                in_list.append(temp1)
                in_list.append(['NULL', 'NULL'])
                conlist = ' ' + random.choice(tlist) + ' in (' + ','.join(random.choice(in_list)) + ')'
                l.append(conlist)
            else:
                s_all = string.ascii_letters
                conlist = ' ' + random.choice(tlist) + " like '%" + random.choice(s_all) + "%' "
                l.append(conlist)
        return 'where ' + random.choice([' and ', ' or ']).join(l)

    def con_interval(self, tlist, col_list, tag_list):
        interval = 'interval(' + str(random.randint(0, 20)) + random.choice(['a', 's', 'd', 'w', 'n', 'y']) + ')'
        return interval

    def con_limit(self, tlist, col_list, tag_list):
        rand1 = str(random.randint(0, 1000))
        rand2 = str(random.randint(0, 1000))
        return random.choice(['limit ' + rand1, 'limit ' + rand1 + ' offset ' + rand2,
                              ' slimit ' + rand1, ' slimit ' + rand1 + ' offset ' + rand2,
                              'limit ' + rand1 + ' slimit ' + rand2,
                              'limit ' + rand1 + ' offset ' + rand2 + ' slimit ' + rand1 + ' soffset ' + rand2])

    def con_fill(self, tlist, col_list, tag_list):
        return random.choice(['fill(null)', 'fill(prev)', 'fill(none)', 'fill(LINEAR)'])

    def con_group(self, tlist, col_list, tag_list):
        rand_tag = random.randint(0, 5)
        rand_col = random.randint(0, 1)
        if len(tag_list):
            return 'group by ' + ','.join(random.sample(col_list, rand_col) + random.sample(tag_list, rand_tag))
        else:
            return 'group by ' + ','.join(random.sample(col_list, rand_col))

    def con_order(self, tlist, col_list, tag_list):
        return 'order by ' + random.choice(tlist)

    def con_state_window(self, tlist, col_list, tag_list):
        return 'state_window(' + random.choice(tlist + tag_list) + ')'

    def con_session_window(self, tlist, col_list, tag_list):
        session_window = 'session_window(' + random.choice(tlist + tag_list) + ',' + str(random.randint(0, 20)) + random.choice(['a', 's', 'd', 'w', 'n', 'y']) + ')'
        return session_window
    def gen_subquery_sql(self):
        subsql, col_num = self.gen_query_sql(1)
        if col_num == 0:
            return 0
        col_list = []
        tag_list = []
        for i in range(col_num):
            col_list.append("taosd%d" % i)

        tlist = col_list + ['abc']  # add a nonexistent column 'abc' to probe for new bugs
        con_rand = random.randint(0, len(condition_list))
        func_rand = random.randint(0, len(func_list))
        col_rand = random.randint(0, len(col_list))
        t_rand = random.randint(0, len(tlist))
        sql = 'select '  # select
        random.shuffle(col_list)
        random.shuffle(func_list)
        sel_col_list = []
        col_rand = random.randint(0, len(col_list))
        loop = 0
        for i, j in zip(col_list[0:col_rand], func_list):  # pick the function applied to each queried column
            alias = ' as ' + 'sub%d ' % loop
            loop += 1
            pick_func = ''
            if j == 'leastsquares':
                pick_func = j + '(' + i + ',1,1)'
            elif j == 'top' or j == 'bottom' or j == 'percentile' or j == 'apercentile':
                pick_func = j + '(' + i + ',1)'
            else:
                pick_func = j + '(' + i + ')'
            if bool(random.getrandbits(1)):
                pick_func += alias
            sel_col_list.append(pick_func)
        if col_rand == 0:
            sql = sql + '*'
        else:
            sql = sql + ','.join(sel_col_list)  # select col & func
        sql = sql + ' from (' + subsql + ') '
        con_func = [self.con_where, self.con_interval, self.con_limit, self.con_group,
                    self.con_order, self.con_fill, self.con_state_window, self.con_session_window]
        sel_con = random.sample(con_func, random.randint(0, len(con_func)))
        sel_con_list = []
        for i in sel_con:
            sel_con_list.append(i(tlist, col_list, tag_list))  # invoke the chosen condition generators
        sql += ' '.join(sel_con_list)  # condition
        #print(sql)
        return sql

    def gen_query_sql(self, subquery=0):  # generate a query statement
        tbi = random.randint(0, len(self.subtb_list) + len(self.stb_list))  # randomly decide which table to query
        tbname = ''
        col_list = []
        tag_list = []
        is_stb = 0
        if tbi > len(self.stb_list):
            tbi = tbi - len(self.stb_list)
            tbname = self.subtb_list[tbi - 1]
            col_list = self.subtb_stru_list[tbi - 1]
            tag_list = self.subtb_tag_list[tbi - 1]
        else:
            tbname = self.stb_list[tbi - 1]
            col_list = self.stb_stru_list[tbi - 1]
            tag_list = self.stb_tag_list[tbi - 1]
            is_stb = 1
        tlist = col_list + tag_list + ['abc']  # add a nonexistent column 'abc' to probe for new bugs
        con_rand = random.randint(0, len(condition_list))
        func_rand = random.randint(0, len(func_list))
        col_rand = random.randint(0, len(col_list))
        tag_rand = random.randint(0, len(tag_list))
        t_rand = random.randint(0, len(tlist))
        sql = 'select '  # select
        random.shuffle(col_list)
        random.shuffle(func_list)
        sel_col_list = []
        col_rand = random.randint(0, len(col_list))
        loop = 0
        for i, j in zip(col_list[0:col_rand], func_list):  # pick the function applied to each queried column
            alias = ' as ' + 'taos%d ' % loop
            loop += 1
            pick_func = ''
            if j == 'leastsquares':
                pick_func = j + '(' + i + ',1,1)'
            elif j == 'top' or j == 'bottom' or j == 'percentile' or j == 'apercentile':
                pick_func = j + '(' + i + ',1)'
            else:
                pick_func = j + '(' + i + ')'
            if bool(random.getrandbits(1)) | subquery:
                pick_func += alias
            sel_col_list.append(pick_func)
        if col_rand == 0 & subquery:
            sql = sql + '*'
        else:
            sql = sql + ','.join(sel_col_list)  # select col & func
        if self.mix_table == 0:
            sql = sql + ' from ' + random.choice(self.stb_list + self.subtb_list) + ' '
        elif self.mix_table == 1:
            sql = sql + ' from ' + random.choice(self.subtb_list) + ' '
        else:
            sql = sql + ' from ' + random.choice(self.stb_list) + ' '
        con_func = [self.con_where, self.con_interval, self.con_limit, self.con_group,
                    self.con_order, self.con_fill, self.con_state_window, self.con_session_window]
        sel_con = random.sample(con_func, random.randint(0, len(con_func)))
        sel_con_list = []
        for i in sel_con:
            sel_con_list.append(i(tlist, col_list, tag_list))  # invoke the chosen condition generators
        sql += ' '.join(sel_con_list)  # condition
        #print(sql)
        return (sql, loop)
    def gen_query_join(self):  # generate a join query statement
        tbname = []
        col_list = []
        tag_list = []
        col_intersection = []
        tag_intersection = []
        subtable = None
        if self.mix_table == 0:
            if bool(random.getrandbits(1)):
                subtable = True
                tbname = random.sample(self.subtb_list, 2)
                for i in tbname:
                    col_list.append(self.subtb_stru_list[self.subtb_list.index(i)])
                    tag_list.append(self.subtb_stru_list[self.subtb_list.index(i)])
                col_intersection = list(set(col_list[0]).intersection(set(col_list[1])))
                tag_intersection = list(set(tag_list[0]).intersection(set(tag_list[1])))
            else:
                tbname = random.sample(self.stb_list, 2)
                for i in tbname:
                    col_list.append(self.stb_stru_list[self.stb_list.index(i)])
                    tag_list.append(self.stb_stru_list[self.stb_list.index(i)])
                col_intersection = list(set(col_list[0]).intersection(set(col_list[1])))
                tag_intersection = list(set(tag_list[0]).intersection(set(tag_list[1])))
        elif self.mix_table == 1:
            subtable = True
            tbname = random.sample(self.subtb_list, 2)
            for i in tbname:
                col_list.append(self.subtb_stru_list[self.subtb_list.index(i)])
                tag_list.append(self.subtb_stru_list[self.subtb_list.index(i)])
            col_intersection = list(set(col_list[0]).intersection(set(col_list[1])))
            tag_intersection = list(set(tag_list[0]).intersection(set(tag_list[1])))
        else:
            tbname = random.sample(self.stb_list, 2)
            for i in tbname:
                col_list.append(self.stb_stru_list[self.stb_list.index(i)])
                tag_list.append(self.stb_stru_list[self.stb_list.index(i)])
            col_intersection = list(set(col_list[0]).intersection(set(col_list[1])))
            tag_intersection = list(set(tag_list[0]).intersection(set(tag_list[1])))
        con_rand = random.randint(0, len(condition_list))
        col_rand = random.randint(0, len(col_list))
        tag_rand = random.randint(0, len(tag_list))
        sql = 'select '  # select

        sel_col_tag = []
        col_rand = random.randint(0, len(col_list))
        if bool(random.getrandbits(1)):
            sql += '*'
        else:
            sel_col_tag.append('t1.' + str(random.choice(col_list[0] + tag_list[0])))
            sel_col_tag.append('t2.' + str(random.choice(col_list[1] + tag_list[1])))
            sel_col_list = []
            random.shuffle(func_list)
            if self.random_pick():
                loop = 0
                for i, j in zip(sel_col_tag, func_list):  # pick the function applied to each queried column
                    alias = ' as ' + 'taos%d ' % loop
                    loop += 1
                    pick_func = ''
                    if j == 'leastsquares':
                        pick_func = j + '(' + i + ',1,1)'
                    elif j == 'top' or j == 'bottom' or j == 'percentile' or j == 'apercentile':
                        pick_func = j + '(' + i + ',1)'
                    else:
                        pick_func = j + '(' + i + ')'
                    if bool(random.getrandbits(1)):
                        pick_func += alias
                    sel_col_list.append(pick_func)
                sql += ','.join(sel_col_list)
            else:
                sql += ','.join(sel_col_tag)

        sql = sql + ' from ' + str(tbname[0]) + ' t1,' + str(tbname[1]) + ' t2 '  # select col & func
        join_section = None
        temp = None
        if subtable:
            temp = random.choices(col_intersection)
            join_section = temp.pop()
            sql += 'where t1._c0 = t2._c0 and ' + 't1.' + str(join_section) + '=t2.' + str(join_section)
        else:
            temp = random.choices(col_intersection + tag_intersection)
            join_section = temp.pop()
            sql += 'where t1._c0 = t2._c0 and ' + 't1.' + str(join_section) + '=t2.' + str(join_section)
        return sql

    def random_pick(self):
        x = random.uniform(0, 1)
        cumulative_probability = 0.0
        for item, item_probability in zip(self.ifjoin, self.probabilities):
            cumulative_probability += item_probability
            if x < cumulative_probability:
                break
        return item
    def gen_data(self):
        stableNum = self.stableNum
        subtableNum = self.subtableNum
        insertRows = self.insertRows
        t0 = self.ts
        host = self.host
        user = self.user
        password = self.password
        conn = taos.connect(
            host,
            user,
            password,
        )
        cl = conn.cursor()
        cl.execute("drop database if exists %s;" % self.dbname)
        cl.execute("create database if not exists %s;" % self.dbname)
        cl.execute("use %s" % self.dbname)
        for k in range(stableNum):
            sql = "create table %s (ts timestamp, c1 int, c2 float, c3 bigint, c4 smallint, c5 tinyint, c6 double, c7 bool,c8 binary(20),c9 nchar(20),c11 int unsigned,c12 smallint unsigned,c13 tinyint unsigned,c14 bigint unsigned) \
                tags(t1 int, t2 float, t3 bigint, t4 smallint, t5 tinyint, t6 double, t7 bool,t8 binary(20),t9 nchar(20), t11 int unsigned , t12 smallint unsigned , t13 tinyint unsigned , t14 bigint unsigned)" % (self.stb_prefix + str(k))
            cl.execute(sql)
            for j in range(subtableNum):
                if j % 100 == 0:
                    sql = "create table %s using %s tags(NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)" % \
                        (self.subtb_prefix + str(k) + '_' + str(j), self.stb_prefix + str(k))
                else:
                    sql = "create table %s using %s tags(%d,%d,%d,%d,%d,%d,%d,'%s','%s',%d,%d,%d,%d)" % \
                        (self.subtb_prefix + str(k) + '_' + str(j), self.stb_prefix + str(k), j, j / 2.0, j % 41, j % 51, j % 53, j * 1.0, j % 2, 'taos' + str(j), '涛思' + str(j), j % 43, j % 23, j % 17, j % 3167)
                print(sql)
                cl.execute(sql)
                for i in range(insertRows):
                    if i % 100 == 0:
                        ret = cl.execute(
                            "insert into %s values (%d , NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)" %
                            (self.subtb_prefix + str(k) + '_' + str(j), t0 + i))
                    else:
                        ret = cl.execute(
                            "insert into %s values (%d , %d,%d,%d,%d,%d,%d,%d,'%s','%s',%d,%d,%d,%d)" %
                            (self.subtb_prefix + str(k) + '_' + str(j), t0 + i, i % 100, i / 2.0, i % 41, i % 51, i % 53, i * 1.0, i % 2, 'taos' + str(i), '涛思' + str(i), i % 43, i % 23, i % 17, i % 3167))
        cl.close()
        conn.close()

    def rest_query(self, sql):  # REST interface
        host = self.host
        user = self.user
        password = self.password
        port = 6041
        url = "http://{}:{}/rest/sql".format(host, port)
        try:
            r = requests.post(url,
                              data='use %s' % self.dbname,
                              auth=HTTPBasicAuth('root', 'taosdata'))
            r = requests.post(url,
                              data=sql,
                              auth=HTTPBasicAuth('root', 'taosdata'))
        except Exception:
            print("REST API Failure (TODO: more info here)")
            raise
        rj = r.json()
        if 'status' not in rj:
            raise RuntimeError("No status in REST response")

        if rj['status'] == 'error':  # clearly reported error
            if 'code' not in rj:  # error without code
                raise RuntimeError("REST error return without code")
            errno = rj['code']  # May need to massage this in the future
            # print("Raising programming error with REST return: {}".format(rj))
            raise taos.error.ProgrammingError(
                rj['desc'], errno)  # todo: check existence of 'desc'

        if rj['status'] != 'succ':  # better be this
            raise RuntimeError(
                "Unexpected REST return status: {}".format(
                    rj['status']))

        nRows = rj['rows'] if ('rows' in rj) else 0
        return nRows
    def query_thread_n(self, threadID):  # query via the native Python connector
        host = self.host
        user = self.user
        password = self.password
        conn = taos.connect(
            host,
            user,
            password,
        )
        cl = conn.cursor()
        cl.execute("use %s;" % self.dbname)
        fo = open('bak_sql_n_%d' % threadID, 'w+')
        print("Thread %d: starting" % threadID)
        loop = self.loop
        while loop:

            try:
                if self.random_pick():
                    if self.random_pick():
                        sql, temp = self.gen_query_sql()
                    else:
                        sql = self.gen_subquery_sql()
                else:
                    sql = self.gen_query_join()
                print("sql is ", sql)
                fo.write(sql + '\n')
                start = time.time()
                cl.execute(sql)
                cl.fetchall()
                end = time.time()
                print("time cost :", end - start)
            except Exception as e:
                print('-' * 40)
                print(
                    "Failure thread%d, sql: %s \nexception: %s" %
                    (threadID, str(sql), str(e)))
                err_uec = 'Unable to establish connection'
                if err_uec in str(e) and loop > 0:
                    exit(-1)
            loop -= 1
            if loop == 0:
                break
        fo.close()
        cl.close()
        conn.close()
        print("Thread %d: finishing" % threadID)

    def query_thread_nr(self, threadID):  # replay via the native Python connector
        host = self.host
        user = self.user
        password = self.password
        conn = taos.connect(
            host,
            user,
            password,
        )
        cl = conn.cursor()
        cl.execute("use %s;" % self.dbname)
        replay_sql = []
        with open('bak_sql_n_%d' % threadID, 'r') as f:
            replay_sql = f.readlines()
        print("Replay Thread %d: starting" % threadID)
        for sql in replay_sql:
            try:
                print("sql is ", sql)
                start = time.time()
                cl.execute(sql)
                cl.fetchall()
                end = time.time()
                print("time cost :", end - start)
            except Exception as e:
                print('-' * 40)
                print(
                    "Failure thread%d, sql: %s \nexception: %s" %
                    (threadID, str(sql), str(e)))
                err_uec = 'Unable to establish connection'
                if err_uec in str(e):  # no loop counter in replay mode; always bail out on a lost connection
                    exit(-1)
        cl.close()
        conn.close()
        print("Replay Thread %d: finishing" % threadID)
    def query_thread_r(self, threadID):  # query via the REST interface
        print("Thread %d: starting" % threadID)
        fo = open('bak_sql_r_%d' % threadID, 'w+')
        loop = self.loop
        while loop:
            try:
                if self.random_pick():
                    if self.random_pick():
                        sql, temp = self.gen_query_sql()
                    else:
                        sql = self.gen_subquery_sql()
                else:
                    sql = self.gen_query_join()
                print("sql is ", sql)
                fo.write(sql + '\n')
                start = time.time()
                self.rest_query(sql)
                end = time.time()
                print("time cost :", end - start)
            except Exception as e:
                print('-' * 40)
                print(
                    "Failure thread%d, sql: %s \nexception: %s" %
                    (threadID, str(sql), str(e)))
                err_uec = 'Unable to establish connection'
                if err_uec in str(e) and loop > 0:
                    exit(-1)
            loop -= 1
            if loop == 0:
                break
        fo.close()
        print("Thread %d: finishing" % threadID)

    def query_thread_rr(self, threadID):  # replay via the REST interface
        print("Replay Thread %d: starting" % threadID)
        replay_sql = []
        with open('bak_sql_r_%d' % threadID, 'r') as f:
            replay_sql = f.readlines()

        for sql in replay_sql:
            try:
                print("sql is ", sql)
                start = time.time()
                self.rest_query(sql)
                end = time.time()
                print("time cost :", end - start)
            except Exception as e:
                print('-' * 40)
                print(
                    "Failure thread%d, sql: %s \nexception: %s" %
                    (threadID, str(sql), str(e)))
                err_uec = 'Unable to establish connection'
                if err_uec in str(e):  # no loop counter in replay mode; always bail out on a lost connection
                    exit(-1)
        print("Replay Thread %d: finishing" % threadID)
def run(self):
|
||||
print(self.n_numOfTherads,self.r_numOfTherads)
|
||||
threads = []
|
||||
if self.replay: #whether replay
|
||||
for i in range(self.n_numOfTherads):
|
||||
thread = threading.Thread(target=self.query_thread_nr, args=(i,))
|
||||
threads.append(thread)
|
||||
thread.start()
|
||||
for i in range(self.r_numOfTherads):
|
||||
thread = threading.Thread(target=self.query_thread_rr, args=(i,))
|
||||
threads.append(thread)
|
||||
thread.start()
|
||||
else:
|
||||
for i in range(self.n_numOfTherads):
|
||||
thread = threading.Thread(target=self.query_thread_n, args=(i,))
|
||||
threads.append(thread)
|
||||
thread.start()
|
||||
        for i in range(self.r_numOfTherads):
            thread = threading.Thread(target=self.query_thread_r, args=(i,))
            threads.append(thread)
            thread.start()

parser = argparse.ArgumentParser()
parser.add_argument(
    '-H',
    '--host-name',
    action='store',
    default='127.0.0.1',
    type=str,
    help='host name to be connected (default: 127.0.0.1)')
parser.add_argument(
    '-S',
    '--ts',
    action='store',
    default=1500000000000,
    type=int,
    help='insert data from timestamp (default: 1500000000000)')
parser.add_argument(
    '-d',
    '--db-name',
    action='store',
    default='test',
    type=str,
    help='Database name to be created (default: test)')
parser.add_argument(
    '-t',
    '--number-of-native-threads',
    action='store',
    default=10,
    type=int,
    help='Number of native threads (default: 10)')
parser.add_argument(
    '-T',
    '--number-of-rest-threads',
    action='store',
    default=10,
    type=int,
    help='Number of REST threads (default: 10)')
parser.add_argument(
    '-r',
    '--number-of-records',
    action='store',
    default=100,
    type=int,
    help='Number of records to be created for each table (default: 100)')
parser.add_argument(
    '-c',
    '--create-table',
    action='store',
    default=0,  # note: a string default such as '0' is truthy and is NOT converted by type=
    type=int,
    help='whether to generate data first (default: 0)')
parser.add_argument(
    '-p',
    '--subtb-name-prefix',
    action='store',
    default='t',
    type=str,
    help='subtable-name-prefix (default: t)')
parser.add_argument(
    '-P',
    '--stb-name-prefix',
    action='store',
    default='st',
    type=str,
    help='stable-name-prefix (default: st)')
parser.add_argument(
    '-b',
    '--probabilities',
    action='store',
    default=0.05,
    type=float,
    help='probability of join queries (default: 0.05)')
parser.add_argument(
    '-l',
    '--loop-per-thread',
    action='store',
    default=100,
    type=int,
    help='loops per thread (default: 100)')
parser.add_argument(
    '-u',
    '--user',
    action='store',
    default='root',
    type=str,
    help='user name')
parser.add_argument(
    '-w',
    '--password',
    action='store',
    default='root',
    type=str,
    help='password')
parser.add_argument(
    '-n',
    '--number-of-tables',
    action='store',
    default=1000,
    type=int,
    help='Number of subtables per stable (default: 1000)')
parser.add_argument(
    '-N',
    '--number-of-stables',
    action='store',
    default=2,
    type=int,
    help='Number of stables (default: 2)')
parser.add_argument(
    '-m',
    '--mix-stable-subtable',
    action='store',
    default=0,
    type=int,
    help='0: stable & subtable, 1: subtable, 2: stable (default: 0)')
parser.add_argument(
    '-R',
    '--replay',
    action='store',
    default=0,
    type=int,
    help='0: do not replay, 1: replay (default: 0)')

args = parser.parse_args()
q = ConcurrentInquiry(
    args.ts, args.host_name, args.user, args.password, args.db_name,
    args.stb_name_prefix, args.subtb_name_prefix, args.number_of_native_threads, args.number_of_rest_threads,
    args.probabilities, args.loop_per_thread, args.number_of_stables, args.number_of_tables, args.number_of_records,
    args.mix_stable_subtable, args.replay)

if args.create_table:
    q.gen_data()
q.get_full()

# q.gen_query_sql()
q.run()
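The loop above starts the REST-side query threads but, like the native-thread loop before it, never joins them, so the main thread can fall through while workers are still running. A minimal sketch of the start/join pattern, with a hypothetical `worker` standing in for `query_thread_r`:

```python
import threading

# Hypothetical stand-in for the real query_thread / query_thread_r workers.
def worker(thread_id, results):
    results.append(thread_id)

threads = []
results = []
for i in range(4):
    t = threading.Thread(target=worker, args=(i, results))
    threads.append(t)
    t.start()
for t in threads:
    t.join()  # block until every worker has finished before exiting
```

Joining also makes failures visible at the end of the run instead of the process exiting with threads mid-query.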
@@ -0,0 +1,82 @@
#!/bin/bash

# This is the script for us to try to cause the TDengine server or client to crash
#
# PREPARATION
#
# 1. Build and compile the TDengine source code that comes with this script, in the same directory tree
# 2. Please follow the directions in our README.md, and build TDengine in the build/ directory
# 3. Adjust the configuration file if needed under build/test/cfg/taos.cfg
# 4. Run the TDengine server instance: cd build; ./build/bin/taosd -c test/cfg
# 5. Make sure you have a working Python3 environment: run /usr/bin/python3 --version, and you should get 3.6 or above
# 6. Make sure you have the proper Python packages: # sudo apt install python3-setuptools python3-pip python3-distutils
#
# RUNNING THIS SCRIPT
#
# This script assumes the source code directory is intact, and that the binaries have been built in the
# build/ directory; as such, we will load the Python libraries in the directory tree, and also load
# the TDengine client shared library (.so) file in the build/ directory, as evidenced in the env
# variables below.
#
# Running the script is simple, no parameter is needed (for now, but this will change in the future).
#
# Happy Crashing...


# Due to the heavy path name assumptions/usage, let us require that the user be in the current directory
EXEC_DIR=`dirname "$0"`
if [[ $EXEC_DIR != "." ]]
then
    echo "ERROR: Please execute `basename "$0"` in its own directory (for now anyway, pardon the dust)"
    exit -1
fi

CURR_DIR=`pwd`
IN_TDINTERNAL="community"
if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
    TAOS_DIR=$CURR_DIR/../../..
    TAOSD_DIR=`find $TAOS_DIR -name "taosd"|grep bin|head -n1`
    LIB_DIR=`echo $TAOSD_DIR|rev|cut -d '/' -f 3,4,5,6,7|rev`/lib
else
    TAOS_DIR=$CURR_DIR/../..
    TAOSD_DIR=`find $TAOS_DIR -name "taosd"|grep bin|head -n1`
    LIB_DIR=`echo $TAOSD_DIR|rev|cut -d '/' -f 3,4,5,6|rev`/lib
fi
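The `rev|cut|rev` pipeline above is a character-reversal trick: it drops the trailing `bin/taosd` components of the discovered binary path and lands on the sibling `lib/` directory. A rough Python equivalent (ignoring the field-count cap of the `cut` invocation; the helper name is illustrative):

```python
import os.path

def lib_dir_from_taosd(taosd_path: str) -> str:
    # Strip the trailing "bin/taosd" components, then append "lib",
    # approximating the rev|cut|rev pipeline in the script.
    build_dir = os.path.dirname(os.path.dirname(taosd_path))
    return os.path.join(build_dir, "lib")
```

For example, `/work/TDengine/debug/build/bin/taosd` maps to `/work/TDengine/debug/build/lib`.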

# Now getting ready to execute Python
# The following is the default of our standard dev env (Ubuntu 20.04), modify/adjust at your own risk
PYTHON_EXEC=python3.8

# First we need to set up a path for Python to find our own TAOS modules, so that "import" can work.
export PYTHONPATH=$(pwd)/../../src/connector/python:$(pwd)

# Then let us set up the library path so that our compiled SO file can be loaded by Python
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$LIB_DIR

# Now we are all set, and let's see if we can find a crash. Note we pass all params
CONCURRENT_INQUIRY=concurrent_inquiry.py
if [[ $1 == '--valgrind' ]]; then
    shift
    export PYTHONMALLOC=malloc
    VALGRIND_OUT=valgrind.out
    VALGRIND_ERR=valgrind.err
    # How to generate a valgrind suppression file: https://stackoverflow.com/questions/17159578/generating-suppressions-for-memory-leaks
    # valgrind --leak-check=full --gen-suppressions=all --log-fd=9 python3.8 ./concurrent_inquiry.py $@ 9>>memcheck.log
    echo Executing under VALGRIND, with STDOUT/ERR going to $VALGRIND_OUT and $VALGRIND_ERR, please watch them from a different terminal.
    valgrind \
        --leak-check=yes \
        --suppressions=crash_gen/valgrind_taos.supp \
        $PYTHON_EXEC \
        $CONCURRENT_INQUIRY $@ > $VALGRIND_OUT 2> $VALGRIND_ERR
elif [[ $1 == '--helgrind' ]]; then
    shift
    HELGRIND_OUT=helgrind.out
    HELGRIND_ERR=helgrind.err
    valgrind \
        --tool=helgrind \
        $PYTHON_EXEC \
        $CONCURRENT_INQUIRY $@ > $HELGRIND_OUT 2> $HELGRIND_ERR
else
    $PYTHON_EXEC $CONCURRENT_INQUIRY $@
fi
@@ -0,0 +1,73 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import sys
from shutil import which  # needed by isLuaInstalled below

from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""  # fall back to empty so run() can report "taosd not found"
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def isLuaInstalled(self):
        if not which('lua'):
            tdLog.exit("Lua not found!")
            return False
        else:
            return True

    def run(self):
        tdSql.prepare()
        # tdLog.info("Check if Lua installed")
        # if not self.isLuaInstalled():
        #     sys.exit(1)

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)

        targetPath = buildPath + "/../tests/examples/lua"
        tdLog.info(targetPath)
        currentPath = os.getcwd()
        os.chdir(targetPath)
        os.system('./build.sh')
        os.system('lua test.lua')

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


# tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,169 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-
import threading
import taos
import sys
import json
import time
import random

# query sql
query_sql = [
    # first supertable
    "select count(*) from test.meters ;",
    "select count(*) from test.meters where t3 > 2;",
    "select count(*) from test.meters where ts <> '2020-05-13 10:00:00.002';",
    "select count(*) from test.meters where t7 like 'taos_1%';",
    "select count(*) from test.meters where t7 like '_____2';",
    "select count(*) from test.meters where t8 like '%思%';",
    "select count(*) from test.meters interval(1n) order by ts desc;",
    # "select max(c0) from test.meters group by tbname",
    "select first(ts) from test.meters where t5 >5000 and t5<5100;",
    "select last(ts) from test.meters where t5 >5000 and t5<5100;",
    "select last_row(*) from test.meters;",
    "select twa(c1) from test.t1 where ts > 1500000001000 and ts < 1500000101000",
    "select avg(c1) from test.meters where t5 >5000 and t5<5100;",
    "select bottom(c1, 2) from test.t1;",
    "select diff(c1) from test.t1;",
    "select leastsquares(c1, 1, 1) from test.t1 ;",
    "select max(c1) from test.meters where t5 >5000 and t5<5100;",
    "select min(c1) from test.meters where t5 >5000 and t5<5100;",
    "select c1 + c2 + c1 / c5 + c4 + c2 from test.t1;",
    "select percentile(c1, 50) from test.t1;",
    "select spread(c1) from test.t1 ;",
    "select stddev(c1) from test.t1;",
    "select sum(c1) from test.meters where t5 >5000 and t5<5100;",
    "select top(c1, 2) from test.meters where t5 >5000 and t5<5100;",  # note: comma was missing, silently concatenating with the next string
    "select twa(c4) from test.t1 where ts > 1500000001000 and ts < 1500000101000",
    "select avg(c4) from test.meters where t5 >5000 and t5<5100;",
    "select bottom(c4, 2) from test.t1 where t5 >5000 and t5<5100;",
    "select diff(c4) from test.t1 where t5 >5000 and t5<5100;",
    "select leastsquares(c4, 1, 1) from test.t1 ;",
    "select max(c4) from test.meters where t5 >5000 and t5<5100;",
    "select min(c4) from test.meters where t5 >5000 and t5<5100;",
    "select c5 + c2 + c4 / c5 + c4 + c2 from test.t1 ;",
    "select percentile(c5, 50) from test.t1;",
    "select spread(c5) from test.t1 ;",
    "select stddev(c5) from test.t1 where t5 >5000 and t5<5100;",
    "select sum(c5) from test.meters where t5 >5000 and t5<5100;",
    "select top(c5, 2) from test.meters where t5 >5000 and t5<5100;",
    # all vnode
    "select count(*) from test.meters where t5 >5000 and t5<5100",
    "select max(c0),avg(c1) from test.meters where t5 >5000 and t5<5100",
    "select sum(c5),avg(c1) from test.meters where t5 >5000 and t5<5100",
    "select max(c0),min(c5) from test.meters where t5 >5000 and t5<5100",
    "select min(c0),avg(c5) from test.meters where t5 >5000 and t5<5100",
    # second supertable
    "select count(*) from test.meters1 where t3 > 2;",
    "select count(*) from test.meters1 where ts <> '2020-05-13 10:00:00.002';",
    "select count(*) from test.meters where t7 like 'taos_1%';",
    "select count(*) from test.meters where t7 like '_____2';",
    "select count(*) from test.meters where t8 like '%思%';",
    "select count(*) from test.meters1 interval(1n) order by ts desc;",
    # "select max(c0) from test.meters1 group by tbname",
    "select first(ts) from test.meters1 where t5 >5000 and t5<5100;",
    "select last(ts) from test.meters1 where t5 >5000 and t5<5100;",
    "select last_row(*) from test.meters1 ;",
    "select twa(c1) from test.m1 where ts > 1500000001000 and ts < 1500000101000",
    "select avg(c1) from test.meters1 where t5 >5000 and t5<5100;",
    "select bottom(c1, 2) from test.m1 where t5 >5000 and t5<5100;",
    "select diff(c1) from test.m1 ;",
    "select leastsquares(c1, 1, 1) from test.m1 ;",
    "select max(c1) from test.meters1 where t5 >5000 and t5<5100;",
    "select min(c1) from test.meters1 where t5 >5000 and t5<5100;",
    "select c1 + c2 + c1 / c0 + c2 from test.m1 ;",
    "select percentile(c1, 50) from test.m1;",
    "select spread(c1) from test.m1 ;",
    "select stddev(c1) from test.m1;",
    "select sum(c1) from test.meters1 where t5 >5000 and t5<5100;",
    "select top(c1, 2) from test.meters1 where t5 >5000 and t5<5100;",
    "select twa(c5) from test.m1 where ts > 1500000001000 and ts < 1500000101000",
    "select avg(c5) from test.meters1 where t5 >5000 and t5<5100;",
    "select bottom(c5, 2) from test.m1;",
    "select diff(c5) from test.m1;",
    "select leastsquares(c5, 1, 1) from test.m1 ;",
    "select max(c5) from test.meters1 where t5 >5000 and t5<5100;",
    "select min(c5) from test.meters1 where t5 >5000 and t5<5100;",
    "select c5 + c2 + c4 / c5 + c0 from test.m1;",
    "select percentile(c4, 50) from test.m1;",
    "select spread(c4) from test.m1 ;",
    "select stddev(c4) from test.m1;",
    "select sum(c4) from test.meters1 where t5 >5100 and t5<5300;",
    "select top(c4, 2) from test.meters1 where t5 >5100 and t5<5300;",
    "select count(*) from test.meters1 where t5 >5100 and t5<5300",
    # all vnode
    "select count(*) from test.meters1 where t5 >5100 and t5<5300",
    "select max(c0),avg(c1) from test.meters1 where t5 >5000 and t5<5100",
    "select sum(c5),avg(c1) from test.meters1 where t5 >5000 and t5<5100",
    "select max(c0),min(c5) from test.meters1 where t5 >5000 and t5<5100",
    "select min(c0),avg(c5) from test.meters1 where t5 >5000 and t5<5100",
    # join
    # "select * from meters,meters1 where meters.ts = meters1.ts and meters.t5 = meters1.t5",
    # "select * from meters,meters1 where meters.ts = meters1.ts and meters.t7 = meters1.t7",
    # "select * from meters,meters1 where meters.ts = meters1.ts and meters.t8 = meters1.t8",
    # "select meters.ts,meters1.c2 from meters,meters1 where meters.ts = meters1.ts and meters.t8 = meters1.t8"
]

class ConcurrentInquiry:
    def initConnection(self):
        self.numOfTherads = 50
        self.ts = 1500000001000

    def SetThreadsNum(self, num):
        self.numOfTherads = num

    def query_thread(self, threadID):
        host = "10.211.55.14"
        user = "root"
        password = "taosdata"
        conn = taos.connect(
            host,
            user,
            password,
        )
        cl = conn.cursor()
        cl.execute("use test;")

        print("Thread %d: starting" % threadID)

        while True:
            ran_query_sql = query_sql
            random.shuffle(ran_query_sql)
            for i in ran_query_sql:
                print("Thread %d : %s" % (threadID, i))
                try:
                    start = time.time()
                    cl.execute(i)
                    cl.fetchall()
                    end = time.time()
                    print("time cost :", end - start)
                except Exception as e:
                    print(
                        "Failure thread%d, sql: %s,exception: %s" %
                        (threadID, str(i), str(e)))
                    exit(-1)

        print("Thread %d: finishing" % threadID)

    def run(self):
        threads = []
        for i in range(self.numOfTherads):
            thread = threading.Thread(target=self.query_thread, args=(i,))
            threads.append(thread)
            thread.start()

q = ConcurrentInquiry()
q.initConnection()
q.run()
@@ -0,0 +1,82 @@
#!/bin/bash

# This is the script for us to try to cause the TDengine server or client to crash
#
# PREPARATION
#
# 1. Build and compile the TDengine source code that comes with this script, in the same directory tree
# 2. Please follow the directions in our README.md, and build TDengine in the build/ directory
# 3. Adjust the configuration file if needed under build/test/cfg/taos.cfg
# 4. Run the TDengine server instance: cd build; ./build/bin/taosd -c test/cfg
# 5. Make sure you have a working Python3 environment: run /usr/bin/python3 --version, and you should get 3.6 or above
# 6. Make sure you have the proper Python packages: # sudo apt install python3-setuptools python3-pip python3-distutils
#
# RUNNING THIS SCRIPT
#
# This script assumes the source code directory is intact, and that the binaries have been built in the
# build/ directory; as such, we will load the Python libraries in the directory tree, and also load
# the TDengine client shared library (.so) file in the build/ directory, as evidenced in the env
# variables below.
#
# Running the script is simple, no parameter is needed (for now, but this will change in the future).
#
# Happy Crashing...


# Due to the heavy path name assumptions/usage, let us require that the user be in the current directory
EXEC_DIR=`dirname "$0"`
if [[ $EXEC_DIR != "." ]]
then
    echo "ERROR: Please execute `basename "$0"` in its own directory (for now anyway, pardon the dust)"
    exit -1
fi

CURR_DIR=`pwd`
IN_TDINTERNAL="community"
if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
    TAOS_DIR=$CURR_DIR/../../..
    TAOSD_DIR=`find $TAOS_DIR -name "taosd"|grep bin|head -n1`
    LIB_DIR=`echo $TAOSD_DIR|rev|cut -d '/' -f 3,4,5,6,7|rev`/lib
else
    TAOS_DIR=$CURR_DIR/../..
    TAOSD_DIR=`find $TAOS_DIR -name "taosd"|grep bin|head -n1`
    LIB_DIR=`echo $TAOSD_DIR|rev|cut -d '/' -f 3,4,5,6|rev`/lib
fi

# Now getting ready to execute Python
# The following is the default of our standard dev env (Ubuntu 20.04), modify/adjust at your own risk
PYTHON_EXEC=python3.8

# First we need to set up a path for Python to find our own TAOS modules, so that "import" can work.
export PYTHONPATH=$(pwd)/../../src/connector/python:$(pwd)

# Then let us set up the library path so that our compiled SO file can be loaded by Python
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$LIB_DIR

# Now we are all set, and let's see if we can find a crash. Note we pass all params
CRASH_GEN_EXEC=crash_gen_bootstrap.py
if [[ $1 == '--valgrind' ]]; then
    shift
    export PYTHONMALLOC=malloc
    VALGRIND_OUT=valgrind.out
    VALGRIND_ERR=valgrind.err
    # How to generate a valgrind suppression file: https://stackoverflow.com/questions/17159578/generating-suppressions-for-memory-leaks
    # valgrind --leak-check=full --gen-suppressions=all --log-fd=9 python3.8 ./crash_gen.py $@ 9>>memcheck.log
    echo Executing under VALGRIND, with STDOUT/ERR going to $VALGRIND_OUT and $VALGRIND_ERR, please watch them from a different terminal.
    valgrind \
        --leak-check=yes \
        --suppressions=crash_gen/valgrind_taos.supp \
        $PYTHON_EXEC \
        $CRASH_GEN_EXEC $@ > $VALGRIND_OUT 2> $VALGRIND_ERR
elif [[ $1 == '--helgrind' ]]; then
    shift
    HELGRIND_OUT=helgrind.out
    HELGRIND_ERR=helgrind.err
    valgrind \
        --tool=helgrind \
        $PYTHON_EXEC \
        $CRASH_GEN_EXEC $@ > $HELGRIND_OUT 2> $HELGRIND_ERR
else
    $PYTHON_EXEC $CRASH_GEN_EXEC $@
fi
@@ -0,0 +1,156 @@
<center><h1>User's Guide to the Crash_Gen Tool</h1></center>

# Introduction

To effectively test and debug our TDengine product, we have developed a simple tool to
exercise various functions of the system in a randomized fashion, hoping to expose a
maximum number of problems, hopefully without a pre-determined scenario.

# Features

This tool can run as a test client with the following features:

1. Any number of concurrent threads
1. Any number of test steps/loops
1. Auto-create and write to multiple databases
1. Ignore specific error codes
1. Write small or large data blocks
1. Auto-generate out-of-sequence data, if needed
1. Verify the result of write operations
1. Concurrent writing to a shadow database for later data verification
1. User-specified number of replicas to use, against clusters

This tool can also be used to start a TDengine service, either in stand-alone mode or
cluster mode. The features include:

1. User-specified number of D-Nodes to create/use.

# Preparation

To run this tool, please ensure the following preparation work is done first.

1. Fetch a copy of the TDengine source code, and build it successfully in the `build/`
   directory
1. Ensure that the system has Python3.8 or above properly installed. We use
   Ubuntu 20.04LTS as our own development environment, and suggest you also use such
   an environment if possible.

# Simple Execution as Client Test Tool

To run the tool with the simplest method, follow the steps below:

1. Open a terminal window, start the `taosd` service in the `build/` directory
   (or however you prefer to start the `taosd` service)
1. Open another terminal window, go into the `tests/pytest/` directory, and
   run `./crash_gen.sh -p -t 3 -s 10` (change the two parameters here as you wish)
1. Watch the output to the end and see if you get a `SUCCESS` or `FAILURE`

That's it!

# Running Server-side Clusters

This tool also makes it easy to test/verify the clustering capabilities of TDengine. You
can start a cluster quite easily with the following command:

```
$ cd tests/pytest/
$ rm -rf ../../build/cluster_dnode_?; ./crash_gen.sh -e -o 3 # first part optional
```

The `-e` option above tells the tool to start the service and not run any tests, while
the `-o 3` option tells the tool to start 3 DNodes and join them together in a cluster.
Obviously you can adjust the number here. The `rm -rf` command line is optional,
to clean up previous cluster data, so that we can start from a clean state with no data
at all.

## Behind the Scenes

When the tool runs a cluster, it uses a number of directories, each holding the information
for a single DNode, see:

```
$ ls build/cluster*
build/cluster_dnode_0:
cfg data log

build/cluster_dnode_1:
cfg data log

build/cluster_dnode_2:
cfg data log
```

Therefore, when something goes wrong and you want to reset everything with the cluster, simply
erase all the files:

```
$ rm -rf build/cluster_dnode_*
```

## Addresses and Ports

The DNodes in the cluster all bind to the `127.0.0.1` IP address (for now anyway), and
use port 6030 for the first DNode, 6130 for the 2nd one, and so on.

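The port assignment described above can be sketched as a simple function (the names here are illustrative, not part of the tool's API):

```python
def dnode_port(index: int, base: int = 6030, step: int = 100) -> int:
    """First DNode listens on 6030, the second on 6130, and so on."""
    return base + step * index
```

So DNode 0 is reachable on 6030, DNode 1 on 6130, DNode 2 on 6230.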
## Testing Against a Cluster

In a separate terminal window, you can invoke the tool in client mode and test against
a cluster, such as:

```
$ ./crash_gen.sh -p -t 10 -s 100 -i 3
```

Here the `-i` option tells the tool to always create tables with 3 replicas, and run
all tests against such tables.

# Additional Features

The exhaustive features of the tool are available through the `-h` option:

```
$ ./crash_gen.sh -h
usage: crash_gen_bootstrap.py [-h] [-a] [-b MAX_DBS] [-c CONNECTOR_TYPE] [-d] [-e] [-g IGNORE_ERRORS]
                              [-i NUM_REPLICAS] [-k] [-l] [-m] [-n]
                              [-o NUM_DNODES] [-p] [-r] [-s MAX_STEPS] [-t NUM_THREADS] [-v] [-w] [-x]

TDengine Auto Crash Generator (PLEASE NOTICE the Prerequisites Below)
---------------------------------------------------------------------
1. You build TDengine in the top level ./build directory, as described in official docs
2. You run the server there before this script: ./build/bin/taosd -c test/cfg

optional arguments:
  -h, --help            show this help message and exit
  -a, --auto-start-service
                        Automatically start/stop the TDengine service (default: false)
  -b MAX_DBS, --max-dbs MAX_DBS
                        Maximum number of DBs to keep, set to disable dropping DB. (default: 0)
  -c CONNECTOR_TYPE, --connector-type CONNECTOR_TYPE
                        Connector type to use: native, rest, or mixed (default: 10)
  -d, --debug           Turn on DEBUG mode for more logging (default: false)
  -e, --run-tdengine    Run TDengine service in foreground (default: false)
  -g IGNORE_ERRORS, --ignore-errors IGNORE_ERRORS
                        Ignore error codes, comma separated, 0x supported (default: None)
  -i NUM_REPLICAS, --num-replicas NUM_REPLICAS
                        Number (fixed) of replicas to use, when testing against clusters. (default: 1)
  -k, --track-memory-leaks
                        Use Valgrind tool to track memory leaks (default: false)
  -l, --larger-data     Write larger amount of data during write operations (default: false)
  -m, --mix-oos-data    Mix out-of-sequence data into the test data stream (default: true)
  -n, --dynamic-db-table-names
                        Use non-fixed names for dbs/tables, for -b, useful for multi-instance executions (default: false)
  -o NUM_DNODES, --num-dnodes NUM_DNODES
                        Number of Dnodes to initialize, used with -e option. (default: 1)
  -p, --per-thread-db-connection
                        Use a single shared db connection (default: false)
  -r, --record-ops      Use a pair of always-fsynced files to record operations performing + performed, for power-off tests (default: false)
  -s MAX_STEPS, --max-steps MAX_STEPS
                        Maximum number of steps to run (default: 100)
  -t NUM_THREADS, --num-threads NUM_THREADS
                        Number of threads to run (default: 10)
  -v, --verify-data     Verify data written in a number of places by reading back (default: false)
  -w, --use-shadow-db   Use a shadow database to verify data integrity (default: false)
  -x, --continue-on-exception
                        Continue execution after encountering unexpected/disallowed errors/exceptions (default: false)
```

@@ -0,0 +1,2 @@
# Helpful Ref: https://stackoverflow.com/questions/24100558/how-can-i-split-a-module-into-multiple-files-without-breaking-a-backwards-compa/24100645
from crash_gen.service_manager import ServiceManager, TdeInstance, TdeSubProcess

(File diff suppressed because it is too large)
@@ -0,0 +1,954 @@
from __future__ import annotations

import os
import io
import sys
from enum import Enum
import threading
import signal
import logging
import time
from subprocess import PIPE, Popen, TimeoutExpired
from typing import BinaryIO, Generator, IO, List, NewType, Optional
import typing

try:
    import psutil
except:
    print("Psutil module needed, please install: sudo pip3 install psutil")
    sys.exit(-1)
from queue import Queue, Empty

from .shared.config import Config
from .shared.db import DbTarget, DbConn
from .shared.misc import Logging, Helper, CrashGenError, Status, Progress, Dice
from .shared.types import DirPath, IpcStream

# from crash_gen.misc import CrashGenError, Dice, Helper, Logging, Progress, Status
# from crash_gen.db import DbConn, DbTarget
# from crash_gen.settings import Config
# from crash_gen.types import DirPath

class TdeInstance():
    """
    A class to capture the *static* information of a TDengine instance,
    including the location of the various files/directories, and basic
    configuration.
    """

    @classmethod
    def _getBuildPath(cls):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = None
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        if buildPath is None:
            raise RuntimeError("Failed to determine buildPath, selfPath={}, projPath={}"
                               .format(selfPath, projPath))
        return buildPath

    @classmethod
    def prepareGcovEnv(cls, env):
        # Ref: https://gcc.gnu.org/onlinedocs/gcc/Cross-profiling.html
        bPath = cls._getBuildPath()  # build PATH
        numSegments = len(bPath.split('/'))  # "/x/TDengine/build" splits into 4 segments, counting the leading empty one
        # numSegments += 2 # cover "/src" after build
        # numSegments = numSegments - 1 # DEBUG only
        env['GCOV_PREFIX'] = bPath + '/src_s'  # Server side source
        env['GCOV_PREFIX_STRIP'] = str(numSegments)  # Strip every element; plus, ENV values need to be strings
        # VERY VERY important note: GCOV data collection is NOT effective upon SIG_KILL
        Logging.info("Preparing GCOV environment to strip {} elements and use path: {}".format(
            numSegments, env['GCOV_PREFIX']))
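`prepareGcovEnv` re-roots coverage output by stripping every path element of the original object directory and prefixing the new location. A sketch of the same computation as a pure function (note that `'/x/TDengine/build'.split('/')` yields 4 elements, because of the leading empty string from the absolute path):

```python
def gcov_env(build_path: str) -> dict:
    """Sketch of the GCOV redirection setup used above: strip every path
    element of the object directory and re-root .gcda files under
    <build>/src_s. Names mirror prepareGcovEnv; this is not the tool itself."""
    num_segments = len(build_path.split('/'))
    return {
        'GCOV_PREFIX': build_path + '/src_s',
        'GCOV_PREFIX_STRIP': str(num_segments),  # environment values must be strings
    }
```

With this in the child process environment, gcc's runtime writes coverage data under the `src_s` tree instead of the original build paths.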
||||
def __init__(self, subdir='test', tInstNum=0, port=6030, fepPort=6030):
|
||||
self._buildDir = self._getBuildPath()
|
||||
self._subdir = '/' + subdir # TODO: tolerate "/"
|
||||
self._port = port # TODO: support different IP address too
|
||||
self._fepPort = fepPort
|
||||
|
||||
self._tInstNum = tInstNum
|
||||
|
||||
# An "Tde Instance" will *contain* a "sub process" object, with will/may use a thread internally
|
||||
# self._smThread = ServiceManagerThread()
|
||||
self._subProcess = None # type: Optional[TdeSubProcess]
|
||||
|
||||
def getDbTarget(self):
|
||||
return DbTarget(self.getCfgDir(), self.getHostAddr(), self._port)
|
||||
|
||||
def getPort(self):
|
||||
return self._port
|
||||
|
||||
def __repr__(self):
|
||||
return "[TdeInstance: {}, subdir={}]".format(
|
||||
self._buildDir, Helper.getFriendlyPath(self._subdir))
|
||||
|
||||
def generateCfgFile(self):
|
||||
# print("Logger = {}".format(logger))
|
||||
# buildPath = self.getBuildPath()
|
||||
# taosdPath = self._buildPath + "/build/bin/taosd"
|
||||
|
||||
cfgDir = self.getCfgDir()
|
||||
cfgFile = cfgDir + "/taos.cfg" # TODO: inquire if this is fixed
|
||||
if os.path.exists(cfgFile):
|
||||
if os.path.isfile(cfgFile):
|
||||
Logging.warning("Config file exists already, skip creation: {}".format(cfgFile))
|
||||
return # cfg file already exists, nothing to do
|
||||
else:
|
||||
raise CrashGenError("Invalid config file: {}".format(cfgFile))
|
||||
# Now that the cfg file doesn't exist
|
||||
if os.path.exists(cfgDir):
|
||||
if not os.path.isdir(cfgDir):
|
||||
raise CrashGenError("Invalid config dir: {}".format(cfgDir))
|
||||
# else: good path
|
||||
else:
|
||||
os.makedirs(cfgDir, exist_ok=True) # like "mkdir -p"
|
||||
# Now we have a good cfg dir
|
||||
cfgValues = {
|
||||
'runDir': self.getRunDir(),
|
||||
'ip': '127.0.0.1', # TODO: change to a network addressable ip
|
||||
'port': self._port,
|
||||
'fepPort': self._fepPort,
|
||||
}
|
||||
cfgTemplate = """
|
||||
dataDir {runDir}/data
|
||||
logDir {runDir}/log
|
||||
|
||||
charset UTF-8
|
||||
|
||||
firstEp {ip}:{fepPort}
|
||||
fqdn {ip}
|
||||
serverPort {port}
|
||||
|
||||
# was all 135 below
|
||||
dDebugFlag 135
|
||||
cDebugFlag 135
|
||||
rpcDebugFlag 135
|
||||
qDebugFlag 135
|
||||
# httpDebugFlag 143
|
||||
# asyncLog 0
|
||||
# tables 10
|
||||
maxtablesPerVnode 10
|
||||
rpcMaxTime 101
|
||||
# cache 2
|
||||
keep 36500
|
||||
# walLevel 2
|
||||
walLevel 1
|
||||
#
|
||||
# maxConnections 100
|
||||
quorum 2
|
||||
"""
|
||||
cfgContent = cfgTemplate.format_map(cfgValues)
|
||||
f = open(cfgFile, "w")
|
||||
f.write(cfgContent)
|
||||
f.close()
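The config generation above is just Python's `str.format_map` applied to a multi-line template. A minimal standalone sketch of that technique (with hypothetical keys, not the full `taos.cfg` set):

```python
# Sketch of the template-fill technique used by generateCfgFile():
# a dict of values is applied to a multi-line template via str.format_map.
template = """
dataDir {runDir}/data
logDir  {runDir}/log
serverPort {port}
"""

values = {'runDir': '/tmp/td_test', 'port': 6030}
content = template.format_map(values)
print(content)
```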

    def rotateLogs(self):
        logPath = self.getLogDir()
        # ref: https://stackoverflow.com/questions/1995373/deleting-all-files-in-a-directory-with-python/1995397
        if os.path.exists(logPath):
            logPathSaved = logPath + "_" + time.strftime('%Y-%m-%d-%H-%M-%S')
            Logging.info("Saving old log files to: {}".format(logPathSaved))
            os.rename(logPath, logPathSaved)
        # os.mkdir(logPath) # recreate, no need actually, TDengine will auto-create with proper perms
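The rotation above is a plain rename with a timestamp suffix. A self-contained sketch, exercised on a throwaway temp directory rather than a real log dir:

```python
import os
import tempfile
import time

# Sketch of the rename-with-timestamp rotation used by rotateLogs(),
# on a disposable temp directory instead of a real TDengine log dir.
base = tempfile.mkdtemp()
logPath = os.path.join(base, "log")
os.makedirs(logPath)

if os.path.exists(logPath):
    logPathSaved = logPath + "_" + time.strftime('%Y-%m-%d-%H-%M-%S')
    os.rename(logPath, logPathSaved)  # the old dir is preserved, not deleted

rotated = [d for d in os.listdir(base) if d.startswith("log_")]
print(rotated)
```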

    def getExecFile(self):  # .../taosd
        return self._buildDir + "/build/bin/taosd"

    def getRunDir(self) -> DirPath:  # TODO: rename to "root dir"?!
        return DirPath(self._buildDir + self._subdir)

    def getCfgDir(self) -> DirPath:  # path, not file
        return DirPath(self.getRunDir() + "/cfg")

    def getLogDir(self) -> DirPath:
        return DirPath(self.getRunDir() + "/log")

    def getHostAddr(self):
        return "127.0.0.1"

    def getServiceCmdLine(self):  # to start the instance
        if Config.getConfig().track_memory_leaks:
            Logging.info("Invoking VALGRIND on service...")
            return ['exec valgrind', '--leak-check=yes', self.getExecFile(), '-c', self.getCfgDir()]
        else:
            # TODO: move "exec -c" into Popen(), so we can both "use shell" and NOT fork, without losing kill control
            return ["exec " + self.getExecFile(), '-c', self.getCfgDir()]  # used in subprocess.Popen()

    def _getDnodes(self, dbc):
        dbc.query("show dnodes")
        cols = dbc.getQueryResult()  # id,end_point,vnodes,cores,status,role,create_time,offline reason
        return {c[1]: c[4] for c in cols}  # {'xxx:6030':'ready', 'xxx:6130':'ready'}

    def createDnode(self, dbt: DbTarget):
        """
        With a connection to the "first" EP, let's create a dnode for someone else who
        wants to join.
        """
        dbc = DbConn.createNative(self.getDbTarget())
        dbc.open()

        if dbt.getEp() in self._getDnodes(dbc):
            Logging.info("Skipping DNode creation for: {}".format(dbt))
            dbc.close()
            return

        sql = "CREATE DNODE \"{}\"".format(dbt.getEp())
        dbc.execute(sql)
        dbc.close()

    def getStatus(self):
        # return self._smThread.getStatus()
        if self._subProcess is None:
            return Status(Status.STATUS_EMPTY)
        return self._subProcess.getStatus()

    # def getSmThread(self):
    #     return self._smThread

    def start(self):
        if self.getStatus().isActive():
            raise CrashGenError("Cannot start instance from status: {}".format(self.getStatus()))

        Logging.info("Starting TDengine instance: {}".format(self))
        self.generateCfgFile()  # service side generates config file, client does not
        self.rotateLogs()

        # self._smThread.start(self.getServiceCmdLine(), self.getLogDir()) # May raise exceptions
        self._subProcess = TdeSubProcess(self.getServiceCmdLine(), self.getLogDir())

    def stop(self):
        self._subProcess.stop()
        self._subProcess = None

    def isFirst(self):
        return self._tInstNum == 0

    def printFirst10Lines(self):
        if self._subProcess is None:
            Logging.warning("Incorrect TI status for procIpcBatch-10 operation")
            return
        self._subProcess.procIpcBatch(trimToTarget=10, forceOutput=True)

    def procIpcBatch(self):
        if self._subProcess is None:
            Logging.warning("Incorrect TI status for procIpcBatch operation")
            return
        self._subProcess.procIpcBatch()  # may encounter EOF and change status to STOPPED
        if self._subProcess.getStatus().isStopped():
            self._subProcess.stop()
            self._subProcess = None


class TdeSubProcess:
    """
    A class to represent the actual sub process that is the run-time
    of a TDengine instance.

    It takes a TdeInstance object as its parameter, with the rationale being
    "a sub process runs an instance".

    We aim to ensure that this object has exactly the same life-cycle as the
    underlying sub process.
    """

    # RET_ALREADY_STOPPED = -1
    # RET_TIME_OUT = -3
    # RET_SUCCESS = -4

    def __init__(self, cmdLine: List[str], logDir: DirPath):
        # Create the process + managing thread immediately

        Logging.info("Attempting to start TAOS sub process...")
        self._popen = self._start(cmdLine)  # the actual sub process
        self._smThread = ServiceManagerThread(self, logDir)  # A thread to manage the sub process, mostly to process the IO
        Logging.info("Successfully started TAOS process: {}".format(self))

    def __repr__(self):
        # if self.subProcess is None:
        #     return '[TdeSubProc: Empty]'
        return '[TdeSubProc: pid = {}, status = {}]'.format(
            self.getPid(), self.getStatus())

    def getIpcStdOut(self) -> IpcStream:
        if self._popen.universal_newlines:  # alias of text_mode
            raise CrashGenError("We need binary mode for STDOUT IPC")
        # Logging.info("Type of stdout is: {}".format(type(self._popen.stdout)))
        return typing.cast(IpcStream, self._popen.stdout)

    def getIpcStdErr(self) -> IpcStream:
        if self._popen.universal_newlines:  # alias of text_mode
            raise CrashGenError("We need binary mode for STDERR IPC")
        return typing.cast(IpcStream, self._popen.stderr)

    # Now it's always running, since we matched the life cycle
    # def isRunning(self):
    #     return self.subProcess is not None

    def getPid(self):
        return self._popen.pid

    def _start(self, cmdLine) -> Popen:
        ON_POSIX = 'posix' in sys.builtin_module_names

        # Prepare environment variables for coverage information
        # Ref: https://stackoverflow.com/questions/2231227/python-subprocess-popen-with-a-modified-environment
        myEnv = os.environ.copy()
        TdeInstance.prepareGcovEnv(myEnv)

        # print(myEnv)
        # print("Starting TDengine with env: ", myEnv.items())
        print("Starting TDengine: {}".format(cmdLine))

        ret = Popen(
            ' '.join(cmdLine),  # ' '.join(cmdLine) if useShell else cmdLine,
            shell=True,         # Always use shell, since we need to pass ENV vars
            stdout=PIPE,
            stderr=PIPE,
            close_fds=ON_POSIX,
            env=myEnv
        )  # had text=True, which interfered with reading EOF
        time.sleep(0.01)  # very brief wait, then check whether the sub process started successfully
        if ret.poll():
            raise CrashGenError("Sub process failed to start with command line: {}".format(cmdLine))
        return ret
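The launch pattern in `_start()` above, one shell string plus a modified environment, can be exercised standalone; here a harmless `echo` stands in for the real `taosd` command line, and the env key is only a stand-in for what `prepareGcovEnv()` would set:

```python
import os
from subprocess import Popen, PIPE

# Sketch of the shell+env launch pattern used by _start(), with `echo`
# substituting for taosd. GCOV_PREFIX here is a placeholder value.
myEnv = os.environ.copy()
myEnv['GCOV_PREFIX'] = '/tmp/gcov_sketch'  # stand-in for prepareGcovEnv()

cmdLine = ['echo', 'hello']
proc = Popen(
    ' '.join(cmdLine),  # one shell string, so ENV vars and 'exec' work
    shell=True,
    stdout=PIPE,        # binary mode: no text=True, matching the code above
    stderr=PIPE,
    env=myEnv)
out, err = proc.communicate()
print(out)
```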

    STOP_SIGNAL = signal.SIGINT  # signal.SIGKILL/SIGINT # What signal to use (in kill) to stop a taosd process?
    SIG_KILL_RETCODE = 137  # ref: https://stackoverflow.com/questions/43268156/process-finished-with-exit-code-137-in-pycharm

    def stop(self):
        """
        Stop the sub process; DO NOT return anything, process all conditions INSIDE.

        The calling function should immediately delete/unreference this object.

        Common POSIX signal values (from man 7 signal):
        SIGHUP   1
        SIGINT   2
        SIGQUIT  3
        SIGILL   4
        SIGTRAP  5
        SIGABRT  6
        SIGIOT   6
        SIGBUS   7
        SIGEMT   -
        SIGFPE   8
        SIGKILL  9
        SIGUSR1 10
        SIGSEGV 11
        SIGUSR2 12
        """
        # self._popen should always be valid.

        Logging.info("Terminating TDengine service running as the sub process...")
        if self.getStatus().isStopped():
            Logging.info("Service already stopped")
            return
        if self.getStatus().isStopping():
            Logging.info("Service is already being stopped, pid: {}".format(self.getPid()))
            return

        self.setStatus(Status.STATUS_STOPPING)

        retCode = self._popen.poll()  # ret -N means killed with signal N, otherwise it's from exit(N)
        if retCode:  # valid return code, process ended
            # retCode = -retCode # only if valid
            Logging.warning("TSP.stop(): process ended by itself")
            # self.subProcess = None
            return

        # process still alive, let's interrupt it
        self._stopForSure(self._popen, self.STOP_SIGNAL)  # success if no exception

        # sub process should end, then the IPC queue should end, causing the IO thread to end
        self._smThread.stop()  # stop for sure too

        self.setStatus(Status.STATUS_STOPPED)

    @classmethod
    def _stopForSure(cls, proc: Popen, sig: int):
        '''
        Stop a process and all of its sub processes with a signal, falling back to SIGKILL if necessary.
        '''
        def doKillTdService(proc: Popen, sig: int):
            Logging.info("Killing sub-sub process {} with signal {}".format(proc.pid, sig))
            proc.send_signal(sig)
            try:
                retCode = proc.wait(20)
                if (-retCode) == signal.SIGSEGV:  # Crashed
                    Logging.warning("Process {} CRASHED, please check CORE file!".format(proc.pid))
                elif (-retCode) == sig:
                    Logging.info("TD service terminated with expected return code {}".format(sig))
                else:
                    Logging.warning("TD service terminated, EXPECTING ret code {}, got {}".format(sig, -retCode))
                return True  # terminated successfully
            except TimeoutExpired as err:
                Logging.warning("Failed to kill sub-sub process {} with signal {}".format(proc.pid, sig))
                return False  # failed to terminate

        def doKillChild(child: psutil.Process, sig: int):
            Logging.info("Killing sub-sub process {} with signal {}".format(child.pid, sig))
            child.send_signal(sig)
            try:
                retCode = child.wait(20)  # type: ignore
                if (-retCode) == signal.SIGSEGV:  # type: ignore # Crashed
                    Logging.warning("Process {} CRASHED, please check CORE file!".format(child.pid))
                elif (-retCode) == sig:  # type: ignore
                    Logging.info("Sub-sub process terminated with expected return code {}".format(sig))
                else:
                    Logging.warning("Process terminated, EXPECTING ret code {}, got {}".format(sig, -retCode))  # type: ignore
                return True  # terminated successfully
            except psutil.TimeoutExpired as err:
                Logging.warning("Failed to kill sub-sub process {} with signal {}".format(child.pid, sig))
                return False  # did not terminate

        def doKill(proc: Popen, sig: int):
            pid = proc.pid
            try:
                topSubProc = psutil.Process(pid)  # Now that we are doing "exec -c", should not have children any more
                for child in topSubProc.children(recursive=True):  # or parent.children() for recursive=False
                    Logging.warning("Unexpected child to be killed")
                    doKillChild(child, sig)
            except psutil.NoSuchProcess as err:
                Logging.info("Process not found, can't kill, pid = {}".format(pid))

            return doKillTdService(proc, sig)
            # TODO: re-examine if we need to kill the top process, which is always the SHELL for now
            # try:
            #     proc.wait(1) # SHELL process here, may throw subprocess.TimeoutExpired exception
            #     # expRetCode = self.SIG_KILL_RETCODE if sig==signal.SIGKILL else (-sig)
            #     # if retCode == expRetCode:
            #     #     Logging.info("Process terminated with expected return code {}".format(retCode))
            #     # else:
            #     #     Logging.warning("Process terminated, EXPECTING ret code {}, got {}".format(expRetCode, retCode))
            #     # return True # success
            # except subprocess.TimeoutExpired as err:
            #     Logging.warning("Failed to kill process {} with signal {}".format(pid, sig))
            #     return False # failed to kill

        def softKill(proc, sig):
            return doKill(proc, sig)

        def hardKill(proc):
            return doKill(proc, signal.SIGKILL)

        pid = proc.pid
        Logging.info("Terminate running processes under {}, with SIG #{} and wait...".format(pid, sig))
        if softKill(proc, sig):
            return  # success
        if sig != signal.SIGKILL:  # the attempt above really was a soft kill
            if hardKill(proc):
                return
        raise CrashGenError("Failed to stop process, pid={}".format(pid))
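The escalation logic in `_stopForSure()` can be sketched without spawning real processes; here `do_kill` is a stub standing in for the real signal-sending helpers, and `stop_for_sure` mirrors only the soft-then-hard control flow:

```python
import signal

# Control-flow sketch of _stopForSure()'s escalation: try the requested
# (soft) signal first, then fall back to SIGKILL. The "stubborn" stub
# only succeeds for SIGKILL, forcing the fallback path.
def stop_for_sure(do_kill, sig):
    if do_kill(sig):            # soft attempt with the requested signal
        return 'soft'
    if sig != signal.SIGKILL:   # escalate only if the first signal wasn't already SIGKILL
        if do_kill(signal.SIGKILL):
            return 'hard'
    raise RuntimeError("Failed to stop process")

stubborn = lambda sig: sig == signal.SIGKILL  # ignores anything but SIGKILL
print(stop_for_sure(stubborn, signal.SIGINT))
```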

    def getStatus(self):
        return self._smThread.getStatus()

    def setStatus(self, status):
        self._smThread.setStatus(status)

    def procIpcBatch(self, trimToTarget=0, forceOutput=False):
        self._smThread.procIpcBatch(trimToTarget, forceOutput)


class ServiceManager:
    PAUSE_BETWEEN_IPC_CHECK = 1.2  # seconds between checks on STDOUT of sub process

    def __init__(self, numDnodes):  # >1 when we run a cluster
        Logging.info("TDengine Service Manager (TSM) created")
        self._numDnodes = numDnodes  # >1 means we have a cluster
        self._lock = threading.Lock()
        # signal.signal(signal.SIGTERM, self.sigIntHandler) # Moved to MainExec
        # signal.signal(signal.SIGINT, self.sigIntHandler)
        # signal.signal(signal.SIGUSR1, self.sigUsrHandler) # different handler!

        self.inSigHandler = False
        # self._status = MainExec.STATUS_RUNNING # set inside
        # _startTaosService()
        self._runCluster = (numDnodes > 1)
        self._tInsts: List[TdeInstance] = []
        for i in range(0, numDnodes):
            ti = self._createTdeInstance(i)  # construct tInst
            self._tInsts.append(ti)

        # self.svcMgrThreads : List[ServiceManagerThread] = []
        # for i in range(0, numDnodes):
        #     thread = self._createThread(i) # construct tInst
        #     self.svcMgrThreads.append(thread)

    def _createTdeInstance(self, dnIndex):
        if not self._runCluster:  # single instance
            subdir = 'test'
        else:  # create all threads in a cluster
            subdir = 'cluster_dnode_{}'.format(dnIndex)
        fepPort = 6030  # firstEP Port
        port = fepPort + dnIndex * 100
        return TdeInstance(subdir, dnIndex, port, fepPort)
        # return ServiceManagerThread(dnIndex, ti)
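The per-dnode port layout in `_createTdeInstance()` is simple arithmetic: each dnode listens 100 ports above the previous one, starting from the firstEP port. A one-line sketch:

```python
# Sketch of the per-dnode port layout: dnode i listens on fepPort + i*100.
fepPort = 6030
ports = [fepPort + dnIndex * 100 for dnIndex in range(3)]
print(ports)
```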

    def _doMenu(self):
        choice = ""
        while True:
            print("\nInterrupting Service Program, Choose an Action: ")
            print("1: Resume")
            print("2: Terminate")
            print("3: Restart")
            # Remember to update the if range below
            # print("Enter Choice: ", end="", flush=True)
            while choice == "":
                choice = input("Enter Choice: ")
                if choice != "":
                    break  # done with reading repeated input
            if choice in ["1", "2", "3"]:
                break  # we are done with the whole method
            print("Invalid choice, please try again.")
            choice = ""  # reset
        return choice

    def sigUsrHandler(self, signalNumber, frame):
        print("Interrupting main thread execution upon SIGUSR1")
        if self.inSigHandler:  # already handling a signal
            print("Ignoring repeated SIG...")
            return  # do nothing if a handler is already running
        self.inSigHandler = True

        choice = self._doMenu()
        if choice == "1":
            self.sigHandlerResume()  # TODO: can the sub-process be blocked due to us not reading from the queue?
        elif choice == "2":
            self.stopTaosServices()
        elif choice == "3":  # Restart
            self.restart()
        else:
            raise RuntimeError("Invalid menu choice: {}".format(choice))

        self.inSigHandler = False

    def sigIntHandler(self, signalNumber, frame):
        print("ServiceManager: INT Signal Handler starting...")
        if self.inSigHandler:
            print("Ignoring repeated SIG_INT...")
            return
        self.inSigHandler = True

        self.stopTaosServices()
        print("ServiceManager: INT Signal Handler returning...")
        self.inSigHandler = False

    def sigHandlerResume(self):
        print("Resuming TDengine service manager (main thread)...\n\n")

    # def _updateThreadStatus(self):
    #     if self.svcMgrThread: # valid svc mgr thread
    #         if self.svcMgrThread.isStopped(): # done?
    #             self.svcMgrThread.procIpcBatch() # one last time. TODO: appropriate?
    #             self.svcMgrThread = None # no more

    def isActive(self):
        """
        Determine if the service/cluster is active at all, i.e. at least
        one instance is active.
        """
        for ti in self._tInsts:
            if ti.getStatus().isActive():
                return True
        return False

    def isRunning(self):
        for ti in self._tInsts:
            if not ti.getStatus().isRunning():
                return False
        return True

    # def isRestarting(self):
    #     """
    #     Determine if the service/cluster is being "restarted", i.e., at least
    #     one thread is in "restarting" status
    #     """
    #     for thread in self.svcMgrThreads:
    #         if thread.isRestarting():
    #             return True
    #     return False

    def isStable(self):
        """
        Determine if the service/cluster is "stable", i.e. all of the
        threads are in "stable" status.
        """
        for ti in self._tInsts:
            if not ti.getStatus().isStable():
                return False
        return True

    def _procIpcAll(self):
        while self.isActive():
            Progress.emit(Progress.SERVICE_HEART_BEAT)
            for ti in self._tInsts:  # all thread objects should always be valid
                # while self.isRunning() or self.isRestarting() : # for as long as the svc mgr thread is still here
                status = ti.getStatus()
                if status.isRunning():
                    # th = ti.getSmThread()
                    ti.procIpcBatch()  # regular processing,
                    if status.isStopped():
                        ti.procIpcBatch()  # one last time?
                    # self._updateThreadStatus()

            time.sleep(self.PAUSE_BETWEEN_IPC_CHECK)  # pause before the next round
        # raise CrashGenError("dummy")
        Logging.info("Service Manager Thread (with subprocess) ended, main thread exiting...")

    def _getFirstInstance(self):
        return self._tInsts[0]

    def startTaosServices(self):
        with self._lock:
            if self.isActive():
                raise RuntimeError("Cannot start TAOS service(s) when one/some may already be running")

            # Find if there's already a taosd service, and then kill it
            for proc in psutil.process_iter():
                if proc.name() == 'taosd' or proc.name() == 'memcheck-amd64-':  # Regular or under Valgrind
                    Logging.info("Killing an existing TAOSD process in 2 seconds... press CTRL-C to interrupt")
                    time.sleep(2.0)
                    proc.kill()
                # print("Process: {}".format(proc.name()))

            # self.svcMgrThread = ServiceManagerThread() # create the object

            for ti in self._tInsts:
                ti.start()
                if not ti.isFirst():
                    tFirst = self._getFirstInstance()
                    tFirst.createDnode(ti.getDbTarget())
                ti.printFirst10Lines()
                # ti.getSmThread().procIpcBatch(trimToTarget=10, forceOutput=True) # for printing 10 lines

    def stopTaosServices(self):
        with self._lock:
            if not self.isActive():
                Logging.warning("Cannot stop TAOS service(s), already not active")
                return

            for ti in self._tInsts:
                ti.stop()

    def run(self):
        self.startTaosServices()
        self._procIpcAll()  # pump/process all the messages, may encounter SIG + restart
        if self.isActive():  # if the sig handler hasn't destroyed it by now
            self.stopTaosServices()  # should have started already

    def restart(self):
        if not self.isStable():
            Logging.warning("Cannot restart service/cluster when not stable")
            return

        # self._isRestarting = True
        if self.isActive():
            self.stopTaosServices()
        else:
            Logging.warning("Service not active when restart requested")

        self.startTaosServices()
        # self._isRestarting = False

    # def isRunning(self):
    #     return self.svcMgrThread != None

    # def isRestarting(self):
    #     return self._isRestarting


class ServiceManagerThread:
    """
    A class representing a dedicated thread which manages the "sub process"
    of the TDengine service, interacting with its STDOUT/ERR.

    It takes a TdeInstance parameter at creation time, or creates a default one.
    """
    MAX_QUEUE_SIZE = 10000

    def __init__(self, subProc: TdeSubProcess, logDir: str):
        # Set the sub process
        # self._tdeSubProcess = None # type: TdeSubProcess

        # Arrange the TDengine instance
        # self._tInstNum = tInstNum # instance serial number in cluster, ZERO based
        # self._tInst = tInst or TdeInstance() # Need an instance

        # self._thread  = None # type: Optional[threading.Thread] # The actual thread, # type: threading.Thread
        # self._thread2 = None # type: Optional[threading.Thread] Thread # watching stderr
        self._status = Status(Status.STATUS_STOPPED)  # The status of the underlying service, actually.

        self._start(subProc, logDir)

    def __repr__(self):
        raise CrashGenError("SMT status moved to TdeSubProcess")
        # return "[SvcMgrThread: status={}, subProc={}]".format(
        #     self.getStatus(), self._tdeSubProcess)

    def getStatus(self):
        '''
        Get the status of the process being managed. (misnomer alert!)
        '''
        return self._status

    def setStatus(self, statusVal: int):
        self._status.set(statusVal)

    # Start the thread (with sub process), and wait for the sub service
    # to become fully operational
    def _start(self, subProc: TdeSubProcess, logDir: str):
        '''
        Request the manager thread to start a new sub process, and manage it.

        :param cmdLine: the command line to invoke
        :param logDir: the logging directory, to hold stdout/stderr files
        '''
        # if self._thread:
        #     raise RuntimeError("Unexpected _thread")
        # if self._tdeSubProcess:
        #     raise RuntimeError("TDengine sub process already created/running")

        # Moved to TdeSubProcess
        # Logging.info("Attempting to start TAOS service: {}".format(self))

        self._status.set(Status.STATUS_STARTING)
        # self._tdeSubProcess = TdeSubProcess.start(cmdLine) # TODO: verify process is running

        self._ipcQueue = Queue()  # type: Queue
        self._thread = threading.Thread(  # First thread captures server OUTPUT
            target=self.svcOutputReader,
            args=(subProc.getIpcStdOut(), self._ipcQueue, logDir))
        self._thread.daemon = True  # thread dies with the program
        self._thread.start()
        time.sleep(0.01)
        if not self._thread.is_alive():  # What happened?
            Logging.info("Failed to start thread to monitor STDOUT")
            self.stop()
            raise CrashGenError("Failed to start thread to monitor STDOUT")
        Logging.info("Successfully started thread to monitor STDOUT")

        self._thread2 = threading.Thread(  # 2nd thread captures server ERRORs
            target=self.svcErrorReader,
            args=(subProc.getIpcStdErr(), self._ipcQueue, logDir))
        self._thread2.daemon = True  # thread dies with the program
        self._thread2.start()
        time.sleep(0.01)
        if not self._thread2.is_alive():
            self.stop()
            raise CrashGenError("Failed to start thread to monitor STDERR")

        # wait for the service to start
        for i in range(0, 100):
            time.sleep(1.0)
            # self.procIpcBatch() # don't pump messages during start up
            Progress.emit(Progress.SERVICE_START_NAP)
            # print("_zz_", end="", flush=True)
            if self._status.isRunning():
                Logging.info("[] TDengine service READY to process requests: pid={}".format(subProc.getPid()))
                # Logging.info("[] TAOS service started: {}".format(self))
                # self._verifyDnode(self._tInst) # query and ensure dnode is ready
                # Logging.debug("[] TAOS Dnode verified: {}".format(self))
                return  # now we've started
        # TODO: handle failure-to-start better?
        self.procIpcBatch(100, True)  # display output before conking out; trim to last 100 msgs, force output
        raise RuntimeError("TDengine service DID NOT achieve READY status: pid={}".format(subProc.getPid()))

    def _verifyDnode(self, tInst: TdeInstance):
        dbc = DbConn.createNative(tInst.getDbTarget())
        dbc.open()
        dbc.query("show dnodes")
        # dbc.query("DESCRIBE {}.{}".format(dbName, self._stName))
        cols = dbc.getQueryResult()  # id,end_point,vnodes,cores,status,role,create_time,offline reason
        # ret = {row[0]:row[1] for row in stCols if row[3]=='TAG'} # name:type
        isValid = False
        for col in cols:
            # print("col = {}".format(col))
            ep = col[1].split(':')  # 10.1.30.2:6030
            print("Found ep={}".format(ep))
            if tInst.getPort() == int(ep[1]):  # That's us
                # print("Valid Dnode matched!")
                isValid = True  # now we are valid
                break
        if not isValid:
            print("Failed to start dnode, sleeping for a while")
            time.sleep(10.0)
            raise RuntimeError("Failed to start Dnode, expected port not found: {}".
                               format(tInst.getPort()))
        dbc.close()

    def stop(self):
        # can be called from either the main thread or a signal handler

        # Linux will send a Control-C generated SIGINT to the TDengine process already, ref:
        # https://unix.stackexchange.com/questions/176235/fork-and-how-signals-are-delivered-to-processes

        self.join()  # stop the thread, status change moved to TdeSubProcess

        # Check if it's really stopped
        outputLines = 10  # for last output
        if self.getStatus().isStopped():
            self.procIpcBatch(outputLines)  # one last time
            Logging.debug("End of TDengine Service Output")
            Logging.info("----- TDengine Service (managed by SMT) is now terminated -----\n")
        else:
            print("WARNING: SMT did not terminate as expected")

    def join(self):
        # TODO: sanity check
        s = self.getStatus()
        if s.isStopping() or s.isStopped():  # we may be stopping ourselves, or have been stopped/killed by others
            if self._thread or self._thread2:
                if self._thread:
                    self._thread.join()
                    self._thread = None
                if self._thread2:  # STD ERR thread
                    self._thread2.join()
                    self._thread2 = None
            else:
                Logging.warning("Joining empty thread, doing nothing")
        else:
            raise RuntimeError(
                "SMT.Join(): Unexpected status: {}".format(self._status))

    def _trimQueue(self, targetSize):
        if targetSize <= 0:
            return  # do nothing
        q = self._ipcQueue
        if q.qsize() <= targetSize:  # no need to trim
            return

        Logging.debug("Trimming IPC queue to target size: {}".format(targetSize))
        itemsToTrim = q.qsize() - targetSize
        for i in range(0, itemsToTrim):
            try:
                q.get_nowait()
            except Empty:
                break  # break out of the for loop, no more trimming
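The trimming above discards the oldest queue items until the queue is at the target size. A self-contained sketch of the same technique on a plain `queue.Queue`:

```python
from queue import Queue, Empty

# Sketch of _trimQueue(): discard the oldest items until the queue is
# at the target size, stopping early if the queue empties underneath us.
def trim_queue(q, target_size):
    if target_size <= 0 or q.qsize() <= target_size:
        return
    for _ in range(q.qsize() - target_size):
        try:
            q.get_nowait()
        except Empty:
            break  # another consumer drained it; nothing left to trim

q = Queue()
for i in range(10):
    q.put(i)
trim_queue(q, 3)
print(q.qsize())
```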

    TD_READY_MSG = "TDengine is initialized successfully"

    def procIpcBatch(self, trimToTarget=0, forceOutput=False):
        '''
        Process a batch of STDOUT/STDERR data, until we read EMPTY from
        the queue.
        '''
        self._trimQueue(trimToTarget)  # trim if necessary
        # Process all the output generated by the underlying sub process,
        # managed by the IO thread
        print("<", end="", flush=True)
        while True:
            try:
                line = self._ipcQueue.get_nowait()  # getting output at fast speed
                self._printProgress("_o")
            except Empty:
                # time.sleep(2.3) # wait only if there's no output
                # no more output
                print(".>", end="", flush=True)
                return  # we are done with THIS BATCH
            else:  # got a line, print it out
                if forceOutput:
                    Logging.info('[TAOSD] ' + line)
                else:
                    Logging.debug('[TAOSD] ' + line)
        print(">", end="", flush=True)

    _ProgressBars = ["--", "//", "||", "\\\\"]

    def _printProgress(self, msg):  # TODO: assuming 2 chars
        print(msg, end="", flush=True)
        pBar = self._ProgressBars[Dice.throw(4)]
        print(pBar, end="", flush=True)
        print('\b\b\b\b', end="", flush=True)

    BinaryChunk = NewType('BinaryChunk', bytes)  # line with binary data, directly from STDOUT, etc.
    TextChunk   = NewType('TextChunk', str)      # properly decoded, suitable for printing, etc.

    @classmethod
    def _decodeBinaryChunk(cls, bChunk: bytes) -> Optional[TextChunk]:
        try:
            tChunk = bChunk.decode("utf-8").rstrip()
            return cls.TextChunk(tChunk)
        except UnicodeError:
            print("\nNon-UTF8 server output: {}\n".format(bChunk.decode('cp437')))
            return None
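The decode step above tries UTF-8 and, on failure, reports the chunk via cp437 (which can decode any byte value) and drops it. A standalone sketch of that try/except pattern:

```python
# Sketch of _decodeBinaryChunk(): try UTF-8 first; on failure return
# None so the caller skips the chunk (the raw bytes stay in the log file).
def decode_chunk(b_chunk: bytes):
    try:
        return b_chunk.decode("utf-8").rstrip()
    except UnicodeError:  # UnicodeDecodeError is a subclass of UnicodeError
        return None

good = decode_chunk(b"TDengine is initialized successfully\n")
bad = decode_chunk(b"\xff\xfe broken")
print(good, bad)
```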

    def _textChunkGenerator(self, streamIn: IpcStream, logDir: str, logFile: str
                            ) -> Generator[TextChunk, None, None]:
        '''
        Take an input stream with binary data (likely from Popen), and produce a generator
        of decoded "text chunks".

        Side effect: it also saves the original binary data in a log file.
        '''
        os.makedirs(logDir, exist_ok=True)
        logF = open(os.path.join(logDir, logFile), 'wb')
        if logF is None:
            Logging.error("Failed to open log file (binary write): {}/{}".format(logDir, logFile))
            return
        for bChunk in iter(streamIn.readline, b''):
            logF.write(bChunk)  # Write to the log file immediately
            tChunk = self._decodeBinaryChunk(bChunk)  # decode
            if tChunk is not None:
                yield tChunk  # TODO: split into actual text lines

        # At the end...
        streamIn.close()  # Close the incoming stream
        logF.close()      # Close the log file
|
||||
|
||||
def svcOutputReader(self, ipcStdOut: IpcStream, queue, logDir: str):
|
||||
'''
|
||||
The infinite routine that processes the STDOUT stream for the sub process being managed.
|
||||
|
||||
:param ipcStdOut: the IO stream object used to fetch the data from
|
||||
:param queue: the queue where we dump the roughly parsed chunk-by-chunk text data
|
||||
:param logDir: where we should dump a verbatim output file
|
||||
'''
|
||||
|
||||
# Important Reference: https://stackoverflow.com/questions/375427/non-blocking-read-on-a-subprocess-pipe-in-python
|
||||
# print("This is the svcOutput Reader...")
|
||||
# stdOut.readline() # Skip the first output? TODO: remove?
|
||||
for tChunk in self._textChunkGenerator(ipcStdOut, logDir, 'stdout.log') :
|
||||
queue.put(tChunk) # tChunk garanteed not to be None
|
||||
self._printProgress("_i")
|
||||
|
||||
if self._status.isStarting(): # we are starting, let's see if we have started
|
||||
if tChunk.find(self.TD_READY_MSG) != -1: # found
|
||||
Logging.info("Waiting for the service to become FULLY READY")
|
||||
time.sleep(1.0) # wait for the server to truly start. TODO: remove this
|
||||
Logging.info("Service is now FULLY READY") # TODO: more ID info here?
|
||||
self._status.set(Status.STATUS_RUNNING)
|
||||
|
||||
# Trim the queue if necessary: TODO: try this 1 out of 10 times
|
||||
self._trimQueue(self.MAX_QUEUE_SIZE * 9 // 10) # trim to 90% size
|
||||
|
||||
if self._status.isStopping(): # TODO: use thread status instead
|
||||
# Waiting for the stopping sub process to finish its output
|
||||
print("_w", end="", flush=True)
|
||||
|
||||
# queue.put(line)
|
||||
# stdOut has no more data, meaning sub process must have died
|
||||
Logging.info("EOF found TDengine STDOUT, marking the process as terminated")
|
||||
self.setStatus(Status.STATUS_STOPPED)
|
||||
|
||||
def svcErrorReader(self, ipcStdErr: IpcStream, queue, logDir: str):
|
||||
# os.makedirs(logDir, exist_ok=True)
|
||||
# logFile = os.path.join(logDir,'stderr.log')
|
||||
# fErr = open(logFile, 'wb')
|
||||
# for line in iter(err.readline, b''):
|
||||
for tChunk in self._textChunkGenerator(ipcStdErr, logDir, 'stderr.log'):
|
||||
queue.put(tChunk) # tChunk guaranteed not to be None
|
||||
# fErr.write(line)
|
||||
Logging.info("TDengine STDERR: {}".format(tChunk))
|
||||
Logging.info("EOF for TDengine STDERR")
|
|
@@ -0,0 +1,42 @@
|
|||
from __future__ import annotations
|
||||
import argparse
|
||||
|
||||
from typing import Optional
|
||||
|
||||
from .misc import CrashGenError
|
||||
|
||||
# from crash_gen.misc import CrashGenError
|
||||
|
||||
# gConfig: Optional[argparse.Namespace]
|
||||
|
||||
class Config:
|
||||
_config = None # type: Optional[argparse.Namespace]
|
||||
|
||||
@classmethod
|
||||
def init(cls, parser: argparse.ArgumentParser):
|
||||
if cls._config is not None:
|
||||
raise CrashGenError("Config can only be initialized once")
|
||||
cls._config = parser.parse_args()
|
||||
# print(cls._config)
|
||||
|
||||
@classmethod
|
||||
def setConfig(cls, config: argparse.Namespace):
|
||||
cls._config = config
|
||||
|
||||
@classmethod
|
||||
# TODO: check items instead of exposing everything
|
||||
def getConfig(cls) -> argparse.Namespace:
|
||||
if cls._config is None:
|
||||
raise CrashGenError("Config not initialized, call init() first")
|
||||
return cls._config
|
||||
|
||||
@classmethod
|
||||
def clearConfig(cls):
|
||||
cls._config = None
|
||||
|
||||
@classmethod
|
||||
def isSet(cls, cfgKey):
|
||||
cfg = cls.getConfig()
|
||||
if cfgKey not in cfg:
|
||||
return False
|
||||
return cfg.__getattribute__(cfgKey)
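The class above is a parse-once configuration singleton. A trimmed, self-contained sketch of the same pattern (the `--debug` flag and the explicit `args` list are illustrative assumptions that keep the demo deterministic):

```python
import argparse

class Config:
    _config = None  # class-wide parsed namespace, set exactly once

    @classmethod
    def init(cls, parser, args=None):
        if cls._config is not None:
            raise RuntimeError("Config can only be initialized once")
        cls._config = parser.parse_args(args)

    @classmethod
    def getConfig(cls):
        if cls._config is None:
            raise RuntimeError("Config not initialized, call init() first")
        return cls._config

parser = argparse.ArgumentParser()
parser.add_argument('--debug', action='store_true')
Config.init(parser, args=['--debug'])  # explicit args instead of sys.argv
```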
|
|
@@ -0,0 +1,474 @@
|
|||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
import os
|
||||
import datetime
|
||||
import time
|
||||
import threading
|
||||
import requests
|
||||
from requests.auth import HTTPBasicAuth
|
||||
|
||||
|
||||
import taos
|
||||
from util.sql import *
|
||||
from util.cases import *
|
||||
from util.dnodes import *
|
||||
from util.log import *
|
||||
|
||||
import traceback
|
||||
# from .service_manager import TdeInstance
|
||||
|
||||
from .config import Config
|
||||
from .misc import Logging, CrashGenError, Helper
|
||||
from .types import QueryResult
|
||||
|
||||
class DbConn:
|
||||
TYPE_NATIVE = "native-c"
|
||||
TYPE_REST = "rest-api"
|
||||
TYPE_INVALID = "invalid"
|
||||
|
||||
@classmethod
|
||||
def create(cls, connType, dbTarget):
|
||||
if connType == cls.TYPE_NATIVE:
|
||||
return DbConnNative(dbTarget)
|
||||
elif connType == cls.TYPE_REST:
|
||||
return DbConnRest(dbTarget)
|
||||
else:
|
||||
raise RuntimeError(
|
||||
"Unexpected connection type: {}".format(connType))
|
||||
|
||||
@classmethod
|
||||
def createNative(cls, dbTarget) -> DbConn:
|
||||
return cls.create(cls.TYPE_NATIVE, dbTarget)
|
||||
|
||||
@classmethod
|
||||
def createRest(cls, dbTarget) -> DbConn:
|
||||
return cls.create(cls.TYPE_REST, dbTarget)
|
||||
|
||||
def __init__(self, dbTarget):
|
||||
self.isOpen = False
|
||||
self._type = self.TYPE_INVALID
|
||||
self._lastSql = None
|
||||
self._dbTarget = dbTarget
|
||||
|
||||
def __repr__(self):
|
||||
return "[DbConn: type={}, target={}]".format(self._type, self._dbTarget)
|
||||
|
||||
def getLastSql(self):
|
||||
return self._lastSql
|
||||
|
||||
def open(self):
|
||||
if (self.isOpen):
|
||||
raise RuntimeError("Cannot re-open an existing DB connection")
|
||||
|
||||
# below implemented by child classes
|
||||
self.openByType()
|
||||
|
||||
Logging.debug("[DB] data connection opened: {}".format(self))
|
||||
self.isOpen = True
|
||||
|
||||
def close(self):
|
||||
raise RuntimeError("Unexpected execution, should be overridden")
|
||||
|
||||
def queryScalar(self, sql) -> int:
|
||||
return self._queryAny(sql)
|
||||
|
||||
def queryString(self, sql) -> str:
|
||||
return self._queryAny(sql)
|
||||
|
||||
def _queryAny(self, sql): # actual query result as an int
|
||||
if (not self.isOpen):
|
||||
raise RuntimeError("Cannot query database until connection is open")
|
||||
nRows = self.query(sql)
|
||||
if nRows != 1:
|
||||
raise CrashGenError(
|
||||
"Unexpected result for query: {}, rows = {}".format(sql, nRows),
|
||||
(CrashGenError.INVALID_EMPTY_RESULT if nRows==0 else CrashGenError.INVALID_MULTIPLE_RESULT)
|
||||
)
|
||||
if self.getResultRows() != 1 or self.getResultCols() != 1:
|
||||
raise RuntimeError("Unexpected result set for query: {}".format(sql))
|
||||
return self.getQueryResult()[0][0]
|
||||
|
||||
def use(self, dbName):
|
||||
self.execute("use {}".format(dbName))
|
||||
|
||||
def existsDatabase(self, dbName: str):
|
||||
''' Check if a certain database exists '''
|
||||
self.query("show databases")
|
||||
dbs = [v[0] for v in self.getQueryResult()] # ref: https://stackoverflow.com/questions/643823/python-list-transformation
|
||||
# ret2 = dbName in dbs
|
||||
# print("dbs = {}, str = {}, ret2={}, type2={}".format(dbs, dbName,ret2, type(dbName)))
|
||||
return dbName in dbs # TODO: super weird type mangling seen, once here
|
||||
|
||||
def existsSuperTable(self, stName):
|
||||
self.query("show stables")
|
||||
sts = [v[0] for v in self.getQueryResult()]
|
||||
return stName in sts
|
||||
|
||||
def hasTables(self):
|
||||
return self.query("show tables") > 0
|
||||
|
||||
def execute(self, sql):
|
||||
''' Return the number of rows affected'''
|
||||
raise RuntimeError("Unexpected execution, should be overridden")
|
||||
|
||||
def safeExecute(self, sql):
|
||||
'''Safely execute any SQL query, returning True/False upon success/failure'''
|
||||
try:
|
||||
self.execute(sql)
|
||||
return True # ignore num of results, return success
|
||||
except taos.error.Error as err:
|
||||
return False # failed, for whatever TAOS reason
|
||||
# Not possible to reach here; a non-TAOS exception would have propagated
|
||||
|
||||
def query(self, sql) -> int: # return num rows returned
|
||||
''' Return the number of rows affected'''
|
||||
raise RuntimeError("Unexpected execution, should be overridden")
|
||||
|
||||
def openByType(self):
|
||||
raise RuntimeError("Unexpected execution, should be overridden")
|
||||
|
||||
def getQueryResult(self) -> QueryResult:
|
||||
raise RuntimeError("Unexpected execution, should be overridden")
|
||||
|
||||
def getResultRows(self):
|
||||
raise RuntimeError("Unexpected execution, should be overridden")
|
||||
|
||||
def getResultCols(self):
|
||||
raise RuntimeError("Unexpected execution, should be overridden")
|
||||
|
||||
# Sample: curl -u root:taosdata -d "show databases" localhost:6020/rest/sql
|
||||
|
||||
|
||||
class DbConnRest(DbConn):
|
||||
REST_PORT_INCREMENT = 11
|
||||
|
||||
def __init__(self, dbTarget: DbTarget):
|
||||
super().__init__(dbTarget)
|
||||
self._type = self.TYPE_REST
|
||||
|
||||
self._url = "http://{}:{}/rest/sql".format(
|
||||
dbTarget.hostAddr, dbTarget.port + self.REST_PORT_INCREMENT)
|
||||
self._result = None
|
||||
|
||||
def openByType(self): # Open connection
|
||||
pass # do nothing, always open
|
||||
|
||||
def close(self):
|
||||
if (not self.isOpen):
|
||||
raise RuntimeError("Cannot clean up database until connection is open")
|
||||
# Do nothing for REST
|
||||
Logging.debug("[DB] REST Database connection closed")
|
||||
self.isOpen = False
|
||||
|
||||
def _doSql(self, sql):
|
||||
self._lastSql = sql # remember this, last SQL attempted
|
||||
try:
|
||||
r = requests.post(self._url,
|
||||
data = sql,
|
||||
auth = HTTPBasicAuth('root', 'taosdata'))
|
||||
except requests.exceptions.RequestException:
|
||||
print("REST API Failure (TODO: more info here)")
|
||||
raise
|
||||
rj = r.json()
|
||||
# Sanity check for the "Json Result"
|
||||
if ('status' not in rj):
|
||||
raise RuntimeError("No status in REST response")
|
||||
|
||||
if rj['status'] == 'error': # clearly reported error
|
||||
if ('code' not in rj): # error without code
|
||||
raise RuntimeError("REST error return without code")
|
||||
errno = rj['code'] # May need to massage this in the future
|
||||
# print("Raising programming error with REST return: {}".format(rj))
|
||||
raise taos.error.ProgrammingError(
|
||||
rj['desc'], errno) # TODO: check existence of 'desc'
|
||||
|
||||
if rj['status'] != 'succ': # better be this
|
||||
raise RuntimeError(
|
||||
"Unexpected REST return status: {}".format(
|
||||
rj['status']))
|
||||
|
||||
nRows = rj['rows'] if ('rows' in rj) else 0
|
||||
self._result = rj
|
||||
return nRows
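The response handling in `_doSql` can be factored into a pure function over the decoded JSON, which makes the succ/error protocol testable without a live server. A hedged sketch; `parse_rest_result` is an illustrative name, not a function in this module:

```python
def parse_rest_result(rj):
    """Validate a TDengine REST /rest/sql response dict; return (nRows, data)."""
    if 'status' not in rj:
        raise RuntimeError("No status in REST response")
    if rj['status'] == 'error':  # clearly reported error
        if 'code' not in rj:
            raise RuntimeError("REST error return without code")
        raise RuntimeError("REST error code={}, desc={}".format(rj['code'], rj.get('desc')))
    if rj['status'] != 'succ':  # only remaining valid status
        raise RuntimeError("Unexpected REST return status: {}".format(rj['status']))
    return rj.get('rows', 0), rj.get('data', [])

n_rows, data = parse_rest_result(
    {'status': 'succ', 'rows': 1, 'data': [['db']]})
```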
|
||||
|
||||
def execute(self, sql):
|
||||
if (not self.isOpen):
|
||||
raise RuntimeError(
|
||||
"Cannot execute database commands until connection is open")
|
||||
Logging.debug("[SQL-REST] Executing SQL: {}".format(sql))
|
||||
nRows = self._doSql(sql)
|
||||
Logging.debug(
|
||||
"[SQL-REST] Execution Result, nRows = {}, SQL = {}".format(nRows, sql))
|
||||
return nRows
|
||||
|
||||
def query(self, sql): # return num rows returned
|
||||
return self.execute(sql)
|
||||
|
||||
def getQueryResult(self):
|
||||
return self._result['data']
|
||||
|
||||
def getResultRows(self):
|
||||
print(self._result)
|
||||
raise RuntimeError("TBD") # TODO: finish here to support -v under -c rest
|
||||
# return self._tdSql.queryRows
|
||||
|
||||
def getResultCols(self):
|
||||
print(self._result)
|
||||
raise RuntimeError("TBD")
|
||||
|
||||
# Duplicate code from TDMySQL, TODO: merge all this into DbConnNative
|
||||
|
||||
|
||||
class MyTDSql:
|
||||
# Class variables
|
||||
_clsLock = threading.Lock() # class wide locking
|
||||
longestQuery = '' # type: str
|
||||
longestQueryTime = 0.0 # seconds
|
||||
lqStartTime = 0.0
|
||||
# lqEndTime = 0.0 # Not needed, as we have the two above already
|
||||
|
||||
def __init__(self, hostAddr, cfgPath):
|
||||
# Make the DB connection
|
||||
self._conn = taos.connect(host=hostAddr, config=cfgPath)
|
||||
self._cursor = self._conn.cursor()
|
||||
|
||||
self.queryRows = 0
|
||||
self.queryCols = 0
|
||||
self.affectedRows = 0
|
||||
|
||||
# def init(self, cursor, log=True):
|
||||
# self.cursor = cursor
|
||||
# if (log):
|
||||
# caller = inspect.getframeinfo(inspect.stack()[1][0])
|
||||
# self.cursor.log(caller.filename + ".sql")
|
||||
|
||||
def close(self):
|
||||
self._cursor.close()
|
||||
self._conn.close() # TODO: very important, cursor close does NOT close DB connection!
|
||||
|
||||
def _execInternal(self, sql):
|
||||
startTime = time.time()
|
||||
# Logging.debug("Executing SQL: " + sql)
|
||||
# ret = None # TODO: use strong type here
|
||||
# try: # Let's not capture the error, and let taos.error.ProgrammingError pass through
|
||||
ret = self._cursor.execute(sql)
|
||||
# except taos.error.ProgrammingError as err:
|
||||
# Logging.warning("Taos SQL execution error: {}, SQL: {}".format(err.msg, sql))
|
||||
# raise CrashGenError(err.msg)
|
||||
|
||||
# print("\nSQL success: {}".format(sql))
|
||||
queryTime = time.time() - startTime
|
||||
# Record the query time
|
||||
cls = self.__class__
|
||||
if queryTime > (cls.longestQueryTime + 0.01) :
|
||||
with cls._clsLock:
|
||||
cls.longestQuery = sql
|
||||
cls.longestQueryTime = queryTime
|
||||
cls.lqStartTime = startTime
|
||||
|
||||
# Now write to the shadow database
|
||||
if Config.isSet('use_shadow_db'):
|
||||
if sql[:11] == "INSERT INTO":
|
||||
if sql[:16] == "INSERT INTO db_0":
|
||||
sql2 = "INSERT INTO db_s" + sql[16:]
|
||||
self._cursor.execute(sql2)
|
||||
else:
|
||||
raise CrashGenError("Did not find db_0 in INSERT statement: {}".format(sql))
|
||||
else: # not an insert statement
|
||||
pass
|
||||
|
||||
if sql[:12] == "CREATE TABLE":
|
||||
if sql[:17] == "CREATE TABLE db_0":
|
||||
sql2 = sql.replace('db_0', 'db_s')
|
||||
self._cursor.execute(sql2)
|
||||
else:
|
||||
raise CrashGenError("Did not find db_0 in CREATE TABLE statement: {}".format(sql))
|
||||
else: # not a create-table statement
|
||||
pass
|
||||
|
||||
return ret
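The shadow-database branch above rewrites every `db_0` statement so it can be replayed against `db_s`. That rewrite rule can be expressed as a pure helper, which makes the guard rejecting non-`db_0` statements easy to test. A sketch under the assumption that only `INSERT INTO` and `CREATE TABLE` statements are mirrored, as in the code above (`mirror_to_shadow` is an illustrative name):

```python
def mirror_to_shadow(sql):
    """Return the db_s version of a db_0 statement, or None if not mirrored.

    Raises ValueError when a mirrored statement type does not target db_0,
    matching the CrashGenError guard in the original.
    """
    for keyword in ("INSERT INTO", "CREATE TABLE"):
        if sql.startswith(keyword):
            if not sql.startswith(keyword + " db_0"):
                raise ValueError("Did not find db_0 in statement: " + sql)
            return sql.replace("db_0", "db_s", 1)
    return None  # other statement types are not replayed

shadow = mirror_to_shadow("INSERT INTO db_0.t1 VALUES (now, 1)")
```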
|
||||
|
||||
def query(self, sql):
|
||||
self.sql = sql
|
||||
try:
|
||||
self._execInternal(sql)
|
||||
self.queryResult = self._cursor.fetchall()
|
||||
self.queryRows = len(self.queryResult)
|
||||
self.queryCols = len(self._cursor.description)
|
||||
except Exception as e:
|
||||
# caller = inspect.getframeinfo(inspect.stack()[1][0])
|
||||
# args = (caller.filename, caller.lineno, sql, repr(e))
|
||||
# tdLog.exit("%s(%d) failed: sql:%s, %s" % args)
|
||||
raise
|
||||
return self.queryRows
|
||||
|
||||
def execute(self, sql):
|
||||
self.sql = sql
|
||||
try:
|
||||
self.affectedRows = self._execInternal(sql)
|
||||
except Exception as e:
|
||||
# caller = inspect.getframeinfo(inspect.stack()[1][0])
|
||||
# args = (caller.filename, caller.lineno, sql, repr(e))
|
||||
# tdLog.exit("%s(%d) failed: sql:%s, %s" % args)
|
||||
raise
|
||||
return self.affectedRows
|
||||
|
||||
class DbTarget:
|
||||
def __init__(self, cfgPath, hostAddr, port):
|
||||
self.cfgPath = cfgPath
|
||||
self.hostAddr = hostAddr
|
||||
self.port = port
|
||||
|
||||
def __repr__(self):
|
||||
return "[DbTarget: cfgPath={}, host={}:{}]".format(
|
||||
Helper.getFriendlyPath(self.cfgPath), self.hostAddr, self.port)
|
||||
|
||||
def getEp(self):
|
||||
return "{}:{}".format(self.hostAddr, self.port)
|
||||
|
||||
class DbConnNative(DbConn):
|
||||
# Class variables
|
||||
_lock = threading.Lock()
|
||||
# _connInfoDisplayed = False # TODO: find another way to display this
|
||||
totalConnections = 0 # Not private
|
||||
totalRequests = 0
|
||||
|
||||
def __init__(self, dbTarget):
|
||||
super().__init__(dbTarget)
|
||||
self._type = self.TYPE_NATIVE
|
||||
self._conn = None
|
||||
# self._cursor = None
|
||||
|
||||
@classmethod
|
||||
def resetTotalRequests(cls):
|
||||
with cls._lock: # guard the class-wide request counter
|
||||
cls.totalRequests = 0
|
||||
|
||||
def openByType(self): # Open connection
|
||||
# global gContainer
|
||||
# tInst = tInst or gContainer.defTdeInstance # set up in ClientManager, type: TdeInstance
|
||||
# cfgPath = self.getBuildPath() + "/test/cfg"
|
||||
# cfgPath = tInst.getCfgDir()
|
||||
# hostAddr = tInst.getHostAddr()
|
||||
|
||||
cls = self.__class__ # Get the class, to access class variables
|
||||
with cls._lock: # force single threading for opening DB connections
|
||||
dbTarget = self._dbTarget
|
||||
# if not cls._connInfoDisplayed:
|
||||
# cls._connInfoDisplayed = True # updating CLASS variable
|
||||
Logging.debug("Initiating TAOS native connection to {}".format(dbTarget))
|
||||
# Make the connection
|
||||
# self._conn = taos.connect(host=hostAddr, config=cfgPath) # TODO: make configurable
|
||||
# self._cursor = self._conn.cursor()
|
||||
# Record the count in the class
|
||||
self._tdSql = MyTDSql(dbTarget.hostAddr, dbTarget.cfgPath) # making DB connection
|
||||
cls.totalConnections += 1
|
||||
|
||||
self._tdSql.execute('reset query cache')
|
||||
# self._cursor.execute('use db') # do this at the beginning of every
|
||||
|
||||
# Open connection
|
||||
# self._tdSql = MyTDSql()
|
||||
# self._tdSql.init(self._cursor)
|
||||
|
||||
def close(self):
|
||||
if (not self.isOpen):
|
||||
raise RuntimeError("Cannot clean up database until connection is open")
|
||||
self._tdSql.close()
|
||||
# Decrement the class wide counter
|
||||
cls = self.__class__ # Get the class, to access class variables
|
||||
with cls._lock:
|
||||
cls.totalConnections -= 1
|
||||
|
||||
Logging.debug("[DB] Database connection closed")
|
||||
self.isOpen = False
|
||||
|
||||
def execute(self, sql):
|
||||
if (not self.isOpen):
|
||||
traceback.print_stack()
|
||||
raise CrashGenError(
|
||||
"Cannot exec SQL unless db connection is open", CrashGenError.DB_CONNECTION_NOT_OPEN)
|
||||
Logging.debug("[SQL] Executing SQL: {}".format(sql))
|
||||
self._lastSql = sql
|
||||
nRows = self._tdSql.execute(sql)
|
||||
cls = self.__class__
|
||||
cls.totalRequests += 1
|
||||
Logging.debug(
|
||||
"[SQL] Execution Result, nRows = {}, SQL = {}".format(
|
||||
nRows, sql))
|
||||
return nRows
|
||||
|
||||
def query(self, sql): # return num rows returned
|
||||
if (not self.isOpen):
|
||||
traceback.print_stack()
|
||||
raise CrashGenError(
|
||||
"Cannot query database until connection is open, restarting?", CrashGenError.DB_CONNECTION_NOT_OPEN)
|
||||
Logging.debug("[SQL] Executing SQL: {}".format(sql))
|
||||
self._lastSql = sql
|
||||
nRows = self._tdSql.query(sql)
|
||||
cls = self.__class__
|
||||
cls.totalRequests += 1
|
||||
Logging.debug(
|
||||
"[SQL] Query Result, nRows = {}, SQL = {}".format(
|
||||
nRows, sql))
|
||||
return nRows
|
||||
# results are in: return self._tdSql.queryResult
|
||||
|
||||
def getQueryResult(self):
|
||||
return self._tdSql.queryResult
|
||||
|
||||
def getResultRows(self):
|
||||
return self._tdSql.queryRows
|
||||
|
||||
def getResultCols(self):
|
||||
return self._tdSql.queryCols
|
||||
|
||||
|
||||
class DbManager():
|
||||
''' This is a wrapper around DbConn(), to make it easier to use.
|
||||
|
||||
TODO: rename this to DbConnManager
|
||||
'''
|
||||
def __init__(self, cType, dbTarget):
|
||||
# self.tableNumQueue = LinearQueue() # TODO: delete?
|
||||
# self.openDbServerConnection()
|
||||
self._dbConn = DbConn.createNative(dbTarget) if (
|
||||
cType == 'native') else DbConn.createRest(dbTarget)
|
||||
try:
|
||||
self._dbConn.open() # may throw taos.error.ProgrammingError: disconnected
|
||||
Logging.debug("DbManager opened DB connection...")
|
||||
except taos.error.ProgrammingError as err:
|
||||
# print("Error type: {}, msg: {}, value: {}".format(type(err), err.msg, err))
|
||||
if (err.msg == 'client disconnected'): # cannot open DB connection
|
||||
print(
|
||||
"Cannot establish DB connection, please re-run script without parameter, and follow the instructions.")
|
||||
sys.exit(2)
|
||||
else:
|
||||
print("Failed to connect to DB, errno = {}, msg: {}"
|
||||
.format(Helper.convertErrno(err.errno), err.msg))
|
||||
raise
|
||||
except BaseException:
|
||||
print("[=] Unexpected exception")
|
||||
raise
|
||||
|
||||
# Do this after dbConn is in proper shape
|
||||
# Moved to Database()
|
||||
# self._stateMachine = StateMechine(self._dbConn)
|
||||
|
||||
def __del__(self):
|
||||
''' Release the underlying DB connection upon deletion of DbManager '''
|
||||
self.cleanUp()
|
||||
|
||||
def getDbConn(self) -> DbConn :
|
||||
if self._dbConn is None:
|
||||
raise CrashGenError("Unexpected empty DbConn")
|
||||
return self._dbConn
|
||||
|
||||
def cleanUp(self):
|
||||
if self._dbConn:
|
||||
self._dbConn.close()
|
||||
self._dbConn = None
|
||||
Logging.debug("DbManager closed DB connection...")
|
||||
|
|
@@ -0,0 +1,209 @@
|
|||
import threading
|
||||
import random
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
from typing import Optional
|
||||
|
||||
import taos
|
||||
|
||||
|
||||
class CrashGenError(taos.error.ProgrammingError):
|
||||
INVALID_EMPTY_RESULT = 0x991
|
||||
INVALID_MULTIPLE_RESULT = 0x992
|
||||
DB_CONNECTION_NOT_OPEN = 0x993
|
||||
# def __init__(self, msg=None, errno=None):
|
||||
# self.msg = msg
|
||||
# self.errno = errno
|
||||
|
||||
# def __str__(self):
|
||||
# return self.msg
|
||||
pass
|
||||
|
||||
|
||||
class LoggingFilter(logging.Filter):
|
||||
def filter(self, record: logging.LogRecord):
|
||||
if (record.levelno >= logging.INFO):
|
||||
return True # info or above always log
|
||||
|
||||
# Commenting out below to adjust...
|
||||
|
||||
# if msg.startswith("[TRD]"):
|
||||
# return False
|
||||
return True
|
||||
|
||||
|
||||
class MyLoggingAdapter(logging.LoggerAdapter):
|
||||
def process(self, msg, kwargs):
|
||||
shortTid = threading.get_ident() % 10000
|
||||
return "[{:04d}] {}".format(shortTid, msg), kwargs
|
||||
# return '[%s] %s' % (self.extra['connid'], msg), kwargs
|
||||
|
||||
|
||||
class Logging:
|
||||
logger = None # type: Optional[MyLoggingAdapter]
|
||||
|
||||
@classmethod
|
||||
def getLogger(cls):
|
||||
return cls.logger
|
||||
|
||||
@classmethod
|
||||
def clsInit(cls, debugMode: bool):
|
||||
if cls.logger:
|
||||
return
|
||||
|
||||
# Logging Stuff
|
||||
# global misc.logger
|
||||
_logger = logging.getLogger('CrashGen') # real logger
|
||||
_logger.addFilter(LoggingFilter())
|
||||
ch = logging.StreamHandler(sys.stdout) # Ref: https://stackoverflow.com/questions/14058453/making-python-loggers-output-all-messages-to-stdout-in-addition-to-log-file
|
||||
_logger.addHandler(ch)
|
||||
|
||||
# Logging adapter, to be used as a logger
|
||||
# print("setting logger variable")
|
||||
# global logger
|
||||
cls.logger = MyLoggingAdapter(_logger, {})
|
||||
cls.logger.setLevel(logging.DEBUG if debugMode else logging.INFO) # default seems to be INFO
|
||||
|
||||
@classmethod
|
||||
def info(cls, msg):
|
||||
cls.logger.info(msg)
|
||||
|
||||
@classmethod
|
||||
def debug(cls, msg):
|
||||
cls.logger.debug(msg)
|
||||
|
||||
@classmethod
|
||||
def warning(cls, msg):
|
||||
cls.logger.warning(msg)
|
||||
|
||||
@classmethod
|
||||
def error(cls, msg):
|
||||
cls.logger.error(msg)
|
||||
|
||||
class Status:
|
||||
STATUS_EMPTY = 99
|
||||
STATUS_STARTING = 1
|
||||
STATUS_RUNNING = 2
|
||||
STATUS_STOPPING = 3
|
||||
STATUS_STOPPED = 4
|
||||
|
||||
def __init__(self, status):
|
||||
self.set(status)
|
||||
|
||||
def __repr__(self):
|
||||
return "[Status: v={}]".format(self._status)
|
||||
|
||||
def set(self, status: int):
|
||||
self._status = status
|
||||
|
||||
def get(self):
|
||||
return self._status
|
||||
|
||||
def isEmpty(self):
|
||||
''' Empty/Undefined '''
|
||||
return self._status == Status.STATUS_EMPTY
|
||||
|
||||
def isStarting(self):
|
||||
return self._status == Status.STATUS_STARTING
|
||||
|
||||
def isRunning(self):
|
||||
# return self._thread and self._thread.is_alive()
|
||||
return self._status == Status.STATUS_RUNNING
|
||||
|
||||
def isStopping(self):
|
||||
return self._status == Status.STATUS_STOPPING
|
||||
|
||||
def isStopped(self):
|
||||
return self._status == Status.STATUS_STOPPED
|
||||
|
||||
def isStable(self):
|
||||
return self.isRunning() or self.isStopped()
|
||||
|
||||
def isActive(self):
|
||||
return self.isStarting() or self.isRunning() or self.isStopping()
|
||||
|
||||
# Deterministic random number generator
|
||||
class Dice():
|
||||
seeded = False # static, uninitialized
|
||||
|
||||
@classmethod
|
||||
def seed(cls, s): # static
|
||||
if (cls.seeded):
|
||||
raise RuntimeError(
|
||||
"Cannot seed the random generator more than once")
|
||||
cls.verifyRNG()
|
||||
random.seed(s)
|
||||
cls.seeded = True # TODO: protect against multi-threading
|
||||
|
||||
@classmethod
|
||||
def verifyRNG(cls): # Verify that the RNG is deterministic
|
||||
random.seed(0)
|
||||
x1 = random.randrange(0, 1000)
|
||||
x2 = random.randrange(0, 1000)
|
||||
x3 = random.randrange(0, 1000)
|
||||
if (x1 != 864 or x2 != 394 or x3 != 776):
|
||||
raise RuntimeError("System RNG is not deterministic")
|
||||
|
||||
@classmethod
|
||||
def throw(cls, stop): # get 0 to stop-1
|
||||
return cls.throwRange(0, stop)
|
||||
|
||||
@classmethod
|
||||
def throwRange(cls, start, stop): # up to stop-1
|
||||
if (not cls.seeded):
|
||||
raise RuntimeError("Cannot throw dice before seeding it")
|
||||
return random.randrange(start, stop)
|
||||
|
||||
@classmethod
|
||||
def choice(cls, cList):
|
||||
return random.choice(cList)
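`Dice` wraps the module-level RNG so that a whole run is reproducible from one explicit seed. A condensed, standalone sketch of the seed-once guard (`SeededDice` is an illustrative stand-in, not the original class):

```python
import random

class SeededDice:
    _seeded = False  # stand-in for Dice.seeded

    @classmethod
    def seed(cls, s):
        if cls._seeded:
            raise RuntimeError("Cannot seed the random generator more than once")
        random.seed(s)
        cls._seeded = True

    @classmethod
    def throw(cls, stop):  # get 0 to stop-1
        if not cls._seeded:
            raise RuntimeError("Cannot throw dice before seeding it")
        return random.randrange(0, stop)

SeededDice.seed(42)
rolls = [SeededDice.throw(6) for _ in range(3)]
```

Because the seed is fixed, re-seeding the module RNG with the same value replays the exact same sequence.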
|
||||
|
||||
class Helper:
|
||||
@classmethod
|
||||
def convertErrno(cls, errno):
|
||||
return errno if (errno > 0) else 0x80000000 + errno
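`convertErrno` offsets non-positive native errnos into the `0x80000000`-based range and passes positive values through. A one-line sketch with the arithmetic spelled out:

```python
def convert_errno(errno):
    # Non-positive native errnos are offset by 0x80000000,
    # e.g. -1 -> 0x7FFFFFFF; positive values are returned unchanged.
    return errno if errno > 0 else 0x80000000 + errno

same = convert_errno(5)     # positive: unchanged
mapped = convert_errno(-1)  # negative: offset into the high range
```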
|
||||
|
||||
@classmethod
|
||||
def getFriendlyPath(cls, path): # returns .../xxx/yyy
|
||||
ht1 = os.path.split(path)
|
||||
ht2 = os.path.split(ht1[0])
|
||||
return ".../" + ht2[1] + '/' + ht1[1]
|
||||
|
||||
|
||||
class Progress:
|
||||
STEP_BOUNDARY = 0
|
||||
BEGIN_THREAD_STEP = 1
|
||||
END_THREAD_STEP = 2
|
||||
SERVICE_HEART_BEAT= 3
|
||||
SERVICE_RECONNECT_START = 4
|
||||
SERVICE_RECONNECT_SUCCESS = 5
|
||||
SERVICE_RECONNECT_FAILURE = 6
|
||||
SERVICE_START_NAP = 7
|
||||
CREATE_TABLE_ATTEMPT = 8
|
||||
QUERY_GROUP_BY = 9
|
||||
CONCURRENT_INSERTION = 10
|
||||
ACCEPTABLE_ERROR = 11
|
||||
|
||||
tokens = {
|
||||
STEP_BOUNDARY: '.',
|
||||
BEGIN_THREAD_STEP: ' [',
|
||||
END_THREAD_STEP: ']',
|
||||
SERVICE_HEART_BEAT: '.Y.',
|
||||
SERVICE_RECONNECT_START: '<r.',
|
||||
SERVICE_RECONNECT_SUCCESS: '.r>',
|
||||
SERVICE_RECONNECT_FAILURE: '.xr>',
|
||||
SERVICE_START_NAP: '_zz',
|
||||
CREATE_TABLE_ATTEMPT: 'c',
|
||||
QUERY_GROUP_BY: 'g',
|
||||
CONCURRENT_INSERTION: 'x',
|
||||
ACCEPTABLE_ERROR: '_',
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def emit(cls, token):
|
||||
print(cls.tokens[token], end="", flush=True)
|
||||
|
||||
@classmethod
|
||||
def emitStr(cls, str):
|
||||
print('({})'.format(str), end="", flush=True)
|
|
@@ -0,0 +1,30 @@
|
|||
from typing import Any, BinaryIO, List, Dict, NewType
|
||||
from enum import Enum
|
||||
|
||||
DirPath = NewType('DirPath', str)
|
||||
|
||||
QueryResult = NewType('QueryResult', List[List[Any]])
|
||||
|
||||
class TdDataType(Enum):
|
||||
'''
|
||||
Use a Python Enum type to represent all the data types in TDengine.
|
||||
|
||||
Ref: https://www.taosdata.com/cn/documentation/taos-sql#data-type
|
||||
'''
|
||||
TIMESTAMP = 'TIMESTAMP'
|
||||
INT = 'INT'
|
||||
BIGINT = 'BIGINT'
|
||||
FLOAT = 'FLOAT'
|
||||
DOUBLE = 'DOUBLE'
|
||||
BINARY = 'BINARY'
|
||||
BINARY16 = 'BINARY(16)' # TODO: get rid of this hack
|
||||
BINARY200 = 'BINARY(200)'
|
||||
SMALLINT = 'SMALLINT'
|
||||
TINYINT = 'TINYINT'
|
||||
BOOL = 'BOOL'
|
||||
NCHAR = 'NCHAR'
|
||||
|
||||
TdColumns = Dict[str, TdDataType]
|
||||
TdTags = Dict[str, TdDataType]
|
||||
|
||||
IpcStream = NewType('IpcStream', BinaryIO)
|
File diff suppressed because it is too large
|
@@ -0,0 +1,23 @@
|
|||
# -----!/usr/bin/python3.7
|
||||
###################################################################
|
||||
# Copyright (c) 2016 by TAOS Technologies, Inc.
|
||||
# All rights reserved.
|
||||
#
|
||||
# This file is proprietary and confidential to TAOS Technologies.
|
||||
# No part of this file may be reproduced, stored, transmitted,
|
||||
# disclosed or used in any form or by any means other than as
|
||||
# expressly provided by the written permission from Jianhui Tao
|
||||
#
|
||||
###################################################################
|
||||
|
||||
import sys
|
||||
from crash_gen.crash_gen_main import MainExec
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
mExec = MainExec()
|
||||
mExec.init()
|
||||
exitCode = mExec.run()
|
||||
|
||||
print("\nCrash_Gen is now exiting with status code: {}".format(exitCode))
|
||||
sys.exit(exitCode)
|
|
@@ -0,0 +1,151 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
|
||||
import sys
|
||||
import taos
|
||||
import threading
|
||||
import traceback
|
||||
import random
|
||||
import datetime
|
||||
from util.log import *
|
||||
from util.cases import *
|
||||
from util.sql import *
|
||||
from util.dnodes import *
|
||||
|
||||
|
||||
class TDTestCase:
|
||||
|
||||
def init(self):
|
||||
tdLog.debug("start to execute %s" % __file__)
|
||||
tdLog.info("prepare cluster")
|
||||
tdDnodes.stopAll()
|
||||
tdDnodes.deploy(1)
|
||||
tdDnodes.start(1)
|
||||
|
||||
self.conn = taos.connect(config=tdDnodes.getSimCfgPath())
|
||||
tdSql.init(self.conn.cursor())
|
||||
tdSql.execute('reset query cache')
|
||||
tdSql.execute('create dnode 192.168.0.2')
|
||||
tdDnodes.deploy(2)
|
||||
tdDnodes.start(2)
|
||||
tdSql.execute('create dnode 192.168.0.3')
|
||||
tdDnodes.deploy(3)
|
||||
tdDnodes.start(3)
|
||||
time.sleep(3)
|
||||
|
||||
self.db = "db"
|
||||
self.stb = "stb"
|
||||
self.tbPrefix = "tb"
|
||||
self.tbNum = 100000
|
||||
self.count = 0
|
||||
# self.conn = taos.connect(config=tdDnodes.getSimCfgPath())
|
||||
self.threadNum = 1
|
||||
# threadLock = threading.Lock()
|
||||
# global counter for number of tables created by all threads
|
||||
self.global_counter = 0
|
||||
|
||||
tdSql.init(self.conn.cursor())
|
||||
|
||||
def _createTable(self, threadId):
|
||||
print("Thread%d : createTable" % (threadId))
|
||||
conn = taos.connect(config=tdDnodes.getSimCfgPath())
|
||||
cursor = conn.cursor()
|
||||
i = 0
|
||||
try:
|
||||
sql = "use %s" % (self.db)
|
||||
cursor.execute(sql)
|
||||
while i < self.tbNum:
|
||||
if (i % self.threadNum == threadId):
|
||||
cursor.execute(
|
||||
"create table tb%d using %s tags(%d)" %
|
||||
(i + 1, self.stb, i + 1))
|
||||
with threading.Lock(): # FIXME: creates a fresh lock per call, so this guards nothing; use one shared lock
|
||||
self.global_counter += 1
|
||||
i += 1
|
||||
except Exception as e:
|
||||
tdLog.info(
|
||||
"Failure when creating table tb%d, exception: %s" %
|
||||
(i + 1, str(e)))
|
||||
finally:
|
||||
cursor.close()
|
||||
conn.close()
|
||||
|
||||
def _interfereDnodes(self, threadId, dnodeId):
|
||||
conn = taos.connect(config=tdDnodes.getSimCfgPath())
|
||||
cursor = conn.cursor()
|
||||
# interfere dnode while creating table
|
||||
print("Thread%d to interfere dnode%d" % (threadId, dnodeId))
|
||||
while self.global_counter < self.tbNum * 0.05:
|
||||
time.sleep(0.2)
|
||||
cursor.execute("drop dnode 192.168.0.%d" % (dnodeId))
|
||||
while self.global_counter < self.tbNum * 0.15:
|
||||
time.sleep(0.2)
|
||||
cursor.execute("create dnode 192.168.0.%d" % (dnodeId))
|
||||
while self.global_counter < self.tbNum * 0.35:
|
||||
time.sleep(0.2)
|
||||
cursor.execute("drop dnode 192.168.0.%d" % (dnodeId))
|
||||
while self.global_counter < self.tbNum * 0.45:
|
||||
time.sleep(0.2)
|
||||
cursor.execute("create dnode 192.168.0.%d" % (dnodeId))
|
||||
while self.global_counter < self.tbNum * 0.65:
|
||||
time.sleep(0.2)
|
||||
cursor.execute("drop dnode 192.168.0.%d" % (dnodeId))
|
||||
while self.global_counter < self.tbNum * 0.85:
|
||||
time.sleep(0.2)
|
||||
cursor.execute("create dnode 192.168.0.%d" % (dnodeId))
|
||||
|
||||
def run(self):
|
||||
tdLog.info("================= creating database with replica 2")
|
||||
threadId = 0
|
||||
threads = []
|
||||
try:
|
||||
tdSql.execute("drop database if exists %s" % (self.db))
|
||||
tdSql.execute(
|
||||
"create database %s replica 2 cache 2048 ablocks 2.0 tblocks 10 tables 2000" %
|
||||
(self.db))
|
||||
tdLog.sleep(3)
|
||||
tdSql.execute("use %s" % (self.db))
|
||||
tdSql.execute(
|
||||
"create table %s (ts timestamp, c1 bigint, stime timestamp) tags(tg1 bigint)" %
|
||||
(self.stb))
|
||||
tdLog.info("Start to create tables")
|
||||
while threadId < self.threadNum:
|
||||
tdLog.info("Thread-%d starts to create tables" % (threadId))
|
||||
cThread = threading.Thread(
|
||||
target=self._createTable,
|
||||
name="thread-%d" %
|
||||
(threadId),
|
||||
args=(
|
||||
threadId,
|
||||
))
|
||||
cThread.start()
|
||||
threads.append(cThread)
|
||||
threadId += 1
|
||||
|
||||
except Exception as e:
|
||||
tdLog.info("Failed to create tables, exception: %s" % str(e))
|
||||
# tdDnodes.stopAll()
|
||||
finally:
|
||||
time.sleep(1)
|
||||
|
||||
threading.Thread(
|
||||
target=self._interfereDnodes,
|
||||
name="thread-interfereDnode%d" %
|
||||
(3),
|
||||
args=(
|
||||
1,
|
||||
3,
|
||||
)).start()
|
||||
for t in range(len(threads)):
|
||||
tdLog.info("Join threads")
|
||||
# threads[t].start()
|
||||
threads[t].join()
|
||||
|
||||
tdSql.query("show stables")
|
||||
tdSql.checkData(0, 4, self.tbNum)
|
||||
|
||||
def stop(self):
|
||||
tdSql.close()
|
||||
tdLog.success("%s successfully executed" % __file__)
|
||||
|
||||
|
||||
tdCases.addCluster(__file__, TDTestCase())
|
|
@@ -0,0 +1,172 @@
# -*- coding: utf-8 -*-

import sys
import time
import taos
import threading
import traceback
import random
import datetime
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:

    def init(self):
        tdLog.debug("start to execute %s" % __file__)
        tdLog.info("prepare cluster")
        tdDnodes.stopAll()
        tdDnodes.deploy(1)
        tdDnodes.start(1)

        self.conn = taos.connect(config=tdDnodes.getSimCfgPath())
        tdSql.init(self.conn.cursor())
        tdSql.execute('reset query cache')
        tdSql.execute('create dnode 192.168.0.2')
        tdDnodes.deploy(2)
        tdDnodes.start(2)
        tdSql.execute('create dnode 192.168.0.3')
        tdDnodes.deploy(3)
        tdDnodes.start(3)
        time.sleep(3)

        self.db = "db"
        self.stb = "stb"
        self.tbPrefix = "tb"
        self.tbNum = 100000
        self.count = 0
        # self.conn = taos.connect(config=tdDnodes.getSimCfgPath())
        self.threadNum = 1
        # shared lock guarding global_counter across creator threads
        self.threadLock = threading.Lock()
        # global counter for number of tables created by all threads
        self.global_counter = 0

        tdSql.init(self.conn.cursor())

    def _createTable(self, threadId):
        print("Thread%d : createTable" % (threadId))
        conn = taos.connect(config=tdDnodes.getSimCfgPath())
        cursor = conn.cursor()
        i = 0
        try:
            sql = "use %s" % (self.db)
            cursor.execute(sql)
            while i < self.tbNum:
                if (i % self.threadNum == threadId):
                    cursor.execute(
                        "create table tb%d using %s tags(%d)" %
                        (i + 1, self.stb, i + 1))
                    with self.threadLock:
                        self.global_counter += 1
                    time.sleep(0.01)
                i += 1
        except Exception as e:
            tdLog.info(
                "Failure when creating table tb%d, exception: %s" %
                (i + 1, str(e)))
        finally:
            cursor.close()
            conn.close()

    def _interfereDnodes(self, threadId, dnodeId):
        # interfere with the dnode while tables are being created
        print("Thread%d to interfere dnode%d" % (threadId, dnodeId))
        percent = 0.05
        loop = int(1 / (2 * percent))
        for t in range(1, loop):
            while self.global_counter < self.tbNum * (t * percent):
                time.sleep(0.2)
            tdDnodes.forcestop(dnodeId)
            while self.global_counter < self.tbNum * ((t + 1) * percent):
                time.sleep(0.2)
            tdDnodes.start(dnodeId)

    def run(self):
        tdLog.info("================= creating database with replica 2")
        threadId = 0
        threads = []
        try:
            tdSql.execute("drop database if exists %s" % (self.db))
            tdSql.execute(
                "create database %s replica 2 cache 1024 ablocks 2.0 tblocks 4 tables 1000" %
                (self.db))
            tdLog.sleep(3)
            tdSql.execute("use %s" % (self.db))
            tdSql.execute(
                "create table %s (ts timestamp, c1 bigint, stime timestamp) tags(tg1 bigint)" %
                (self.stb))
            tdLog.info("Start to create tables")
            while threadId < self.threadNum:
                tdLog.info("Thread-%d starts to create tables" % (threadId))
                cThread = threading.Thread(
                    target=self._createTable,
                    name="thread-%d" % (threadId),
                    args=(threadId,))
                cThread.start()
                threads.append(cThread)
                threadId += 1
        except Exception as e:
            tdLog.info("Failed to create thread-%d, exception: %s" % (threadId, str(e)))
            # tdDnodes.stopAll()
        finally:
            time.sleep(1)

        threading.Thread(
            target=self._interfereDnodes,
            name="thread-interfereDnode%d" % (3),
            args=(1, 3)).start()
        for t in range(len(threads)):
            tdLog.info("Join threads")
            # threads[t].start()
            threads[t].join()

        tdSql.query("show stables")
        tdSql.checkData(0, 4, self.tbNum)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addCluster(__file__, TDTestCase())
@@ -0,0 +1,70 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import datetime
import string
import random
import subprocess

from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):

        chars = string.ascii_uppercase + string.ascii_lowercase

        getDbNameLen = "grep -w '#define TSDB_DB_NAME_LEN' ../../src/inc/taosdef.h|awk '{print $3}'"
        dbNameMaxLen = int(subprocess.check_output(getDbNameLen, shell=True))
        tdLog.info("DB name max length is %d" % dbNameMaxLen)

        tdLog.info("=============== step1")
        db_name = ''.join(random.choices(chars, k=(dbNameMaxLen + 1)))
        tdLog.info('db_name length %d' % len(db_name))
        tdLog.info('create database %s' % db_name)
        tdSql.error('create database %s' % db_name)

        tdLog.info("=============== step2")
        db_name = ''.join(random.choices(chars, k=dbNameMaxLen))
        tdLog.info('db_name length %d' % len(db_name))
        tdLog.info('create database %s' % db_name)
        tdSql.execute('create database %s' % db_name)

        tdSql.query('show databases')
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, db_name.lower())

        tdLog.info("=============== step3")
        db_name = ''.join(random.choices(chars, k=(dbNameMaxLen - 1)))
        tdLog.info('db_name length %d' % len(db_name))
        tdLog.info('create database %s' % db_name)
        tdSql.execute('create database %s' % db_name)

        tdSql.query('show databases')
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, db_name.lower())

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,67 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()
        tbNum = 10000
        insertRows = 1
        db = "db"
        loop = 2
        tdSql.execute("drop database if exists %s" % (db))
        tdSql.execute("reset query cache")
        tdLog.sleep(1)
        for k in range(1, loop + 1):
            tdLog.info("===========Loop%d starts============" % (k))
            tdSql.execute(
                "create database %s cache 163840 ablocks 40 maxtables 5000 wal 0" %
                (db))
            tdSql.execute("use %s" % (db))
            tdSql.execute(
                "create table stb (ts timestamp, c1 int) tags(t1 bigint, t2 double)")
            for j in range(1, tbNum):
                tdSql.execute(
                    "create table tb%d using stb tags(%d, %d)" % (j, j, j))

            for j in range(1, tbNum):
                for i in range(0, insertRows):
                    tdSql.execute(
                        "insert into tb%d values (now + %dm, %d)" % (j, i, i))
                tdSql.query("select * from tb%d" % (j))
                tdSql.checkRows(insertRows)
                tdLog.info("insert %d rows into tb%d" % (insertRows, j))
            # tdSql.sleep(3)
            tdSql.execute("drop database %s" % (db))
            tdLog.sleep(2)
            tdLog.info("===========Loop%d completed!=============" % (k))

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


#tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,219 @@
# #################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.

# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao

# #################################################################

# -*- coding: utf-8 -*-

# TODO: after TD-4518 and TD-4510 are resolved, add the exception test cases for these situations

import sys
from util.log import *
from util.cases import *
from util.sql import *
import time
from datetime import datetime
import os


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()
        tdSql.execute('reset query cache')
        tdSql.execute('drop database if exists db')
        tdSql.execute('create database db precision "ns";')
        tdSql.query('show databases;')
        tdSql.checkData(0, 16, 'ns')
        tdSql.execute('use db')

        tdLog.debug('testing nanosecond support in 1st timestamp')
        tdSql.execute('create table tb (ts timestamp, speed int)')
        tdSql.execute("insert into tb values('2021-06-10 0:00:00.100000001', 1);")
        tdSql.execute('insert into tb values(1623254400150000000, 2);')
        tdSql.execute('import into tb values(1623254400300000000, 3);')
        tdSql.execute('import into tb values(1623254400299999999, 4);')
        tdSql.execute('insert into tb values(1623254400300000001, 5);')
        tdSql.execute('insert into tb values(1623254400999999999, 7);')

        tdSql.query('select * from tb;')
        tdSql.checkData(0, 0, '2021-06-10 0:00:00.100000001')
        tdSql.checkData(1, 0, '2021-06-10 0:00:00.150000000')
        tdSql.checkData(2, 0, '2021-06-10 0:00:00.299999999')
        tdSql.checkData(3, 1, 3)
        tdSql.checkData(4, 1, 5)
        tdSql.checkData(5, 1, 7)
        tdSql.checkRows(6)
        tdSql.query('select count(*) from tb where ts > 1623254400100000000 and ts < 1623254400100000002;')
        tdSql.checkData(0, 0, 1)
        tdSql.query("select count(*) from tb where ts > '2021-06-10 0:00:00.100000001' and ts < '2021-06-10 0:00:00.160000000';")
        tdSql.checkData(0, 0, 1)

        tdSql.query('select count(*) from tb where ts > 1623254400100000000 and ts < 1623254400150000000;')
        tdSql.checkData(0, 0, 1)
        tdSql.query("select count(*) from tb where ts > '2021-06-10 0:00:00.100000000' and ts < '2021-06-10 0:00:00.150000000';")
        tdSql.checkData(0, 0, 1)

        tdSql.query('select count(*) from tb where ts > 1623254400400000000;')
        tdSql.checkData(0, 0, 1)
        tdSql.query("select count(*) from tb where ts < '2021-06-10 00:00:00.400000000';")
        tdSql.checkData(0, 0, 5)

        tdSql.query('select count(*) from tb where ts > now + 400000000b;')
        tdSql.checkRows(0)

        tdSql.query("select count(*) from tb where ts >= '2021-06-10 0:00:00.100000001';")
        tdSql.checkData(0, 0, 6)

        tdSql.query('select count(*) from tb where ts <= 1623254400300000000;')
        tdSql.checkData(0, 0, 4)

        tdSql.query("select count(*) from tb where ts = '2021-06-10 0:00:00.000000000';")
        tdSql.checkRows(0)

        tdSql.query('select count(*) from tb where ts = 1623254400150000000;')
        tdSql.checkData(0, 0, 1)

        tdSql.query("select count(*) from tb where ts = '2021-06-10 0:00:00.100000001';")
        tdSql.checkData(0, 0, 1)

        tdSql.query('select count(*) from tb where ts between 1623254400000000000 and 1623254400400000000;')
        tdSql.checkData(0, 0, 5)

        tdSql.query("select count(*) from tb where ts between '2021-06-10 0:00:00.299999999' and '2021-06-10 0:00:00.300000001';")
        tdSql.checkData(0, 0, 3)

        tdSql.query('select avg(speed) from tb interval(5000000000b);')
        tdSql.checkRows(1)

        tdSql.query('select avg(speed) from tb interval(100000000b)')
        tdSql.checkRows(4)

        tdSql.error('select avg(speed) from tb interval(1b);')
        tdSql.error('select avg(speed) from tb interval(999b);')

        tdSql.query('select avg(speed) from tb interval(1000b);')
        tdSql.checkRows(5)

        tdSql.query('select avg(speed) from tb interval(1u);')
        tdSql.checkRows(5)

        tdSql.query('select avg(speed) from tb interval(100000000b) sliding (100000000b);')
        tdSql.checkRows(4)

        tdSql.query('select last(*) from tb')
        tdSql.checkData(0, 0, '2021-06-10 0:00:00.999999999')
        tdSql.checkData(0, 0, 1623254400999999999)

        tdSql.query('select first(*) from tb')
        tdSql.checkData(0, 0, 1623254400100000001)
        tdSql.checkData(0, 0, '2021-06-10 0:00:00.100000001')

        tdSql.execute('insert into tb values(now + 500000000b, 6);')
        tdSql.query('select * from tb;')
        tdSql.checkRows(7)

        tdLog.debug('testing nanosecond support in other timestamps')
        tdSql.execute('create table tb2 (ts timestamp, speed int, ts2 timestamp);')
        tdSql.execute("insert into tb2 values('2021-06-10 0:00:00.100000001', 1, '2021-06-11 0:00:00.100000001');")
        tdSql.execute('insert into tb2 values(1623254400150000000, 2, 1623340800150000000);')
        tdSql.execute('import into tb2 values(1623254400300000000, 3, 1623340800300000000);')
        tdSql.execute('import into tb2 values(1623254400299999999, 4, 1623340800299999999);')
        tdSql.execute('insert into tb2 values(1623254400300000001, 5, 1623340800300000001);')
        tdSql.execute('insert into tb2 values(1623254400999999999, 7, 1623513600999999999);')

        tdSql.query('select * from tb2;')
        tdSql.checkData(0, 0, '2021-06-10 0:00:00.100000001')
        tdSql.checkData(1, 0, '2021-06-10 0:00:00.150000000')
        tdSql.checkData(2, 1, 4)
        tdSql.checkData(3, 1, 3)
        tdSql.checkData(4, 2, '2021-06-11 00:00:00.300000001')
        tdSql.checkData(5, 2, '2021-06-13 00:00:00.999999999')
        tdSql.checkRows(6)
        tdSql.query('select count(*) from tb2 where ts2 > 1623340800000000000 and ts2 < 1623340800150000000;')
        tdSql.checkData(0, 0, 1)
        tdSql.query("select count(*) from tb2 where ts2 > '2021-06-11 0:00:00.100000000' and ts2 < '2021-06-11 0:00:00.100000002';")
        tdSql.checkData(0, 0, 1)

        tdSql.query('select count(*) from tb2 where ts2 > 1623340800500000000;')
        tdSql.checkData(0, 0, 1)
        tdSql.query("select count(*) from tb2 where ts2 < '2021-06-11 0:00:00.400000000';")
        tdSql.checkData(0, 0, 5)

        tdSql.query('select count(*) from tb2 where ts2 > now + 400000000b;')
        tdSql.checkRows(0)

        tdSql.query("select count(*) from tb2 where ts2 >= '2021-06-11 0:00:00.100000001';")
        tdSql.checkData(0, 0, 6)

        tdSql.query('select count(*) from tb2 where ts2 <= 1623340800400000000;')
        tdSql.checkData(0, 0, 5)

        tdSql.query("select count(*) from tb2 where ts2 = '2021-06-11 0:00:00.000000000';")
        tdSql.checkRows(0)

        tdSql.query("select count(*) from tb2 where ts2 = '2021-06-11 0:00:00.300000001';")
        tdSql.checkData(0, 0, 1)

        tdSql.query('select count(*) from tb2 where ts2 = 1623340800300000001;')
        tdSql.checkData(0, 0, 1)

        tdSql.query('select count(*) from tb2 where ts2 between 1623340800000000000 and 1623340800450000000;')
        tdSql.checkData(0, 0, 5)

        tdSql.query("select count(*) from tb2 where ts2 between '2021-06-11 0:00:00.299999999' and '2021-06-11 0:00:00.300000001';")
        tdSql.checkData(0, 0, 3)

        tdSql.query('select count(*) from tb2 where ts2 <> 1623513600999999999;')
        tdSql.checkData(0, 0, 5)

        tdSql.query("select count(*) from tb2 where ts2 <> '2021-06-11 0:00:00.100000001';")
        tdSql.checkData(0, 0, 5)

        tdSql.query("select count(*) from tb2 where ts2 <> '2021-06-11 0:00:00.100000000';")
        tdSql.checkData(0, 0, 6)

        tdSql.query('select count(*) from tb2 where ts2 != 1623513600999999999;')
        tdSql.checkData(0, 0, 5)

        tdSql.query("select count(*) from tb2 where ts2 != '2021-06-11 0:00:00.100000001';")
        tdSql.checkData(0, 0, 5)

        tdSql.query("select count(*) from tb2 where ts2 != '2021-06-11 0:00:00.100000000';")
        tdSql.checkData(0, 0, 6)

        tdSql.execute('insert into tb2 values(now + 500000000b, 6, now +2d);')
        tdSql.query('select * from tb2;')
        tdSql.checkRows(7)

        tdLog.debug('testing ill-formed nanosecond input handling')
        tdSql.execute('create table tb3 (ts timestamp, speed int);')

        tdSql.error('insert into tb3 values(16232544001500000, 2);')
        tdSql.execute("insert into tb3 values('2021-06-10 0:00:00.123456', 2);")
        tdSql.query("select * from tb3 where ts = '2021-06-10 0:00:00.123456000';")
        tdSql.checkRows(1)

        tdSql.execute("insert into tb3 values('2021-06-10 0:00:00.123456789000', 2);")
        tdSql.query("select * from tb3 where ts = '2021-06-10 0:00:00.123456789';")
        tdSql.checkRows(1)

        os.system('sudo timedatectl set-ntp on')

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,45 @@
FROM ubuntu:latest AS builder

ARG PACKAGE=TDengine-server-1.6.5.10-Linux-x64.tar.gz
ARG EXTRACTDIR=TDengine-enterprise-server
ARG TARBITRATORPKG=TDengine-tarbitrator-1.6.5.10-Linux-x64.tar.gz
ARG EXTRACTDIR2=TDengine-enterprise-arbitrator
ARG CONTENT=taos.tar.gz

WORKDIR /root

COPY ${PACKAGE} .
COPY ${TARBITRATORPKG} .

RUN tar -zxf ${PACKAGE}
RUN tar -zxf ${TARBITRATORPKG}
RUN mv ${EXTRACTDIR}/driver ./lib
RUN tar -zxf ${EXTRACTDIR}/${CONTENT}
RUN mv ${EXTRACTDIR2}/bin/* /root/bin

FROM ubuntu:latest

WORKDIR /root

RUN apt-get update
RUN apt-get install -y vim tmux net-tools
RUN echo 'alias ll="ls -l --color=auto"' >> /root/.bashrc
RUN ulimit -c unlimited

COPY --from=builder /root/bin/taosd /usr/bin
COPY --from=builder /root/bin/tarbitrator /usr/bin
COPY --from=builder /root/bin/taosdemo /usr/bin
COPY --from=builder /root/bin/taosdump /usr/bin
COPY --from=builder /root/bin/taos /usr/bin
COPY --from=builder /root/cfg/taos.cfg /etc/taos/
COPY --from=builder /root/lib/libtaos.so.* /usr/lib/libtaos.so.1

ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib"
ENV LC_CTYPE=en_US.UTF-8
ENV LANG=en_US.UTF-8

EXPOSE 6030-6042/tcp 6060/tcp 6030-6039/udp

# VOLUME [ "/var/lib/taos", "/var/log/taos", "/etc/taos" ]

CMD [ "bash" ]
@@ -0,0 +1,38 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

from basic import *


class TDTestCase:

    def init(self):
        # tdLog.debug("start to execute %s" % __file__)

        self.numOfNodes = 5
        self.dockerDir = "/data"
        cluster.init(self.numOfNodes, self.dockerDir)
        cluster.prepardBuild()
        for i in range(self.numOfNodes):
            if i == 0:
                cluster.cfg("role", "1", i + 1)
            else:
                cluster.cfg("role", "2", i + 1)
        cluster.run()


td = TDTestCase()
td.init()


## usage: python3 OneMnodeMultipleVnodesTest.py
@@ -0,0 +1,165 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import taos

class BuildDockerCluser:

    def init(self, numOfNodes, dockerDir):
        self.numOfNodes = numOfNodes
        self.dockerDir = dockerDir

        self.hostName = "tdnode1"
        self.user = "root"
        self.password = "taosdata"
        self.configDir = "/etc/taos"
        self.dirs = ["data", "cfg", "log", "core"]
        self.cfgDict = {
            "numOfLogLines": "100000000",
            "mnodeEqualVnodeNum": "0",
            "walLevel": "1",
            "numOfThreadsPerCore": "2.0",
            "monitor": "0",
            "vnodeBak": "1",
            "dDebugFlag": "135",
            "mDebugFlag": "135",
            "sdbDebugFlag": "135",
            "rpcDebugFlag": "135",
            "tmrDebugFlag": "131",
            "cDebugFlag": "135",
            "httpDebugFlag": "135",
            "monitorDebugFlag": "135",
            "udebugFlag": "135",
            "jnidebugFlag": "135",
            "qdebugFlag": "135",
            "maxSQLLength": "1048576"
        }
        cmd = "mkdir -p %s" % self.dockerDir
        self.execCmd(cmd)

        cmd = "cp *.yml %s" % self.dockerDir
        self.execCmd(cmd)

        cmd = "cp Dockerfile %s" % self.dockerDir
        self.execCmd(cmd)

    # execute command, and return the output
    # ref: https://blog.csdn.net/wowocpp/article/details/80775650
    def execCmdAndGetOutput(self, cmd):
        r = os.popen(cmd)
        text = r.read()
        r.close()
        return text

    def execCmd(self, cmd):
        if os.system(cmd) != 0:
            quit()

    def getTaosdVersion(self):
        cmd = "taosd -V |grep version|awk '{print $3}'"
        taosdVersion = self.execCmdAndGetOutput(cmd)
        cmd = "find %s -name '*server*.tar.gz' | awk -F- '{print $(NF-2)}'|sort|awk 'END {print}'" % self.dockerDir
        packageVersion = self.execCmdAndGetOutput(cmd)

        if (taosdVersion is None or taosdVersion.isspace()) and (packageVersion is None or packageVersion.isspace()):
            print("Please install taosd or have an install package ready")
            quit()
        else:
            self.version = taosdVersion if taosdVersion >= packageVersion else packageVersion
            return self.version.strip()

    def getConnection(self):
        self.conn = taos.connect(
            host=self.hostName,
            user=self.user,
            password=self.password,
            config=self.configDir)

    def removeFile(self, rootDir, index, dir):
        cmd = "rm -rf %s/node%d/%s/*" % (rootDir, index, dir)
        self.execCmd(cmd)

    def clearEnv(self):
        cmd = "cd %s && docker-compose down --remove-orphans" % self.dockerDir
        self.execCmd(cmd)
        for i in range(1, self.numOfNodes + 1):
            self.removeFile(self.dockerDir, i, self.dirs[0])
            self.removeFile(self.dockerDir, i, self.dirs[1])
            self.removeFile(self.dockerDir, i, self.dirs[2])

    def createDir(self, rootDir, index, dir):
        cmd = "mkdir -p %s/node%d/%s" % (rootDir, index, dir)
        self.execCmd(cmd)

    def createDirs(self):
        for i in range(1, self.numOfNodes + 1):
            for j in range(len(self.dirs)):
                self.createDir(self.dockerDir, i, self.dirs[j])

    def addExtraCfg(self, option, value):
        self.cfgDict.update({option: value})

    def cfg(self, option, value, nodeIndex):
        cfgPath = "%s/node%d/cfg/taos.cfg" % (self.dockerDir, nodeIndex)
        cmd = "echo '%s %s' >> %s" % (option, value, cfgPath)
        self.execCmd(cmd)

    def updateLocalhosts(self):
        cmd = "grep '172.27.0.7 *tdnode1' /etc/hosts | sed 's: ::g'"
        result = self.execCmdAndGetOutput(cmd)
        print(result)
        if result is None or result.isspace():
            print("==========")
            cmd = "echo '172.27.0.7 tdnode1' >> /etc/hosts"
            display = "echo %s" % cmd
            self.execCmd(display)
            self.execCmd(cmd)

    def deploy(self):
        self.clearEnv()
        self.createDirs()
        for i in range(1, self.numOfNodes + 1):
            self.cfg("firstEp", "tdnode1:6030", i)

            for key, value in self.cfgDict.items():
                self.cfg(key, value, i)

    def createDondes(self):
        self.cursor = self.conn.cursor()
        for i in range(2, self.numOfNodes + 1):
            self.cursor.execute("create dnode tdnode%d" % i)

    def startArbitrator(self):
        for i in range(1, self.numOfNodes + 1):
            self.cfg("arbitrator", "tdnode1:6042", i)
        cmd = "docker exec -d $(docker ps|grep tdnode1|awk '{print $1}') tarbitrator"
        self.execCmd(cmd)

    def prepardBuild(self):
        if self.numOfNodes < 2 or self.numOfNodes > 10:
            print("the number of nodes must be between 2 and 10")
            exit(0)
        self.updateLocalhosts()
        self.deploy()

    def run(self):
        cmd = "./buildClusterEnv.sh -n %d -v %s -d %s" % (self.numOfNodes, self.getTaosdVersion(), self.dockerDir)
        display = "echo %s" % cmd
        self.execCmd(display)
        self.execCmd(cmd)
        self.getConnection()
        self.createDondes()

cluster = BuildDockerCluser()
@ -0,0 +1,127 @@
|
|||
#!/bin/bash
|
||||
echo "Executing buildClusterEnv.sh"
|
||||
CURR_DIR=`pwd`
|
||||
IN_TDINTERNAL="community"
|
||||
|
||||
if [ $# != 6 ]; then
|
||||
echo "argument list need input : "
|
||||
echo " -n numOfNodes"
|
||||
echo " -v version"
|
||||
echo " -d docker dir"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
NUM_OF_NODES=
|
||||
VERSION=
|
||||
DOCKER_DIR=
|
||||
while getopts "n:v:d:" arg
|
||||
do
|
||||
case $arg in
|
||||
n)
|
||||
NUM_OF_NODES=$OPTARG
|
||||
;;
|
||||
v)
|
||||
VERSION=$OPTARG
|
||||
;;
|
||||
d)
|
||||
DOCKER_DIR=$OPTARG
|
||||
;;
|
||||
?)
|
||||
echo "unkonwn argument"
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
function prepareBuild {
|
||||
|
||||
if [ -d $CURR_DIR/../../../release ]; then
|
||||
echo release exists
|
||||
rm -rf $CURR_DIR/../../../release/*
|
||||
fi
|
||||
|
||||
cd $CURR_DIR/../../../packaging
|
||||
|
||||
if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
|
||||
if [ ! -e $DOCKER_DIR/TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz ] || [ ! -e $DOCKER_DIR/TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
|
||||
|
||||
echo "generating TDeninge enterprise packages"
|
||||
./release.sh -v cluster -n $VERSION >> /dev/null 2>&1
|
||||
|
||||
    if [ ! -e $CURR_DIR/../../../release/TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz ]; then
      echo "no TDengine install package found"
      exit 1
    fi

    if [ ! -e $CURR_DIR/../../../release/TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
      echo "no arbitrator install package found"
      exit 1
    fi

    cd $CURR_DIR/../../../release
    mv TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
    mv TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
  fi
else
  if [ ! -e $DOCKER_DIR/TDengine-server-$VERSION-Linux-x64.tar.gz ] || [ ! -e $DOCKER_DIR/TDengine-arbitrator-$VERSION-Linux-x64.tar.gz ]; then

    echo "generating TDengine community packages"
    ./release.sh -v edge -n $VERSION >> /dev/null 2>&1

    if [ ! -e $CURR_DIR/../../../release/TDengine-server-$VERSION-Linux-x64.tar.gz ]; then
      echo "no TDengine install package found"
      exit 1
    fi

    if [ ! -e $CURR_DIR/../../../release/TDengine-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
      echo "no arbitrator install package found"
      exit 1
    fi

    cd $CURR_DIR/../../../release
    mv TDengine-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
    mv TDengine-arbitrator-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
  fi
fi

rm -rf $DOCKER_DIR/*.yml
cd $CURR_DIR

cp *.yml $DOCKER_DIR
cp Dockerfile $DOCKER_DIR
}

function clusterUp {
  echo "docker compose start"

  cd $DOCKER_DIR

  if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
    docker_run="PACKAGE=TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-enterprise-server-$VERSION DIR2=TDengine-enterprise-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml "
  else
    docker_run="PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml "
  fi

  if [ $NUM_OF_NODES -ge 2 ]; then
    echo "create $NUM_OF_NODES dnodes"
    for ((i = 3; i <= $NUM_OF_NODES; i++)); do
      if [ ! -f node$i.yml ]; then
        echo "node$i.yml does not exist, generating it from node3.yml"
        cp node3.yml node$i.yml
        sed -i "s/td2.0-node3/td2.0-node$i/g" node$i.yml
        sed -i "s/'tdnode3'/'tdnode$i'/g" node$i.yml
        sed -i "s#/node3/#/node$i/#g" node$i.yml
        sed -i "s#hostname: tdnode3#hostname: tdnode$i#g" node$i.yml
        sed -i "s#ipv4_address: 172.27.0.9#ipv4_address: 172.27.0.`expr $i + 6`#g" node$i.yml
      fi
      docker_run=$docker_run" -f node$i.yml "
    done
  fi
  # append the compose command once, regardless of node count
  docker_run=$docker_run" up -d"
  echo $docker_run | sh

  echo "docker compose finish"
}

prepareBuild
clusterUp
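The sed loop above stamps out node$i.yml from the node3.yml template by plain string rewriting, mapping node index i to the address 172.27.0.(i + 6). A minimal Python sketch of the same templating logic (the function name and the trimmed sample template are illustrative, not part of the repository):

```python
# Sketch of clusterUp's node yml generation: the same five rewrites the
# sed commands perform, plus the 172.27.0.(i + 6) address scheme.
def render_node_yml(template: str, i: int) -> str:
    out = template.replace("td2.0-node3", "td2.0-node%d" % i)
    out = out.replace("'tdnode3'", "'tdnode%d'" % i)
    out = out.replace("/node3/", "/node%d/" % i)
    out = out.replace("hostname: tdnode3", "hostname: tdnode%d" % i)
    out = out.replace("ipv4_address: 172.27.0.9",
                      "ipv4_address: 172.27.0.%d" % (i + 6))
    return out

# Trimmed stand-in for node3.yml, just the lines the rewrites touch.
template = """hostname: tdnode3
ipv4_address: 172.27.0.9
source: /data/node3/data"""

print(render_node_yml(template, 5))
```

For i = 5 this yields tdnode5 at 172.27.0.11 with /node5/ paths, matching what the shell loop would produce.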
@ -0,0 +1,131 @@
version: '3.7'

services:
  td2.0-node1:
    build:
      context: .
      args:
        - PACKAGE=${PACKAGE}
        - TARBITRATORPKG=${TARBITRATORPKG}
        - EXTRACTDIR=${DIR}
        - EXTRACTDIR2=${DIR2}
        - DATADIR=${DATADIR}
    image: 'tdengine:${VERSION}'
    container_name: 'tdnode1'
    cap_add:
      - ALL
    stdin_open: true
    tty: true
    environment:
      TZ: "Asia/Shanghai"
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &&
             echo $TZ > /etc/timezone &&
             mkdir /coredump &&
             echo 'kernel.core_pattern=/coredump/core_%e_%p' >> /etc/sysctl.conf &&
             sysctl -p &&
             exec my-main-application"
    extra_hosts:
      - "tdnode2:172.27.0.8"
      - "tdnode3:172.27.0.9"
      - "tdnode4:172.27.0.10"
      - "tdnode5:172.27.0.11"
      - "tdnode6:172.27.0.12"
      - "tdnode7:172.27.0.13"
      - "tdnode8:172.27.0.14"
      - "tdnode9:172.27.0.15"
      - "tdnode10:172.27.0.16"
    volumes:
      # bind data directory
      - type: bind
        source: ${DATADIR}/node1/data
        target: /var/lib/taos
      # bind log directory
      - type: bind
        source: ${DATADIR}/node1/log
        target: /var/log/taos
      # bind configuration
      - type: bind
        source: ${DATADIR}/node1/cfg
        target: /etc/taos
      # bind core dump path
      - type: bind
        source: ${DATADIR}/node1/core
        target: /coredump
      - type: bind
        source: ${DATADIR}
        target: /root
    hostname: tdnode1
    networks:
      taos_update_net:
        ipv4_address: 172.27.0.7
    command: taosd

  td2.0-node2:
    build:
      context: .
      args:
        - PACKAGE=${PACKAGE}
        - TARBITRATORPKG=${TARBITRATORPKG}
        - EXTRACTDIR=${DIR}
        - EXTRACTDIR2=${DIR2}
        - DATADIR=${DATADIR}
    image: 'tdengine:${VERSION}'
    container_name: 'tdnode2'
    cap_add:
      - ALL
    stdin_open: true
    tty: true
    environment:
      TZ: "Asia/Shanghai"
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &&
             echo $TZ > /etc/timezone &&
             mkdir /coredump &&
             echo 'kernel.core_pattern=/coredump/core_%e_%p' >> /etc/sysctl.conf &&
             sysctl -p &&
             exec my-main-application"
    extra_hosts:
      - "tdnode1:172.27.0.7"
      - "tdnode3:172.27.0.9"
      - "tdnode4:172.27.0.10"
      - "tdnode5:172.27.0.11"
      - "tdnode6:172.27.0.12"
      - "tdnode7:172.27.0.13"
      - "tdnode8:172.27.0.14"
      - "tdnode9:172.27.0.15"
      - "tdnode10:172.27.0.16"
    volumes:
      # bind data directory
      - type: bind
        source: ${DATADIR}/node2/data
        target: /var/lib/taos
      # bind log directory
      - type: bind
        source: ${DATADIR}/node2/log
        target: /var/log/taos
      # bind configuration
      - type: bind
        source: ${DATADIR}/node2/cfg
        target: /etc/taos
      # bind core dump path
      - type: bind
        source: ${DATADIR}/node2/core
        target: /coredump
      - type: bind
        source: ${DATADIR}
        target: /root
    hostname: tdnode2
    networks:
      taos_update_net:
        ipv4_address: 172.27.0.8
    command: taosd


networks:
  taos_update_net:
    # external: true
    ipam:
      driver: default
      config:
        - subnet: "172.27.0.0/24"
@ -0,0 +1,53 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 1,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "databases": [{
    "dbinfo": {
      "name": "db",
      "drop": "no",
      "replica": 1,
      "days": 2,
      "cache": 16,
      "blocks": 8,
      "precision": "ms",
      "keep": 365,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb",
      "child_table_exists": "no",
      "childtable_count": 1,
      "childtable_prefix": "stb_",
      "auto_create_table": "no",
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rate": 0,
      "insert_rows": 100000,
      "interlace_rows": 100,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 10,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 10}, {"type": "BINARY", "len": 16, "count": 3}, {"type": "BINARY", "len": 32, "count": 6}],
      "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}]
    }]
  }]
}
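In this config, the workload size is the product of childtable_count and insert_rows for each super table. A small illustrative Python sketch (the embedded JSON is trimmed to just the relevant fields of the config above):

```python
import json

# Derive the total number of rows an insert-mode config asks taosdemo to
# write: childtable_count * insert_rows, summed over all super tables.
cfg = json.loads("""{
  "databases": [{
    "dbinfo": {"name": "db"},
    "super_tables": [{"name": "stb",
                      "childtable_count": 1,
                      "insert_rows": 100000}]
  }]
}""")

total = sum(st["childtable_count"] * st["insert_rows"]
            for db in cfg["databases"]
            for st in db["super_tables"])
print(total)  # 100000
```

With one child table and insert_rows of 100000, the run above inserts 100000 rows in total.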
@ -0,0 +1,63 @@
version: '3.7'

services:
  td2.0-node3:
    build:
      context: .
      args:
        - PACKAGE=${PACKAGE}
        - TARBITRATORPKG=${TARBITRATORPKG}
        - EXTRACTDIR=${DIR}
        - EXTRACTDIR2=${DIR2}
        - DATADIR=${DATADIR}
    image: 'tdengine:${VERSION}'
    container_name: 'tdnode3'
    cap_add:
      - ALL
    stdin_open: true
    tty: true
    environment:
      TZ: "Asia/Shanghai"
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &&
             echo $TZ > /etc/timezone &&
             mkdir /coredump &&
             echo 'kernel.core_pattern=/coredump/core_%e_%p' >> /etc/sysctl.conf &&
             sysctl -p &&
             exec my-main-application"
    extra_hosts:
      - "tdnode1:172.27.0.7"
      - "tdnode2:172.27.0.8"
      - "tdnode3:172.27.0.9"
      - "tdnode4:172.27.0.10"
      - "tdnode5:172.27.0.11"
      - "tdnode6:172.27.0.12"
      - "tdnode7:172.27.0.13"
      - "tdnode8:172.27.0.14"
      - "tdnode9:172.27.0.15"
      - "tdnode10:172.27.0.16"
    volumes:
      # bind data directory
      - type: bind
        source: ${DATADIR}/node3/data
        target: /var/lib/taos
      # bind log directory
      - type: bind
        source: ${DATADIR}/node3/log
        target: /var/log/taos
      # bind configuration
      - type: bind
        source: ${DATADIR}/node3/cfg
        target: /etc/taos
      # bind core dump path
      - type: bind
        source: ${DATADIR}/node3/core
        target: /coredump
      - type: bind
        source: ${DATADIR}
        target: /root
    hostname: tdnode3
    networks:
      taos_update_net:
        ipv4_address: 172.27.0.9
    command: taosd
@ -0,0 +1,142 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import argparse


class taosdemoWrapper:

    def __init__(self, host, metadata, database, tables, threads, configDir, replica,
                 columnType, columnsPerTable, rowsPerTable, disorderRatio, disorderRange,
                 charTypeLen):
        self.host = host
        self.metadata = metadata
        self.database = database
        self.tables = tables
        self.threads = threads
        self.configDir = configDir
        self.replica = replica
        self.columnType = columnType
        self.columnsPerTable = columnsPerTable
        self.rowsPerTable = rowsPerTable
        self.disorderRatio = disorderRatio
        self.disorderRange = disorderRange
        self.charTypeLen = charTypeLen

    def run(self):
        if self.metadata is None:
            # -l (columns per record) and -n (rows per table) each appear once;
            # the original string had 12 format specifiers but only 11 arguments,
            # which raises TypeError at run time.
            os.system("taosdemo -h %s -d %s -t %d -T %d -c %s -a %d -b %s -l %d -n %d -O %d -R %d -w %d -x -y"
                      % (self.host, self.database, self.tables, self.threads, self.configDir,
                         self.replica, self.columnType, self.columnsPerTable, self.rowsPerTable,
                         self.disorderRatio, self.disorderRange, self.charTypeLen))
        else:
            os.system("taosdemo -f %s" % self.metadata)


parser = argparse.ArgumentParser()
parser.add_argument(
    '-H',
    '--host-name',
    action='store',
    default='tdnode1',
    type=str,
    help='host name to be connected (default: tdnode1)')
parser.add_argument(
    '-f',
    '--metadata',
    action='store',
    default=None,
    type=str,
    help='metadata file driving the run; when -f is given, all other options are ignored (default: None)')
parser.add_argument(
    '-d',
    '--db-name',
    action='store',
    default='test',
    type=str,
    help='database name to be created (default: test)')
parser.add_argument(
    '-t',
    '--num-of-tables',
    action='store',
    default=10,
    type=int,
    help='number of tables (default: 10)')
parser.add_argument(
    '-T',
    '--num-of-threads',
    action='store',
    default=10,
    type=int,
    help='number of insert threads (default: 10)')
parser.add_argument(
    '-c',
    '--config-dir',
    action='store',
    default='/etc/taos/',
    type=str,
    help='configuration directory (default: /etc/taos/)')
parser.add_argument(
    '-a',
    '--replica',
    action='store',
    default=1,
    type=int,
    help='replica parameter of the database (default: 1, min: 1, max: 3)')
parser.add_argument(
    '-b',
    '--column-type',
    action='store',
    default='int',
    type=str,
    help='data types of the columns, e.g. TINYINT,SMALLINT,INT,BIGINT,FLOAT,DOUBLE,BINARY,NCHAR,BOOL,TIMESTAMP (default: int)')
parser.add_argument(
    '-l',
    '--num-of-cols',
    action='store',
    default=10,
    type=int,
    help='number of columns per record (default: 10)')
parser.add_argument(
    '-n',
    '--num-of-rows',
    action='store',
    default=1000,
    type=int,
    help='number of rows per table (default: 1000)')
parser.add_argument(
    '-O',
    '--disorder-ratio',
    action='store',
    default=0,
    type=int,
    help='data disorder ratio (0: in order, > 0: disorder ratio; default: 0)')
parser.add_argument(
    '-R',
    '--disorder-range',
    action='store',
    default=0,
    type=int,
    help='range of out-of-order data in ms (default: 0)')
parser.add_argument(
    '-w',
    '--char-type-length',
    action='store',
    default=16,
    type=int,
    help='length of BINARY/NCHAR columns (default: 16)')

args = parser.parse_args()
taosdemo = taosdemoWrapper(args.host_name, args.metadata, args.db_name, args.num_of_tables,
                           args.num_of_threads, args.config_dir, args.replica, args.column_type,
                           args.num_of_cols, args.num_of_rows, args.disorder_ratio,
                           args.disorder_range, args.char_type_length)
taosdemo.run()
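An os.system format string like the one in run() is fragile: a mismatch between format specifiers and arguments only surfaces at run time. A hedged sketch of assembling the same taosdemo flags as an argument list instead, so each flag is explicitly paired with its value (helper name is hypothetical; flag meanings are assumed from the wrapper's own argparse declarations):

```python
# Sketch: build the taosdemo command as a list of (flag, value) pairs.
# A list form composes cleanly with subprocess.run(cmd) and makes a
# missing or duplicated flag easy to assert on.
def build_cmd(host, database, tables, threads, config_dir, replica,
              column_type, columns, rows, disorder_ratio, disorder_range,
              char_len):
    pairs = [("-h", host), ("-d", database), ("-t", tables), ("-T", threads),
             ("-c", config_dir), ("-a", replica), ("-b", column_type),
             ("-l", columns), ("-n", rows), ("-O", disorder_ratio),
             ("-R", disorder_range), ("-w", char_len)]
    cmd = ["taosdemo"]
    for flag, value in pairs:
        cmd += [flag, str(value)]
    return cmd + ["-x", "-y"]  # same trailing switches as the wrapper

cmd = build_cmd("tdnode1", "test", 10, 10, "/etc/taos/", 1,
                "int", 10, 1000, 0, 0, 16)
```

Every flag appears exactly once and carries exactly one value, which is the property the original format string silently violated.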
@ -0,0 +1,406 @@
#!/bin/bash
ulimit -c unlimited
#======================p1-start===============

python3 ./test.py -f insert/basic.py
python3 ./test.py -f insert/int.py
python3 ./test.py -f insert/float.py
python3 ./test.py -f insert/bigint.py
python3 ./test.py -f insert/bool.py
python3 ./test.py -f insert/double.py
python3 ./test.py -f insert/smallint.py
python3 ./test.py -f insert/tinyint.py
python3 ./test.py -f insert/date.py
python3 ./test.py -f insert/binary.py
python3 ./test.py -f insert/nchar.py
#python3 ./test.py -f insert/nchar-boundary.py
python3 ./test.py -f insert/nchar-unicode.py
python3 ./test.py -f insert/multi.py
python3 ./test.py -f insert/randomNullCommit.py
python3 insert/retentionpolicy.py
python3 ./test.py -f insert/alterTableAndInsert.py
python3 ./test.py -f insert/insertIntoTwoTables.py
python3 ./test.py -f insert/before_1970.py
python3 ./test.py -f insert/special_character_show.py
python3 bug2265.py
python3 ./test.py -f insert/bug3654.py
python3 ./test.py -f insert/insertDynamicColBeforeVal.py
python3 ./test.py -f insert/in_function.py
python3 ./test.py -f insert/modify_column.py
python3 ./test.py -f insert/line_insert.py

# timezone
python3 ./test.py -f TimeZone/TestCaseTimeZone.py

#table
python3 ./test.py -f table/alter_wal0.py
python3 ./test.py -f table/column_name.py
python3 ./test.py -f table/column_num.py
python3 ./test.py -f table/db_table.py
python3 ./test.py -f table/create_sensitive.py
python3 ./test.py -f table/tablename-boundary.py
python3 ./test.py -f table/max_table_length.py
python3 ./test.py -f table/alter_column.py
python3 ./test.py -f table/boundary.py
python3 ./test.py -f table/create.py
python3 ./test.py -f table/del_stable.py

#stable
python3 ./test.py -f stable/insert.py
python3 test.py -f tools/taosdemoAllTest/taosdemoTestInsertWithJsonStmt.py

# tag
python3 ./test.py -f tag_lite/filter.py
python3 ./test.py -f tag_lite/create-tags-boundary.py
python3 ./test.py -f tag_lite/3.py
python3 ./test.py -f tag_lite/4.py
python3 ./test.py -f tag_lite/5.py
python3 ./test.py -f tag_lite/6.py
python3 ./test.py -f tag_lite/add.py
python3 ./test.py -f tag_lite/bigint.py
python3 ./test.py -f tag_lite/binary_binary.py
python3 ./test.py -f tag_lite/binary.py
python3 ./test.py -f tag_lite/bool_binary.py
python3 ./test.py -f tag_lite/bool_int.py
python3 ./test.py -f tag_lite/bool.py
python3 ./test.py -f tag_lite/change.py
python3 ./test.py -f tag_lite/column.py
python3 ./test.py -f tag_lite/commit.py
python3 ./test.py -f tag_lite/create.py
python3 ./test.py -f tag_lite/datatype.py
python3 ./test.py -f tag_lite/datatype-without-alter.py
python3 ./test.py -f tag_lite/delete.py
python3 ./test.py -f tag_lite/double.py
python3 ./test.py -f tag_lite/float.py
python3 ./test.py -f tag_lite/int_binary.py
python3 ./test.py -f tag_lite/int_float.py
python3 ./test.py -f tag_lite/int.py
python3 ./test.py -f tag_lite/set.py
python3 ./test.py -f tag_lite/smallint.py
python3 ./test.py -f tag_lite/tinyint.py
python3 ./test.py -f tag_lite/timestamp.py
python3 ./test.py -f tag_lite/TestModifyTag.py

#python3 ./test.py -f dbmgmt/database-name-boundary.py
python3 test.py -f dbmgmt/nanoSecondCheck.py

python3 ./test.py -f import_merge/importBlock1HO.py
python3 ./test.py -f import_merge/importBlock1HPO.py
python3 ./test.py -f import_merge/importBlock1H.py
python3 ./test.py -f import_merge/importBlock1S.py
python3 ./test.py -f import_merge/importBlock1Sub.py
python3 ./test.py -f import_merge/importBlock1TO.py
python3 ./test.py -f import_merge/importBlock1TPO.py
python3 ./test.py -f import_merge/importBlock1T.py
python3 ./test.py -f import_merge/importBlock2HO.py
python3 ./test.py -f import_merge/importBlock2HPO.py
python3 ./test.py -f import_merge/importBlock2H.py
python3 ./test.py -f import_merge/importBlock2S.py
python3 ./test.py -f import_merge/importBlock2Sub.py
python3 ./test.py -f import_merge/importBlock2TO.py
python3 ./test.py -f import_merge/importBlock2TPO.py
python3 ./test.py -f import_merge/importBlock2T.py
python3 ./test.py -f import_merge/importBlockbetween.py
python3 ./test.py -f import_merge/importCacheFileHO.py
python3 ./test.py -f import_merge/importCacheFileHPO.py
python3 ./test.py -f import_merge/importCacheFileH.py
python3 ./test.py -f import_merge/importCacheFileS.py
python3 ./test.py -f import_merge/importCacheFileSub.py
python3 ./test.py -f import_merge/importCacheFileTO.py
python3 ./test.py -f import_merge/importCacheFileTPO.py
python3 ./test.py -f import_merge/importCacheFileT.py
python3 ./test.py -f import_merge/importDataH2.py
python3 ./test.py -f import_merge/importDataHO2.py
python3 ./test.py -f import_merge/importDataHO.py
python3 ./test.py -f import_merge/importDataHPO.py
python3 ./test.py -f import_merge/importDataLastHO.py
python3 ./test.py -f import_merge/importDataLastHPO.py
python3 ./test.py -f import_merge/importDataLastH.py
python3 ./test.py -f import_merge/importDataLastS.py
python3 ./test.py -f import_merge/importDataLastSub.py
python3 ./test.py -f import_merge/importDataLastTO.py
python3 ./test.py -f import_merge/importDataLastTPO.py
python3 ./test.py -f import_merge/importDataLastT.py
python3 ./test.py -f import_merge/importDataS.py
python3 ./test.py -f import_merge/importDataSub.py
python3 ./test.py -f import_merge/importDataTO.py
python3 ./test.py -f import_merge/importDataTPO.py
python3 ./test.py -f import_merge/importDataT.py
python3 ./test.py -f import_merge/importHeadOverlap.py
python3 ./test.py -f import_merge/importHeadPartOverlap.py
python3 ./test.py -f import_merge/importHead.py
python3 ./test.py -f import_merge/importHORestart.py
python3 ./test.py -f import_merge/importHPORestart.py
python3 ./test.py -f import_merge/importHRestart.py
python3 ./test.py -f import_merge/importLastHO.py
python3 ./test.py -f import_merge/importLastHPO.py
python3 ./test.py -f import_merge/importLastH.py
python3 ./test.py -f import_merge/importLastS.py
python3 ./test.py -f import_merge/importLastSub.py
python3 ./test.py -f import_merge/importLastTO.py
python3 ./test.py -f import_merge/importLastTPO.py
python3 ./test.py -f import_merge/importLastT.py
python3 ./test.py -f import_merge/importSpan.py
python3 ./test.py -f import_merge/importSRestart.py
python3 ./test.py -f import_merge/importSubRestart.py
python3 ./test.py -f import_merge/importTailOverlap.py
python3 ./test.py -f import_merge/importTailPartOverlap.py
python3 ./test.py -f import_merge/importTail.py
python3 ./test.py -f import_merge/importToCommit.py
python3 ./test.py -f import_merge/importTORestart.py
python3 ./test.py -f import_merge/importTPORestart.py
python3 ./test.py -f import_merge/importTRestart.py
python3 ./test.py -f import_merge/importInsertThenImport.py
python3 ./test.py -f import_merge/importCSV.py
python3 ./test.py -f import_merge/import_update_0.py
python3 ./test.py -f import_merge/import_update_1.py
python3 ./test.py -f import_merge/import_update_2.py
python3 ./test.py -f update/merge_commit_data.py
#======================p1-end===============
#======================p2-start===============
# tools
python3 test.py -f tools/taosdumpTest.py
python3 test.py -f tools/taosdumpTest2.py

python3 test.py -f tools/taosdemoTest.py
python3 test.py -f tools/taosdemoTestWithoutMetric.py
python3 test.py -f tools/taosdemoTestWithJson.py
python3 test.py -f tools/taosdemoTestLimitOffset.py
python3 test.py -f tools/taosdemoTestTblAlt.py
python3 test.py -f tools/taosdemoTestSampleData.py
python3 test.py -f tools/taosdemoTestInterlace.py
python3 test.py -f tools/taosdemoTestQuery.py

# restful test for python
python3 test.py -f restful/restful_bind_db1.py
python3 test.py -f restful/restful_bind_db2.py

# nano support
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoInsert.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanosubscribe.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestInsertTime_step.py
python3 test.py -f tools/taosdumpTestNanoSupport.py

# tsdb
python3 ./test.py -f tsdb/tsdbComp.py

# update
python3 ./test.py -f update/allow_update.py
python3 ./test.py -f update/allow_update-0.py
python3 ./test.py -f update/append_commit_data.py
python3 ./test.py -f update/append_commit_last-0.py
python3 ./test.py -f update/append_commit_last.py

python3 ./test.py -f update/merge_commit_data2.py
python3 ./test.py -f update/merge_commit_data2_update0.py
python3 ./test.py -f update/merge_commit_last-0.py
python3 ./test.py -f update/merge_commit_last.py
python3 ./test.py -f update/bug_td2279.py

#======================p2-end===============
#======================p3-start===============

# user
python3 ./test.py -f user/user_create.py
python3 ./test.py -f user/pass_len.py

# stable
python3 ./test.py -f stable/query_after_reset.py

# perfbenchmark
python3 ./test.py -f perfbenchmark/bug3433.py
#python3 ./test.py -f perfbenchmark/bug3589.py
python3 ./test.py -f perfbenchmark/taosdemoInsert.py

#taosdemo
python3 test.py -f tools/taosdemoAllTest/taosdemoTestInsertWithJson.py
python3 test.py -f tools/taosdemoAllTest/taosdemoTestQueryWithJson.py

#query
python3 test.py -f query/distinctOneColTb.py
python3 ./test.py -f query/filter.py
python3 ./test.py -f query/filterCombo.py
python3 ./test.py -f query/queryNormal.py
python3 ./test.py -f query/queryError.py
python3 ./test.py -f query/filterAllIntTypes.py
python3 ./test.py -f query/filterFloatAndDouble.py
python3 ./test.py -f query/filterOtherTypes.py
python3 ./test.py -f query/querySort.py
python3 ./test.py -f query/queryJoin.py
python3 ./test.py -f query/select_last_crash.py
python3 ./test.py -f query/queryNullValueTest.py
python3 ./test.py -f query/queryInsertValue.py
python3 ./test.py -f query/queryConnection.py
python3 ./test.py -f query/queryCountCSVData.py
python3 ./test.py -f query/natualInterval.py
python3 ./test.py -f query/bug1471.py
#python3 ./test.py -f query/dataLossTest.py
python3 ./test.py -f query/bug1874.py
python3 ./test.py -f query/bug1875.py
python3 ./test.py -f query/bug1876.py
python3 ./test.py -f query/bug2218.py
python3 ./test.py -f query/bug2117.py
python3 ./test.py -f query/bug2118.py
python3 ./test.py -f query/bug2143.py
python3 ./test.py -f query/sliding.py
python3 ./test.py -f query/unionAllTest.py
python3 ./test.py -f query/bug2281.py
python3 ./test.py -f query/bug2119.py
python3 ./test.py -f query/isNullTest.py
python3 ./test.py -f query/queryWithTaosdKilled.py
python3 ./test.py -f query/floatCompare.py
python3 ./test.py -f query/query1970YearsAf.py
python3 ./test.py -f query/bug3351.py
python3 ./test.py -f query/bug3375.py
python3 ./test.py -f query/queryJoin10tables.py
python3 ./test.py -f query/queryStddevWithGroupby.py
python3 ./test.py -f query/querySecondtscolumnTowherenow.py
python3 ./test.py -f query/queryFilterTswithDateUnit.py
python3 ./test.py -f query/queryTscomputWithNow.py
python3 ./test.py -f query/queryStableJoin.py
python3 ./test.py -f query/computeErrorinWhere.py
python3 ./test.py -f query/queryTsisNull.py
python3 ./test.py -f query/subqueryFilter.py
python3 ./test.py -f query/nestedQuery/queryInterval.py
python3 ./test.py -f query/queryStateWindow.py
# python3 ./test.py -f query/nestedQuery/queryWithOrderLimit.py
python3 ./test.py -f query/nestquery_last_row.py
python3 ./test.py -f query/queryCnameDisplay.py
# python3 ./test.py -f query/operator_cost.py
# python3 ./test.py -f query/long_where_query.py
python3 test.py -f query/nestedQuery/queryWithSpread.py

#stream
python3 ./test.py -f stream/metric_1.py
python3 ./test.py -f stream/metric_n.py
python3 ./test.py -f stream/new.py
python3 ./test.py -f stream/stream1.py
python3 ./test.py -f stream/stream2.py
#python3 ./test.py -f stream/parser.py
python3 ./test.py -f stream/history.py
python3 ./test.py -f stream/sys.py
python3 ./test.py -f stream/table_1.py
python3 ./test.py -f stream/table_n.py
python3 ./test.py -f stream/showStreamExecTimeisNull.py
python3 ./test.py -f stream/cqSupportBefore1970.py

#alter table
python3 ./test.py -f alter/alter_table_crash.py
python3 ./test.py -f alter/alterTabAddTagWithNULL.py
python3 ./test.py -f alter/alterTimestampColDataProcess.py

# client
python3 ./test.py -f client/client.py
python3 ./test.py -f client/version.py
python3 ./test.py -f client/alterDatabase.py
python3 ./test.py -f client/noConnectionErrorTest.py
# python3 test.py -f client/change_time_1_1.py
# python3 test.py -f client/change_time_1_2.py

# Misc
python3 testCompress.py
python3 testNoCompress.py
python3 testMinTablesPerVnode.py
python3 queryCount.py
python3 ./test.py -f query/queryGroupbyWithInterval.py
python3 client/twoClients.py
python3 test.py -f query/queryInterval.py
python3 test.py -f query/queryFillTest.py
# subscribe
python3 test.py -f subscribe/singlemeter.py
#python3 test.py -f subscribe/stability.py
python3 test.py -f subscribe/supertable.py
# topic
python3 ./test.py -f topic/topicQuery.py
#======================p3-end===============
#======================p4-start===============

python3 ./test.py -f update/merge_commit_data-0.py
# wal
python3 ./test.py -f wal/addOldWalTest.py
python3 ./test.py -f wal/sdbComp.py

# functions
python3 ./test.py -f functions/all_null_value.py
python3 ./test.py -f functions/function_avg.py -r 1
python3 ./test.py -f functions/function_bottom.py -r 1
python3 ./test.py -f functions/function_count.py -r 1
python3 ./test.py -f functions/function_count_last_stab.py
python3 ./test.py -f functions/function_diff.py -r 1
python3 ./test.py -f functions/function_first.py -r 1
python3 ./test.py -f functions/function_last.py -r 1
python3 ./test.py -f functions/function_last_row.py -r 1
python3 ./test.py -f functions/function_leastsquares.py -r 1
python3 ./test.py -f functions/function_max.py -r 1
python3 ./test.py -f functions/function_min.py -r 1
python3 ./test.py -f functions/function_operations.py -r 1
python3 ./test.py -f functions/function_percentile.py -r 1
python3 ./test.py -f functions/function_spread.py -r 1
python3 ./test.py -f functions/function_stddev.py -r 1
python3 ./test.py -f functions/function_sum.py -r 1
python3 ./test.py -f functions/function_top.py -r 1
python3 ./test.py -f functions/function_twa.py -r 1
python3 ./test.py -f functions/function_twa_test2.py
python3 ./test.py -f functions/function_stddev_td2555.py
python3 ./test.py -f functions/showOfflineThresholdIs864000.py
python3 ./test.py -f functions/function_interp.py
python3 ./test.py -f insert/metadataUpdate.py
python3 ./test.py -f query/last_cache.py
python3 ./test.py -f query/last_row_cache.py
python3 ./test.py -f account/account_create.py
python3 ./test.py -f alter/alter_table.py
python3 ./test.py -f query/queryGroupbySort.py
python3 ./test.py -f functions/queryTestCases.py
python3 ./test.py -f functions/function_stateWindow.py
python3 ./test.py -f functions/function_derivative.py
python3 ./test.py -f functions/function_irate.py

python3 ./test.py -f insert/unsignedInt.py
python3 ./test.py -f insert/unsignedBigint.py
python3 ./test.py -f insert/unsignedSmallint.py
python3 ./test.py -f insert/unsignedTinyint.py
python3 ./test.py -f insert/insertFromCSV.py
python3 ./test.py -f query/filterAllUnsignedIntTypes.py

python3 ./test.py -f tag_lite/unsignedInt.py
python3 ./test.py -f tag_lite/unsignedBigint.py
python3 ./test.py -f tag_lite/unsignedSmallint.py
python3 ./test.py -f tag_lite/unsignedTinyint.py

python3 ./test.py -f functions/function_percentile2.py
python3 ./test.py -f insert/boundary2.py
python3 ./test.py -f insert/insert_locking.py
python3 ./test.py -f alter/alter_debugFlag.py
python3 ./test.py -f query/queryBetweenAnd.py
python3 ./test.py -f tag_lite/alter_tag.py
python3 test.py -f tools/taosdemoAllTest/TD-4985/query-limit-offset.py
python3 test.py -f tools/taosdemoAllTest/TD-5213/insert4096columns_not_use_taosdemo.py
python3 test.py -f tools/taosdemoAllTest/TD-5213/insertSigcolumnsNum4096.py
python3 ./test.py -f tag_lite/drop_auto_create.py
python3 test.py -f insert/insert_before_use_db.py
python3 test.py -f alter/alter_keep.py
python3 test.py -f alter/alter_cacheLastRow.py
python3 ./test.py -f query/querySession.py
python3 test.py -f alter/alter_create_exception.py
python3 ./test.py -f insert/flushwhiledrop.py
python3 ./test.py -f insert/schemalessInsert.py
python3 ./test.py -f alter/alterColMultiTimes.py
python3 ./test.py -f query/queryWildcardLength.py
python3 ./test.py -f query/queryTbnameUpperLower.py
python3 ./test.py -f query/query.py
python3 ./test.py -f query/queryDiffColsOr.py
#======================p4-end===============
@ -0,0 +1,90 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.prepare()

        tdSql.execute("create table st(ts timestamp, c1 int, c2 int)")
        for i in range(self.rowNum):
            tdSql.execute("insert into st values(%d, null, null)" % (self.ts + i))

        tdSql.query("select avg(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select max(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select min(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select bottom(c1, 1) from st")
        tdSql.checkRows(0)

        tdSql.query("select top(c1, 1) from st")
        tdSql.checkRows(0)

        tdSql.query("select diff(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select first(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select last(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select last_row(c1) from st")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, None)

        tdSql.query("select count(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select leastsquares(c1, 1, 1) from st")
        tdSql.checkRows(0)

        tdSql.query("select c1 + c2 from st")
        tdSql.checkRows(10)

        tdSql.query("select spread(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select stddev(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select sum(c1) from st")
        tdSql.checkRows(0)

        tdSql.query("select twa(c1) from st")
        tdSql.checkRows(0)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,71 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.prepare()

        tdSql.execute('''create table test(ts timestamp, col1 int, col2 int) tags(loc nchar(20))''')
        tdSql.execute("create table test1 using test tags('beijing')")
        tdSql.execute("create table test2 using test tags('shanghai')")
        for i in range(self.rowNum):
            tdSql.execute("insert into test1 values(%d, %d, %d)" % (self.ts + i, i + 1, i + 1))
            tdSql.execute("insert into test2 values(%d, %d, %d)" % (self.ts + i, i + 1, i + 1))

        # arithmetic verification
        tdSql.query("select 0.1 + 0.1 from test")
        tdSql.checkRows(self.rowNum * 2)
        for i in range(self.rowNum * 2):
            tdSql.checkData(i, 0, 0.20000000)

        tdSql.query("select 4 * avg(col1) from test")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 22)

        tdSql.query("select 4 * sum(col1) from test")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 440)

        tdSql.query("select 4 * avg(col1) * sum(col2) from test")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 2420)

        tdSql.query("select 4 * avg(col1) * sum(col2) from test group by loc")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, 1210)
        tdSql.checkData(1, 0, 1210)

        tdSql.error("select avg(col1 * 2) from test group by loc")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,81 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.prepare()

        intData = []
        floatData = []

        tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
                    col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        tdSql.execute("create table test1 using test tags('beijing')")
        for i in range(self.rowNum):
            tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                          % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
            intData.append(i + 1)
            floatData.append(i + 0.1)

        # average verification
        tdSql.error("select avg(ts) from test")
        tdSql.error("select avg(ts) from test1")
        tdSql.error("select avg(col7) from test")
        tdSql.error("select avg(col7) from test1")
        tdSql.error("select avg(col8) from test")
        tdSql.error("select avg(col8) from test1")
        tdSql.error("select avg(col9) from test")
        tdSql.error("select avg(col9) from test1")

        tdSql.query("select avg(col1) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col2) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col3) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col4) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col5) from test")
        tdSql.checkData(0, 0, np.average(floatData))
        tdSql.query("select avg(col6) from test")
        tdSql.checkData(0, 0, np.average(floatData))
        tdSql.query("select avg(col11) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col12) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col13) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col14) from test")
        tdSql.checkData(0, 0, np.average(intData))

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,81 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.execute("use db")

        intData = []
        floatData = []

        #tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
        #            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        #tdSql.execute("create table test1 using test tags('beijing')")
        for i in range(self.rowNum):
            #tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
            #              % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
            intData.append(i + 1)
            floatData.append(i + 0.1)

        # average verification
        tdSql.error("select avg(ts) from test")
        tdSql.error("select avg(ts) from test1")
        tdSql.error("select avg(col7) from test")
        tdSql.error("select avg(col7) from test1")
        tdSql.error("select avg(col8) from test")
        tdSql.error("select avg(col8) from test1")
        tdSql.error("select avg(col9) from test")
        tdSql.error("select avg(col9) from test1")

        tdSql.query("select avg(col1) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col2) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col3) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col4) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col5) from test")
        tdSql.checkData(0, 0, np.average(floatData))
        tdSql.query("select avg(col6) from test")
        tdSql.checkData(0, 0, np.average(floatData))
        tdSql.query("select avg(col11) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col12) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col13) from test")
        tdSql.checkData(0, 0, np.average(intData))
        tdSql.query("select avg(col14) from test")
        tdSql.checkData(0, 0, np.average(intData))

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,144 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.prepare()

        tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
                    col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        tdSql.execute("create table test1 using test tags('beijing')")
        for i in range(self.rowNum):
            tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                          % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        # bottom verification
        tdSql.error("select bottom(ts, 10) from test")
        tdSql.error("select bottom(col1, 0) from test")
        tdSql.error("select bottom(col1, 101) from test")
        tdSql.error("select bottom(col2, 0) from test")
        tdSql.error("select bottom(col2, 101) from test")
        tdSql.error("select bottom(col3, 0) from test")
        tdSql.error("select bottom(col3, 101) from test")
        tdSql.error("select bottom(col4, 0) from test")
        tdSql.error("select bottom(col4, 101) from test")
        tdSql.error("select bottom(col5, 0) from test")
        tdSql.error("select bottom(col5, 101) from test")
        tdSql.error("select bottom(col6, 0) from test")
        tdSql.error("select bottom(col6, 101) from test")
        tdSql.error("select bottom(col7, 10) from test")
        tdSql.error("select bottom(col8, 10) from test")
        tdSql.error("select bottom(col9, 10) from test")

        tdSql.query("select bottom(col1, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col2, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col3, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col4, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col5, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 0.1)
        tdSql.checkData(1, 1, 1.1)

        tdSql.query("select bottom(col6, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 0.1)
        tdSql.checkData(1, 1, 1.1)

        tdSql.query("select bottom(col11, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col12, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col13, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col14, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select ts,bottom(col1, 2),ts from test1")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
        tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")

        tdSql.query("select ts,bottom(col1, 2),ts from test group by tbname")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
        tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")

        # TD-2457 bottom + interval + order by
        tdSql.error('select top(col2,1) from test interval(1y) order by col2;')

        # TD-2563 top + super_table + interval
        tdSql.execute("create table meters(ts timestamp, c int) tags (d int)")
        tdSql.execute("create table t1 using meters tags (1)")
        sql = 'insert into t1 values '
        for i in range(20000):
            sql = sql + '(%d, %d)' % (self.ts + i, i % 47)
            if i % 2000 == 0:
                tdSql.execute(sql)
                sql = 'insert into t1 values '
        tdSql.execute(sql)
        tdSql.query('select bottom(c,1) from meters interval(10a)')
        tdSql.checkData(0, 1, 0)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,113 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.execute("use db")

        #tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
        #            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        #tdSql.execute("create table test1 using test tags('beijing')")
        #for i in range(self.rowNum):
        #    tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
        #                  % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        # bottom verification
        tdSql.error("select bottom(ts, 10) from test")
        tdSql.error("select bottom(col1, 0) from test")
        tdSql.error("select bottom(col1, 101) from test")
        tdSql.error("select bottom(col2, 0) from test")
        tdSql.error("select bottom(col2, 101) from test")
        tdSql.error("select bottom(col3, 0) from test")
        tdSql.error("select bottom(col3, 101) from test")
        tdSql.error("select bottom(col4, 0) from test")
        tdSql.error("select bottom(col4, 101) from test")
        tdSql.error("select bottom(col5, 0) from test")
        tdSql.error("select bottom(col5, 101) from test")
        tdSql.error("select bottom(col6, 0) from test")
        tdSql.error("select bottom(col6, 101) from test")
        tdSql.error("select bottom(col7, 10) from test")
        tdSql.error("select bottom(col8, 10) from test")
        tdSql.error("select bottom(col9, 10) from test")

        tdSql.query("select bottom(col1, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col2, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col3, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col4, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col11, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col12, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col13, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col14, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select bottom(col5, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 0.1)
        tdSql.checkData(1, 1, 1.1)

        tdSql.query("select bottom(col6, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 0.1)
        tdSql.checkData(1, 1, 1.1)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,88 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.prepare()

        tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
                    col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        tdSql.execute("create table test1 using test tags('beijing')")
        for i in range(self.rowNum):
            tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                          % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        # count verification
        tdSql.query("select count(*) from test")
        tdSql.checkData(0, 0, 10)

        tdSql.query("select count(ts) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col1) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col2) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col3) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col4) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col5) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col6) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col7) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col8) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col9) from test")
        tdSql.checkData(0, 0, 10)

        tdSql.query("select count(col11) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col12) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col13) from test")
        tdSql.checkData(0, 0, 10)
        tdSql.query("select count(col14) from test")
        tdSql.checkData(0, 0, 10)

        tdSql.execute("alter table test add column col10 int")
        tdSql.query("select count(col10) from test")
        tdSql.checkRows(0)

        tdSql.execute("insert into test1 values(now, 1, 2, 3, 4, 1.1, 2.2, false, 'test', 'test', 1, 1, 1, 1, 1)")
        tdSql.query("select count(col10) from test")
        tdSql.checkData(0, 0, 1)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,70 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.prepare()

        tdSql.execute('''create stable stest(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
                    col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        tdSql.execute("create table test1 using stest tags('beijing')")
        tdSql.execute("insert into test1(ts) values(%d)" % (self.ts - 1))

        # last verification
        for i in range(self.rowNum):
            tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                          % (self.ts + i, i + 1, 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        tdSql.query("select count(*),last(*) from stest group by col1")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, 1)
        tdSql.checkData(1, 2, 2)
        tdSql.checkData(1, 3, 1)

        tdSql.query("select count(*),last(*) from stest group by col2")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 10)
        tdSql.checkData(0, 2, 10)
        tdSql.checkData(0, 3, 1)

        tdSql.query("select count(*),last(ts,stest.*) from stest group by col1")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, 1)
        tdSql.checkData(0, 2, "2018-09-17 09:00:00")
        tdSql.checkData(1, 4, 1)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -0,0 +1,88 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.execute("use db")

        #tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
        #            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        #tdSql.execute("create table test1 using test tags('beijing')")
        #for i in range(self.rowNum):
        #    tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
        #                  % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        # count verification
        tdSql.query("select count(*) from test")
        tdSql.checkData(0, 0, 11)

        tdSql.query("select count(ts) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col1) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col2) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col3) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col4) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col5) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col6) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col7) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col8) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col9) from test")
        tdSql.checkData(0, 0, 11)

        tdSql.query("select count(col11) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col12) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col13) from test")
        tdSql.checkData(0, 0, 11)
        tdSql.query("select count(col14) from test")
        tdSql.checkData(0, 0, 11)

        #tdSql.execute("alter table test add column col10 int")
        #tdSql.query("select count(col10) from test")
        #tdSql.checkRows(0)

        ##tdSql.execute("insert into test1 values(now, 1, 2, 3, 4, 1.1, 2.2, false, 'test', 'test' 1)")
        tdSql.query("select count(col10) from test")
        tdSql.checkData(0, 0, 1)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,168 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def insertAndCheckData(self):
        types = ["tinyint", "tinyint unsigned", "smallint", "smallint unsigned", "int", "int unsigned",
                 "bigint", "bigint unsigned", "float", "double", "bool", "binary(20)", "nchar(20)"]

        for type in types:
            print("============== create table using %s type ================" % type)
            tdSql.execute("drop table if exists stb")
            tdSql.execute("create table stb(ts timestamp, col %s) tags (id int)" % type)
            tdSql.execute("create table tb1 using stb tags(1)")
            tdSql.execute("create table tb2 using stb tags(2)")

            if type in ("tinyint", "smallint", "int", "bigint"):
                tdSql.execute("insert into tb1 values(%d, 1)(%d, 11)(%d, 21)" % (self.ts, self.ts + 10000, self.ts + 20000))
                tdSql.execute("insert into tb1 values(%d, -1)(%d, -11)(%d, -21)" % (self.ts + 30000, self.ts + 40000, self.ts + 50000))
                tdSql.execute("insert into tb2 values(%d, 10)(%d, 20)(%d, 30)" % (self.ts + 60000, self.ts + 70000, self.ts + 80000))
                tdSql.execute("insert into tb2 values(%d, -10)(%d, -20)(%d, -30)" % (self.ts + 90000, self.ts + 1000000, self.ts + 1100000))

                tdSql.execute("insert into tb3 using stb tags(3) values(%d, 10)" % (self.ts + 1200000))

                tdSql.query("select derivative(col, 1s, 1) from stb group by tbname")
                tdSql.checkRows(4)

                tdSql.query("select derivative(col, 10s, 1) from stb group by tbname")
                tdSql.checkRows(4)

                tdSql.query("select derivative(col, 10s, 0) from stb group by tbname")
                tdSql.checkRows(10)

                tdSql.query("select ts,derivative(col, 10s, 1),ts from stb group by tbname")
                tdSql.checkRows(4)
                tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
                tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
                tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
                tdSql.checkData(3, 0, "2018-09-17 09:01:20.000")
                tdSql.checkData(3, 1, "2018-09-17 09:01:20.000")
                tdSql.checkData(3, 3, "2018-09-17 09:01:20.000")

                tdSql.query("select ts,derivative(col, 10s, 1),ts from tb1")
                tdSql.checkRows(2)
                tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
                tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
                tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
                tdSql.checkData(1, 0, "2018-09-17 09:00:20.009")
                tdSql.checkData(1, 1, "2018-09-17 09:00:20.009")
                tdSql.checkData(1, 3, "2018-09-17 09:00:20.009")

                tdSql.query("select ts from(select ts,derivative(col, 10s, 0) from stb group by tbname)")
                tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")

                tdSql.error("select derivative(col, 10s, 0) from tb1 group by tbname")

                tdSql.query("select derivative(col, 10s, 1) from tb1")
                tdSql.checkRows(2)

                tdSql.query("select derivative(col, 10s, 0) from tb1")
                tdSql.checkRows(5)

                tdSql.query("select derivative(col, 10s, 1) from tb2")
                tdSql.checkRows(2)

                tdSql.query("select derivative(col, 10s, 0) from tb2")
                tdSql.checkRows(5)

                tdSql.query("select derivative(col, 10s, 0) from tb3")
                tdSql.checkRows(0)

            elif type in ("tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"):
                tdSql.execute("insert into tb1 values(%d, 1)(%d, 11)(%d, 21)" % (self.ts, self.ts + 10000, self.ts + 20000))
                tdSql.execute("insert into tb2 values(%d, 10)(%d, 20)(%d, 30)" % (self.ts + 60000, self.ts + 70000, self.ts + 80000))

                tdSql.error("select derivative(col, 1s, 1) from tb1")
                tdSql.error("select derivative(col, 10s, 0) from tb1")
                tdSql.error("select derivative(col, 999ms, 0) from tb1")
                tdSql.error("select derivative(col, 1s, 1) from tb2")
                tdSql.error("select derivative(col, 10s, 0) from tb2")
                tdSql.error("select derivative(col, 999ms, 0) from tb2")

            elif type in ("float", "double"):
                tdSql.execute("insert into tb1 values(%d, 1.0)(%d, 11.0)(%d, 21.0)" % (self.ts, self.ts + 10000, self.ts + 20000))
                tdSql.execute("insert into tb2 values(%d, 3.0)(%d, 4.0)(%d, 5.0)" % (self.ts + 60000, self.ts + 70000, self.ts + 80000))

                tdSql.query("select derivative(col, 10s, 1) from tb1")
                tdSql.checkRows(2)

                tdSql.query("select derivative(col, 10s, 0) from tb1")
                tdSql.checkRows(2)

                tdSql.query("select derivative(col, 10s, 1) from tb2")
                tdSql.checkRows(2)

                tdSql.query("select derivative(col, 10s, 0) from tb2")
                tdSql.checkRows(2)

            elif type == "bool":
                tdSql.execute("insert into tb1 values(%d, true)(%d, false)(%d, true)" % (self.ts, self.ts + 10000, self.ts + 20000))
                tdSql.execute("insert into tb2 values(%d, false)(%d, true)(%d, true)" % (self.ts + 60000, self.ts + 70000, self.ts + 80000))

                tdSql.error("select derivative(col, 1s, 1) from tb1")
                tdSql.error("select derivative(col, 10s, 0) from tb1")
                tdSql.error("select derivative(col, 999ms, 0) from tb1")
                tdSql.error("select derivative(col, 1s, 1) from tb2")
                tdSql.error("select derivative(col, 10s, 0) from tb2")
                tdSql.error("select derivative(col, 999ms, 0) from tb2")

            else:
                tdSql.execute("insert into tb1 values(%d, 'test01')(%d, 'test01')(%d, 'test01')" % (self.ts, self.ts + 10000, self.ts + 20000))
                tdSql.execute("insert into tb2 values(%d, 'test01')(%d, 'test01')(%d, 'test01')" % (self.ts + 60000, self.ts + 70000, self.ts + 80000))

                tdSql.error("select derivative(col, 1s, 1) from tb1")
                tdSql.error("select derivative(col, 10s, 0) from tb1")
                tdSql.error("select derivative(col, 999ms, 0) from tb1")
                tdSql.error("select derivative(col, 1s, 1) from tb2")
                tdSql.error("select derivative(col, 10s, 0) from tb2")
                tdSql.error("select derivative(col, 999ms, 0) from tb2")

            tdSql.error("select derivative(col, 10s, 1) from stb")
            tdSql.error("select derivative(col, 10s, 1) from stb group by col")
            tdSql.error("select derivative(col, 10s, 1) from stb group by id")
            tdSql.error("select derivative(col, 999ms, 1) from stb group by id")
            tdSql.error("select derivative(col, 10s, 2) from stb group by id")

    def run(self):
        tdSql.prepare()
        self.insertAndCheckData()

        tdSql.execute("create table st(ts timestamp, c1 int, c2 int) tags(id int)")
        tdSql.execute("insert into dev1(ts, c1) using st tags(1) values(now, 1)")

        tdSql.error("select derivative(c1, 10s, 0) from (select c1 from st)")
        tdSql.query("select diff(c1) from (select derivative(c1, 1s, 0) c1 from dev1)")
        tdSql.checkRows(0)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
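The file above exercises TDengine's `derivative(col, time_interval, ignore_negative)` function. As a rough reference for what the row-count assertions expect, here is a hypothetical Python sketch of the per-table computation; the behavior is inferred from the test cases themselves, not from TDengine source, so treat it as an approximation:

```python
def derivative(rows, interval_ms, ignore_negative):
    """Approximate per-table semantics of derivative(col, interval, ignore_negative).

    rows: list of (ts_ms, value) pairs ordered by timestamp.
    Returns (ts_ms, rate) pairs, where rate is the value change normalized
    to one interval_ms. If ignore_negative is truthy, negative rates are dropped.
    """
    out = []
    for (t0, v0), (t1, v1) in zip(rows, rows[1:]):
        rate = (v1 - v0) / (t1 - t0) * interval_ms
        if ignore_negative and rate < 0:
            continue  # third argument 1 filters out decreasing segments
        out.append((t1, rate))
    return out


# tb1 in the signed-integer branch inserts 1, 11, 21 and then -1, -11, -21
# at 10-second steps starting from self.ts:
ts = 1537146000000
rows = [(ts, 1), (ts + 10000, 11), (ts + 20000, 21),
        (ts + 30000, -1), (ts + 40000, -11), (ts + 50000, -21)]
print(len(derivative(rows, 10000, 1)))  # 2, matching tdSql.checkRows(2) for tb1
print(len(derivative(rows, 10000, 0)))  # 5, matching tdSql.checkRows(5) for tb1
```

Under this reading, the `group by tbname` checks also line up: with `ignore_negative = 1`, tb1 and tb2 each keep 2 of their 5 segments and single-row tb3 contributes none, giving the asserted 4 rows; with `ignore_negative = 0` the total is 10.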
@@ -0,0 +1,165 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000
        self.prefix = 'dev'
        self.tables = 10

    def insertData(self):
        print("==============step1")
        tdSql.execute(
            "create table if not exists st (ts timestamp, col int) tags(dev nchar(50))")

        for i in range(self.tables):
            tdSql.execute("create table %s%d using st tags(%d)" % (self.prefix, i, i))
            rows = 15 + i
            for j in range(rows):
                tdSql.execute("insert into %s%d values(%d, %d)" % (self.prefix, i, self.ts + i * 20 * 10000 + j * 10000, j))

    def run(self):
        tdSql.prepare()

        tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        tdSql.execute("create table test1 using test tags('beijing')")
        tdSql.execute("insert into test1 values(%d, 0, 0, 0, 0, 0.0, 0.0, False, ' ', ' ', 0, 0, 0, 0)" % (self.ts - 1))

        # diff verification
        tdSql.query("select diff(col1) from test1")
        tdSql.checkRows(0)

        tdSql.query("select diff(col2) from test1")
        tdSql.checkRows(0)

        tdSql.query("select diff(col3) from test1")
        tdSql.checkRows(0)

        tdSql.query("select diff(col4) from test1")
        tdSql.checkRows(0)

        tdSql.query("select diff(col5) from test1")
        tdSql.checkRows(0)

        tdSql.query("select diff(col6) from test1")
        tdSql.checkRows(0)

        for i in range(self.rowNum):
            tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                          % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        tdSql.error("select diff(ts) from test")
        tdSql.error("select diff(ts) from test1")
        tdSql.error("select diff(col1) from test")
        tdSql.error("select diff(col2) from test")
        tdSql.error("select diff(col3) from test")
        tdSql.error("select diff(col4) from test")
        tdSql.error("select diff(col5) from test")
        tdSql.error("select diff(col6) from test")
        tdSql.error("select diff(col7) from test")
        tdSql.error("select diff(col7) from test1")
        tdSql.error("select diff(col8) from test")
        tdSql.error("select diff(col8) from test1")
        tdSql.error("select diff(col9) from test")
        tdSql.error("select diff(col9) from test1")
        tdSql.error("select diff(col11) from test1")
        tdSql.error("select diff(col12) from test1")
        tdSql.error("select diff(col13) from test1")
        tdSql.error("select diff(col14) from test1")
        tdSql.error("select diff(col11) from test")
        tdSql.error("select diff(col12) from test")
        tdSql.error("select diff(col13) from test")
        tdSql.error("select diff(col14) from test")

        tdSql.query("select ts,diff(col1),ts from test1")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
        tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")

        tdSql.query("select ts,diff(col1),ts from test group by tbname")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
        tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")

        tdSql.query("select ts,diff(col1),ts from test1")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
        tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")

        tdSql.query("select ts,diff(col1),ts from test group by tbname")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
        tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")

        tdSql.query("select diff(col1) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col2) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col3) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col4) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col5) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col6) from test1")
        tdSql.checkRows(10)

        self.insertData()

        tdSql.query("select diff(col) from st group by tbname")
        tdSql.checkRows(185)

        tdSql.error("select diff(col) from st group by dev")

        tdSql.error("select diff(col) from st group by col")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,112 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):
        tdSql.execute("use db")

        #tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
        #    col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        #tdSql.execute("create table test1 using test tags('beijing')")
        #tdSql.execute("insert into test1 values(%d, 0, 0, 0, 0, 0.0, 0.0, False, ' ', ' ')" % (self.ts - 1))

        # diff verification
        #tdSql.query("select diff(col1) from test1")
        #tdSql.checkRows(0)
        #
        #tdSql.query("select diff(col2) from test1")
        #tdSql.checkRows(0)

        #tdSql.query("select diff(col3) from test1")
        #tdSql.checkRows(0)

        #tdSql.query("select diff(col4) from test1")
        #tdSql.checkRows(0)

        #tdSql.query("select diff(col5) from test1")
        #tdSql.checkRows(0)

        #tdSql.query("select diff(col6) from test1")
        #tdSql.checkRows(0)

        #for i in range(self.rowNum):
        #    tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
        #                  % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        tdSql.error("select diff(ts) from test")
        tdSql.error("select diff(ts) from test1")
        tdSql.error("select diff(col1) from test")
        tdSql.error("select diff(col2) from test")
        tdSql.error("select diff(col3) from test")
        tdSql.error("select diff(col4) from test")
        tdSql.error("select diff(col5) from test")
        tdSql.error("select diff(col6) from test")
        tdSql.error("select diff(col7) from test")
        tdSql.error("select diff(col7) from test1")
        tdSql.error("select diff(col8) from test")
        tdSql.error("select diff(col8) from test1")
        tdSql.error("select diff(col9) from test")
        tdSql.error("select diff(col11) from test1")
        tdSql.error("select diff(col12) from test1")
        tdSql.error("select diff(col13) from test1")
        tdSql.error("select diff(col14) from test1")
        tdSql.error("select diff(col11) from test")
        tdSql.error("select diff(col12) from test")
        tdSql.error("select diff(col13) from test")
        tdSql.error("select diff(col14) from test")

        tdSql.query("select diff(col1) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col2) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col3) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col4) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col5) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col6) from test1")
        tdSql.checkRows(10)

        tdSql.query("select diff(col) from st group by tbname")
        tdSql.checkRows(185)

        tdSql.error("select diff(col) from st group by dev")
        tdSql.error("select diff(col) from st group by col")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
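The `diff()` assertions in these files follow first-order-difference semantics: n input rows yield n - 1 output rows. A short sketch, inferred from the row counts the tests assert rather than from TDengine internals (the helper name `first_diff` is ours):

```python
def first_diff(values):
    """Approximate semantics of TDengine's diff(): the difference between
    each value and its predecessor, so n inputs produce n - 1 outputs."""
    return [b - a for a, b in zip(values, values[1:])]


# test1 holds 11 rows (one seed row of 0, then values 1..10), so every
# "select diff(colN) from test1" above checks 10 rows.
vals = [0] + [i + 1 for i in range(10)]
print(len(first_diff(vals)))  # 10

# insertData() creates 10 tables with 15 + i rows each; diff over each table
# yields (15 + i) - 1 rows, and the group-by-tbname total is the asserted 185.
print(sum((15 + i) - 1 for i in range(10)))  # 185
```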