Merge branch 'v3.0' into fix/long_query

This commit is contained in:
Minglei Jin 2022-12-29 17:43:51 +08:00
commit 2ed9c19da4
1163 changed files with 84453 additions and 64683 deletions

.gitignore vendored
View File

@ -101,6 +101,7 @@ tests/examples/JDBC/JDBCDemo/.settings/
source/libs/parser/inc/sql.*
tests/script/tmqResult.txt
tests/tmqResult.txt
tests/script/jenkins/basic.txt
# Emacs
# -*- mode: gitignore; -*-

View File

@ -7,25 +7,18 @@
- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. Please give a **detailed description** of the problem you encountered, ideally with detailed steps to reproduce it.
- Attaching log files generated by the bug is welcome.
## Important code submission rules
## Code submission rules
- Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept the agreement, please stop submitting.
- Please resolve an issue or add a feature registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).
- If no corresponding issue or feature is found in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues), please **create a new issue**.
- When submitting code to our repository, please create a **PR that includes the issue number**.
1. Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept the agreement, please stop submitting.
2. Please resolve an issue or add a feature registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).
If no corresponding issue or feature is found in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues), please **create a new issue**.
When submitting code to our repository, please create a **PR that includes the issue number**.
3. Fork the TDengine repository to your own account and create a branch.
Note: The default branch `main` cannot accept PRs directly; please create your own branch based on the development branch `3.0`.
Note: Branches that modify documentation should start with `docs/` to avoid running unnecessary tests.
4. Create a pull request to merge your branch into the development branch `3.0`; our development team will review it as soon as possible.
## Contribution guidelines
1. Please write in a friendly tone.
2. **Active voice** is generally better than passive voice. Sentences in the active voice highlight who performs the action, rather than the recipient of the action as the passive voice does.
3. Documentation writing advice
- Spell the product name "TDengine" correctly: "TD" is written in capital letters, and there is no space between "TD" and "engine" (**correct spelling: TDengine**).
- Leave only one space after periods or other punctuation marks.
4. Use **simple sentences** rather than complex sentences whenever possible.
If you encounter any problems, please add our official WeChat account: TDengineECO. Our team will help resolve them.
## Gifts for contributors

View File

@ -1,40 +1,36 @@
# Contributing
# Contributing to TDengine
We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and even submit your code on GitHub. However, we would like developers to follow the guidelines in this document to ensure effective cooperation.
TDengine Community Edition is free, open-source software. Its development is led by the TDengine Team, but we welcome contributions from all community members and open-source developers. This document describes how you can contribute, no matter whether you're a user or a developer yourself.
## Reporting a bug
## Bug reports
- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. We would appreciate it if you could provide **a detailed description** of the problem you encountered, including steps to reproduce it.
All users can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. To ensure that the development team can locate and resolve the issue that you experienced, please include the following in your bug report:
- Attaching log files caused by the bug is really appreciated.
- A detailed description of the issue, including the steps to reproduce it.
- Any log files that may be relevant to the issue.
## Guidelines for committing code
## Code contributions
- You must agree to the **Contributor License Agreement (CLA) before submitting your code patch**. Follow the **[TAOSData CLA](https://cla-assistant.io/taosdata/TDengine)** link to read through and sign the agreement. If you do not accept the agreement, your contributions cannot be accepted.
Developers are encouraged to submit patches to the project, and all contributions, from minor documentation changes to bug fixes, are appreciated by our team. To ensure that your code can be merged successfully and improve the experience for other community members, we ask that you go through the following procedure before submitting a pull request:
- Please solve an issue or add a feature registered in the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**.
- If no corresponding issue or feature is found in the issue tracker, please **create one**.
- When submitting your code to our repository, please create a pull request with the **issue number** included.
1. Read and accept the terms of the TAOS Data Contributor License Agreement (CLA) located at [https://cla-assistant.io/taosdata/TDengine](https://cla-assistant.io/taosdata/TDengine).
## Guidelines for communicating
2. For bug fixes, search the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues) to check whether the bug has already been filed.
- If the bug that you want to fix already exists in the issue tracker, review the previous discussion before submitting your patch.
- If the bug that you want to fix does not exist in the issue tracker, click **New issue** and file a report.
- Ensure that you note the issue number in your pull request when you submit your patch.
3. Fork our repository to your GitHub account and create a branch for your patch.
**Important:** The `main` branch is for stable versions and cannot accept patches directly. For all code and documentation changes, create your own branch from the development branch `3.0` and not from `main`.
Note: For a documentation change, ensure that the branch name starts with `docs/` so that the change can be merged without running tests.
4. Create a pull request to merge your changes into the development branch `3.0`, and our team members will review the request as soon as possible.
1. Please be **nice and polite** in the description.
2. **Active voice is better than passive voice in general**. Sentences in the active voice will highlight who is performing the action rather than the recipient of the action highlighted by the passive voice.
3. Documentation writing advice
If you encounter any difficulties or problems in contributing your code, you can join our [Discord server](https://discord.com/invite/VZdSuUg4pS) and receive assistance from members of the TDengine Team.
- Spell the product name "TDengine" correctly. "TD" is written in capital letters, and there is no space between "TD" and "engine" (**Correct spelling: TDengine**).
- Please **capitalize the first letter** of every sentence.
- Leave **only one space** after periods or other punctuation marks.
- Use **American spelling**.
- When possible, **use second person** rather than first person (e.g. “You are recommended to use a reverse proxy such as Nginx.” rather than “We recommend using a reverse proxy such as Nginx.”).
## Expressing our thanks
5. Use **simple sentences**, rather than complex sentences.
## Gifts for the contributors
As long as you contribute to TDengine, whether through code contributions that fix bugs or implement features, or through documentation changes, **you are eligible for a very special Contributor Souvenir Gift!**
**You can choose one of the following gifts:**
To thank community members for your support, we are offering a free gift to any developer who submits at least one contribution. You can choose one of the following items:
<p align="left">
<img
@ -53,12 +49,10 @@ Developers, as long as you contribute to TDengine, whether it's code contributio
width="200"
/>
The TDengine community is committed to making TDengine accepted and used by more developers.
If you would like to claim your gift, send an email to [developer@tdengine.com](mailto:developer@tdengine.com?subject=Claiming%20my%20developer%20gift) including the following information:
Just fill out the **Contributor Submission Form** to choose your desired gift.
- Your GitHub account name
- Your name and mailing address
- Your preferred gift
- [Contributor Submission Form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)
## Contact us
If you have any problems or questions that need help from us, please feel free to add our WeChat account: TDengineECO.
Note: Limit one per person.

View File

@ -173,8 +173,9 @@ def pre_test_build_mac() {
'''
sh '''
cd ${WK}/debug
cmake .. -DBUILD_TEST=true -DBUILD_HTTPS=false
cmake .. -DBUILD_TEST=true -DBUILD_HTTPS=false -DCMAKE_BUILD_TYPE=Release
make -j10
ctest -j10 || exit 7
'''
sh '''
date
@ -200,6 +201,7 @@ def pre_test_win(){
'''
bat '''
cd %WIN_COMMUNITY_ROOT%
git clean -fxd
git reset --hard
git remote prune origin
git fetch
@ -302,7 +304,7 @@ def pre_test_build_win() {
set CL=/MP8
echo ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cmake"
time /t
cmake .. -G "NMake Makefiles JOM" -DBUILD_TEST=true || exit 7
cmake .. -G "NMake Makefiles JOM" -DBUILD_TEST=true -DBUILD_TOOLS=true || exit 7
echo ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> jom -j 6"
time /t
jom -j 6 || exit 8
@ -360,7 +362,7 @@ pipeline {
}
parallel {
stage('check docs') {
agent{label " worker03 || slave215 || slave217 || slave219 || Mac_catalina "}
agent{label " slave1_47 || slave1_48 || slave1_49 || slave1_52 || worker03 || slave215 || slave217 || slave219 || Mac_catalina "}
steps {
check_docs()
}
@ -406,7 +408,7 @@ pipeline {
}
}
stage('linux test') {
agent{label " worker03 || slave215 || slave217 || slave219 "}
agent{label " slave1_47 || slave1_48 || slave1_49 || slave1_52 || worker03 || slave215 || slave217 || slave219 "}
options { skipDefaultCheckout() }
when {
changeRequest()
@ -429,8 +431,6 @@ pipeline {
rm -rf ${WKC}/debug
cd ${WKC}/tests/parallel_test
time ./container_build.sh -w ${WKDIR} -t 10 -e
rm -f /tmp/cases.task
./collect_cases.sh -e
'''
def extra_param = ""
def log_server_file = "/home/log_server.json"
@ -461,7 +461,7 @@ pipeline {
cd ${WKC}/tests/parallel_test
export DEFAULT_RETRY_TIME=2
date
''' + timeout_cmd + ''' time ./run.sh -e -m /home/m.json -t /tmp/cases.task -b ${BRANCH_NAME}_${BUILD_ID} -l ${WKDIR}/log -o 480 ''' + extra_param + '''
''' + timeout_cmd + ''' time ./run.sh -e -m /home/m.json -t cases.task -b ${BRANCH_NAME}_${BUILD_ID} -l ${WKDIR}/log -o 480 ''' + extra_param + '''
'''
}
}

View File

@ -60,7 +60,7 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev
To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed:
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```
### CentOS 7.9
@ -85,7 +85,7 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
```
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
#### CentOS 8/Rocky Linux
@ -94,7 +94,7 @@ sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgco
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
Note: Because snappy lacks pkg-config support (see [link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy cannot be found, but it actually works fine.

View File

@ -31,7 +31,7 @@ TDengine is an open source, high-performance, cloud native [time-series database
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including the cluster feature, are all available under open source licenses. It has gathered 19.9k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
For a full list of TDengine competitive advantages, please [check here](https://tdengine.com/tdengine/). The easiest way to experience TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
@ -62,7 +62,7 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```
### CentOS 7.9
@ -85,7 +85,7 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
#### CentOS 7.9
```
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
#### CentOS 8/Rocky Linux
@ -94,7 +94,7 @@ sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgco
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy cannot be found; snappy nonetheless works correctly.

View File

@ -1,6 +1,7 @@
cmake_minimum_required(VERSION 3.0)
set(CMAKE_VERBOSE_MAKEFILE OFF)
set(TD_BUILD_TAOSA_INTERNAL FALSE)
#set output directory
SET(LIBRARY_OUTPUT_PATH ${PROJECT_BINARY_DIR}/build/lib)
@ -123,15 +124,39 @@ ELSE ()
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
ENDIF ()
MESSAGE("System processor ID: ${CMAKE_SYSTEM_PROCESSOR}")
IF (TD_INTEL_64 OR TD_INTEL_32)
ADD_DEFINITIONS("-msse4.2")
IF("${FMA_SUPPORT}" MATCHES "true")
MESSAGE(STATUS "turn fma function support on")
ADD_DEFINITIONS("-mfma")
ELSE ()
MESSAGE(STATUS "turn fma function support off")
INCLUDE(CheckCCompilerFlag)
IF (TD_ARM_64 OR TD_ARM_32)
SET(COMPILER_SUPPORT_SSE42 false)
ELSEIF (("${CMAKE_C_COMPILER_ID}" MATCHES "Clang") OR ("${CMAKE_C_COMPILER_ID}" MATCHES "AppleClang"))
SET(COMPILER_SUPPORT_SSE42 true)
MESSAGE(STATUS "Always enable sse4.2 for Clang/AppleClang")
ELSE()
CHECK_C_COMPILER_FLAG("-msse4.2" COMPILER_SUPPORT_SSE42)
ENDIF()
CHECK_C_COMPILER_FLAG("-mfma" COMPILER_SUPPORT_FMA)
CHECK_C_COMPILER_FLAG("-mavx" COMPILER_SUPPORT_AVX)
CHECK_C_COMPILER_FLAG("-mavx2" COMPILER_SUPPORT_AVX2)
IF (COMPILER_SUPPORT_SSE42)
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -msse4.2")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -msse4.2")
ENDIF()
IF ("${SIMD_SUPPORT}" MATCHES "true")
IF (COMPILER_SUPPORT_FMA)
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mfma")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mfma")
ENDIF()
IF (COMPILER_SUPPORT_AVX)
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mavx")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mavx")
ENDIF()
ENDIF ()
IF (COMPILER_SUPPORT_AVX2)
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mavx2")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mavx2")
ENDIF()
MESSAGE(STATUS "SIMD instructions (AVX/AVX2) is ACTIVATED")
ENDIF()
ENDIF ()

View File

@ -21,7 +21,7 @@ IF (TD_LINUX)
ELSEIF (TD_WINDOWS)
SET(TD_MAKE_INSTALL_SH "${TD_SOURCE_DIR}/packaging/tools/make_install.bat")
INSTALL(CODE "MESSAGE(\"make install script: ${TD_MAKE_INSTALL_SH}\")")
INSTALL(CODE "execute_process(COMMAND ${TD_MAKE_INSTALL_SH} :needAdmin ${TD_SOURCE_DIR} ${PROJECT_BINARY_DIR} Windows ${TD_VER_NUMBER})")
INSTALL(CODE "execute_process(COMMAND ${TD_MAKE_INSTALL_SH} :needAdmin ${TD_SOURCE_DIR} ${PROJECT_BINARY_DIR} Windows ${TD_VER_NUMBER} ${TD_BUILD_TAOSA_INTERNAL})")
ELSEIF (TD_DARWIN)
SET(TD_MAKE_INSTALL_SH "${TD_SOURCE_DIR}/packaging/tools/make_install.sh")
INSTALL(CODE "MESSAGE(\"make install script: ${TD_MAKE_INSTALL_SH}\")")

View File

@ -1,20 +1,17 @@
cmake_minimum_required(VERSION 3.0)
MESSAGE("Current system is ${CMAKE_SYSTEM_NAME}")
# init
SET(TD_LINUX FALSE)
SET(TD_WINDOWS FALSE)
SET(TD_DARWIN FALSE)
MESSAGE("Compiler ID: ${CMAKE_CXX_COMPILER_ID}")
if(CMAKE_COMPILER_IS_GNUCXX MATCHES 1)
set(CXX_COMPILER_IS_GNU TRUE)
else()
set(CXX_COMPILER_IS_GNU FALSE)
endif()
MESSAGE("Current system name is ${CMAKE_SYSTEM_NAME}.")
MESSAGE("Current system: ${CMAKE_SYSTEM_NAME}")
IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
@ -26,6 +23,8 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
set(CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS "${CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS} -undefined dynamic_lookup")
ENDIF ()
MESSAGE("Current system processor: ${CMAKE_SYSTEM_PROCESSOR}")
IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux")
SET(TD_LINUX TRUE)
@ -44,7 +43,6 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
SET(OSTYPE "macOS")
ADD_DEFINITIONS("-DDARWIN -Wno-tautological-pointer-compare")
MESSAGE("Current system processor is ${CMAKE_SYSTEM_PROCESSOR}.")
IF (${CMAKE_SYSTEM_PROCESSOR} MATCHES "arm64")
MESSAGE("Current system arch is arm64")
SET(TD_DARWIN_64 TRUE)
@ -80,28 +78,34 @@ ELSEIF (${CMAKE_SYSTEM_NAME} MATCHES "Windows")
ENDIF()
IF ("${CPUTYPE}" STREQUAL "")
MESSAGE(STATUS "The current platform " ${CMAKE_SYSTEM_PROCESSOR} " is detected")
IF (CMAKE_SYSTEM_PROCESSOR MATCHES "(amd64)|(AMD64)")
MESSAGE(STATUS "The current platform is amd64")
MESSAGE(STATUS "Current platform is amd64")
SET(PLATFORM_ARCH_STR "amd64")
SET(TD_INTEL_64 TRUE)
ADD_DEFINITIONS("-D_TD_X86_")
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "(x86)|(X86)")
MESSAGE(STATUS "The current platform is x86")
MESSAGE(STATUS "Current platform is x86")
SET(PLATFORM_ARCH_STR "i386")
SET(TD_INTEL_32 TRUE)
ADD_DEFINITIONS("-D_TD_X86_")
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "armv7l")
MESSAGE(STATUS "The current platform is aarch32")
MESSAGE(STATUS "Current platform is aarch32")
SET(PLATFORM_ARCH_STR "arm")
SET(TD_ARM_32 TRUE)
ADD_DEFINITIONS("-D_TD_ARM_")
ADD_DEFINITIONS("-D_TD_ARM_32")
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "(aarch64)|(arm64)")
MESSAGE(STATUS "The current platform is aarch64")
MESSAGE(STATUS "Current platform is aarch64")
SET(PLATFORM_ARCH_STR "arm64")
SET(TD_ARM_64 TRUE)
ADD_DEFINITIONS("-D_TD_ARM_")
ADD_DEFINITIONS("-D_TD_ARM_64")
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "loongarch64")
MESSAGE(STATUS "The current platform is loongarch64")
SET(PLATFORM_ARCH_STR "loongarch64")
SET(TD_LOONGARCH_64 TRUE)
ADD_DEFINITIONS("-D_TD_LOONGARCH_")
ADD_DEFINITIONS("-D_TD_LOONGARCH_64")
ENDIF ()
ELSE ()
# if generate ARM version:
@ -118,6 +122,12 @@ ELSE ()
ADD_DEFINITIONS("-D_TD_ARM_")
ADD_DEFINITIONS("-D_TD_ARM_64")
SET(TD_ARM_64 TRUE)
ELSEIF (${CPUTYPE} MATCHES "loongarch64")
SET(PLATFORM_ARCH_STR "loongarch64")
MESSAGE(STATUS "input cpuType: loongarch64")
SET(TD_LOONGARCH_64 TRUE)
ADD_DEFINITIONS("-D_TD_LOONGARCH_")
ADD_DEFINITIONS("-D_TD_LOONGARCH_64")
ELSEIF (${CPUTYPE} MATCHES "mips64")
SET(PLATFORM_ARCH_STR "mips")
MESSAGE(STATUS "input cpuType: mips64")
@ -137,7 +147,7 @@ ELSE ()
ENDIF ()
ENDIF ()
MESSAGE(STATUS "platform arch:" ${PLATFORM_ARCH_STR})
MESSAGE(STATUS "Platform arch:" ${PLATFORM_ARCH_STR})
MESSAGE("C Compiler ID: ${CMAKE_C_COMPILER_ID}")
MESSAGE("CXX Compiler ID: ${CMAKE_CXX_COMPILER_ID}")
MESSAGE("C Compiler: ${CMAKE_C_COMPILER} (${CMAKE_C_COMPILER_ID}, ${CMAKE_C_COMPILER_VERSION})")
MESSAGE("CXX Compiler: ${CMAKE_CXX_COMPILER} (${CMAKE_C_COMPILER_ID}, ${CMAKE_CXX_COMPILER_VERSION})")

View File

@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "3.0.1.5")
SET(TD_VER_NUMBER "3.0.2.1")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
@ -26,7 +26,7 @@ ELSEIF (HAVE_GIT)
SET(TD_VER_GIT "no git commit id")
ENDIF ()
ELSE ()
message(STATUS "no git cmd")
message(STATUS "no git found")
SET(TD_VER_GIT "no git commit id")
ENDIF ()
@ -65,13 +65,14 @@ ELSE ()
ENDIF ()
MESSAGE(STATUS "============= compile version parameter information start ============= ")
MESSAGE(STATUS "ver number:" ${TD_VER_NUMBER})
MESSAGE(STATUS "compatible ver number:" ${TD_VER_COMPATIBLE})
MESSAGE(STATUS "communit commit id:" ${TD_VER_GIT})
MESSAGE(STATUS "build date:" ${TD_VER_DATE})
MESSAGE(STATUS "ver type:" ${TD_VER_VERTYPE})
MESSAGE(STATUS "ver cpu:" ${TD_VER_CPUTYPE})
MESSAGE(STATUS "os type:" ${TD_VER_OSTYPE})
MESSAGE(STATUS "version: " ${TD_VER_NUMBER})
MESSAGE(STATUS "compatible: " ${TD_VER_COMPATIBLE})
MESSAGE(STATUS "commit id: " ${TD_VER_GIT})
MESSAGE(STATUS "build date: " ${TD_VER_DATE})
MESSAGE(STATUS "build type: " ${CMAKE_BUILD_TYPE})
MESSAGE(STATUS "type: " ${TD_VER_VERTYPE})
MESSAGE(STATUS "cpu: " ${TD_VER_CPUTYPE})
MESSAGE(STATUS "os: " ${TD_VER_OSTYPE})
MESSAGE(STATUS "============= compile version parameter information end ============= ")
STRING(REPLACE "." "_" TD_LIB_VER_NUMBER ${TD_VER_NUMBER})

View File

@ -2,7 +2,7 @@
# taosadapter
ExternalProject_Add(taosadapter
GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
GIT_TAG a11131c
GIT_TAG 5662a6d
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 16eb34f
GIT_TAG 11b60a4
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -2,7 +2,7 @@
# taosws-rs
ExternalProject_Add(taosws-rs
GIT_REPOSITORY https://github.com/taosdata/taos-connector-rust.git
GIT_TAG 0373a70
GIT_TAG f406d51
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosws-rs"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -0,0 +1,7 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-0.5 -1 32 32" width="50" height="50">
<g fill="#5865f2">
<path
d="M26.0015 6.9529C24.0021 6.03845 21.8787 5.37198 19.6623 5C19.3833 5.48048 19.0733 6.13144 18.8563 6.64292C16.4989 6.30193 14.1585 6.30193 11.8336 6.64292C11.6166 6.13144 11.2911 5.48048 11.0276 5C8.79575 5.37198 6.67235 6.03845 4.6869 6.9529C0.672601 12.8736 -0.41235 18.6548 0.130124 24.3585C2.79599 26.2959 5.36889 27.4739 7.89682 28.2489C8.51679 27.4119 9.07477 26.5129 9.55525 25.5675C8.64079 25.2265 7.77283 24.808 6.93587 24.312C7.15286 24.1571 7.36986 23.9866 7.57135 23.8161C12.6241 26.1255 18.0969 26.1255 23.0876 23.8161C23.3046 23.9866 23.5061 24.1571 23.7231 24.312C22.8861 24.808 22.0182 25.2265 21.1037 25.5675C21.5842 26.5129 22.1422 27.4119 22.7621 28.2489C25.2885 27.4739 27.8769 26.2959 30.5288 24.3585C31.1952 17.7559 29.4733 12.0212 26.0015 6.9529ZM10.2527 20.8402C8.73376 20.8402 7.49382 19.4608 7.49382 17.7714C7.49382 16.082 8.70276 14.7025 10.2527 14.7025C11.7871 14.7025 13.0425 16.082 13.0115 17.7714C13.0115 19.4608 11.7871 20.8402 10.2527 20.8402ZM20.4373 20.8402C18.9183 20.8402 17.6768 19.4608 17.6768 17.7714C17.6768 16.082 18.8873 14.7025 20.4373 14.7025C21.9717 14.7025 23.2271 16.082 23.1961 17.7714C23.1961 19.4608 21.9872 20.8402 20.4373 20.8402Z"
></path>
</g>
</svg>


View File

@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-1 -2 18 18" width="50" height="50">
<path
fill="#000"
d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z"
></path>
</svg>


View File

@ -3,6 +3,13 @@ title: Get Started
description: This article describes how to install TDengine and test its performance.
---
import GitHubSVG from './github.svg'
import DiscordSVG from './discord.svg'
import TwitterSVG from './twitter.svg'
import YouTubeSVG from './youtube.svg'
import LinkedInSVG from './linkedin.svg'
import StackOverflowSVG from './stackoverflow.svg'
You can install and run TDengine on Linux/Windows/macOS machines as well as Docker containers. You can also deploy TDengine as a managed service with TDengine Cloud.
The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).
@ -12,4 +19,36 @@ import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
```
## Study TDengine Knowledge Map
The TDengine Knowledge Map covers the key concepts of TDengine and reveals the invocation relationships and data flows among them. Studying the Knowledge Map will help you quickly master the TDengine knowledge system.
<figure>
<center>
<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
<figcaption>Diagram 1. TDengine Knowledge Map</figcaption>
</center>
</figure>
## Join TDengine Community
<table width="100%">
<tr align="center" style={{border:0}}>
<td width="16%" style={{border:0}}><a href="https://github.com/taosdata/TDengine" target="_blank"><GitHubSVG /></a></td>
<td width="16%" style={{border:0}}><a href="https://discord.com/invite/VZdSuUg4pS" target="_blank"><DiscordSVG /></a></td>
<td width="16%" style={{border:0}}><a href="https://twitter.com/TDengineDB" target="_blank"><TwitterSVG /></a></td>
<td width="16%" style={{border:0}}><a href="https://www.youtube.com/@tdengine" target="_blank"><YouTubeSVG /></a></td>
<td width="16%" style={{border:0}}><a href="https://www.linkedin.com/company/tdengine" target="_blank"><LinkedInSVG /></a></td>
<td width="16%" style={{border:0}}><a href="https://stackoverflow.com/questions/tagged/tdengine" target="_blank"><StackOverflowSVG /></a></td>
</tr>
<tr align="center" style={{border:0,backgroundColor:'transparent'}}>
<td width="16%" style={{border:0,padding:0}}><a href="https://github.com/taosdata/TDengine" target="_blank">Star GitHub</a></td>
<td width="16%" style={{border:0,padding:0}}><a href="https://discord.com/invite/VZdSuUg4pS" target="_blank">Join Discord</a></td>
<td width="16%" style={{border:0,padding:0}}><a href="https://twitter.com/TDengineDB" target="_blank">Follow Twitter</a></td>
<td width="16%" style={{border:0,padding:0}}><a href="https://www.youtube.com/@tdengine" target="_blank">Subscribe YouTube</a></td>
<td width="16%" style={{border:0,padding:0}}><a href="https://www.linkedin.com/company/tdengine" target="_blank">Follow LinkedIn</a></td>
<td width="16%" style={{border:0,padding:0}}><a href="https://stackoverflow.com/questions/tagged/tdengine" target="_blank">Ask StackOverflow</a></td>
</tr>
</table>

View File

@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 -2 24 24" width="50" height="50">
<path
fill="rgb(10, 102, 194)"
d="M20.5 2h-17A1.5 1.5 0 002 3.5v17A1.5 1.5 0 003.5 22h17a1.5 1.5 0 001.5-1.5v-17A1.5 1.5 0 0020.5 2zM8 19H5v-9h3zM6.5 8.25A1.75 1.75 0 118.3 6.5a1.78 1.78 0 01-1.8 1.75zM19 19h-3v-4.74c0-1.42-.6-1.93-1.38-1.93A1.74 1.74 0 0013 14.19a.66.66 0 000 .14V19h-3v-9h2.9v1.3a3.11 3.11 0 012.7-1.4c1.55 0 3.36.86 3.36 3.66z"
></path>
</svg>


View File

@ -0,0 +1,7 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-8 0 48 48" width="50" height="50">
<path d="M26 41v-9h4v13H0V32h4v9h22z" fill="#BCBBBB" />
<path
d="M23 34l.8-3-16.1-3.3L7 31l16 3zM9.2 23.2l15 7 1.4-3-15-7-1.4 3zm4.2-7.4L26 26.4l2.1-2.5-12.7-10.6-2.1 2.5zM21.5 8l-2.7 2 9.9 13.3 2.7-2L21.5 8zM7 38h16v-3H7v3z"
fill="#F48024"
/>
</svg>


View File

@ -0,0 +1,7 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 -2 24 24" width="50" height="50">
<g fill="rgb(29, 155, 240)">
<path
d="M23.643 4.937c-.835.37-1.732.62-2.675.733.962-.576 1.7-1.49 2.048-2.578-.9.534-1.897.922-2.958 1.13-.85-.904-2.06-1.47-3.4-1.47-2.572 0-4.658 2.086-4.658 4.66 0 .364.042.718.12 1.06-3.873-.195-7.304-2.05-9.602-4.868-.4.69-.63 1.49-.63 2.342 0 1.616.823 3.043 2.072 3.878-.764-.025-1.482-.234-2.11-.583v.06c0 2.257 1.605 4.14 3.737 4.568-.392.106-.803.162-1.227.162-.3 0-.593-.028-.877-.082.593 1.85 2.313 3.198 4.352 3.234-1.595 1.25-3.604 1.995-5.786 1.995-.376 0-.747-.022-1.112-.065 2.062 1.323 4.51 2.093 7.14 2.093 8.57 0 13.255-7.098 13.255-13.254 0-.2-.005-.402-.014-.602.91-.658 1.7-1.477 2.323-2.41z"
></path>
</g>
</svg>


View File

@ -0,0 +1,11 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-2 -8 32 32" width="50" height="50">
<g>
<g>
<path
d="M27.9727 3.12324C27.6435 1.89323 26.6768 0.926623 25.4468 0.597366C23.2197 2.24288e-07 14.285 0 14.285 0C14.285 0 5.35042 2.24288e-07 3.12323 0.597366C1.89323 0.926623 0.926623 1.89323 0.597366 3.12324C2.24288e-07 5.35042 0 10 0 10C0 10 2.24288e-07 14.6496 0.597366 16.8768C0.926623 18.1068 1.89323 19.0734 3.12323 19.4026C5.35042 20 14.285 20 14.285 20C14.285 20 23.2197 20 25.4468 19.4026C26.6768 19.0734 27.6435 18.1068 27.9727 16.8768C28.5701 14.6496 28.5701 10 28.5701 10C28.5701 10 28.5677 5.35042 27.9727 3.12324Z"
fill="#FF0000"
></path>
<path d="M11.4253 14.2854L18.8477 10.0004L11.4253 5.71533V14.2854Z" fill="white"></path>
</g>
</g>
</svg>


View File

@ -0,0 +1,46 @@
---
title: Write from Kafka
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PyKafka from "./_py_kafka.mdx";
## About Kafka
Apache Kafka is an open-source distributed event streaming platform, used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. For the key concepts of Kafka, please refer to the [Kafka documentation](https://kafka.apache.org/documentation/#gettingStarted).
### Kafka topics
Messages in Kafka are organized by topics. A topic may have one or more partitions. We can manage Kafka topics through `kafka-topics`.
Create a topic named `kafka-events`:
```
bin/kafka-topics.sh --create --topic kafka-events --bootstrap-server localhost:9092
```
Alter `kafka-events` topic to set partitions to 3:
```
bin/kafka-topics.sh --alter --topic kafka-events --partitions 3 --bootstrap-server=localhost:9092
```
Show all topics and partitions in Kafka:
```
bin/kafka-topics.sh --bootstrap-server=localhost:9092 --describe
```
## Insert into TDengine
We can write data into TDengine via SQL or schemaless protocols. For more information, see [Insert Using SQL](/develop/insert-data/sql-writing/), [High Performance Writing](/develop/insert-data/high-volume/), or [Schemaless Writing](/reference/schemaless/).
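As a minimal sketch of the SQL path (the database, supertable, and subtable names below are illustrative, not from the original guide):
```sql
-- Illustrative schema: a `meters` supertable in a `test` database.
CREATE DATABASE IF NOT EXISTS test;
USE test;
CREATE STABLE IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupid INT);
-- Write one row; the subtable d1001 is auto-created on first insert.
INSERT INTO d1001 USING meters TAGS ('California.SanFrancisco', 2)
  VALUES (NOW, 10.3, 219, 0.31);
```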
## Examples
<Tabs defaultValue="Python" groupId="lang">
<TabItem label="Python" value="Python">
<PyKafka />
</TabItem>
</Tabs>

View File

@ -37,9 +37,9 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0
- All the data in `tag_set` will be converted to NCHAR type automatically.
- Each value in `field_set` must be self-descriptive about its data type. For example, 1.2f32 means the float value 1.2. Without the "f" type suffix, it will be treated as type double.
- Multiple kinds of precision can be used for the `timestamp` field. Time precision can be from nanosecond (ns) to hour (h).
- You can configure smlChildTableName in taos.cfg to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
- The child table name is created automatically by a rule that guarantees its uniqueness. But you can configure `smlChildTableName` in taos.cfg to specify a tag value as the table name if the tag value is globally unique. For example, if a tag is called `tname` and you set `smlChildTableName=tname` in taos.cfg, when you insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000`, the child table `cpu1` will be created automatically. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
- It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur. (smlDataFormat in taos.cfg defaults to false since version 3.0.1.3)
:::
:::
For more details, please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol).

View File

@ -32,7 +32,7 @@ For example:
meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
```
- The default child table name is generated by rules. You can configure smlChildTableName in taos.cfg to specify child table names, for example, `smlChildTableName=tname`. You can insert `meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
- The child table name is created automatically by a rule that guarantees its uniqueness. But you can configure `smlChildTableName` in taos.cfg to specify a tag value as the table name if the tag value is globally unique. For example, if a tag is called `tname` and you set `smlChildTableName=tname` in taos.cfg, when you insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000`, the child table `cpu1` will be created automatically. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.
## Examples

View File

@ -47,9 +47,8 @@ Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http
:::note
- In JSON protocol, strings will be converted to NCHAR type and numeric values will be converted to double type.
- Only data in array format is accepted and so an array must be used even if there is only one row.
- The default child table name is generated by rules. You can configure smlChildTableName in taos.cfg to specify child table names, for example, `smlChildTableName=tname`. You can insert `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
:::
- The child table name is created automatically by a rule that guarantees its uniqueness. But you can configure `smlChildTableName` in taos.cfg to specify a tag value as the table name if the tag value is globally unique. For example, if a tag is called `tname` and you set `smlChildTableName=tname` in taos.cfg, when you insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000`, the child table `cpu1` will be created automatically. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
:::
## Examples

View File

@ -0,0 +1,60 @@
### Python Kafka client
For Python Kafka clients, please refer to [kafka client](https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Python). In this document, we use [kafka-python](http://github.com/dpkp/kafka-python).
### Consume from Kafka
The simplest way to consume messages from Kafka is to read them one by one. The demo is as follows:
```
from kafka import KafkaConsumer

consumer = KafkaConsumer('my_favorite_topic')
for msg in consumer:
    print(msg)
```
For higher performance, we can consume messages from Kafka in batches. The demo is as follows:
```
from kafka import KafkaConsumer

consumer = KafkaConsumer('my_favorite_topic')
while True:
    msgs = consumer.poll(timeout_ms=500, max_records=1000)
    if msgs:
        print(msgs)
```
### Multithreading
For even higher performance, we can process data from Kafka in multiple threads, for example with Python's ThreadPoolExecutor. The demo is as follows:
```
from concurrent.futures import ThreadPoolExecutor, Future
pool = ThreadPoolExecutor(max_workers=10)
pool.submit(...)
```
### Multiprocessing
For still higher performance, we can use multiprocessing. In this case, the number of Kafka consumers should not be greater than the number of Kafka topic partitions. The demo is as follows:
```
from multiprocessing import Process

# `Consumer` is assumed to be a user-defined class whose consume() method
# reads messages from Kafka.
ps = []
for i in range(5):
    # Pass the bound method itself; calling consume() here would run it in
    # the parent process instead of the child process.
    p = Process(target=Consumer().consume)
    p.start()
    ps.append(p)
for p in ps:
    p.join()
```
In addition to Python's built-in threading and multiprocessing libraries, we can also use the third-party library gunicorn.
### Examples
```py
{{#include docs/examples/python/kafka_example.py}}
```

View File

@ -205,22 +205,22 @@ Additional functions are defined in `taosudf.h` to make it easier to work with t
To use your user-defined function in TDengine, first compile it to a dynamically linked library (DLL).
For example, the sample UDF `add_one.c` can be compiled into a DLL as follows:
For example, the sample UDF `bit_and.c` can be compiled into a DLL as follows:
```bash
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so
```
The generated DLL file `add_one.so` can now be used to implement your function. Note: GCC 7.5 or later is required.
The generated DLL file `libbitand.so` can now be used to implement your function. Note: GCC 7.5 or later is required.
## Manage and Use User-Defined Functions
After compiling your function into a DLL, you add it to TDengine. For more information, see [User-Defined Functions](../12-taos-sql/26-udf.md).
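For example, a scalar UDF built from `bit_and.c` could be registered and used as follows (a sketch; the library path, table, and column names are assumptions for illustration):
```sql
-- Register the scalar function from the compiled library (path is illustrative).
CREATE FUNCTION bit_and AS '/usr/local/taos/udf/libbitand.so' OUTPUTTYPE INT;
-- Call it like a built-in function.
SELECT bit_and(c1, c2) FROM t1;
-- Drop it when it is no longer needed.
DROP FUNCTION bit_and;
```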
## Sample Code
### Sample scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c)
### Sample scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/bit_and.c)
The bit_add function implements bitwise addition for multiple columns. If there is only one column, the column is returned. The bit_add function ignores null values.
The bit_and function implements a bitwise AND across multiple columns. If there is only one column, that column is returned. The bit_and function ignores null values.
<details>
<summary>bit_and.c</summary>
@ -231,7 +231,7 @@ The bit_add function implements bitwise addition for multiple columns. If there
</details>
### Sample aggregate function: [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c)
### Sample aggregate function: [l2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/l2norm.c)
The l2norm function finds the second-order norm for all data in the input column. This squares the values, takes a cumulative sum, and finds the square root.

View File

@ -27,10 +27,11 @@ database_option: {
| PRECISION {'ms' | 'us' | 'ns'}
| REPLICA value
| RETENTIONS ingestion_duration:keep_duration ...
| STRICT {'off' | 'on'}
| WAL_LEVEL {1 | 2}
| VGROUPS value
| SINGLE_STABLE {0 | 1}
| TABLE_PREFIX value
| TABLE_SUFFIX value
| WAL_RETENTION_PERIOD value
| WAL_ROLL_PERIOD value
| WAL_RETENTION_SIZE value
@ -61,9 +62,6 @@ database_option: {
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
- REPLICA: specifies the number of replicas that are made of the database. Enter 1 or 3. The default value is 1. The value of the REPLICA parameter cannot exceed the number of dnodes in the cluster.
- RETENTIONS: specifies the retention period for data aggregated at various intervals. For example, RETENTIONS 15s:7d,1m:21d,15m:50d indicates that data aggregated every 15 seconds is retained for 7 days, data aggregated every 1 minute is retained for 21 days, and data aggregated every 15 minutes is retained for 50 days. You must enter three aggregation intervals and corresponding retention periods.
- STRICT: specifies whether strong data consistency is enabled. The default value is off.
- on: Strong consistency is enabled and implemented through the Raft consensus algorithm. In this mode, an operation is considered successful once it is confirmed by half of the nodes in the cluster.
- off: Strong consistency is disabled. In this mode, an operation is considered successful when it is initiated by the local node.
- WAL_LEVEL: specifies whether fsync is enabled. The default value is 1.
- 1: WAL is enabled but fsync is disabled.
- 2: WAL and fsync are both enabled.
@ -71,6 +69,8 @@ database_option: {
- SINGLE_STABLE: specifies whether the database can contain more than one supertable.
- 0: The database can contain multiple supertables.
- 1: The database can contain only one supertable.
- TABLE_PREFIX: The prefix length in the table name that is ignored when distributing tables to vnodes based on the table name.
- TABLE_SUFFIX: The suffix length in the table name that is ignored when distributing tables to vnodes based on the table name (see the sketch after this list).
- WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. Enter a time in seconds. The default value for single-replica databases is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted. The default value for multi-replica databases is 4 days.
- WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. Enter a size in KB. The default value for single-replica databases is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted. The default value for multi-replica databases is -1.
- WAL_ROLL_PERIOD: specifies the time after which WAL files are rotated. After this period elapses, a new WAL file is created. The default value for single-replica databases is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk. The default value for multi-replica databases is 1 day.
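A short sketch of the TABLE_PREFIX/TABLE_SUFFIX options (database name and values are illustrative only):
```sql
-- Ignore the first 2 bytes and the last 3 bytes of each table name when
-- deciding which vnode a table belongs to.
CREATE DATABASE power VGROUPS 4 TABLE_PREFIX 2 TABLE_SUFFIX 3;
```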
@ -142,7 +142,7 @@ The preceding SQL statement can be used in migration scenarios. This command can
### View Database Configuration
```sql
SHOW DATABASES \G;
SELECT * FROM INFORMATION_SCHEMA.INS_DATABASES WHERE NAME='DBNAME' \G;
```
The preceding SQL statement shows the value of each parameter for the specified database. One value is displayed per line.

View File

@ -7,12 +7,12 @@ title: Table
You create standard tables and subtables with the `CREATE TABLE` statement.
```sql
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definitionn] ...) [table_options]
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]
CREATE TABLE create_subtable_clause
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definitionn] ...)
[TAGS (create_definition [, create_definitionn] ...)]
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
[TAGS (create_definition [, create_definition] ...)]
[table_options]
create_subtable_clause: {
@ -195,4 +195,4 @@ This command is useful in migrating data from one TDengine cluster to another be
```
DESCRIBE [db_name.]tb_name;
```
```

View File

@ -6,7 +6,7 @@ title: Supertable
## Create a Supertable
```sql
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definitionn] ...) TAGS (create_definition [, create_definition] ...) [table_options]
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definition] ...) TAGS (create_definition [, create_definition] ...) [table_options]
create_definition:
col_name column_definition

View File

@ -255,7 +255,7 @@ Note: If you include an ORDER BY clause, only one partition can be displayed.
Some special query functions can be invoked without `FROM` sub-clause.
## Obtain Current Database
### Obtain Current Database
The following SQL statement returns the current database. If a database has not been specified on login or with the `USE` command, a null value is returned.
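The statement is:
```sql
SELECT DATABASE();
```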
@ -270,7 +270,7 @@ SELECT CLIENT_VERSION();
SELECT SERVER_VERSION();
```
## Obtain Server Status
### Obtain Server Status
The following SQL statement returns the status of the TDengine server. An integer indicates that the server is running normally. An error code indicates that an error has occurred. This statement can also detect whether a connection pool or third-party tool is connected to TDengine properly. By using this statement, you can ensure that connections in a pool are not lost due to an incorrect heartbeat detection statement.
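The statement can serve, for example, as the validation query of a connection pool:
```sql
SELECT SERVER_STATUS();
```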
@ -301,7 +301,7 @@ SELECT TIMEZONE();
### Syntax
```txt
WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
WHERE (column|tbname) match/MATCH/nmatch/NMATCH _regex_
```
### Specification
@ -314,6 +314,40 @@ Regular expression filtering is supported only on table names (TBNAME), BINARY t
A regular expression string cannot exceed 128 bytes. You can configure this value by modifying the maxRegexStringLen parameter on the TDengine Client. The modified value takes effect when the client is restarted.
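A short sketch of such a filter (the table and pattern are illustrative):
```sql
-- Return rows from subtables whose names start with 'd100'.
SELECT * FROM meters WHERE tbname MATCH '^d100';
```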
## CASE Expressions
### Syntax
```txt
CASE value WHEN compare_value THEN result [WHEN compare_value THEN result ...] [ELSE result] END
CASE WHEN condition THEN result [WHEN condition THEN result ...] [ELSE result] END
```
### Description
CASE expressions let you use IF ... THEN ... ELSE logic in SQL statements without having to invoke procedures.
The first CASE syntax returns the `result` for the first `value`=`compare_value` comparison that is true.
The second syntax returns the `result` for the first `condition` that is true.
If no comparison or condition is true, the result after ELSE is returned, or NULL if there is no ELSE part.
The return type of the CASE expression is the result type of the first WHEN THEN part. The result types of the other WHEN THEN parts and the ELSE part must be convertible to it; otherwise TDengine reports an error.
### Examples
A device has three status codes that indicate its status. The statement is as follows:
```sql
SELECT CASE dev_status WHEN 1 THEN 'Running' WHEN 2 THEN 'Warning' WHEN 3 THEN 'Downtime' ELSE 'Unknown' END FROM dev_table;
```
The following statement calculates the average voltage of smart meters; readings below 200 or above 250 are considered erroneous and corrected to 220:
```sql
SELECT AVG(CASE WHEN voltage < 200 or voltage > 250 THEN 220 ELSE voltage END) FROM meters;
```
## JOIN
TDengine supports natural joins between supertables, between standard tables, and between subqueries. The difference between natural joins and inner joins is that natural joins require that the fields being joined in the supertables or standard tables must have the same name. Data or tag columns must be joined with the equivalent column in another table.
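A minimal sketch of such a join (table names are illustrative; both sides share the timestamp column `ts`):
```sql
-- Inner join two subtables on their common timestamp column.
SELECT a.ts, a.voltage AS v1, b.voltage AS v2
FROM d1001 a, d1002 b
WHERE a.ts = b.ts;
```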

View File

@ -532,7 +532,12 @@ TIMEDIFF(expr1, expr2 [, time_unit])
#### TIMETRUNCATE
```sql
TIMETRUNCATE(expr, time_unit)
TIMETRUNCATE(expr, time_unit [, ignore_timezone])
ignore_timezone: {
0
| 1
}
```
**Description**: Truncate the input timestamp with unit specified by `time_unit`
@ -548,7 +553,10 @@ TIMETRUNCATE(expr, time_unit)
1b (nanoseconds), 1u (microseconds), 1a (milliseconds), 1s (seconds), 1m (minutes), 1h (hours), 1d (days), or 1w (weeks)
- The precision of the returned timestamp is the same as the precision set for the current database in use
- If the input data is not formatted as a timestamp, the returned value is null.
- If `1d` is used as `time_unit` to truncate the timestamp, the `ignore_timezone` option can be set to indicate whether the returned result is affected by the client timezone.
For example, if the client timezone is set to UTC+0800, TIMETRUNCATE('2020-01-01 23:00:00', 1d, 0) returns '2020-01-01 08:00:00', while TIMETRUNCATE('2020-01-01 23:00:00', 1d, 1) returns '2020-01-01 00:00:00'.
If the `ignore_timezone` option is omitted, the default value is 1.
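The same example as a pair of statements (assuming a client timezone of UTC+0800; return values as documented above):
```sql
SELECT TIMETRUNCATE('2020-01-01 23:00:00', 1d, 1);  -- returns '2020-01-01 00:00:00'
SELECT TIMETRUNCATE('2020-01-01 23:00:00', 1d, 0);  -- returns '2020-01-01 08:00:00'
```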
#### TIMEZONE
@ -868,7 +876,8 @@ INTERP(expr)
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `EVERY` parameter. The parameter `EVERY` must be an integer, with no quotes, with a time unit of: b (nanosecond), u (microsecond), a (millisecond), s (second), m (minute), h (hour), d (day), or w (week). For example, `EVERY(500a)` will interpolate every 500 milliseconds.
- Interpolation is performed based on `FILL` parameter.
- `INTERP` can only interpolate along a single timeline, so it must be used with `PARTITION BY tbname` when applied to a supertable.
- Pseudo column `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points(support after version 3.0.1.4).
- Pseudocolumn `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported since version 3.0.1.4).
- Pseudocolumn `_isfilled` can be used along with `INTERP` to indicate whether a result row is an original record or a data point generated by the interpolation algorithm (supported since version 3.0.2.1), as sketched below.
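A sketch of how these pseudocolumns might be selected (table, column, and time range are illustrative):
```sql
-- Interpolate voltage every 5 seconds over a time range; _irowts returns the
-- interpolated timestamps and _isfilled marks generated (non-original) points.
SELECT _irowts, _isfilled, INTERP(voltage)
FROM meters
PARTITION BY tbname
RANGE('2022-01-01 00:00:00', '2022-01-01 00:01:00')
EVERY(5s)
FILL(LINEAR);
```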
### LAST

View File

@ -126,6 +126,12 @@ Only care about the information of the status window when the status is 2. For e
SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 STATE_WINDOW(status)) t WHERE status = 2;
```
TDengine also supports using CASE expressions as the state value. This expresses that a state begins when one condition is met and ends when another condition is met. For example, if the normal voltage range of a smart meter is 205 V to 235 V, you can monitor the voltage to determine whether the circuit is normal.
```sql
SELECT tbname, _wstart, CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END status FROM meters PARTITION BY tbname STATE_WINDOW(CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END);
```
### Session Window
The primary key, i.e. the timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of the time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
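A minimal sketch of a session window query under these assumptions (`temp_tb_1` with timestamp column `ts`, as in the state window example above):
```sql
-- Group rows into sessions that close after a 12-second gap in timestamps.
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```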

View File

@ -10,7 +10,7 @@ Because stream processing is built in to TDengine, you are no longer reliant on
## Create a Stream
```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name SUBTABLE(expression) AS subquery
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
WATERMARK time
@ -30,6 +30,8 @@ subquery: SELECT [DISTINCT] select_list
Session windows, state windows, and sliding windows are supported. When you configure a session or state window for a supertable, you must use PARTITION BY TBNAME.
The SUBTABLE clause defines the naming rules for auto-created subtables; for details, see the Partitions of Stream section below.
```sql
window_clause: {
SESSION(ts_col, tol_val)
@ -47,6 +49,47 @@ CREATE STREAM avg_vol_s INTO avg_vol AS
SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
```
## Partitions of Stream
A stream can process data in multiple partitions. Partition rules can be defined by the PARTITION BY clause in stream processing. Each partition has its own timeline and windows, is processed separately, and is written into a different subtable of the target supertable.
If a stream is created without a PARTITION BY clause, all data will be written into one subtable.
If a stream is created with a PARTITION BY clause but without a SUBTABLE clause, each partition's subtable is given a random name.
If a stream is created with both a PARTITION BY clause and a SUBTABLE clause, the subtable name of each partition is calculated according to the SUBTABLE clause. For example:
```sql
CREATE STREAM avg_vol_s INTO avg_vol SUBTABLE(CONCAT('new-', tname)) AS SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname tname INTERVAL(1m);
```
In the PARTITION BY clause, 'tbname', which represents the name of each subtable in the source supertable, is given the alias 'tname', and 'tname' is then used in the SUBTABLE clause. As a result, each auto-created subtable concatenates 'new-' with the source subtable name as its own name. Other expressions are also allowed in the SUBTABLE clause, but the output type must be varchar.
If the output length exceeds the TDengine limit (192), the name is truncated. If the generated name is already occupied by another table, the creation of and writing to the new subtable will fail.
## Filling history data
Normally, a stream does not process data that was already written, or is being written, to the source table at the time the stream is created. However, adding FILL_HISTORY 1 as a stream option when creating the stream allows it to process data written before and while the stream is being created. For example:
```sql
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 interval(10s)
```
By combining the fill_history option with a WHERE clause, a stream can process data from a specific time range, for example, only data after a given past time (in this case, 2020-01-30):
```sql
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 where ts > '2020-01-30' interval(10s)
```
As another example, the following stream processes only data starting from a past time and ending at a future time:
```sql
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 where ts > '2020-01-30' and ts < '2023-01-01' interval(10s)
```
If a stream is fully outdated and you no longer want it to monitor or process data, it can be manually dropped; its output data will still be kept.
## Delete a Stream
```sql
@ -65,7 +108,7 @@ SHOW STREAMS;
When you create a stream, you can use the TRIGGER parameter to specify triggering conditions for it.
For non-windowed processing, triggering occurs in real time. For windowed processing, there are three methods of triggering:
For non-windowed processing, triggering occurs in real time. For windowed processing, there are three methods of triggering, and the default is AT_ONCE:
1. AT_ONCE: triggers on write
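For example (a sketch; stream, table, and supertable names are illustrative), the trigger mode is specified in the stream options:
```sql
-- Push each partial result as soon as new data is written, rather than
-- waiting for the window to close.
CREATE STREAM IF NOT EXISTS s_at_once TRIGGER AT_ONCE
  INTO st_out AS SELECT _wstart, COUNT(*) FROM t1 INTERVAL(10s);
```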

View File

@ -23,7 +23,7 @@ The following characters cannot occur in a password: single quotation marks ('),
## General Limits
- Maximum length of database name is 32 bytes
- Maximum length of database name is 64 bytes
- Maximum length of table name is 192 bytes, excluding the database name prefix and the separator.
- Maximum length of each data row is 48K bytes. Note that the upper limit includes the extra 2 bytes consumed by each column of BINARY/NCHAR type.
- The maximum length of a column name is 64 bytes.
@ -35,7 +35,7 @@ The following characters cannot occur in a password: single quotation marks ('),
- Maximum numbers of databases, STables, tables are dependent only on the system resources.
- The number of replicas can only be 1 or 3.
- The maximum length of a username is 23 bytes.
- The maximum length of a password is 15 bytes.
- The maximum length of a password is 128 bytes.
- The maximum number of rows depends on system resources.
- The maximum number of vnodes in a database is 1024.

View File

@ -17,6 +17,7 @@ The following list shows all reserved keywords:
- ADD
- AFTER
- AGGREGATE
- ALIVE
- ALL
- ALTER
- ANALYZE

View File

@ -178,6 +178,139 @@ SHOW TABLE DISTRIBUTED table_name;
Shows how table data is distributed.
Example: Below is an example of using this command to display the block distribution of table `d0` in detailed format.
```sql
show table distributed d0\G;
```
<details>
<summary> Show Example </summary>
<pre><code>
*************************** 1.row ***************************
_block_dist: Total_Blocks=[5] Total_Size=[93.65 Kb] Average_size=[18.73 Kb] Compression_Ratio=[23.98 %]
Total_Blocks: Table `d0` contains 5 blocks in total
Total_Size: The total size of all the data blocks in table `d0` is 93.65 KB
Average_size: The average size of each block is 18.73 KB
Compression_Ratio: The data compression rate is 23.98%
*************************** 2.row ***************************
_block_dist: Total_Rows=[20000] Inmem_Rows=[0] MinRows=[3616] MaxRows=[4096] Average_Rows=[4000]
Total_Rows: Table `d0` contains 20,000 rows
Inmem_Rows: The number of rows still in memory, i.e. not yet committed to disk, is 0, meaning there are no such rows
MinRows: The minimum number of rows in a block is 3,616
MaxRows: The maximum number of rows in a block is 4,096
Average_Rows: The average number of rows in a block is 4,000
*************************** 3.row ***************************
_block_dist: Total_Tables=[1] Total_Files=[2]
Total_Tables: The number of child tables, 1 in this example
Total_Files: The number of files storing the table's data, 2 in this example
*************************** 4.row ***************************
_block_dist: --------------------------------------------------------------------------------
*************************** 5.row ***************************
_block_dist: 0100 |
*************************** 6.row ***************************
_block_dist: 0299 |
*************************** 7.row ***************************
_block_dist: 0498 |
*************************** 8.row ***************************
_block_dist: 0697 |
*************************** 9.row ***************************
_block_dist: 0896 |
*************************** 10.row ***************************
_block_dist: 1095 |
*************************** 11.row ***************************
_block_dist: 1294 |
*************************** 12.row ***************************
_block_dist: 1493 |
*************************** 13.row ***************************
_block_dist: 1692 |
*************************** 14.row ***************************
_block_dist: 1891 |
*************************** 15.row ***************************
_block_dist: 2090 |
*************************** 16.row ***************************
_block_dist: 2289 |
*************************** 17.row ***************************
_block_dist: 2488 |
*************************** 18.row ***************************
_block_dist: 2687 |
*************************** 19.row ***************************
_block_dist: 2886 |
*************************** 20.row ***************************
_block_dist: 3085 |
*************************** 21.row ***************************
_block_dist: 3284 |
*************************** 22.row ***************************
_block_dist: 3483 ||||||||||||||||| 1 (20.00%)
*************************** 23.row ***************************
_block_dist: 3682 |
*************************** 24.row ***************************
_block_dist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)
Query OK, 24 row(s) in set (0.002444s)
</code></pre>
</details>
The above shows the distribution of blocks by the number of rows each block contains. From this example, we can derive the following information:
- `_block_dist: 3483 ||||||||||||||||| 1 (20.00%)` means there is one block whose row count is between 3,483 and 3,681.
- `_block_dist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)` means there are 4 blocks whose row counts are between 3,881 and 4,096. The number of blocks whose row counts fall in other ranges is zero.
## SHOW TAGS
```sql

View File

@ -62,7 +62,7 @@ SHOW FUNCTIONS;
The function name specified when creating UDF can be used directly in SQL statements, just like builtin functions. For example:
```sql
SELECT X(c1,c2) FROM table/stable;
SELECT bit_and(c1,c2) FROM table;
```
The above SQL statement invokes function X for columns c1 and c2. You can use query keywords like WHERE with user-defined functions.
The above SQL statement invokes function X for columns c1 and c2 on the table. You can use query keywords like WHERE with user-defined functions.
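As a hedged sketch (the table and column names here are hypothetical), a UDF call can be combined with a WHERE filter just like a built-in function:
```sql
SELECT bit_and(c1, c2) FROM t1 WHERE c1 > 0;
```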

View File

@ -878,8 +878,10 @@ The source code of the sample application is under `TDengine/examples/JDBC`:
| taos-jdbcdriver version | major changes |
| :---------------------: | :--------------------------------------------: |
| 3.0.3 | fix timestamp resolution error for REST connection in jdk17+ version |
| 3.0.1 - 3.0.2 | fix an issue where resultSet data was sometimes parsed incorrectly. 3.0.1 is compiled on JDK 11; you are advised to use 3.0.2 in JDK 8 environments |
| 3.0.0 | Support for TDengine 3.0 |
| 2.0.42 | fix wasNull interface return value in WebSocket connection |
| 2.0.41 | fix decode method of username and password in REST connection |
| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters |
| 2.0.38 | JDBC REST connections add bulk pull function |

View File

@ -26,14 +26,13 @@ Using REST connection can support a broader range of operating systems as it doe
TDengine version updates often add new features, and the connector versions in the list are the best-fit versions of the connector.
| **TDengine Versions** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** |
| --------------------- | -------- | ---------- | ------------ | ------------- | --------------- | -------- |
| **3.0.0.0 and later** | 3.0.0 | current version | 3.0 branch | 3.0.0 | 3.0.0 | current version |
| **2.4.0.14 and up** | 2.0.38 | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.6 and up** | 2.0.37 | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.4 - 2.4.0.5** | 2.0.37 | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.2.x.x ** | 2.0.36 | current version | master branch | n/a | 2.0.7 - 2.0.9 | current version |
| **2.0.x.x ** | 2.0.34 | current version | master branch | n/a | 2.0.1 - 2.0.6 | current version |
| **TDengine Versions** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** |
| --------------------------- | -------------- | -------------- | -------------- | ------------- | --------------- | --------------- |
| **3.0.0.0 and later** | 3.0.2 + | latest version | 3.0 branch | 3.0.0 | 3.0.0 | current version |
| **2.4.0.14 and up ** | 2.0.38 | latest version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.4 - 2.4.0.13 ** | 2.0.37 | latest version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.2.x.x ** | 2.0.36 | latest version | master branch | n/a | 2.0.7 - 2.0.9 | current version |
| **2.0.x.x ** | 2.0.34 | latest version | master branch | n/a | 2.0.1 - 2.0.6 | current version |
## Functional Features
@ -60,10 +59,10 @@ The different database framework specifications for various programming language
| -------------------------------------- | ------------- | --------------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management** | Support | Support | Support | Support | Support | Support |
| **Regular Query** | Support | Support | Support | Support | Support | Support |
| **Parameter Binding** | Not supported | Not supported | Not supported | Support | Not supported | Support |
| **Subscription (TMQ) ** | Not supported | Not supported | Not supported | Not supported | Not supported | Support |
| **Parameter Binding** | Not supported | Not supported | Support | Support | Not supported | Support |
| **Subscription (TMQ) ** | Not supported | Not supported | Support | Not supported | Not supported | Support |
| **Schemaless** | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported |
| **Bulk Pulling (based on WebSocket) ** | Support | Support | Not supported | Support | Not supported | Support |
| **Bulk Pulling (based on WebSocket) ** | Support | Support | Support | Support | Support | Support |
| **DataFrame** | Not supported | Support | Not supported | Not supported | Not supported | Not supported |
:::warning

View File

@ -21,6 +21,7 @@ taosAdapter provides the following features.
- Seamless connection to collectd
- Seamless connection to StatsD
- Supports Prometheus remote_read and remote_write
- Get table's VGroup ID
## taosAdapter architecture diagram
@ -59,6 +60,7 @@ Usage of taosAdapter:
--collectd.port int collectd server port. Env "TAOS_ADAPTER_COLLECTD_PORT" (default 6045)
--collectd.user string collectd user. Env "TAOS_ADAPTER_COLLECTD_USER" (default "root")
--collectd.worker int collectd write worker. Env "TAOS_ADAPTER_COLLECTD_WORKER" (default 10)
--collectd.ttl int collectd data ttl. Env "TAOS_ADAPTER_COLLECTD_TTL" (default 0, means no ttl)
-c, --config string config path default /etc/taos/taosadapter.toml
--cors.allowAllOrigins cors allow all origins. Env "TAOS_ADAPTER_CORS_ALLOW_ALL_ORIGINS" (default true)
--cors.allowCredentials cors allow credentials. Env "TAOS_ADAPTER_CORS_ALLOW_Credentials"
@ -100,6 +102,7 @@ Usage of taosAdapter:
--node_exporter.responseTimeout duration node_exporter response timeout. Env "TAOS_ADAPTER_NODE_EXPORTER_RESPONSE_TIMEOUT" (default 5s)
--node_exporter.urls strings node_exporter urls. Env "TAOS_ADAPTER_NODE_EXPORTER_URLS" (default [http://localhost:9100])
--node_exporter.user string node_exporter user. Env "TAOS_ADAPTER_NODE_EXPORTER_USER" (default "root")
--node_exporter.ttl int node_exporter data ttl. Env "TAOS_ADAPTER_NODE_EXPORTER_TTL"(default 0, means no ttl)
--opentsdb.enable enable opentsdb. Env "TAOS_ADAPTER_OPENTSDB_ENABLE" (default true)
--opentsdb_telnet.batchSize int opentsdb_telnet batch size. Env "TAOS_ADAPTER_OPENTSDB_TELNET_BATCH_SIZE" (default 1)
--opentsdb_telnet.dbs strings opentsdb_telnet db names. Env "TAOS_ADAPTER_OPENTSDB_TELNET_DBS" (default [opentsdb_telnet,collectd_tsdb,icinga2_tsdb,tcollector_tsdb])
@ -110,6 +113,7 @@ Usage of taosAdapter:
--opentsdb_telnet.ports ints opentsdb telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
--opentsdb_telnet.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE"
--opentsdb_telnet.user string opentsdb_telnet user. Env "TAOS_ADAPTER_OPENTSDB_TELNET_USER" (default "root")
--opentsdb_telnet.ttl int opentsdb_telnet data ttl. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TTL"(default 0, means no ttl)
--pool.idleTimeout duration Set idle connection timeout. Env "TAOS_ADAPTER_POOL_IDLE_TIMEOUT" (default 1h0m0s)
--pool.maxConnect int max connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_CONNECT" (default 4000)
--pool.maxIdle int max idle connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_IDLE" (default 4000)
@ -131,6 +135,7 @@ Usage of taosAdapter:
--statsd.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_STATSD_TCP_KEEP_ALIVE"
--statsd.user string statsd user. Env "TAOS_ADAPTER_STATSD_USER" (default "root")
--statsd.worker int statsd write worker. Env "TAOS_ADAPTER_STATSD_WORKER" (default 10)
--statsd.ttl int statsd data ttl. Env "TAOS_ADAPTER_STATSD_TTL" (default 0, means no ttl)
--taosConfigDir string load taos client config path. Env "TAOS_ADAPTER_TAOS_CONFIG_FILE"
--version Print the version and exit
```
@ -174,6 +179,7 @@ See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/bl
node_exporter is an exporter for machine metrics. Please visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
- Support for Prometheus remote_read and remote_write
remote_read and remote_write are interfaces for Prometheus data read and write from/to other data storage solution. Please visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
- Get table's VGroup ID. For more information about VGroup, please refer to [primary-logic-unit](/tdinternal/arch/#primary-logic-unit).
## Interfaces
@ -195,6 +201,7 @@ Support InfluxDB query parameters as follows.
- `precision` The time precision used by TDengine
- `u` TDengine user name
- `p` TDengine password
- `ttl` The time to live of automatically created sub-tables. This value cannot be updated; TDengine uses the ttl value supplied with the first record written to a sub-table when creating that sub-table. For more information, please refer to [Create Table](/taos-sql/table/#create-table)
Note: InfluxDB token authorization is not supported at present. Only Basic authorization and query parameter validation are supported.
Example: curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
@ -236,6 +243,10 @@ node_export is an exporter of hardware and OS metrics exposed by the \*NIX kerne
<Prometheus />
### Get table's VGroup ID
You can call `http://<fqdn>:6041/rest/vgid?db=<db>&table=<table>` to get the table's VGroup ID. For more information about VGroup, please refer to [primary-logic-unit](/tdinternal/arch/#primary-logic-unit).
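A minimal sketch of calling this interface with curl, assuming taosAdapter runs locally with the default root:taosdata credentials (the database and table names are hypothetical):
```bash
curl -u root:taosdata "http://localhost:6041/rest/vgid?db=power&table=d0"
```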
## Memory usage optimization methods
taosAdapter monitors its memory usage during operation and adjusts it with two thresholds. Valid values are integers between 1 and 100, representing a percentage of the system's physical memory.
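As a sketch only — the parameter names below are assumptions about taosadapter.toml, not confirmed by this excerpt — the two thresholds might be configured like this:
```toml
[monitor]
# pause query processing when memory usage reaches 70% of physical memory (assumed name)
pauseQueryMemoryThreshold = 70
# pause all processing when memory usage reaches 80% (assumed name)
pauseAllMemoryThreshold = 80
```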

View File

@ -204,6 +204,12 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **-a/--replica <replicaNum\>** :
Specify the number of replicas when creating the database. The default value is 1.
- **-k/--keep-trying <NUMBER\>** :
Keep trying if an insert fails. Disabled by default. Available with v3.0.9+.
- **-z/--trying-interval <NUMBER\>** :
Specify the interval between insert retries. The valid value is a positive number. Only valid when keep trying is enabled. Available with v3.0.9+ (see the command-line sketch after this list).
- **-V/--version** :
Show version information only. Users should not use it with other parameters.
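A hedged command-line example combining the two retry flags (v3.0.9+ assumed; the retry count and interval values are arbitrary, and the interval unit is assumed to be milliseconds):
```bash
taosBenchmark -k 3 -z 1000
```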
@ -217,7 +223,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
The parameters listed in this section apply to all function modes.
- **filetype** : The function to be tested, with optional values `insert`, `query` and `subscribe`. These correspond to the insert, query, and subscribe functions, respectively. Users can specify only one of these in each configuration file.
**cfgdir**: specify the TDengine cluster configuration file's directory. The default path is /etc/taos.
**cfgdir**: specify the TDengine client configuration file's directory. The default path is /etc/taos.
- **host**: Specify the FQDN of the TDengine server to connect. The default value is `localhost`.
@ -231,6 +237,10 @@ The parameters listed in this section apply to all function modes.
`filetype` must be set to `insert` in the insertion scenario. See [General Configuration Parameters](#General Configuration Parameters)
- **keep_trying**: Keep trying if an insert fails. Disabled by default. Available with v3.0.9+.
- **trying_interval**: Specify the interval between insert retries. The valid value is a positive number. Only valid when keep_trying is enabled. Available with v3.0.9+.
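A minimal sketch of the same options in a JSON configuration file (top-level placement is an assumption):
```json
{
  "filetype": "insert",
  "keep_trying": 3,
  "trying_interval": 1000
}
```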
#### Database related configuration parameters
The parameters related to database creation are configured in `dbinfo` in the JSON configuration file, as follows. The other parameters correspond to the database parameters specified by `create database`; see [../../taos-sql/database].
@ -374,7 +384,11 @@ The configuration parameters for specifying super table tag columns and data col
### Query scenario configuration parameters
`filetype` must be set to `query` in the query scenario. See [General Configuration Parameters](#General Configuration Parameters) for details of this parameter and other general parameters
`filetype` must be set to `query` in the query scenario.
You can control the query scenario by setting the `kill_slow_query_threshold` and `kill_slow_query_interval` parameters to kill slow query statements: the threshold specifies the execution time, in seconds, after which taosBenchmark kills a query; the interval specifies the sleep time, in seconds, between checks, so that continuous polling for slow queries does not consume CPU.
See [General Configuration Parameters](#General Configuration Parameters) for details of other general parameters.
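A hedged sketch of these two parameters (top-level placement and values are assumptions): kill queries that run longer than 10 seconds, checking once per second:
```json
{
  "filetype": "query",
  "kill_slow_query_threshold": 10,
  "kill_slow_query_interval": 1
}
```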
#### Configuration parameters for executing the specified query statement

View File

@ -19,7 +19,7 @@ Users should not use taosdump to back up raw data, environment settings, hardwar
There are two ways to install taosdump:
- Install the taosTools official installer. Please find taosTools from [All download links](https://www.tdengine.com/all-downloads) page and download and install it.
- Install the taosTools official installer. Please find taosTools from [Release History](https://docs.taosdata.com/releases/tools/) page and download and install it.
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.

View File

@ -14,7 +14,7 @@ Note: ● means officially tested and verified, ○ means unofficially tested an
## List of supported platforms for TDengine clients and connectors
TDengine's connector can support a wide range of platforms, including X64/X86/ARM64/ARM32/MIPS/Alpha hardware platforms and Linux/Win64/Win32/macOS development environments.
TDengine's connector can support a wide range of platforms, including X64/X86/ARM64/ARM32/MIPS/Alpha/LoongArch64 hardware platforms and Linux/Win64/Win32/macOS development environments.
The comparison matrix is as follows.

View File

@ -153,11 +153,11 @@ The parameters described in this document by the effect that they have on the sy
| Meaning | Execution policy for query statements |
| Unit | None |
| Default | 1 |
| Notes | 1: Run queries on vnodes and not on qnodes |
| Value Range | 1: Run queries on vnodes and not on qnodes; 2: Run subtasks without scan operators on qnodes and subtasks with scan operators on vnodes; 3: Only run scan operators on vnodes and run all other operators on qnodes |
### querySmaOptimize
@ -173,6 +173,14 @@ The parameters described in this document by the effect that they have on the sy
1: Enable SMA indexing and perform queries from suitable statements on precomputation results.|
### countAlwaysReturnValue
| Attribute | Description |
| -------- | -------------------------------- |
| Applicable | Server only |
| Meaning | Whether count()/hyperloglog() returns a value when the result data is NULL |
| Value Range | 0: Return an empty row; 1: Return 0 |
| Default | 1 |
### maxNumOfDistinctRes
@ -307,6 +315,14 @@ The charset that takes effect is UTF-8.
| Meaning | All data files are stored in this directory |
| Default Value | /var/lib/taos |
### tempDir
| Attribute | Description |
| -------- | ------------------------------------------ |
| Applicable | Server only |
| Meaning | The directory where all temporary files generated during system operation are stored |
| Default | /tmp |
### minimalTmpDirGB
| Attribute | Description |
@ -336,89 +352,6 @@ The charset that takes effect is UTF-8.
| Value Range | 0-4096 |
| Default Value | 2x the CPU cores |
## Time Parameters
### statusInterval
| Attribute | Description |
| -------- | --------------------------- |
| Applicable | Server Only |
| Meaning | The interval at which each dnode reports its status to the mnode |
| Unit | second |
| Value Range | 1-10 |
| Default Value | 1 |
### shellActivityTimer
| Attribute | Description |
| -------- | --------------------------------- |
| Applicable | Server and Client |
| Meaning | The interval at which the TDengine CLI sends heartbeats to the mnode |
| Unit | second |
| Value Range | 1-120 |
| Default Value | 3 |
## Performance Optimization Parameters
### numOfCommitThreads
| Attribute | Description |
| -------- | ---------------------- |
| Applicable | Server Only |
| Meaning | Maximum number of threads for committing data to disk |
| Default Value | |
## Compression Parameters
### compressMsgSize
| Attribute | Description |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The message size threshold above which messages are compressed. Set the value to 64330 bytes for good message compression. |
| Unit | bytes |
| Value Range | 0: compress all messages; >0: compress messages exceeding this size; -1: never compress |
| Default Value | -1 |
### compressColData
| Attribute | Description |
| -------- | --------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The threshold for size of column data to trigger compression for the query result |
| Unit | bytes |
| Value Range | 0: always compress; >0: only compress when the size of any column data exceeds the threshold; -1: never compress |
| Default Value | -1 |
| Note | Available from version 2.3.0.0 |
## Continuous Query Parameters
### minSlidingTime
| Attribute | Description |
| ------------- | -------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Minimum sliding time of time window |
| Unit | millisecond or microsecond, depending on time precision |
| Value Range | 10-1000000 |
| Default Value | 10 |
### minIntervalTime
| Attribute | Description |
| ------------- | --------------------------- |
| Applicable | Server Only |
| Meaning | Minimum size of time window |
| Unit | millisecond |
| Value Range | 1-1000000 |
| Default Value | 10 |
:::info
To prevent system resources from being exhausted by multiple concurrent streams, a random delay is automatically applied to each stream. `maxFirstStreamCompDelay` is the maximum delay before a continuous query is started for the first time. `streamCompDelayRatio` is the ratio for calculating the delay, with the size of the time window as the base. `maxStreamCompDelay` is the maximum delay. The actual delay is a random time no larger than `maxStreamCompDelay`. If a continuous query fails, `retryStreamCompDelay` is the delay before retrying it, also no larger than `maxStreamCompDelay`.
:::
## Log Parameters
### logDir
@ -665,6 +598,18 @@ To prevent system resource from being exhausted by multiple concurrent streams,
| Value Range | 0: not consistent; 1: consistent. |
| Default | 1 |
## Compress Parameters
### compressMsgSize
| Attribute | Description |
| -------- | ----------------------------- |
| Applicable | Both Client and Server side |
| Meaning | Whether RPC messages are compressed |
| Value Range | -1: no messages are compressed; 0: all messages are compressed; N (N>0): messages exceeding N bytes are compressed |
| Default | -1 |
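A minimal taos.cfg sketch using the N > 0 form (the 64330-byte value is illustrative only):
```
# compress RPC messages larger than 64330 bytes
compressMsgSize 64330
```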
## Other Parameters
### enableCoreFile
@ -686,174 +631,60 @@ To prevent system resource from being exhausted by multiple concurrent streams,
| Value Range | 0: disable UDF; 1: enabled UDF |
| Default Value | 1 |
## Parameter Comparison of TDengine 2.x and 3.0
| # | **Parameter** | **In 2.x** | **In 3.0** |
| --- | :-----------------: | --------------- | --------------- |
| 1 | firstEp | Yes | Yes |
| 2 | secondEp | Yes | Yes |
| 3 | fqdn | Yes | Yes |
| 4 | serverPort | Yes | Yes |
| 5 | maxShellConns | Yes | Yes |
| 6 | monitor | Yes | Yes |
| 7 | monitorFqdn | No | Yes |
| 8 | monitorPort | No | Yes |
| 9 | monitorInterval | Yes | Yes |
| 10 | monitorMaxLogs | No | Yes |
| 11 | monitorComp | No | Yes |
| 12 | telemetryReporting | Yes | Yes |
| 13 | telemetryInterval | No | Yes |
| 14 | telemetryServer | No | Yes |
| 15 | telemetryPort | No | Yes |
| 16 | queryPolicy | No | Yes |
| 17 | querySmaOptimize | No | Yes |
| 18 | queryRsmaTolerance | No | Yes |
| 19 | queryBufferSize | Yes | Yes |
| 20 | maxNumOfDistinctRes | Yes | Yes |
| 21 | minSlidingTime | Yes | Yes |
| 22 | minIntervalTime | Yes | Yes |
| 23 | countAlwaysReturnValue | Yes | Yes |
| 24 | dataDir | Yes | Yes |
| 25 | minimalDataDirGB | Yes | Yes |
| 26 | supportVnodes | No | Yes |
| 27 | tempDir | Yes | Yes |
| 28 | minimalTmpDirGB | Yes | Yes |
| 29 | compressMsgSize | Yes | Yes |
| 30 | compressColData | Yes | Yes |
| 31 | smlChildTableName | Yes | Yes |
| 32 | smlTagName | Yes | Yes |
| 33 | smlDataFormat | No | Yes |
| 34 | statusInterval | Yes | Yes |
| 35 | shellActivityTimer | Yes | Yes |
| 36 | transPullupInterval | No | Yes |
| 37 | mqRebalanceInterval | No | Yes |
| 38 | ttlUnit | No | Yes |
| 39 | ttlPushInterval | No | Yes |
| 40 | numOfTaskQueueThreads | No | Yes |
| 41 | numOfRpcThreads | No | Yes |
| 42 | numOfCommitThreads | Yes | Yes |
| 43 | numOfMnodeReadThreads | No | Yes |
| 44 | numOfVnodeQueryThreads | No | Yes |
| 45 | numOfVnodeStreamThreads | No | Yes |
| 46 | numOfVnodeFetchThreads | No | Yes |
| 47 | numOfVnodeWriteThreads | No | Yes |
| 48 | numOfVnodeSyncThreads | No | Yes |
| 49 | numOfVnodeRsmaThreads | No | Yes |
| 50 | numOfQnodeQueryThreads | No | Yes |
| 51 | numOfQnodeFetchThreads | No | Yes |
| 52 | numOfSnodeSharedThreads | No | Yes |
| 53 | numOfSnodeUniqueThreads | No | Yes |
| 54 | rpcQueueMemoryAllowed | No | Yes |
| 55 | logDir | Yes | Yes |
| 56 | minimalLogDirGB | Yes | Yes |
| 57 | numOfLogLines | Yes | Yes |
| 58 | asyncLog | Yes | Yes |
| 59 | logKeepDays | Yes | Yes |
| 60 | debugFlag | Yes | Yes |
| 61 | tmrDebugFlag | Yes | Yes |
| 62 | uDebugFlag | Yes | Yes |
| 63 | rpcDebugFlag | Yes | Yes |
| 64 | jniDebugFlag | Yes | Yes |
| 65 | qDebugFlag | Yes | Yes |
| 66 | cDebugFlag | Yes | Yes |
| 67 | dDebugFlag | Yes | Yes |
| 68 | vDebugFlag | Yes | Yes |
| 69 | mDebugFlag | Yes | Yes |
| 70 | wDebugFlag | Yes | Yes |
| 71 | sDebugFlag | Yes | Yes |
| 72 | tsdbDebugFlag | Yes | Yes |
| 73 | tqDebugFlag | No | Yes |
| 74 | fsDebugFlag | Yes | Yes |
| 75 | udfDebugFlag | No | Yes |
| 76 | smaDebugFlag | No | Yes |
| 77 | idxDebugFlag | No | Yes |
| 78 | tdbDebugFlag | No | Yes |
| 79 | metaDebugFlag | No | Yes |
| 80 | timezone | Yes | Yes |
| 81 | locale | Yes | Yes |
| 82 | charset | Yes | Yes |
| 83 | udf | Yes | Yes |
| 84 | enableCoreFile | Yes | Yes |
| 85 | arbitrator | Yes | No |
| 86 | numOfThreadsPerCore | Yes | No |
| 87 | numOfMnodes | Yes | No |
| 88 | vnodeBak | Yes | No |
| 89 | balance | Yes | No |
| 90 | balanceInterval | Yes | No |
| 91 | offlineThreshold | Yes | No |
| 92 | role | Yes | No |
| 93 | dnodeNopLoop | Yes | No |
| 94 | keepTimeOffset | Yes | No |
| 95 | rpcTimer | Yes | No |
| 96 | rpcMaxTime | Yes | No |
| 97 | rpcForceTcp | Yes | No |
| 98 | tcpConnTimeout | Yes | No |
| 99 | syncCheckInterval | Yes | No |
| 100 | maxTmrCtrl | Yes | No |
| 101 | monitorReplica | Yes | No |
| 102 | smlTagNullName | Yes | No |
| 103 | keepColumnName | Yes | No |
| 104 | ratioOfQueryCores | Yes | No |
| 105 | maxStreamCompDelay | Yes | No |
| 106 | maxFirstStreamCompDelay | Yes | No |
| 107 | retryStreamCompDelay | Yes | No |
| 108 | streamCompDelayRatio | Yes | No |
| 109 | maxVgroupsPerDb | Yes | No |
| 110 | maxTablesPerVnode | Yes | No |
| 111 | minTablesPerVnode | Yes | No |
| 112 | tableIncStepPerVnode | Yes | No |
| 113 | cache | Yes | No |
| 114 | blocks | Yes | No |
| 115 | days | Yes | No |
| 116 | keep | Yes | No |
| 117 | minRows | Yes | No |
| 118 | maxRows | Yes | No |
| 119 | quorum | Yes | No |
| 120 | comp | Yes | No |
| 121 | walLevel | Yes | No |
| 122 | fsync | Yes | No |
| 123 | replica | Yes | No |
| 124 | partitions | Yes | No |
| 125 | quorum | Yes | No |
| 126 | update | Yes | No |
| 127 | cachelast | Yes | No |
| 128 | maxSQLLength | Yes | No |
| 129 | maxWildCardsLength | Yes | No |
| 130 | maxRegexStringLen | Yes | No |
| 131 | maxNumOfOrderedRes | Yes | No |
| 132 | maxConnections | Yes | No |
| 133 | mnodeEqualVnodeNum | Yes | No |
| 134 | http | Yes | No |
| 135 | httpEnableRecordSql | Yes | No |
| 136 | httpMaxThreads | Yes | No |
| 137 | restfulRowLimit | Yes | No |
| 138 | httpDbNameMandatory | Yes | No |
| 139 | httpKeepAlive | Yes | No |
| 140 | enableRecordSql | Yes | No |
| 141 | maxBinaryDisplayWidth | Yes | No |
| 142 | stream | Yes | No |
| 143 | retrieveBlockingModel | Yes | No |
| 144 | tsdbMetaCompactRatio | Yes | No |
| 145 | defaultJSONStrType | Yes | No |
| 146 | walFlushSize | Yes | No |
| 147 | keepTimeOffset | Yes | No |
| 148 | flowctrl | Yes | No |
| 149 | slaveQuery | Yes | No |
| 150 | adjustMaster | Yes | No |
| 151 | topicBinaryLen | Yes | No |
| 152 | telegrafUseFieldNum | Yes | No |
| 153 | deadLockKillQuery | Yes | No |
| 154 | clientMerge | Yes | No |
| 155 | sdbDebugFlag | Yes | No |
| 156 | odbcDebugFlag | Yes | No |
| 157 | httpDebugFlag | Yes | No |
| 158 | monDebugFlag | Yes | No |
| 159 | cqDebugFlag | Yes | No |
| 160 | shortcutFlag | Yes | No |
| 161 | probeSeconds | Yes | No |
| 162 | probeKillSeconds | Yes | No |
| 163 | probeInterval | Yes | No |
| 164 | lossyColumns | Yes | No |
| 165 | fPrecision | Yes | No |
| 166 | dPrecision | Yes | No |
| 167 | maxRange | Yes | No |
| 168 | range | Yes | No |
## 3.0 Parameters
| # | **Parameter** | **Applicable to 2.x** | **Applicable to 3.0** | Current behavior in 3.0 |
| --- | :---------------------: | --------------- | --------------- | ------------------------------------------------- |
| 1 | firstEp | Yes | Yes | |
| 2 | secondEp | Yes | Yes | |
| 3 | fqdn | Yes | Yes | |
| 4 | serverPort | Yes | Yes | |
| 5 | maxShellConns | Yes | Yes | |
| 6 | monitor | Yes | Yes | |
| 7 | monitorFqdn | No | Yes | |
| 8 | monitorPort | No | Yes | |
| 9 | monitorInterval | Yes | Yes | |
| 10 | queryPolicy | No | Yes | |
| 11 | querySmaOptimize | No | Yes | |
| 12 | maxNumOfDistinctRes | Yes | Yes | |
| 15 | countAlwaysReturnValue | Yes | Yes | |
| 16 | dataDir | Yes | Yes | |
| 17 | minimalDataDirGB | Yes | Yes | |
| 18 | supportVnodes | No | Yes | |
| 19 | tempDir | Yes | Yes | |
| 20 | minimalTmpDirGB | Yes | Yes | |
| 21 | smlChildTableName | Yes | Yes | |
| 22 | smlTagName | Yes | Yes | |
| 23 | smlDataFormat | No | Yes | |
| 24 | statusInterval | Yes | Yes | |
| 25 | logDir | Yes | Yes | |
| 26 | minimalLogDirGB | Yes | Yes | |
| 27 | numOfLogLines | Yes | Yes | |
| 28 | asyncLog | Yes | Yes | |
| 29 | logKeepDays | Yes | Yes | |
| 30 | debugFlag | Yes | Yes | |
| 31 | tmrDebugFlag | Yes | Yes | |
| 32 | uDebugFlag | Yes | Yes | |
| 33 | rpcDebugFlag | Yes | Yes | |
| 34 | jniDebugFlag | Yes | Yes | |
| 35 | qDebugFlag | Yes | Yes | |
| 36 | cDebugFlag | Yes | Yes | |
| 37 | dDebugFlag | Yes | Yes | |
| 38 | vDebugFlag | Yes | Yes | |
| 39 | mDebugFlag | Yes | Yes | |
| 40 | wDebugFlag | Yes | Yes | |
| 41 | sDebugFlag | Yes | Yes | |
| 42 | tsdbDebugFlag | Yes | Yes | |
| 43 | tqDebugFlag | No | Yes | |
| 44 | fsDebugFlag | Yes | Yes | |
| 45 | udfDebugFlag | No | Yes | |
| 46 | smaDebugFlag | No | Yes | |
| 47 | idxDebugFlag | No | Yes | |
| 48 | tdbDebugFlag | No | Yes | |
| 49 | metaDebugFlag | No | Yes | |
| 50 | timezone | Yes | Yes | |
| 51 | locale | Yes | Yes | |
| 52 | charset | Yes | Yes | |
| 53 | udf | Yes | Yes | |
| 54 | enableCoreFile | Yes | Yes | |

View File

@ -8,6 +8,9 @@ will automatically add the required columns to ensure that the data written by t
The schemaless writing method creates super tables and their corresponding subtables. These are completely indistinguishable from the super tables and subtables created directly via SQL, and you can write data to them directly via SQL statements. Note that the names of tables created by schemaless writing are derived from fixed mapping rules on tag values, so they are not meaningful names and lack readability.
Tips:
Schemaless writing creates tables automatically; you do not need to create tables manually, otherwise an unknown error may occur.
## Schemaless Writing Line Protocol
TDengine's schemaless writing line protocol supports InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. However, when using these three protocols, you need to specify in the API the standard of the parsing protocol to be used for the input content.

View File

@ -76,7 +76,7 @@ sudo -u grafana grafana-cli plugins install tdengine-datasource
You can also download zip files from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and install manually. The commands are as follows:
```bash
GF_VERSION=3.2.2
GF_VERSION=3.2.7
# from GitHub
wget https://github.com/taosdata/grafanaplugin/releases/download/v$GF_VERSION/tdengine-datasource-$GF_VERSION.zip
# from Grafana

View File

@ -76,7 +76,7 @@ Development: false
### Install from source code
```
git clone https://github.com:taosdata/kafka-connect-tdengine.git
git clone https://github.com/taosdata/kafka-connect-tdengine.git
cd kafka-connect-tdengine
mvn clean package
unzip -d $CONFLUENT_HOME/share/java/ target/components/packages/taosdata-kafka-connect-tdengine-*.zip

View File

@ -38,15 +38,9 @@ Download the latest TDengine-server from the [Downloads](http://tdengine.com/en/
## Data Connection Setup
### Download TDengine plug-in to grafana plug-in directory
### Install Grafana Plugin and Configure Data Source
```bash
1. wget -c https://github.com/taosdata/grafanaplugin/releases/download/v3.1.3/tdengine-datasource-3.1.3.zip
2. sudo unzip tdengine-datasource-3.1.3.zip -d /var/lib/grafana/plugins/
3. sudo chown grafana:grafana -R /var/lib/grafana/plugins/tdengine
4. echo -e "[plugins]\nallow_loading_unsigned_plugins = tdengine-datasource\n" | sudo tee -a /etc/grafana/grafana.ini
5. sudo systemctl restart grafana-server.service
```
Please refer to [Install Grafana Plugin and Configure Data Source](/third-party/grafana/#install-grafana-plugin-and-configure-data-source)
### Modify /etc/telegraf/telegraf.conf

View File

@ -41,15 +41,9 @@ Download the latest TDengine-server from the [Downloads](http://tdengine.com/en/
## Data Connection Setup
### Copy the TDengine plugin to the grafana plugin directory
### Install Grafana Plugin and Configure Data Source
```bash
1. wget -c https://github.com/taosdata/grafanaplugin/releases/download/v3.1.3/tdengine-datasource-3.1.3.zip
2. sudo unzip tdengine-datasource-3.1.3.zip -d /var/lib/grafana/plugins/
3. sudo chown grafana:grafana -R /var/lib/grafana/plugins/tdengine
4. echo -e "[plugins]\nallow_loading_unsigned_plugins = tdengine-datasource\n" | sudo tee -a /etc/grafana/grafana.ini
5. sudo systemctl restart grafana-server.service
```
Please refer to [Install Grafana Plugin and Configure Data Source](/third-party/grafana/#install-grafana-plugin-and-configure-data-source)
### Configure collectd

View File

@ -1,11 +1,39 @@
---
sidebar_label: TDengine
title: TDengine
title: TDengine Release History and Download Links
description: TDengine release history, Release Notes and download links.
---
TDengine 3.x installation packages can be downloaded at the following links:
For TDengine 2.x installation packages by version, please visit [here](https://www.taosdata.com/all-downloads).
import Release from "/components/ReleaseV3";
## 3.0.2.2
<Release type="tdengine" version="3.0.2.2" />
## 3.0.2.1
<Release type="tdengine" version="3.0.2.1" />
## 3.0.2.0
<Release type="tdengine" version="3.0.2.0" />
## 3.0.1.8
<Release type="tdengine" version="3.0.1.8" />
## 3.0.1.7
<Release type="tdengine" version="3.0.1.7" />
## 3.0.1.6
<Release type="tdengine" version="3.0.1.6" />
## 3.0.1.5
<Release type="tdengine" version="3.0.1.5" />
@ -29,4 +57,3 @@ import Release from "/components/ReleaseV3";
## 3.0.1.0
<Release type="tdengine" version="3.0.1.0" />

View File

@ -1,11 +1,39 @@
---
sidebar_label: taosTools
title: taosTools
sidebar_label: taosTools
title: taosTools Release History and Download Links
description: taosTools release history, Release Notes, download links.
---
taosTools installation packages can be downloaded at the following links:
For other historical version installers, please visit [here](https://www.taosdata.com/all-downloads).
import Release from "/components/ReleaseV3";
## 2.4.0
<Release type="tools" version="2.4.0" />
## 2.3.3
<Release type="tools" version="2.3.3" />
## 2.3.2
<Release type="tools" version="2.3.2" />
## 2.3.0
<Release type="tools" version="2.3.0" />
## 2.2.9
<Release type="tools" version="2.2.9" />
## 2.2.7
<Release type="tools" version="2.2.7" />
## 2.2.6
<Release type="tools" version="2.2.6" />

View File

@ -5,21 +5,24 @@ namespace Examples
{
public class WSConnExample
{
static void Main(string[] args)
static int Main(string[] args)
{
string DSN = "ws://root:taosdata@127.0.0.1:6041/test";
IntPtr wsConn = LibTaosWS.WSConnectWithDSN(DSN);
if (wsConn == IntPtr.Zero)
{
throw new Exception($"get WS connection failed,reason:{LibTaosWS.WSErrorStr(IntPtr.Zero)} code:{LibTaosWS.WSErrorNo(IntPtr.Zero)}");
Console.WriteLine("get WS connection failed");
return -1;
}
else
{
Console.WriteLine("Establish connect success.");
// close connection.
LibTaosWS.WSClose(wsConn);
}
// close connection.
LibTaosWS.WSClose(wsConn);
return 0;
}
}
}
}

View File

@ -5,7 +5,7 @@ namespace Examples
{
public class WSInsertExample
{
static void Main(string[] args)
static int Main(string[] args)
{
string DSN = "ws://root:taosdata@127.0.0.1:6041/test";
IntPtr wsConn = LibTaosWS.WSConnectWithDSN(DSN);
@ -13,7 +13,8 @@ namespace Examples
// Assert if connection is validate
if (wsConn == IntPtr.Zero)
{
throw new Exception($"get WS connection failed,reason:{LibTaosWS.WSErrorStr(IntPtr.Zero)} code:{LibTaosWS.WSErrorNo(IntPtr.Zero)}");
Console.WriteLine("get WS connection failed");
return -1;
}
else
{
@ -36,6 +37,8 @@ namespace Examples
// close connection.
LibTaosWS.WSClose(wsConn);
return 0;
}
static void ValidInsert(string desc, IntPtr wsRes)
@ -43,7 +46,7 @@ namespace Examples
int code = LibTaosWS.WSErrorNo(wsRes);
if (code != 0)
{
throw new Exception($"execute SQL failed: reason: {LibTaosWS.WSErrorStr(wsRes)}, code:{code}");
Console.WriteLine($"execute SQL failed: reason: {LibTaosWS.WSErrorStr(wsRes)}, code:{code}");
}
else
{
@ -55,4 +58,4 @@ namespace Examples
}
// Establish connect success.
// create table success affect 0 rows, cost 3717542 nanoseconds
// insert data success affect 8 rows, cost 2613637 nanoseconds
// insert data success affect 8 rows, cost 2613637 nanoseconds

View File

@ -7,13 +7,14 @@ namespace Examples
{
public class WSQueryExample
{
static void Main(string[] args)
static int Main(string[] args)
{
string DSN = "ws://root:taosdata@127.0.0.1:6041/test";
IntPtr wsConn = LibTaosWS.WSConnectWithDSN(DSN);
if (wsConn == IntPtr.Zero)
{
throw new Exception($"get WS connection failed,reason:{LibTaosWS.WSErrorStr(IntPtr.Zero)} code:{LibTaosWS.WSErrorNo(IntPtr.Zero)}");
Console.WriteLine("get WS connection failed");
return -1;
}
else
{
@ -28,7 +29,9 @@ namespace Examples
int code = LibTaosWS.WSErrorNo(wsRes);
if (code != 0)
{
throw new Exception($"execute SQL failed: reason: {LibTaosWS.WSErrorStr(wsRes)}, code:{code}");
Console.WriteLine($"execute SQL failed: reason: {LibTaosWS.WSErrorStr(wsRes)}, code:{code}");
LibTaosWS.WSFreeResult(wsRes);
return -1;
}
// get meta data
@ -58,6 +61,8 @@ namespace Examples
// close connection.
LibTaosWS.WSClose(wsConn);
return 0;
}
}
}
@ -71,4 +76,4 @@ namespace Examples
// 1538548685000 | 10.3 | 219 | 0.31 | California.SanFrancisco | 2 |
// 1538548695000 | 12.6 | 218 | 0.33 | California.SanFrancisco | 2 |
// 1538548696800 | 12.3 | 221 | 0.31 | California.SanFrancisco | 2 |
// 1538548696650 | 10.3 | 218 | 0.25 | California.SanFrancisco | 3 |
// 1538548696650 | 10.3 | 218 | 0.25 | California.SanFrancisco | 3 |

View File

@ -7,7 +7,7 @@ namespace Examples
{
public class WSStmtExample
{
static void Main(string[] args)
static int Main(string[] args)
{
const string DSN = "ws://root:taosdata@127.0.0.1:6041/test";
const string table = "meters";
@ -21,7 +21,8 @@ namespace Examples
IntPtr wsConn = LibTaosWS.WSConnectWithDSN(DSN);
if (wsConn == IntPtr.Zero)
{
throw new Exception($"get WS connection failed,reason:{LibTaosWS.WSErrorStr(IntPtr.Zero)} code:{LibTaosWS.WSErrorNo(IntPtr.Zero)}");
Console.WriteLine($"get WS connection failed");
return -1;
}
else
{
@ -66,18 +67,20 @@ namespace Examples
}
else
{
throw new Exception("Init STMT failed...");
Console.WriteLine("Init STMT failed...");
}
// close connection.
LibTaosWS.WSClose(wsConn);
return 0;
}
static void ValidStmtStep(int code, IntPtr wsStmt, string desc)
{
if (code != 0)
{
throw new Exception($"{desc} failed,reason: {LibTaosWS.WSErrorStr(wsStmt)}, code: {code}");
Console.WriteLine($"{desc} failed,reason: {LibTaosWS.WSErrorStr(wsStmt)}, code: {code}");
}
else
{
@ -92,4 +95,4 @@ namespace Examples
// WSStmtBindParamBatch success...
// WSStmtAddBatch success...
// WSStmtExecute success...
// WS STMT insert 5 rows...
// WS STMT insert 5 rows...

View File

@ -24,7 +24,7 @@ func main() {
if err != nil {
panic(err)
}
_, err = db.Exec("create topic if not exists example_tmq_topic with meta as DATABASE example_tmq")
_, err = db.Exec("create topic if not exists example_tmq_topic as DATABASE example_tmq")
if err != nil {
panic(err)
}
@ -84,20 +84,6 @@ func main() {
if err != nil {
panic(err)
}
for {
result, err := consumer.Poll(time.Second)
if err != nil {
panic(err)
}
if result.Type != common.TMQ_RES_TABLE_META {
panic("want message type 2 got " + strconv.Itoa(int(result.Type)))
}
data, _ := json.Marshal(result.Meta)
fmt.Println(string(data))
consumer.Commit(context.Background(), result.Message)
consumer.FreeMessage(result.Message)
break
}
_, err = db.Exec("insert into example_tmq.t1 values(now,1)")
if err != nil {
panic(err)

View File

@ -8,7 +8,7 @@ conn.execute("CREATE DATABASE test")
# change database. same as execute "USE db"
conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
affected_row: int = conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m 24.4)")
affected_row: int = conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m, 24.4)")
print("affected_row", affected_row)
# output:
# affected_row 3

View File

@ -0,0 +1,192 @@
#! encoding = utf-8
import json
import time
from json import JSONDecodeError
from typing import Callable
import logging
from concurrent.futures import ThreadPoolExecutor, Future
import taos
from kafka import KafkaConsumer
from kafka.consumer.fetcher import ConsumerRecord
class Consumer(object):
DEFAULT_CONFIGS = {
'kafka_brokers': 'localhost:9092',
'kafka_topic': 'python_kafka',
'kafka_group_id': 'taos',
'taos_host': 'localhost',
'taos_user': 'root',
'taos_password': 'taosdata',
'taos_database': 'power',
'taos_port': 6030,
'timezone': None,
'clean_after_testing': False,
        'batch_consume': True,
'batch_size': 1000,
'async_model': True,
'workers': 10
}
LOCATIONS = ['California.SanFrancisco', 'California.LosAngles', 'California.SanDiego', 'California.SanJose',
'California.PaloAlto', 'California.Campbell', 'California.MountainView', 'California.Sunnyvale',
'California.SantaClara', 'California.Cupertino']
CREATE_DATABASE_SQL = 'create database if not exists {} keep 365 duration 10 buffer 16 wal_level 1'
USE_DATABASE_SQL = 'use {}'
DROP_TABLE_SQL = 'drop table if exists meters'
DROP_DATABASE_SQL = 'drop database if exists {}'
CREATE_STABLE_SQL = 'create stable meters (ts timestamp, current float, voltage int, phase float) ' \
'tags (location binary(64), groupId int)'
CREATE_TABLE_SQL = 'create table if not exists {} using meters tags (\'{}\', {})'
INSERT_SQL_HEADER = "insert into "
INSERT_PART_SQL = 'power.{} values (\'{}\', {}, {}, {})'
def __init__(self, **configs):
        self.config: dict = dict(self.DEFAULT_CONFIGS)  # copy so the class-level defaults are not mutated
self.config.update(configs)
self.consumer = KafkaConsumer(
self.config.get('kafka_topic'), # topic
bootstrap_servers=self.config.get('kafka_brokers'),
group_id=self.config.get('kafka_group_id'),
)
self.taos = taos.connect(
host=self.config.get('taos_host'),
user=self.config.get('taos_user'),
password=self.config.get('taos_password'),
port=self.config.get('taos_port'),
timezone=self.config.get('timezone'),
)
if self.config.get('async_model'):
self.pool = ThreadPoolExecutor(max_workers=self.config.get('workers'))
self.tasks: list[Future] = []
# tags and table mapping # key: {location}_{groupId} value:
self.tag_table_mapping = {}
i = 0
for location in self.LOCATIONS:
for j in range(1, 11):
table_name = 'd{}'.format(i)
self._cache_table(location=location, group_id=j, table_name=table_name)
i += 1
def init_env(self):
# create database and table
self.taos.execute(self.DROP_DATABASE_SQL.format(self.config.get('taos_database')))
self.taos.execute(self.CREATE_DATABASE_SQL.format(self.config.get('taos_database')))
self.taos.execute(self.USE_DATABASE_SQL.format(self.config.get('taos_database')))
self.taos.execute(self.DROP_TABLE_SQL)
self.taos.execute(self.CREATE_STABLE_SQL)
for tags, table_name in self.tag_table_mapping.items():
location, group_id = _get_location_and_group(tags)
self.taos.execute(self.CREATE_TABLE_SQL.format(table_name, location, group_id))
def consume(self):
logging.warning('## start consumer topic-[%s]', self.config.get('kafka_topic'))
try:
            if self.config.get('batch_consume'):
self._run_batch(self._to_taos_batch)
else:
self._run(self._to_taos)
except KeyboardInterrupt:
logging.warning("## caught keyboard interrupt, stopping")
finally:
self.stop()
def stop(self):
# close consumer
if self.consumer is not None:
self.consumer.commit()
self.consumer.close()
# multi thread
if self.config.get('async_model'):
for task in self.tasks:
while not task.done():
pass
if self.pool is not None:
self.pool.shutdown()
# clean data
if self.config.get('clean_after_testing'):
self.taos.execute(self.DROP_TABLE_SQL)
self.taos.execute(self.DROP_DATABASE_SQL.format(self.config.get('taos_database')))
# close taos
if self.taos is not None:
self.taos.close()
def _run(self, f: Callable[[ConsumerRecord], bool]):
for message in self.consumer:
if self.config.get('async_model'):
                self.pool.submit(f, message)  # pass the callable and its argument; f(message) would run synchronously
else:
f(message)
def _run_batch(self, f: Callable[[list[list[ConsumerRecord]]], None]):
while True:
messages = self.consumer.poll(timeout_ms=500, max_records=self.config.get('batch_size'))
if messages:
if self.config.get('async_model'):
self.pool.submit(f, messages.values())
else:
f(list(messages.values()))
if not messages:
time.sleep(0.1)
def _to_taos(self, message: ConsumerRecord) -> bool:
sql = self.INSERT_SQL_HEADER + self._build_sql(message.value)
if len(sql) == 0: # decode error, skip
return True
logging.info('## insert sql %s', sql)
return self.taos.execute(sql=sql) == 1
def _to_taos_batch(self, messages: list[list[ConsumerRecord]]):
sql = self._build_sql_batch(messages=messages)
if len(sql) == 0: # decode error, skip
return
self.taos.execute(sql=sql)
def _build_sql(self, msg_value: str) -> str:
try:
data = json.loads(msg_value)
except JSONDecodeError as e:
            logging.error('## decode message [%s] error: %s', msg_value, e)
return ''
location = data.get('location')
group_id = data.get('groupId')
ts = data.get('ts')
current = data.get('current')
voltage = data.get('voltage')
phase = data.get('phase')
table_name = self._get_table_name(location=location, group_id=group_id)
return self.INSERT_PART_SQL.format(table_name, ts, current, voltage, phase)
def _build_sql_batch(self, messages: list[list[ConsumerRecord]]) -> str:
sql_list = []
for partition_messages in messages:
for message in partition_messages:
sql_list.append(self._build_sql(message.value))
return self.INSERT_SQL_HEADER + ' '.join(sql_list)
def _cache_table(self, location: str, group_id: int, table_name: str):
self.tag_table_mapping[_tag_table_mapping_key(location=location, group_id=group_id)] = table_name
def _get_table_name(self, location: str, group_id: int) -> str:
return self.tag_table_mapping.get(_tag_table_mapping_key(location=location, group_id=group_id))
def _tag_table_mapping_key(location: str, group_id: int):
return '{}_{}'.format(location, group_id)
def _get_location_and_group(key: str) -> (str, int):
fields = key.split('_')
    return fields[0], int(fields[1])  # group_id is declared as int in the return annotation
if __name__ == '__main__':
consumer = Consumer(async_model=True)
consumer.init_env()
consumer.consume()

View File

@ -4,6 +4,7 @@ from taos.tmq import *
conn = taos.connect()
print("init")
conn.execute("drop topic if exists topic_ctb_column")
conn.execute("drop database if exists py_tmq")
conn.execute("create database if not exists py_tmq vgroups 2")
conn.select_db("py_tmq")
@ -15,7 +16,6 @@ conn.execute("create table if not exists tb2 using stb1 tags(2)")
conn.execute("create table if not exists tb3 using stb1 tags(3)")
print("create topic")
conn.execute("drop topic if exists topic_ctb_column")
conn.execute(
"create topic if not exists topic_ctb_column as select ts, c1, c2, c3 from stb1"
)

View File

@ -10,4 +10,4 @@ chrono = "0.4"
serde = { version = "1", features = ["derive"] }
tokio = { version = "1", features = ["rt", "macros", "rt-multi-thread"] }
taos = { version = "0.*" }
taos = { version = "0.4.8" }

View File

@ -12,7 +12,10 @@ async fn main() -> anyhow::Result<()> {
// bind table name and tags
stmt.set_tbname_tags(
"d1001",
&[Value::VarChar("California.SanFransico".into()), Value::Int(2)],
&[
Value::VarChar("California.SanFransico".into()),
Value::Int(2),
],
)?;
// bind values.
let values = vec![
@ -30,9 +33,9 @@ async fn main() -> anyhow::Result<()> {
ColumnView::from_floats(vec![0.33]),
];
stmt.bind(&values2)?;
stmt.add_batch()?;
// execute.
let rows = stmt.execute()?;
assert_eq!(rows, 2);

View File

@ -50,7 +50,7 @@ async fn main() -> anyhow::Result<()> {
// create super table
format!("CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(24))"),
// create topic for subscription
format!("CREATE TOPIC tmq_meters with META AS DATABASE {db}")
format!("CREATE TOPIC tmq_meters AS SELECT * FROM `meters`")
])
.await?;
@ -64,13 +64,9 @@ async fn main() -> anyhow::Result<()> {
let mut consumer = tmq.build()?;
consumer.subscribe(["tmq_meters"]).await?;
{
let mut stream = consumer.stream();
while let Some((offset, message)) = stream.try_next().await? {
// get information from offset
// the topic
consumer
.stream()
.try_for_each(|(offset, message)| async {
let topic = offset.topic();
// the vgroup id, like partition id in kafka.
let vgroup_id = offset.vgroup_id();
@ -78,20 +74,14 @@ async fn main() -> anyhow::Result<()> {
if let Some(data) = message.into_data() {
while let Some(block) = data.fetch_raw_block().await? {
// one block for one table, get table name if needed
let name = block.table_name();
let records: Vec<Record> = block.deserialize().try_collect()?;
println!(
"** table: {}, got {} records: {:#?}\n",
name.unwrap(),
records.len(),
records
);
println!("** read {} records: {:#?}\n", records.len(), records);
}
}
consumer.commit(offset).await?;
}
}
Ok(())
})
.await?;
consumer.unsubscribe().await;

View File

@ -5,7 +5,6 @@ async fn main() -> anyhow::Result<()> {
let dsn = "ws://";
let taos = TaosBuilder::from_dsn(dsn)?.build()?;
taos.exec_many([
"DROP DATABASE IF EXISTS power",
"CREATE DATABASE power",

View File

@ -69,7 +69,7 @@ The main features of TDengine are as follows:
- **[Analytics](https://www.taosdata.com/tdengine/easy_data_analytics)**: Through super tables, separation of storage and compute, partitioning and sharding, precomputation, and other technologies, TDengine can efficiently browse, format, and access data.
- **[Open-Source Core](https://www.taosdata.com/tdengine/open_source_time-series_database)**: TDengine's core code, including cluster functionality, is fully available under an open-source license. With more than 140k running instances worldwide and 19k GitHub stars, it has an active developer community.
- **[Open-Source Core](https://www.taosdata.com/tdengine/open_source_time-series_database)**: TDengine's core code, including cluster functionality, is fully available under an open-source license. With more than 140k running instances worldwide and 20k GitHub stars, it has an active developer community.
Adopting TDengine can greatly reduce the total cost of ownership of typical IoT, connected-vehicle, and industrial-internet big data platforms, in several respects:

(binary image file added: 12 KiB)

View File

@ -3,6 +3,10 @@ title: Get Started
description: 'Quickly set up a TDengine environment and experience its efficient writing and querying'
---
import xiaot from './xiaot.webp'
import channel from './channel.webp'
import official_account from './official-account.webp'
The complete TDengine software package includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the application driver (taosc), the command-line program (CLI, taos), and several tools. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](../connector/rest-api) through [taosAdapter](../reference/taosadapter).
This chapter describes how to quickly set up a TDengine environment using Docker or an installation package and experience its efficient writing and querying.
@ -12,4 +16,32 @@ import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
```
## Learn the TDengine Knowledge Map
The TDengine knowledge map covers TDengine's various concepts and reveals the invocation relationships and data flows between them. Studying the knowledge map helps you quickly grasp the TDengine knowledge system.
<figure>
<center>
<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
<figcaption>Figure 1. TDengine knowledge map</figcaption>
</center>
</figure>
## Join the TDengine Official Community
Scan the QR codes below with WeChat to learn about the latest TDengine technologies and to discuss IoT big data applications, TDengine usage questions, tips, and more with the community.
<table width="100%">
<tr align="center">
<td style={{padding:'1em 3em',border:0}}><img src={xiaot} alt="QR code of TDengine assistant Xiao T" width="200" /></td>
<td style={{padding:'1em 3em',border:0}}><img src={channel} alt="TDengine WeChat video channel" width="200" /></td>
<td style={{padding:'1em 3em',border:0}}><img src={official_account} alt="TDengine WeChat official account" width="200" /></td>
</tr>
<tr align="center">
<td style={{padding:'1em 3em',border:0}}>Join the "IoT Big Data Technology Frontier Group"<br/>to exchange technical ideas</td>
<td style={{padding:'1em 3em',border:0}}>Follow the TDengine WeChat video channel<br/>to watch live streams and tutorial videos</td>
<td style={{padding:'1em 3em',border:0}}>Follow the TDengine WeChat official account<br/>to read articles on core technologies and industry cases</td>
</tr>
</table>

(binary image file added: 11 KiB)
(binary image file added: 12 KiB)

View File

@ -0,0 +1,47 @@
---
title: Write from Kafka
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PyKafka from "./_py_kafka.mdx";
## Introduction to Kafka
Apache Kafka is an open-source distributed event streaming platform, widely used for high-performance data pipelines, streaming analytics, data integration, and event-driven applications. Kafka involves Producers, Consumers, and Topics: a Producer is a process that sends messages to Kafka, and a Consumer is a process that consumes messages from Kafka. For Kafka concepts, see the [official documentation](https://kafka.apache.org/documentation/#gettingStarted).
### Kafka topic
Kafka messages are organized by topic, and each topic has one or more partitions. Topics can be managed with Kafka's `kafka-topics` tool.
Create a topic named `kafka-events`:
```
bin/kafka-topics.sh --create --topic kafka-events --bootstrap-server localhost:9092
```
Change the number of partitions of `kafka-events` to 3:
```
bin/kafka-topics.sh --alter --topic kafka-events --partitions 3 --bootstrap-server=localhost:9092
```
List all topics and partitions:
```
bin/kafka-topics.sh --bootstrap-server=localhost:9092 --describe
```
## Writing to TDengine
TDengine supports both SQL and schemaless data writing. For SQL writing, see [TDengine SQL Writing](/develop/insert-data/sql-writing/) and [TDengine High-Volume Writing](/develop/insert-data/high-volume/). For schemaless writing, see the [TDengine Schemaless Writing](/reference/schemaless/) documentation.
## Sample Code
<Tabs defaultValue="Python" groupId="lang">
<TabItem label="Python" value="Python">
<PyKafka />
</TabItem>
</Tabs>

View File

@ -38,9 +38,8 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0
- Each data item in field_set must declare its own data type; for example, 1.2f32 represents the FLOAT value 1.2, and values without a type suffix are treated as DOUBLE;
- timestamp supports multiple time precisions. The precision must be specified via a parameter when writing data; six precisions, from hours to nanoseconds, are supported.
- To improve write efficiency, the field_set order is by default assumed to be the same within a super table (the first row contains all fields, and subsequent rows follow that order). If the order differs, set the smlDataFormat parameter to false; otherwise, data is written in the assumed order and the stored data will be incorrect. Since version 3.0.1.3, smlDataFormat defaults to false. [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)
- By default, the generated subtable name is a unique ID derived from rules. To let users specify the generated table name, configure the smlChildTableName parameter in taos.cfg.
For example: with smlChildTableName=tname configured, inserting st,tname=cpu1,t1=4 c1=3 1626006833639000000 creates a table named cpu1. Note that if multiple rows share the same tname but different tag_set values, the tag_set of the first row that triggered automatic table creation is used, and the other rows are ignored. [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)
:::
- By default, the generated subtable name is a unique ID derived from rules. Users can also specify a tag value as the subtable name by configuring the smlChildTableName parameter in taos.cfg; that tag value should be globally unique. For example: suppose there is a tag named tname; with smlChildTableName=tname configured, inserting st,tname=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1. Note that if multiple rows share the same tname but different tag_set values, the tag_set of the first row that triggered automatic table creation is used, and the other rows are ignored. [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)
:::
For more information, see the [official InfluxDB Line Protocol documentation](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and the [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)

View File

@ -32,8 +32,7 @@ OpenTSDB 行协议同样采用一行字符串来表示一行数据。OpenTSDB
meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
```
- 默认生产的子表名是根据规则生成的唯一 ID 值。为了让用户可以指定生成的表名,可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定。
举例如下:配置 smlChildTableName=tname 插入数据为 meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3 则创建的表名为 cpu1注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set其他的行会忽略
- 默认生成的子表名是根据规则生成的唯一 ID 值。用户也可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定某个标签值作为子表名。该标签值应该具有全局唯一性。举例如下:假设有个标签名为 tname配置 smlChildTableName=tname插入数据为 `meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3`,则创建的子表名为 cpu1。注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set其他的行会忽略
参考 [OpenTSDB Telnet API 文档](http://opentsdb.net/docs/build/html/api_telnet/put.html)。
## 示例代码

View File

@ -47,10 +47,8 @@ OpenTSDB JSON 格式协议采用一个 JSON 字符串表示一行或多行数据
:::note
- 对于 JSON 格式协议TDengine 并不会自动把所有标签转成 NCHAR 类型:字符串将转为 NCHAR 类型,数值将转换为 DOUBLE 类型。
- TDengine 只接收 JSON **数组格式**的字符串,即使一行数据也需要转换成数组形式。
- 默认生产的子表名是根据规则生成的唯一 ID 值。为了让用户可以指定生成的表名,可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定。
举例如下:配置 smlChildTableName=tname 插入数据为 `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}` 则创建的表名为 cpu1注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set其他的行会忽略
:::
- 默认生成的子表名是根据规则生成的唯一 ID 值。用户也可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定某个标签值作为子表名。该标签值应该具有全局唯一性。举例如下:假设有个标签名为 tname配置 smlChildTableName=tname插入数据为 `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}`,则创建的子表名为 cpu1。注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set其他的行会忽略
:::
## 示例代码

View File

@ -0,0 +1,60 @@
### Python Kafka 客户端
Kafka 的 Python 客户端可以参考文档 [kafka client](https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Python)。推荐使用 [confluent-kafka-python](https://github.com/confluentinc/confluent-kafka-python) 和 [kafka-python](http://github.com/dpkp/kafka-python)。以下示例以 [kafka-python](http://github.com/dpkp/kafka-python) 为例。
### 从 Kafka 消费数据
Kafka 客户端采用 pull 的方式从 Kafka 消费数据,可以采用单条消费的方式或批量消费的方式读取数据。使用 [kafka-python](http://github.com/dpkp/kafka-python) 客户端单条消费数据的示例如下:
```py
from kafka import KafkaConsumer
consumer = KafkaConsumer('my_favorite_topic')
for msg in consumer:
print (msg)
```
单条消费的方式在数据流量大的情况下往往存在性能瓶颈,导致 Kafka 消息积压,更推荐使用批量消费的方式消费数据。使用 [kafka-python](http://github.com/dpkp/kafka-python) 客户端批量消费数据的示例如下:
```py
from kafka import KafkaConsumer
consumer = KafkaConsumer('my_favorite_topic')
while True:
msgs = consumer.poll(timeout_ms=500, max_records=1000)
if msgs:
print (msgs)
```
### Python 多线程
为了提高数据写入效率,通常采用多线程的方式写入数据,可以使用 Python 线程池 ThreadPoolExecutor 实现多线程。骨架代码如下(完整的消费加写入流程见其后的示意代码):
```py
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=10)
pool.submit(write_batch, records)  # write_batch、records 为示意的用户自定义写入函数与消息批次
```
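下面给出一段将批量消费与线程池结合的示意代码(`write_batch` 为假设的用户自定义写入函数,并非 kafka-python 或 TDengine 提供的接口):
```py
from concurrent.futures import ThreadPoolExecutor
from kafka import KafkaConsumer

def write_batch(records):
    # 示意:此处应将一批消息写入 TDengine例如拼接 INSERT 语句后执行
    print(f"writing {len(records)} records")

consumer = KafkaConsumer('kafka-events')
pool = ThreadPoolExecutor(max_workers=10)

while True:
    msgs = consumer.poll(timeout_ms=500, max_records=1000)
    # poll 返回 {TopicPartition: [ConsumerRecord, ...]},按分区提交到线程池并行写入
    for records in msgs.values():
        pool.submit(write_batch, records)
```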
### Python 多进程
单个 Python 进程不能充分发挥多核 CPU 的性能,有时我们会选择多进程的方式。在多进程的情况下需要注意Kafka Consumer 的数量应该小于等于 Kafka Topic 的 Partition 数量。Python 多进程示例代码如下:
```py
from multiprocessing import Process

ps = []
for i in range(5):
    # 注意:传入方法对象本身;原写法 Consumer().consume() 会在主进程中立即执行
    p = Process(target=Consumer().consume)
    p.start()
    ps.append(p)
for p in ps:
    p.join()
```
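上面示例中的 `Consumer` 类在代码中并未定义,属于示意。一个最小化的实现可能如下(`group_id` 等取值为假设):
```py
from kafka import KafkaConsumer

class Consumer:
    def consume(self):
        # 同一消费组内的消费者会分摊 topic 的各个 partition
        # 因此进程数不应超过 partition 数
        consumer = KafkaConsumer('kafka-events', group_id='tdengine-writers')
        for msg in consumer:
            print(msg)  # 示意:此处应将消息写入 TDengine
```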
除了 Python 内置的多线程和多进程方式,还可以通过第三方库 gunicorn 实现并发。
### 完整示例
```py
{{#include docs/examples/python/kafka_example.py}}
```

View File

@ -205,13 +205,13 @@ typedef struct SUdfInterBuf {
用户定义函数的 C 语言源代码无法直接被 TDengine 系统使用,而是需要先编译为动态链接库,之后才能载入 TDengine 系统。
例如,按照上一章节描述的规则准备好了用户定义函数的源代码 add_one.c以 Linux 为例可以执行如下指令编译得到动态链接库文件:
例如,按照上一章节描述的规则准备好了用户定义函数的源代码 bit_and.c以 Linux 为例可以执行如下指令编译得到动态链接库文件:
```bash
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so
```
这样就准备好了动态链接库 add_one.so 文件,可以供后文创建 UDF 时使用了。为了保证可靠的系统运行,编译器 GCC 推荐使用 7.5 及以上版本。
这样就准备好了动态链接库 libbitand.so 文件,可以供后文创建 UDF 时使用了。为了保证可靠的系统运行,编译器 GCC 推荐使用 7.5 及以上版本。
## 管理和使用 UDF
编译好的 UDF 还需要将其加入到系统,才能被 SQL 正常调用。关于如何管理和使用 UDF参见 [UDF 使用说明](../12-taos-sql/26-udf.md)

View File

@ -19,6 +19,7 @@ TDengine 提供了兼容 InfluxDB (v1) 和 OpenTSDB 行协议的 Schemaless API
- `precision` TDengine 使用的时间精度
- `u` TDengine 用户名
- `p` TDengine 密码
- `ttl` 自动创建的子表生命周期,以子表的第一条数据的 TTL 参数为准,不可更新。更多信息请参考[创建表文档](taos-sql/table/#创建表)的 TTL 参数
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
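下面是一个通过该接口写入并指定 `ttl` 的示意 Python 片段(主机、库名、ttl 取值均为假设ttl 的具体含义与单位请参考上述建表文档):
```py
import requests

# 示意:以 InfluxDB 行协议写入,并通过查询参数 ttl 指定自动建表的子表生命周期
line = "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
resp = requests.post("http://127.0.0.1:6041/influxdb/v1/write",
                     params={"db": "test", "ttl": 72},
                     auth=("root", "taosdata"),
                     data=line.encode("utf-8"))
resp.raise_for_status()
```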

View File

@ -73,7 +73,95 @@ TDengine 客户端驱动的安装请参考 [安装指南](../#安装步骤)
```c
{{#include examples/c/demo.c}}
```
格式化输出不同类型字段的函数 taos_print_row 实现如下:
```c
int taos_print_row(char *str, TAOS_ROW row, TAOS_FIELD *fields, int num_fields) {
  int32_t len = 0;
  for (int i = 0; i < num_fields; ++i) {
    if (i > 0) {
      str[len++] = ' ';
    }

    if (row[i] == NULL) {
      len += sprintf(str + len, "%s", TSDB_DATA_NULL_STR);
      continue;
    }

    switch (fields[i].type) {
      case TSDB_DATA_TYPE_TINYINT:
        len += sprintf(str + len, "%d", *((int8_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_UTINYINT:
        len += sprintf(str + len, "%u", *((uint8_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_SMALLINT:
        len += sprintf(str + len, "%d", *((int16_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_USMALLINT:
        len += sprintf(str + len, "%u", *((uint16_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_INT:
        len += sprintf(str + len, "%d", *((int32_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_UINT:
        len += sprintf(str + len, "%u", *((uint32_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_BIGINT:
        len += sprintf(str + len, "%" PRId64, *((int64_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_UBIGINT:
        len += sprintf(str + len, "%" PRIu64, *((uint64_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_FLOAT: {
        float fv = 0;
        fv = GET_FLOAT_VAL(row[i]);
        len += sprintf(str + len, "%f", fv);
      } break;
      case TSDB_DATA_TYPE_DOUBLE: {
        double dv = 0;
        dv = GET_DOUBLE_VAL(row[i]);
        len += sprintf(str + len, "%lf", dv);
      } break;
      case TSDB_DATA_TYPE_BINARY:
      case TSDB_DATA_TYPE_NCHAR: {
        int32_t charLen = varDataLen((char *)row[i] - VARSTR_HEADER_SIZE);
        if (fields[i].type == TSDB_DATA_TYPE_BINARY) {
          assert(charLen <= fields[i].bytes && charLen >= 0);
        } else {
          assert(charLen <= fields[i].bytes * TSDB_NCHAR_SIZE && charLen >= 0);
        }
        memcpy(str + len, row[i], charLen);
        len += charLen;
      } break;
      case TSDB_DATA_TYPE_TIMESTAMP:
        len += sprintf(str + len, "%" PRId64, *((int64_t *)row[i]));
        break;
      case TSDB_DATA_TYPE_BOOL:
        len += sprintf(str + len, "%d", *((int8_t *)row[i]));
        break; /* 补上缺失的 break避免落入 default */
      default:
        break;
    }
  }

  str[len] = 0;
  return len;
}
```
</details>
### 异步查询示例
@ -115,6 +203,7 @@ TDengine 客户端驱动的安装请参考 [安装指南](../#安装步骤)
<summary>订阅和消费</summary>
```c
{{#include examples/c/tmq.c}}
```
</details>

View File

@ -68,39 +68,38 @@ TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对
### 安装连接器
<Tabs defaultValue="maven">
<TabItem value="maven" label="使用 Maven 安装">
<TabItem value="maven" label="使用 Maven 安装">
目前 taos-jdbcdriver 已经发布到 [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
仓库,且各大仓库都已同步。
目前 taos-jdbcdriver 已经发布到 [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver) 仓库,且各大仓库都已同步。
- [sonatype](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
- [mvnrepository](https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver)
- [maven.aliyun](https://maven.aliyun.com/mvn/search)
- [sonatype](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
- [mvnrepository](https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver)
- [maven.aliyun](https://maven.aliyun.com/mvn/search)
Maven 项目中,在 pom.xml 中添加以下依赖:
Maven 项目中,在 pom.xml 中添加以下依赖:
```xml-dtd
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.0.0</version>
</dependency>
```
```xml-dtd
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.0.0</version>
</dependency>
```
</TabItem>
<TabItem value="source" label="使用源码编译安装">
</TabItem>
<TabItem value="source" label="使用源码编译安装">
可以通过下载 TDengine 的源码,自己编译最新版本的 Java connector
可以通过下载 TDengine 的源码,自己编译最新版本的 Java connector
```shell
git clone https://github.com/taosdata/taos-connector-jdbc.git
cd taos-connector-jdbc
mvn clean install -Dmaven.test.skip=true
```
```shell
git clone https://github.com/taosdata/taos-connector-jdbc.git
cd taos-connector-jdbc
mvn clean install -Dmaven.test.skip=true
```
编译后,在 target 目录下会产生 taos-jdbcdriver-3.0.*-dist.jar 的 jar 包,并自动将编译的 jar 文件放在本地的 Maven 仓库中。
编译后,在 target 目录下会产生 taos-jdbcdriver-3.0.*-dist.jar 的 jar 包,并自动将编译的 jar 文件放在本地的 Maven 仓库中。
</TabItem>
</TabItem>
</Tabs>
## 建立连接
@ -111,125 +110,117 @@ TDengine 的 JDBC URL 规范格式为:
对于建立连接,原生连接与 REST 连接有细微不同。
<Tabs defaultValue="rest">
<TabItem value="native" label="原生连接">
<TabItem value="native" label="原生连接">
```java
Class.forName("com.taosdata.jdbc.TSDBDriver");
String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
Connection conn = DriverManager.getConnection(jdbcUrl);
```
```java
Class.forName("com.taosdata.jdbc.TSDBDriver");
String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
Connection conn = DriverManager.getConnection(jdbcUrl);
```
以上示例,使用了 JDBC 原生连接的 TSDBDriver建立了到 hostname 为 taosdemo.com端口为 6030TDengine 的默认端口),数据库名为 test 的连接。这个 URL
中指定用户名user为 root密码password为 taosdata。
以上示例,使用了 JDBC 原生连接的 TSDBDriver建立了到 hostname 为 taosdemo.com端口为 6030TDengine 的默认端口),数据库名为 test 的连接。这个 URL
中指定用户名user为 root密码password为 taosdata。
**注意**:使用 JDBC 原生连接taos-jdbcdriver 需要依赖客户端驱动Linux 下是 libtaos.soWindows 下是 taos.dllmacOS 下是 libtaos.dylib
**注意**:使用 JDBC 原生连接taos-jdbcdriver 需要依赖客户端驱动Linux 下是 libtaos.soWindows 下是 taos.dllmacOS 下是 libtaos.dylib
url 中的配置参数如下:
url 中的配置参数如下:
- user登录 TDengine 用户名,默认值 'root'。
- password用户登录密码默认值 'taosdata'。
- cfgdir客户端配置文件目录路径Linux OS 上默认值 `/etc/taos`Windows OS 上默认值 `C:/TDengine/cfg`。
- charset客户端使用的字符集默认值为系统字符集。
- locale客户端语言环境默认值系统当前 locale。
- timezone客户端使用的时区默认值为系统当前时区。
- batchfetch: true在执行查询时批量拉取结果集false逐行拉取结果集。默认值为true。开启批量拉取同时获取一批数据在查询数据量较大时批量拉取可以有效的提升查询性能。
- batchErrorIgnoretrue在执行 Statement 的 executeBatch 时,如果中间有一条 SQL 执行失败将继续执行下面的 SQL。false不再执行失败 SQL
后的任何语句。默认值为false。
- user登录 TDengine 用户名,默认值 'root'。
- password用户登录密码默认值 'taosdata'。
- cfgdir客户端配置文件目录路径Linux OS 上默认值 `/etc/taos`Windows OS 上默认值 `C:/TDengine/cfg`。
- charset客户端使用的字符集默认值为系统字符集。
- locale客户端语言环境默认值系统当前 locale。
- timezone客户端使用的时区默认值为系统当前时区。
- batchfetch: true在执行查询时批量拉取结果集false逐行拉取结果集。默认值为true。开启批量拉取同时获取一批数据在查询数据量较大时批量拉取可以有效的提升查询性能。
- batchErrorIgnoretrue在执行 Statement 的 executeBatch 时,如果中间有一条 SQL 执行失败将继续执行下面的 SQL。false不再执行失败 SQL 后的任何语句。默认值为false。
JDBC 原生连接的使用请参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1955.html)。
JDBC 原生连接的使用请参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1955.html)。
**使用 TDengine 客户端驱动配置文件建立连接 **
**使用 TDengine 客户端驱动配置文件建立连接 **
当使用 JDBC 原生连接连接 TDengine 集群时,可以使用 TDengine 客户端驱动配置文件,在配置文件中指定集群的 firstEp、secondEp 等参数。如下所示:
当使用 JDBC 原生连接连接 TDengine 集群时,可以使用 TDengine 客户端驱动配置文件,在配置文件中指定集群的 firstEp、secondEp 等参数。如下所示:
1. 在 Java 应用中不指定 hostname 和 port
1. 在 Java 应用中不指定 hostname 和 port
```java
public Connection getConn() throws Exception{
Class.forName("com.taosdata.jdbc.TSDBDriver");
String jdbcUrl = "jdbc:TAOS://:/test?user=root&password=taosdata";
Properties connProps = new Properties();
connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
return conn;
}
```
```java
public Connection getConn() throws Exception{
Class.forName("com.taosdata.jdbc.TSDBDriver");
String jdbcUrl = "jdbc:TAOS://:/test?user=root&password=taosdata";
Properties connProps = new Properties();
connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
return conn;
}
```
2. 在配置文件中指定 firstEp 和 secondEp
2. 在配置文件中指定 firstEp 和 secondEp
```shell
# first fully qualified domain name (FQDN) for TDengine system
firstEp cluster_node1:6030
```shell
# first fully qualified domain name (FQDN) for TDengine system
firstEp cluster_node1:6030
# second fully qualified domain name (FQDN) for TDengine system, for cluster only
secondEp cluster_node2:6030
# second fully qualified domain name (FQDN) for TDengine system, for cluster only
secondEp cluster_node2:6030
# default system charset
# charset UTF-8
# default system charset
# charset UTF-8
# system locale
# locale en_US.UTF-8
```
# system locale
# locale en_US.UTF-8
```
以上示例jdbc 会使用客户端的配置文件,建立到 hostname 为 cluster_node1、端口为 6030、数据库名为 test 的连接。当集群中 firstEp 节点失效时JDBC 会尝试使用 secondEp
连接集群。
以上示例jdbc 会使用客户端的配置文件,建立到 hostname 为 cluster_node1、端口为 6030、数据库名为 test 的连接。当集群中 firstEp 节点失效时JDBC 会尝试使用 secondEp 连接集群。
TDengine 中,只要保证 firstEp 和 secondEp 中一个节点有效,就可以正常建立到集群的连接。
TDengine 中,只要保证 firstEp 和 secondEp 中一个节点有效,就可以正常建立到集群的连接。
> **注意**:这里的配置文件指的是调用 JDBC Connector 的应用程序所在机器上的配置文件Linux OS 上默认值 /etc/taos/taos.cfg Windows OS 上默认值
C://TDengine/cfg/taos.cfg。
> **注意**:这里的配置文件指的是调用 JDBC Connector 的应用程序所在机器上的配置文件Linux OS 上默认值 /etc/taos/taos.cfg Windows OS 上默认值 C://TDengine/cfg/taos.cfg。
</TabItem>
<TabItem value="rest" label="REST 连接">
</TabItem>
<TabItem value="rest" label="REST 连接">
```java
Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
Connection conn = DriverManager.getConnection(jdbcUrl);
```
```java
Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
Connection conn = DriverManager.getConnection(jdbcUrl);
```
以上示例,使用了 JDBC REST 连接的 RestfulDriver建立了到 hostname 为 taosdemo.com端口为 6041数据库名为 test 的连接。这个 URL 中指定用户名user
root密码password为 taosdata。
以上示例,使用了 JDBC REST 连接的 RestfulDriver建立了到 hostname 为 taosdemo.com端口为 6041数据库名为 test 的连接。这个 URL 中指定用户名user为 root密码password为 taosdata。
使用 JDBC REST 连接,不需要依赖客户端驱动。与 JDBC 原生连接相比,仅需要:
使用 JDBC REST 连接,不需要依赖客户端驱动。与 JDBC 原生连接相比,仅需要:
1. driverClass 指定为“com.taosdata.jdbc.rs.RestfulDriver”
2. jdbcUrl 以“jdbc:TAOS-RS://”开头;
3. 使用 6041 作为连接端口。
1. driverClass 指定为“com.taosdata.jdbc.rs.RestfulDriver”
2. jdbcUrl 以“jdbc:TAOS-RS://”开头;
3. 使用 6041 作为连接端口。
url 中的配置参数如下:
url 中的配置参数如下:
- user登录 TDengine 用户名,默认值 'root'。
- password用户登录密码默认值 'taosdata'。
- batchfetch: true在执行查询时批量拉取结果集false逐行拉取结果集。默认值为false。逐行拉取结果集使用 HTTP 方式进行数据传输。JDBC REST
连接支持批量拉取数据功能。taos-jdbcdriver 与 TDengine 之间通过 WebSocket 连接进行数据传输。相较于 HTTPWebSocket 可以使 JDBC REST 连接支持大数据量查询,并提升查询性能。
- charset: 当开启批量拉取数据时,指定解析字符串数据的字符集。
- batchErrorIgnoretrue在执行 Statement 的 executeBatch 时,如果中间有一条 SQL 执行失败,继续执行下面的 SQL 了。false不再执行失败 SQL
后的任何语句。默认值为false。
- httpConnectTimeout: 连接超时时间,单位 ms 默认值为 5000。
- httpSocketTimeout: socket 超时时间,单位 ms默认值为 5000。仅在 batchfetch 设置为 false 时生效。
- messageWaitTimeout: 消息超时时间, 单位 ms 默认值为 3000。 仅在 batchfetch 设置为 true 时生效。
- useSSL: 连接中是否使用 SSL。
- user登录 TDengine 用户名,默认值 'root'。
- password用户登录密码默认值 'taosdata'。
- batchfetch: true在执行查询时批量拉取结果集false逐行拉取结果集。默认值为false。逐行拉取结果集使用 HTTP 方式进行数据传输。JDBC REST 连接支持批量拉取数据功能。taos-jdbcdriver 与 TDengine 之间通过 WebSocket 连接进行数据传输。相较于 HTTPWebSocket 可以使 JDBC REST 连接支持大数据量查询,并提升查询性能。
- charset: 当开启批量拉取数据时,指定解析字符串数据的字符集。
- batchErrorIgnoretrue在执行 Statement 的 executeBatch 时,如果中间有一条 SQL 执行失败,继续执行下面的 SQL 了。false不再执行失败 SQL 后的任何语句。默认值为false。
- httpConnectTimeout: 连接超时时间,单位 ms 默认值为 5000。
- httpSocketTimeout: socket 超时时间,单位 ms默认值为 5000。仅在 batchfetch 设置为 false 时生效。
- messageWaitTimeout: 消息超时时间, 单位 ms 默认值为 3000。 仅在 batchfetch 设置为 true 时生效。
- useSSL: 连接中是否使用 SSL。
**注意**部分配置项比如locale、timezone在 REST 连接中不生效。
**注意**部分配置项比如locale、timezone在 REST 连接中不生效。
:::note
:::note
- 与原生连接方式不同REST 接口是无状态的。在使用 JDBC REST 连接时,需要在 SQL 中指定表、超级表的数据库名称。例如:
- 与原生连接方式不同REST 接口是无状态的。在使用 JDBC REST 连接时,需要在 SQL 中指定表、超级表的数据库名称。例如:
```sql
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
```
```sql
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
```
- 如果在 url 中指定了 dbname那么JDBC REST 连接会默认使用/rest/sql/dbname 作为 restful 请求的 url在 SQL 中不需要指定 dbname。例如url 为
jdbc:TAOS-RS://127.0.0.1:6041/test那么可以执行 sqlinsert into t1 using weather(ts, temperature)
tags('California.SanFrancisco') values(now, 24.6);
- 如果在 url 中指定了 dbname那么JDBC REST 连接会默认使用/rest/sql/dbname 作为 restful 请求的 url在 SQL 中不需要指定 dbname。例如url 为 jdbc:TAOS-RS://127.0.0.1:6041/test那么可以执行 sqlinsert into t1 using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);
:::
:::
</TabItem>
</TabItem>
</Tabs>
### 指定 URL 和 Properties 获取连接
@ -890,8 +881,10 @@ public static void main(String[] args) throws Exception {
| taos-jdbcdriver 版本 | 主要变化 |
| :------------------: | :----------------------------: |
| 3.0.3 | 修复 REST 连接在 jdk17+ 版本时间戳解析错误问题 |
| 3.0.1 - 3.0.2 | 修复一些情况下结果集数据解析错误的问题。3.0.1 在 JDK 11 环境编译JDK 8 环境下建议使用 3.0.2 版本 |
| 3.0.0 | 支持 TDengine 3.0 |
| 2.0.42 | 修正 WebSocket 连接中 wasNull 接口返回值 |
| 2.0.41 | 修正 REST 连接中用户名和密码转码方式 |
| 2.0.39 - 2.0.40 | 增加 REST 连接/请求 超时设置 |
| 2.0.38 | JDBC REST 连接增加批量拉取功能 |
@ -928,7 +921,7 @@ public static void main(String[] args) throws Exception {
**原因**taos-jdbcdriver 3.0.1 版本需要在 JDK 11+ 环境使用。
**解决方法** 更换 taos-jdbcdriver 3.0.2 版本。
**解决方法** 更换 taos-jdbcdriver 3.0.2+ 版本。
其它问题请参考 [FAQ](../../../train-faq/faq)

View File

@ -26,14 +26,13 @@ TDengine 提供了丰富的应用程序开发接口,为了便于用户快速
TDengine 版本更新往往会增加新的功能特性,列表中的连接器版本为连接器最佳适配版本。
| **TDengine 版本** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** |
| --------------------- | -------- | ---------- | ------------ | ------------- | --------------- | -------- |
| **3.0.0.0 及以上** | 3.0.0 | 当前版本 | 3.0 分支 | 3.0.0 | 3.0.0 | 当前版本 |
| **2.4.0.14 及以上** | 2.0.38 | 当前版本 | develop 分支 | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | 当前版本 |
| **2.4.0.6 及以上** | 2.0.37 | 当前版本 | develop 分支 | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | 当前版本 |
| **2.4.0.4 - 2.4.0.5** | 2.0.37 | 当前版本 | develop 分支 | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | 当前版本 |
| **2.2.x.x ** | 2.0.36 | 当前版本 | master 分支 | n/a | 2.0.7 - 2.0.9 | 当前版本 |
| **2.0.x.x ** | 2.0.34 | 当前版本 | master 分支 | n/a | 2.0.1 - 2.0.6 | 当前版本 |
| **TDengine 版本** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** |
| ---------------------- | --------- | ---------- | ------------ | ------------- | --------------- | -------- |
| **3.0.0.0 及以上** | 3.0.2以上 | 当前版本 | 3.0 分支 | 3.0.0 | 3.0.0 | 当前版本 |
| **2.4.0.14 及以上** | 2.0.38 | 当前版本 | develop 分支 | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | 当前版本 |
| **2.4.0.4 - 2.4.0.13** | 2.0.37 | 当前版本 | develop 分支 | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | 当前版本 |
| **2.2.x.x ** | 2.0.36 | 当前版本 | master 分支 | n/a | 2.0.7 - 2.0.9 | 当前版本 |
| **2.0.x.x ** | 2.0.34 | 当前版本 | master 分支 | n/a | 2.0.1 - 2.0.6 | 当前版本 |
## 功能特性
@ -60,10 +59,10 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器
| ------------------------------ | -------- | ---------- | -------- | -------- | ----------- | -------- |
| **连接管理** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **普通查询** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **参数绑定** | 暂不支持 | 暂不支持 | 暂不支持 | 支持 | 暂不支持 | 支持 |
| **数据订阅TMQ** | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 支持 |
| **参数绑定** | 暂不支持 | 暂不支持 | 支持 | 支持 | 暂不支持 | 支持 |
| **数据订阅TMQ** | 暂不支持 | 暂不支持 | 支持 | 暂不支持 | 暂不支持 | 支持 |
| **Schemaless** | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 |
| **批量拉取(基于 WebSocket** | 支持 | 支持 | 暂不支持 | 支持 | 暂不支持 | 支持 |
| **批量拉取(基于 WebSocket** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **DataFrame** | 不支持 | 支持 | 不支持 | 不支持 | 不支持 | 不支持 |
:::warning

View File

@ -190,3 +190,16 @@ DROP DNODE dnodeId;
dnodeID 是集群自动分配的,不得人工指定。它在生成时是递增的,不会重复。
:::
## 常见问题
1. 建立集群时使用 CREATE DNODE 增加新节点后,新节点始终显示 offline 状态?

   1首先要检查增加的新节点上的 taosd 服务是否已经正常启动;
   2如果已经启动再检查到新节点的网络是否通畅可以使用 ping fqdn 验证;
   3如果前面两步都没有问题则要检查新节点是否已作为独立集群在运行可以使用 taos -h fqdn 连接上后,用 show dnodes; 命令查看。如果显示的列表与你主节点上显示的不一致,说明此节点自己单独成立了一个集群,解决的方法是停止新节点上的服务,然后清空新节点上 taos.cfg 中配置的 dataDir 目录下的所有文件,重新启动新节点服务即可。

View File

@ -45,6 +45,7 @@ CREATE DATABASE db_name PRECISION 'ns';
:::note
- 表的每行长度不能超过 48KB注意每个 BINARY/NCHAR 类型的列还会额外占用 2 个字节的存储位置)。
- 虽然 BINARY 类型在底层存储上支持字节型的二进制字符,但不同编程语言对二进制数据的处理方式并不保证一致,因此建议在 BINARY 类型中只存储 ASCII 可见字符,而避免存储不可见字符。多字节的数据,例如中文字符,则需要使用 NCHAR 类型进行保存。如果强行使用 BINARY 类型保存中文字符,虽然有时也能正常读写,但并不带有字符集信息,很容易出现数据乱码甚至数据损坏等情况。
- BINARY 类型理论上最长可以有 16,374 字节。BINARY 仅支持字符串输入,字符串两端需使用单引号引用。使用时须指定大小,如 BINARY(20) 定义了最长为 20 个单字节字符的字符串,每个字符占 1 字节的存储空间,总共固定占用 20 字节的空间,此时如果用户字符串超出 20 字节将会报错。对于字符串内的单引号,可以用转义字符反斜线加单引号来表示,即 `\'`
- SQL 语句中的数值类型将依据是否存在小数点或使用科学计数法表示来判断数值类型是否为整型或者浮点型因此在使用时要注意相应类型越界的情况。例如9999999999999999999 会认为超过长整型的上边界而溢出,而 9999999999999999999.0 会被认为是有效的浮点数。

View File

@ -27,10 +27,11 @@ database_option: {
| PRECISION {'ms' | 'us' | 'ns'}
| REPLICA value
| RETENTIONS ingestion_duration:keep_duration ...
| STRICT {'off' | 'on'}
| WAL_LEVEL {1 | 2}
| VGROUPS value
| SINGLE_STABLE {0 | 1}
| TABLE_PREFIX value
| TABLE_SUFFIX value
| WAL_RETENTION_PERIOD value
| WAL_ROLL_PERIOD value
| WAL_RETENTION_SIZE value
@ -61,9 +62,6 @@ database_option: {
- PRECISION数据库的时间戳精度。ms 表示毫秒us 表示微秒ns 表示纳秒,默认 ms 毫秒。
- REPLICA表示数据库副本数取值为 1 或 3默认为 1。在集群中使用副本数必须小于或等于 DNODE 的数目。
- RETENTIONS表示数据的聚合周期和保存时长如 RETENTIONS 15s:7d,1m:21d,15m:50d 表示数据原始采集周期为 15 秒,原始数据保存 7 天;按 1 分钟聚合的数据保存 21 天;按 15 分钟聚合的数据保存 50 天。目前支持且只支持三级存储周期。
- STRICT表示数据同步的一致性要求默认为 off。
- on 表示强一致,即运行标准的 raft 协议,半数提交返回成功。
- off 表示弱一致,本地提交即返回成功。
- WAL_LEVELWAL 级别,默认为 1。
- 1写 WAL但不执行 fsync。
- 2写 WAL而且执行 fsync。
@ -71,6 +69,8 @@ database_option: {
- SINGLE_STABLE表示此数据库中是否只可以创建一个超级表用于超级表列非常多的情况。
- 0表示可以创建多张超级表。
- 1表示只可以创建一张超级表。
- TABLE_PREFIX内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的前缀的长度。
- TABLE_SUFFIX内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的后缀的长度。
- WAL_RETENTION_PERIODwal 文件的额外保留策略用于数据订阅。wal 的保存时长,单位为 s。单副本默认为 0即落盘后立即删除。-1 表示不删除。多副本默认为 4 天。
- WAL_RETENTION_SIZEwal 文件的额外保留策略用于数据订阅。wal 的保存的最大上限,单位为 KB。单副本默认为 0即落盘后立即删除。多副本默认为-1表示不删除。
- WAL_ROLL_PERIODwal 文件切换时长,单位为 s。当 wal 文件创建并写入后,经过该时间,会自动创建一个新的 wal 文件。单副本默认为 0即仅在落盘时创建新文件。多副本默认为 1 天。
@ -142,10 +142,10 @@ SHOW CREATE DATABASE db_name;
### 查看数据库参数
```sql
SHOW DATABASES \G;
SELECT * FROM INFORMATION_SCHEMA.INS_DATABASES WHERE NAME='DBNAME' \G;
```
会列出系统中所有数据库的配置参数,并且每行只显示一个参数。
会列出指定数据库的配置参数,并且每行只显示一个参数。
## 删除过期数据

View File

@ -9,12 +9,12 @@ description: 对表的各种管理操作
`CREATE TABLE` 语句用于创建普通表和以超级表为模板创建子表。
```sql
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definitionn] ...) [table_options]
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]
CREATE TABLE create_subtable_clause
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definitionn] ...)
[TAGS (create_definition [, create_definitionn] ...)]
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
[TAGS (create_definition [, create_definition] ...)]
[table_options]
create_subtable_clause: {

View File

@ -7,7 +7,7 @@ description: 对超级表的各种管理操作
## 创建超级表
```sql
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definitionn] ...) TAGS (create_definition [, create_definition] ...) [table_options]
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definition] ...) TAGS (create_definition [, create_definition] ...) [table_options]
create_definition:
col_name column_definition
@ -139,10 +139,10 @@ alter_table_option: {
- ADD COLUMN添加列。
- DROP COLUMN删除列。
- MODIFY COLUMN修改列定义,如果数据列的类型是可变长类型,那么可以使用此指令修改其宽度,只能改大,不能改小。
- MODIFY COLUMN修改列的宽度,数据列的类型必须是 nchar 和 binary使用此指令可以修改其宽度,只能改大,不能改小。
- ADD TAG给超级表添加一个标签。
- DROP TAG删除超级表的一个标签。从超级表删除某个标签后该超级表下的所有子表也会自动删除该标签。
- MODIFY TAG修改超级表的一个标签的定义。如果标签的类型是可变长类型,那么可以使用此指令修改其宽度,只能改大,不能改小。
- MODIFY TAG修改超级表的一个标签的列宽度。标签的类型只能是 nchar 和 binary使用此指令可以修改其宽度,只能改大,不能改小。
- RENAME TAG修改超级表的一个标签的名称。从超级表修改某个标签名后该超级表下的所有子表也会自动更新该标签名。
### 增加列

View File

@ -224,7 +224,7 @@ GROUP BY 子句中的表达式可以包含表或视图中的任何列,这些
该子句对行进行分组,但不保证结果集的顺序。若要对分组进行排序,请使用 ORDER BY 子句
## PARTITON BY
## PARTITION BY
PARTITION BY 子句是 TDengine 特色语法,按 part_list 对数据进行切分,在每个切分的分片中进行计算。
@ -302,7 +302,7 @@ SELECT TIMEZONE();
### 语法
```txt
WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
WHERE (column|tbname) match/MATCH/nmatch/NMATCH _regex_
```
### 正则表达式规范
@ -315,6 +315,39 @@ WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
正则匹配字符串长度不能超过 128 字节。可以通过参数 _maxRegexStringLen_ 设置和调整最大允许的正则匹配字符串,该参数是客户端配置参数,需要重启才能生效。
## CASE 表达式
### 语法
```txt
CASE value WHEN compare_value THEN result [WHEN compare_value THEN result ...] [ELSE result] END
CASE WHEN condition THEN result [WHEN condition THEN result ...] [ELSE result] END
```
### 说明
TDengine 通过 CASE 表达式让用户可以在 SQL 语句中使用 IF ... THEN ... ELSE 逻辑。
第一种 CASE 语法返回第一个 value 等于 compare_value 的 result如果没有 compare_value 符合,则返回 ELSE 之后的 result如果没有 ELSE 部分,则返回 NULL。
第二种语法返回第一个 condition 为真的 result。 如果没有 condition 符合,则返回 ELSE 之后的 result如果没有 ELSE 部分,则返回 NULL。
CASE 表达式的返回类型为第一个 WHEN THEN 部分的 result 类型,其余 WHEN THEN 部分和 ELSE 部分的 result 类型都需要能够向其转换,否则 TDengine 会报错。
### 示例
某设备有三个状态码,显示其状态,语句如下:
```sql
SELECT CASE dev_status WHEN 1 THEN 'Running' WHEN 2 THEN 'Warning' WHEN 3 THEN 'Downtime' ELSE 'Unknown' END FROM dev_table;
```
统计智能电表的电压平均值,当电压小于 200 或大于 250 时认为是统计有误,修正其值为 220语句如下
```sql
SELECT AVG(CASE WHEN voltage < 200 or voltage > 250 THEN 220 ELSE voltage END) FROM meters;
```
## JOIN 子句
TDengine 支持“普通表与普通表之间”、“超级表与超级表之间”、“子查询与子查询之间” 进行自然连接。自然连接与内连接的主要区别是,自然连接要求参与连接的字段在不同的表/超级表中必须是同名字段。也即TDengine 在连接关系的表达中,要求必须使用同名数据列/标签列的相等关系。

View File

@ -533,7 +533,12 @@ TIMEDIFF(expr1, expr2 [, time_unit])
#### TIMETRUNCATE
```sql
TIMETRUNCATE(expr, time_unit)
TIMETRUNCATE(expr, time_unit [, ignore_timezone])
ignore_timezone: {
0
| 1
}
```
**功能说明**:将时间戳按照指定时间单位 time_unit 进行截断。
@ -549,6 +554,11 @@ TIMETRUNCATE(expr, time_unit)
1b(纳秒), 1u(微秒)1a(毫秒)1s(秒)1m(分)1h(小时)1d(天), 1w(周)。
- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。
- 输入包含不符合时间日期格式的字符串则返回 NULL。
- 当使用 1d 作为时间单位对时间戳进行截断时,可通过设置 ignore_timezone 参数指定返回结果的显示是否忽略客户端时区的影响。例如,客户端所配置时区为 UTC+0800则 TIMETRUNCATE('2020-01-01 23:00:00', 1d, 0) 返回结果为 '2020-01-01 08:00:00';而使用 TIMETRUNCATE('2020-01-01 23:00:00', 1d, 1) 设置忽略时区时,返回结果为 '2020-01-01 00:00:00'。ignore_timezone 参数如果省略,默认值为 1。两种取值的对比可参考下面的示意代码。
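下面用一段示意性的 Python 片段对比 ignore_timezone 的两种取值假设本机 6041 端口运行着 taosAdapter通过 REST 执行时,生效的时区取决于服务端所在环境的配置,此处仅为示意):
```py
import requests

sql = ("SELECT TIMETRUNCATE('2020-01-01 23:00:00', 1d, 0), "
       "TIMETRUNCATE('2020-01-01 23:00:00', 1d, 1)")
resp = requests.post("http://localhost:6041/rest/sql",
                     auth=("root", "taosdata"),
                     data=sql.encode("utf-8"))
# 在 UTC+0800 时区下,期望分别得到 '2020-01-01 08:00:00' 与 '2020-01-01 00:00:00'
print(resp.json())
```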
#### TIMEZONE
@ -870,6 +880,7 @@ INTERP(expr)
- INTERP 根据 FILL 字段来决定在每个符合输出条件的时刻如何进行插值。
- INTERP 只能在一个时间序列内进行插值,因此当作用于超级表时必须跟 partition by tbname 一起使用。
- INTERP 可以与伪列 _irowts 一起使用,返回插值点所对应的时间戳(3.0.1.4版本以后支持)。
- INTERP 可以与伪列 _isfilled 一起使用,显示返回结果是否为原始记录或插值算法产生的数据(3.0.2.1版本以后支持)。
### LAST
@ -1085,7 +1096,7 @@ ignore_negative: {
```sql
DIFF(expr [, ignore_negative])
ignore_negative: {
0
| 1
@ -1249,4 +1260,4 @@ SELECT SERVER_VERSION();
SELECT SERVER_STATUS();
```
**说明**返回服务端当前的状态
**说明**检测服务端是否所有 dnode 都在线,如果是则返回成功,否则返回无法建立连接的错误

View File

@ -119,6 +119,12 @@ SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 STATE_WINDOW(status)) t WHERE status = 2;
```
TDengine 还支持将 CASE 表达式用在状态量,可以表达某个状态的开始是由满足某个条件而触发,这个状态的结束是由另外一个条件满足而触发的语义。例如,智能电表的电压正常范围是 205V 到 235V那么可以通过监控电压来判断电路是否正常。
```sql
SELECT tbname, _wstart, CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END status FROM meters PARTITION BY tbname STATE_WINDOW(CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END);
```
### 会话窗口
会话窗口根据记录的时间戳主键的值来确定是否属于同一个会话。如下图所示,如果设置时间戳的连续的间隔小于等于 12 秒,则以下 6 条记录构成 2 个会话窗口,分别是:[2019-04-28 14:22:102019-04-28 14:22:30]和[2019-04-28 14:23:102019-04-28 14:23:30]。因为 2019-04-28 14:22:30 与 2019-04-28 14:23:10 之间的时间间隔是 40 秒超过了连续时间间隔12 秒)。

View File

@ -8,7 +8,7 @@ description: 流式计算的相关 SQL 的详细语法
## 创建流式计算
```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name SUBTABLE(expression) AS subquery
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
WATERMARK time
@ -28,6 +28,9 @@ subquery: SELECT select_list
支持会话窗口、状态窗口与滑动窗口其中会话窗口与状态窗口搭配超级表时必须与partition by tbname一起使用
subtable 子句定义了流式计算中创建的子表的命名规则,详见“流式计算的 partition”一节。
```sql
window_clause: {
SESSION(ts_col, tol_val)
@ -49,11 +52,43 @@ SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(
## 流式计算的 partition
可以使用 PARTITION BY TBNAME 或 PARTITION BY tag,对一个流进行多分区的计算,每个分区的时间线与时间窗口是独立的,会各自聚合,并写入到目的表中的不同子表。
可以使用 PARTITION BY TBNAMEtag普通列或者表达式,对一个流进行多分区的计算,每个分区的时间线与时间窗口是独立的,会各自聚合,并写入到目的表中的不同子表。
不带 PARTITION BY 选项时,所有的数据将写入到一张子表。
不带 PARTITION BY 子句时,所有的数据将写入到一张子表。
流式计算创建的超级表有唯一的 tag 列 groupId每个 partition 会被分配唯一 groupId。与 schemaless 写入一致,我们通过 MD5 计算子表名,并自动创建它。
在创建流时不使用 SUBTABLE 子句时,流式计算创建的超级表有唯一的 tag 列 groupId每个 partition 会被分配唯一 groupId。与 schemaless 写入一致,我们通过 MD5 计算子表名,并自动创建它。
若创建流的语句中包含 SUBTABLE 子句,用户可以为每个 partition 对应的子表生成自定义的表名,例如:
```sql
CREATE STREAM avg_vol_s INTO avg_vol SUBTABLE(CONCAT('new-', tname)) AS SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname tname INTERVAL(1m);
```
PARTITION 子句中为 tbname 定义了一个别名 tnamePARTITION 子句中的别名可以用于 SUBTABLE 子句中的表达式计算。在上述示例中,流新创建的子表将以前缀 'new-' 连接原表名作为表名。
注意,子表名的长度若超过 TDengine 的限制,将被截断。若要生成的子表名已经存在于另一超级表,由于 TDengine 的子表名是唯一的,因此对应新子表的创建以及数据的写入将会失败。
## 流式计算读取历史数据
正常情况下,流式计算不会处理创建前已经写入源表中的数据,若要处理已经写入的数据,可以在创建流时设置 fill_history 1 选项,这样创建的流式计算会自动处理创建前、创建中、创建后写入的数据。例如:
```sql
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 interval(10s)
```
结合 fill_history 1 选项可以实现只处理特定历史时间范围的数据例如只处理某历史时刻2020 年 1 月 30 日)之后的数据:
```sql
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 where ts > '2020-01-30' interval(10s)
```
再如,仅处理某时间段内的数据,结束时间可以是未来时间
```sql
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 where ts > '2020-01-30' and ts < '2023-01-01' interval(10s)
```
如果该流任务已经彻底过期,并且您不再想让它检测或处理数据,您可以手动删除它,被计算出的数据仍会被保留。
## 删除流式计算
@ -72,14 +107,14 @@ SHOW STREAMS;
若要展示更详细的信息,可以使用:
```sql
SELECT * from performance_schema.`perf_streams`;
SELECT * from information_schema.`ins_streams`;
```
## 流式计算的触发模式
在创建流时,可以通过 TRIGGER 指令指定流式计算的触发模式。
对于非窗口计算,流式计算的触发是实时的;对于窗口计算,目前提供 3 种触发模式:
对于非窗口计算,流式计算的触发是实时的;对于窗口计算,目前提供 3 种触发模式,默认为 AT_ONCE
1. AT_ONCE写入立即触发

View File

@ -24,19 +24,19 @@ description: 合法字符集和命名中的限制规则
## 一般限制
- 数据库名最大长度为 32
- 表名最大长度为 192不包括数据库名前缀和分隔符
- 数据库名最大长度为 64 字节
- 表名最大长度为 192 字节,不包括数据库名前缀和分隔符
- 每行数据最大长度 48KB (注意:数据行内每个 BINARY/NCHAR 类型的列还会额外占用 2 个字节的存储位置)
- 列名最大长度为 64
- 列名最大长度为 64 字节
- 最多允许 4096 列,最少需要 2 列,第一列必须是时间戳。
- 标签名最大长度为 64
- 标签名最大长度为 64 字节
- 最多允许 128 个,至少要有 1 个标签,一个表中标签值的总长度不超过 16KB
- SQL 语句最大长度 1048576 个字符
- SELECT 语句的查询结果,最多允许返回 4096 列(语句中的函数调用可能也会占用一些列空间),超限时需要显式指定较少的返回数据列,以避免语句执行报错
- 库的数目,超级表的数目、表的数目,系统不做限制,仅受系统资源限制
- 数据库的副本数只能设置为 1 或 3
- 用户名的最大长度是 23 字节
- 用户密码的最大长度是 15 个字节
- 用户名的最大长度是 23 字节
- 用户密码的最大长度是 128 字节
- 总数据行数取决于可用资源
- 单个数据库的虚拟结点数上限为 1024

View File

@ -18,6 +18,7 @@ description: TDengine 保留关键字的详细列表
- ADD
- AFTER
- AGGREGATE
- ALIVE
- ALL
- ALTER
- ANALYZE

View File

@ -179,6 +179,81 @@ SHOW TABLE DISTRIBUTED table_name;
显示表的数据分布信息。
示例说明:
语句: show table distributed d0\G; 竖行显示表 d0 的 BLOCK 分布情况
<details>
<summary>显示示例</summary>
<pre><code>
*************************** 1.row ***************************
_block_dist: Total_Blocks=[5] Total_Size=[93.65 Kb] Average_size=[18.73 Kb] Compression_Ratio=[23.98 %]
Total_Blocks表 d0 占用的 block 个数为 5 个
Total_Size表 d0 所有 block 在文件中占用的大小为 93.65 KB
Average_size平均每个 block 在文件中占用的空间大小为 18.73 KB
Compression_Ratio数据压缩率为 23.98%
*************************** 2.row ***************************
_block_dist: Total_Rows=[20000] Inmem_Rows=[0] MinRows=[3616] MaxRows=[4096] Average_Rows=[4000]
Total_Rows统计表 d0 的所有行数,为 20000 行
Inmem_Rows表示仍然存放在内存中即尚未落盘的行数为 0 行,表示没有
MinRowsBLOCK 中最小的行数,为 3616 行
MaxRowsBLOCK 中最大的行数,为 4096 行
Average_RowsBLOCK 中的平均行数,为 4000 行
*************************** 3.row ***************************
_block_dist: Total_Tables=[1] Total_Files=[2]
Total_Tables表示子表的个数这里为 1
Total_Files表数据保存在几个文件中这里保存在 2 个文件中
*************************** 5.row ***************************
_block_dist: 0100 |
*************************** 6.row ***************************
_block_dist: 0299 |
......
*************************** 22.row ***************************
_block_dist: 3483 ||||||||||||||||| 1 (20.00%)
*************************** 23.row ***************************
_block_dist: 3682 |
*************************** 24.row ***************************
_block_dist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)
Query OK, 24 row(s) in set (0.002444s)
</code></pre>
</details>
上面是块中包含数据行数的分布情况图,这里的 0100、0299、0498 … 表示的是每个块中包含的数据行数。上面的含义是:这个表的 5 个块中,分布在 3483 ~ 3681 行的块有 1 个,占整个块的 20%;分布在 3881 ~ 4096最大行数的块有 4 个,占整个块的 80%;其它区域内分布的块数为 0。
## SHOW TAGS
```sql
@ -211,10 +286,10 @@ SHOW USERS;
显示当前系统中所有用户的信息。包括用户自定义的用户和系统默认用户。
## SHOW VARIABLES
## SHOW CLUSTER VARIABLES(3.0.1.6 之前为 SHOW VARIABLES)
```sql
SHOW VARIABLES;
SHOW CLUSTER VARIABLES;
SHOW DNODE dnode_id VARIABLES;
```

View File

@ -63,7 +63,7 @@ SHOW FUNCTIONS;
在 SQL 指令中,可以直接以在系统中创建 UDF 时赋予的函数名来调用用户定义函数。例如:
```sql
SELECT X(c1,c2) FROM table/stable;
SELECT bit_and(c1,c2) FROM table;
```
表示对名为 c1, c2 的数据列调用名为 X 的用户定义函数。SQL 指令中用户定义函数可以配合 WHERE 等查询特性来使用。
表示对表 table 上名为 c1, c2 的数据列调用名为 bit_and 的用户定义函数。SQL 指令中用户定义函数可以配合 WHERE 等查询特性来使用。

View File

@ -21,6 +21,7 @@ taosAdapter 提供以下功能:
- 无缝连接到 collectd
- 无缝连接到 StatsD
- 支持 Prometheus remote_read 和 remote_write
- 获取 table 所在的虚拟节点组VGroup的 VGroup ID
## taosAdapter 架构图
@ -32,7 +33,7 @@ taosAdapter 提供以下功能:
taosAdapter 是 TDengine 服务端软件 的一部分,如果您使用 TDengine server 您不需要任何额外的步骤来安装 taosAdapter。您可以从[涛思数据官方网站](https://taosdata.com/cn/all-downloads/)下载 TDengine server 安装包。如果需要将 taosAdapter 分离部署在 TDengine server 之外的服务器上,则应该在该服务器上安装完整的 TDengine 来安装 taosAdapter。如果您需要使用源代码编译生成 taosAdapter您可以参考[构建 taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD-CN.md)文档。
### start/stop taosAdapter
### 启动/停止 taosAdapter
在 Linux 系统上 taosAdapter 服务默认由 systemd 管理。使用命令 `systemctl start taosadapter` 可以启动 taosAdapter 服务。使用命令 `systemctl stop taosadapter` 可以停止 taosAdapter 服务。
@ -59,6 +60,7 @@ Usage of taosAdapter:
--collectd.port int collectd server port. Env "TAOS_ADAPTER_COLLECTD_PORT" (default 6045)
--collectd.user string collectd user. Env "TAOS_ADAPTER_COLLECTD_USER" (default "root")
--collectd.worker int collectd write worker. Env "TAOS_ADAPTER_COLLECTD_WORKER" (default 10)
--collectd.ttl int collectd data ttl. Env "TAOS_ADAPTER_COLLECTD_TTL" (default 0, means no ttl)
-c, --config string config path default /etc/taos/taosadapter.toml
--cors.allowAllOrigins cors allow all origins. Env "TAOS_ADAPTER_CORS_ALLOW_ALL_ORIGINS" (default true)
--cors.allowCredentials cors allow credentials. Env "TAOS_ADAPTER_CORS_ALLOW_Credentials"
@ -100,6 +102,7 @@ Usage of taosAdapter:
--node_exporter.responseTimeout duration node_exporter response timeout. Env "TAOS_ADAPTER_NODE_EXPORTER_RESPONSE_TIMEOUT" (default 5s)
--node_exporter.urls strings node_exporter urls. Env "TAOS_ADAPTER_NODE_EXPORTER_URLS" (default [http://localhost:9100])
--node_exporter.user string node_exporter user. Env "TAOS_ADAPTER_NODE_EXPORTER_USER" (default "root")
--node_exporter.ttl int node_exporter data ttl. Env "TAOS_ADAPTER_NODE_EXPORTER_TTL"(default 0, means no ttl)
--opentsdb.enable enable opentsdb. Env "TAOS_ADAPTER_OPENTSDB_ENABLE" (default true)
--opentsdb_telnet.batchSize int opentsdb_telnet batch size. Env "TAOS_ADAPTER_OPENTSDB_TELNET_BATCH_SIZE" (default 1)
--opentsdb_telnet.dbs strings opentsdb_telnet db names. Env "TAOS_ADAPTER_OPENTSDB_TELNET_DBS" (default [opentsdb_telnet,collectd_tsdb,icinga2_tsdb,tcollector_tsdb])
@ -110,6 +113,7 @@ Usage of taosAdapter:
--opentsdb_telnet.ports ints opentsdb telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
--opentsdb_telnet.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE"
--opentsdb_telnet.user string opentsdb_telnet user. Env "TAOS_ADAPTER_OPENTSDB_TELNET_USER" (default "root")
--opentsdb_telnet.ttl int opentsdb_telnet data ttl. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TTL"(default 0, means no ttl)
--pool.idleTimeout duration Set idle connection timeout. Env "TAOS_ADAPTER_POOL_IDLE_TIMEOUT" (default 1h0m0s)
--pool.maxConnect int max connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_CONNECT" (default 4000)
--pool.maxIdle int max idle connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_IDLE" (default 4000)
@ -131,6 +135,7 @@ Usage of taosAdapter:
--statsd.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_STATSD_TCP_KEEP_ALIVE"
--statsd.user string statsd user. Env "TAOS_ADAPTER_STATSD_USER" (default "root")
--statsd.worker int statsd write worker. Env "TAOS_ADAPTER_STATSD_WORKER" (default 10)
--statsd.ttl int statsd data ttl. Env "TAOS_ADAPTER_STATSD_TTL" (default 0, means no ttl)
--taosConfigDir string load taos client config path. Env "TAOS_ADAPTER_TAOS_CONFIG_FILE"
--version Print the version and exit
```
@ -174,6 +179,7 @@ AllowWebSockets
node_export 是一个机器指标的导出器。请访问 [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) 了解更多信息。
- 支持 Prometheus remote_read 和 remote_write
remote_read 和 remote_write 是 Prometheus 数据读写分离的集群方案。请访问[https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) 了解更多信息。
- 获取 table 所在的虚拟节点组VGroup的 VGroup ID。关于虚拟节点组VGroup的更多信息请访问[整体架构文档](/tdinternal/arch/#主要逻辑单元) 。
## 接口
@ -195,6 +201,7 @@ AllowWebSockets
- `precision` TDengine 使用的时间精度
- `u` TDengine 用户名
- `p` TDengine 密码
- `ttl` 自动创建的子表生命周期,以子表的第一条数据的 TTL 参数为准,不可更新。更多信息请参考[创建表文档](taos-sql/table/#创建表)的 TTL 参数。
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
示例: curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
@ -235,6 +242,10 @@ Prometheus 使用的由 \*NIX 内核暴露的硬件和操作系统指标的输
<Prometheus />
### 获取 table 的 VGroup ID
可以访问 http 接口 `http://<fqdn>:6041/rest/vgid?db=<db>&table=<table>` 获取 table 的 VGroup ID。关于虚拟节点组VGroup的更多信息请访问[整体架构文档](/tdinternal/arch/#主要逻辑单元) 。
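以下是一个示意性的 Python 调用主机名与库/表名为假设值;此处沿用 REST 接口的 Basic 认证方式,实际鉴权要求以部署环境为准):
```py
import requests

# 示意:查询表 test.d1001 所在的 VGroup ID
resp = requests.get("http://localhost:6041/rest/vgid",
                    params={"db": "test", "table": "d1001"},
                    auth=("root", "taosdata"))
print(resp.json())
```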
## 内存使用优化方法
taosAdapter 将监测自身运行过程中内存使用率并通过两个阈值进行调节。有效值范围为 -1 到 100 的整数,单位为系统物理内存的百分比。
@ -277,7 +288,7 @@ http 返回内容:
## taosAdapter 监控指标
taosAdapter 采集 http 相关指标、cpu 百分比和内存百分比。
taosAdapter 采集 http 相关指标、CPU 百分比和内存百分比。
### http 接口
@ -289,13 +300,13 @@ http://<fqdn>:6041/metrics
### 写入 TDengine
taosAdapter 支持将 http 监控、cpu 百分比和内存百分比写入 TDengine。
taosAdapter 支持将 http 监控、CPU 百分比和内存百分比写入 TDengine。
有关配置参数
| **配置项** | **描述** | **默认值** |
|-------------------------|--------------------------------------------|----------|
| monitor.collectDuration | cpu 和内存采集间隔 | 3s |
| monitor.collectDuration | CPU 和内存采集间隔 | 3s |
| monitor.identity | 当前taosadapter 的标识符如果不设置将使用 'hostname:port' | |
| monitor.incgroup | 是否是 cgroup 中运行(容器中运行设置为 true) | false |
| monitor.writeToTD | 是否写入到 TDengine | false |

View File

@ -204,6 +204,10 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **-a/--replica <replicaNum\>** :
创建数据库时指定其副本数,默认值为 1 。
- **-k/--keep-trying <NUMBER\>** 失败后进行重试的次数默认不重试。需使用 v3.0.9 以上版本。
- **-z/--trying-interval <NUMBER\>** 失败重试间隔时间单位为毫秒仅在 -k 指定重试后有效。需使用 v3.0.9 以上版本。
- **-V/--version** :
显示版本信息并退出。不能与其它参数混用。
@ -217,7 +221,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
本节所列参数适用于所有功能模式。
- **filetype** : 要测试的功能,可选值为 `insert`, `query``subscribe`。分别对应插入、查询和订阅功能。每个配置文件中只能指定其中之一。
- **cfgdir** : TDengine 集群配置文件所在的目录,默认路径是 /etc/taos 。
- **cfgdir** : TDengine 客户端配置文件所在的目录,默认路径是 /etc/taos 。
- **host** : 指定要连接的 TDengine 服务端的 FQDN默认值为 localhost。
@ -231,6 +235,10 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
插入场景下 `filetype` 必须设置为 `insert`,该参数及其它通用参数详见[通用配置参数](#通用配置参数)
- **keep_trying** 失败后进行重试的次数默认不重试。需使用 v3.0.9 以上版本。
- **trying_interval** 失败重试间隔时间单位为毫秒仅在 keep_trying 指定重试后有效。需使用 v3.0.9 以上版本。
#### 数据库相关配置参数
创建数据库时的相关参数在 json 配置文件中的 `dbinfo` 中配置,个别具体参数如下。其余参数均与 TDengine 中 `create database` 时所指定的数据库参数相对应,详见[../../taos-sql/database]
@ -374,7 +382,11 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
### 查询场景配置参数
查询场景下 `filetype` 必须设置为 `query`,该参数及其它通用参数详见[通用配置参数](#通用配置参数)
查询场景下 `filetype` 必须设置为 `query`
查询场景可以通过设置 `kill_slow_query_threshold` 和 `kill_slow_query_interval` 参数来控制杀掉慢查询语句的执行threshold 控制若查询执行时间exec_usec超过指定时间则该查询将被 taosBenchmark 杀掉单位为秒interval 控制两轮检测之间的休眠时间,避免持续检测慢查询消耗 CPU单位为秒。参数用法可参考下面的示意片段。
其它通用参数详见[通用配置参数](#通用配置参数)。
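下面用一段示意性的 Python 代码生成这样一个查询场景配置并调用 taosBenchmark字段名取自上文完整的查询配置还需要包含待执行的查询语句等字段此处从略字段在文件中的具体位置以 taosBenchmark 实际模板为准):
```py
import json
import subprocess

# 示意:构造查询场景配置(仅展示慢查询相关参数,其余必需字段从略)
cfg = {
    "filetype": "query",
    "kill_slow_query_threshold": 10,  # 执行超过 10 秒的查询将被杀掉
    "kill_slow_query_interval": 1,    # 每轮检测之间休眠 1 秒
}
with open("query.json", "w") as f:
    json.dump(cfg, f, indent=2)

subprocess.run(["taosBenchmark", "-f", "query.json"])
```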
#### 执行指定查询语句的配置参数

View File

@ -22,7 +22,7 @@ taosdump 是一个逻辑备份工具,它不应被用于备份任何原始数
taosdump 有两种安装方式:
- 安装 taosTools 官方安装包, 请从[所有下载链接](https://www.taosdata.com/all-downloads)页面找到 taosTools 并下载安装。
- 安装 taosTools 官方安装包, 请从[发布历史页面](https://docs.taosdata.com/releases/tools/)页面找到 taosTools 并下载安装。
- 单独编译 taos-tools 并安装, 详情请参考 [taos-tools](https://github.com/taosdata/taos-tools) 仓库。

View File

@ -4,6 +4,9 @@ sidebar_label: TDinsight
description: 基于Grafana的TDengine零依赖监控解决方案
---
import Tabs from '@theme/Tabs'
import TabItem from '@theme/TabItem'
TDinsight 是使用监控数据库和 [Grafana] 对 TDengine 进行监控的解决方案。
TDengine 通过 [taosKeeper](../taosKeeper) 将服务器的 CPU、内存、硬盘空间、带宽、请求数、磁盘读写速度、慢查询等信息定时写入指定数据库并对重要的系统操作比如登录、创建、删除数据库等以及各种错误报警信息进行记录。通过 [Grafana] 和 [TDengine 数据源插件](https://github.com/taosdata/grafanaplugin/releases)TDinsight 将集群状态、节点信息、插入及查询请求、资源使用情况等进行可视化展示,同时还支持 vnode、dnode、mnode 节点状态异常告警,为开发者实时监控 TDengine 集群运行状态提供了便利。本文将指导用户安装 Grafana 服务器并通过 `TDinsight.sh` 安装脚本自动安装 TDengine 数据源插件及部署 TDinsight 可视化面板。
@ -41,6 +44,7 @@ sudo apt-get install grafana
```
### 在 CentOS / RHEL 上安装 Grafana
</TabItem>
<TabItem label="redhat" value="基于 CentOS / RHEL 系统">
@ -127,20 +131,20 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
大多数命令行选项都可以通过环境变量获得同样的效果。
| 短选项 | 长选项 | 环境变量 | 说明 |
| ------ | -------------------------- | ---------------------------- | ------------------------------------------------------- |
| -v | --plugin-version | TDENGINE_PLUGIN_VERSION | TDengine 数据源插件版本,默认使用最新版。 |
| -P | --grafana-provisioning-dir | GF_PROVISIONING_DIR | Grafana 配置目录,默认为`/etc/grafana/provisioning/` |
| -G | --grafana-plugins-dir | GF_PLUGINS_DIR | Grafana 插件目录,默认为`/var/lib/grafana/plugins`。 |
| -O | --grafana-org-id | GF_ORG_ID | Grafana 组织 ID默认为 1。 |
| -n | --tdengine-ds-name | TDENGINE_DS_NAME | TDengine 数据源名称,默认为 TDengine。 |
| -a | --tdengine-api | TDENGINE_API | TDengine REST API 端点。默认为`http://127.0.0.1:6041`。 |
| -u | --tdengine-user | TDENGINE_USER | TDengine 用户名。 [默认值root] |
| -p | --tdengine-password | TDENGINE_PASSWORD | TDengine 密码。 [默认taosdata] |
| -i | --tdinsight-uid | TDINSIGHT_DASHBOARD_UID | TDinsight 仪表盘`uid`。 [默认值tdinsight] |
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight 仪表盘标题。 [默认TDinsight] |
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | 如果配置仪表盘可以编辑。 [默认值false] |
| -E | --external-notifier | EXTERNAL_NOTIFIER | 将外部通知程序 uid 应用于 TDinsight 仪表盘。 |
假设您在主机 `tdengine` 上启动 TDengine 数据库HTTP API 端口为 `6041`,用户为 `root1`,密码为 `pass5ord`。执行脚本:
@ -196,6 +200,7 @@ sudo grafana-cli \
[plugins]
allow_loading_unsigned_plugins = tdengine-datasource
```
:::
### 启动 Grafana 服务

View File

@ -16,7 +16,7 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
## TDengine 客户端和连接器支持的平台列表
目前 TDengine 的连接器可支持的平台广泛目前包括X64/X86/ARM64/ARM32/MIPS/Alpha 等硬件平台,以及 Linux/Win64/Win32/macOS 等开发环境。
目前 TDengine 的连接器可支持的平台广泛目前包括X64/X86/ARM64/ARM32/MIPS/LoongArch64 等硬件平台,以及 Linux/Win64/Win32/macOS 等开发环境。
对照矩阵如下:

View File

@ -119,7 +119,7 @@ taos -h tdengine -P 6030
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y wget
ENV TDENGINE_VERSION=3.0.0.0
RUN wget -c https://www.tdengine.com/assets-download/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
RUN wget -c https://www.tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& cd TDengine-client-${TDENGINE_VERSION} \
&& ./install_client.sh \

View File

@ -134,15 +134,6 @@ taos --dump-config
| 取值范围 | 1-200000 |
| 缺省值 | 30 |
### telemetryReporting
| 属性 | 说明 |
| -------- | ---------------------------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | 是否允许 TDengine 采集和上报基本使用信息 |
| 取值范围 | 0不允许 1允许 |
| 缺省值 | 1 |
## 查询相关
### queryPolicy
@ -191,6 +182,15 @@ taos --dump-config
| 取值范围 | 0 表示包含函数名1 表示不包含函数名。 |
| 缺省值 | 0 |
### countAlwaysReturnValue
| 属性 | 说明 |
| -------- | -------------------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | count/hyperloglog 函数在数据为空或者 NULL 的情况下是否返回值 |
| 取值范围 | 0返回空行1返回 0 |
| 缺省值 | 1 |
## 区域相关
### timezone
@ -306,12 +306,20 @@ charset 的有效值是 UTF-8。
| 含义 | 数据文件目录,所有的数据文件都将写入该目录 |
| 缺省值 | /var/lib/taos |
### tempDir
| 属性 | 说明 |
| -------- | ------------------------------------------ |
| 适用范围 | 仅服务端适用 |
| 含义 | 该参数指定所有系统运行过程中的临时文件生成的目录 |
| 缺省值 | /tmp |
### minimalTmpDirGB
| 属性 | 说明 |
| -------- | ------------------------------------------------ |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | 当日志文件夹的磁盘大小小于该值时,停止写临时文件 |
| 含义 | tempDir 所指定的临时文件目录所需要保留的最小空间 |
| 单位 | GB |
| 缺省值 | 1.0 |
@ -320,7 +328,7 @@ charset 的有效值是 UTF-8。
| 属性 | 说明 |
| -------- | ------------------------------------------------ |
| 适用范围 | 仅服务端适用 |
| 含义 | 当日志文件夹的磁盘大小小于该值时,停止写时序数据 |
| 含义 | dataDir 指定的时序数据存储目录所需要保留的最小空间 |
| 单位 | GB |
| 缺省值 | 2.0 |
@ -335,27 +343,7 @@ charset 的有效值是 UTF-8。
| 取值范围 | 0-4096 |
| 缺省值 | CPU 核数的 2 倍 |
## 时间相关
### statusInterval
| 属性 | 说明 |
| -------- | --------------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | dnode 向 mnode 报告状态间隔 |
| 单位 | 秒 |
| 取值范围 | 1-10 |
| 缺省值 | 1 |
### shellActivityTimer
| 属性 | 说明 |
| -------- | --------------------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | shell 客户端向 mnode 发送心跳间隔 |
| 单位 | 秒 |
| 取值范围 | 1-120 |
| 缺省值 | 3 |
## 时间相关
## 性能调优
@ -367,28 +355,6 @@ charset 的有效值是 UTF-8。
| 含义 | 设置写入线程的最大数量 |
| 缺省值 | |
## 压缩相关
### compressMsgSize
| 属性 | 说明 |
| -------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | 客户端与服务器之间进行消息通讯过程中,对通讯的消息进行压缩的阈值。如果要压缩消息,建议设置为 64330 字节,即大于 64330 字节的消息体才进行压缩。 |
| 单位 | bytes |
| 取值范围 | `0 `表示对所有的消息均进行压缩 >0: 超过该值的消息才进行压缩 -1: 不压缩 |
| 缺省值 | -1 |
### compressColData
| 属性 | 说明 |
| -------- | --------------------------------------------------------------------------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | 客户端与服务器之间进行消息通讯过程中,对服务器端查询结果进行列压缩的阈值。 |
| 单位 | bytes |
| 取值范围 | 0: 对所有查询结果均进行压缩 >0: 查询结果中任意列大小超过该值的消息才进行压缩 -1: 不压缩 |
| 缺省值 | -1 |
## 日志相关
### logDir
@ -613,7 +579,7 @@ charset 的有效值是 UTF-8。
| 属性 | 说明 |
| -------- | ------------------------- |
| 适用范围 | 仅客户端适用 |
| 含义 | schemaless 自定义的子表名 |
| 含义 | schemaless 自定义的子表名的 key |
| 类型 | 字符串 |
| 缺省值 | 无 |
@ -656,12 +622,18 @@ charset 的有效值是 UTF-8。
| 取值范围 | 0: 不启动1启动 |
| 缺省值 | 1 |
## 2.X 与 3.0 配置参数对比

:::note
对于 2.x 版本中适用但在 3.0 版本中废弃的参数,其当前行为会有特别说明。
:::

## 压缩参数

### compressMsgSize

| 属性 | 说明 |
| -------- | ----------------------------- |
| 适用于 | 服务端和客户端均适用 |
| 含义 | 是否对 RPC 消息进行压缩 |
| 取值范围 | -1: 所有消息都不压缩; 0: 所有消息都压缩; N (N>0): 只有大于 N 个字节的消息才压缩 |
| 缺省值 | -1 |

## 3.0 中有效的配置参数列表
| # | **参数** | **适用于 2.X ** | **适用于 3.0 ** | 3.0 版本的当前行为 |
| --- | :---------------------: | --------------- | --------------- | ------------------------------------------------- |
@ -674,161 +646,134 @@ charset 的有效值是 UTF-8。
| 7 | monitorFqdn | 否 | 是 | |
| 8 | monitorPort | 否 | 是 | |
| 9 | monitorInterval | 是 | 是 | |
| 10 | monitorMaxLogs | 否 | 是 | |
| 11 | monitorComp | 否 | 是 | |
| 12 | telemetryReporting | 是 | 是 | |
| 13 | telemetryInterval | 否 | 是 | |
| 14 | telemetryServer | 否 | 是 | |
| 15 | telemetryPort | 否 | 是 | |
| 16 | queryPolicy | 否 | 是 | |
| 17 | querySmaOptimize | 否 | 是 | |
| 18 | queryRsmaTolerance | 否 | 是 | |
| 19 | queryBufferSize | 是 | 是 | |
| 20 | maxNumOfDistinctRes | 是 | 是 | |
| 21 | minSlidingTime | 是 | 是 | |
| 22 | minIntervalTime | 是 | 是 | |
| 23 | countAlwaysReturnValue | 是 | 是 | |
| 24 | dataDir | 是 | 是 | |
| 25 | minimalDataDirGB | 是 | 是 | |
| 26 | supportVnodes | 否 | 是 | |
| 27 | tempDir | 是 | 是 | |
| 28 | minimalTmpDirGB | 是 | 是 | |
| 29 | compressMsgSize | 是 | 是 | |
| 30 | compressColData | 是 | 是 | |
| 31 | smlChildTableName | 是 | 是 | |
| 32 | smlTagName | 是 | 是 | |
| 33 | smlDataFormat | 否 | 是 | |
| 34 | statusInterval | 是 | 是 | |
| 35 | shellActivityTimer | 是 | 是 | |
| 36 | transPullupInterval | 否 | 是 | |
| 37 | mqRebalanceInterval | 否 | 是 | |
| 38 | ttlUnit | 否 | 是 | |
| 39 | ttlPushInterval | 否 | 是 | |
| 40 | numOfTaskQueueThreads | 否 | 是 | |
| 41 | numOfRpcThreads | 否 | 是 | |
| 42 | numOfCommitThreads | 是 | 是 | |
| 43 | numOfMnodeReadThreads | 否 | 是 | |
| 44 | numOfVnodeQueryThreads | 否 | 是 | |
| 45 | numOfVnodeStreamThreads | 否 | 是 | |
| 46 | numOfVnodeFetchThreads | 否 | 是 | |
| 47 | numOfVnodeWriteThreads | 否 | 是 | |
| 48 | numOfVnodeSyncThreads | 否 | 是 | |
| 49 | numOfVnodeRsmaThreads | 否 | 是 | |
| 50 | numOfQnodeQueryThreads | 否 | 是 | |
| 51 | numOfQnodeFetchThreads | 否 | 是 | |
| 52 | numOfSnodeSharedThreads | 否 | 是 | |
| 53 | numOfSnodeUniqueThreads | 否 | 是 | |
| 54 | rpcQueueMemoryAllowed | 否 | 是 | |
| 55 | logDir | 是 | 是 | |
| 56 | minimalLogDirGB | 是 | 是 | |
| 57 | numOfLogLines | 是 | 是 | |
| 58 | asyncLog | 是 | 是 | |
| 59 | logKeepDays | 是 | 是 | |
| 60 | debugFlag | 是 | 是 | |
| 61 | tmrDebugFlag | 是 | 是 | |
| 62 | uDebugFlag | 是 | 是 | |
| 63 | rpcDebugFlag | 是 | 是 | |
| 64 | jniDebugFlag | 是 | 是 | |
| 65 | qDebugFlag | 是 | 是 | |
| 66 | cDebugFlag | 是 | 是 | |
| 67 | dDebugFlag | 是 | 是 | |
| 68 | vDebugFlag | 是 | 是 | |
| 69 | mDebugFlag | 是 | 是 | |
| 70 | wDebugFlag | 是 | 是 | |
| 71 | sDebugFlag | 是 | 是 | |
| 72 | tsdbDebugFlag | 是 | 是 | |
| 73 | tqDebugFlag | 否 | 是 | |
| 74 | fsDebugFlag | 是 | 是 | |
| 75 | udfDebugFlag | 否 | 是 | |
| 76 | smaDebugFlag | 否 | 是 | |
| 77 | idxDebugFlag | 否 | 是 | |
| 78 | tdbDebugFlag | 否 | 是 | |
| 79 | metaDebugFlag | 否 | 是 | |
| 80 | timezone | 是 | 是 | |
| 81 | locale | 是 | 是 | |
| 82 | charset | 是 | 是 | |
| 83 | udf | 是 | 是 | |
| 84 | enableCoreFile | 是 | 是 | |
| 85 | arbitrator | 是 | 否 | 通过 RAFT 协议选主 |
| 86 | numOfThreadsPerCore | 是 | 否 | 有其它参数设置多种线程池的大小 |
| 87 | numOfMnodes | 是 | 否 | 通过 create mnode 命令动态创建 mnode |
| 88 | vnodeBak | 是 | 否 | 3.0 行为未知 |
| 89 | balance | 是 | 否 | 负载均衡功能由 split/merge vgroups 实现 |
| 90 | balanceInterval | 是 | 否 | 随着 balance 参数失效 |
| 91 | offlineThreshold | 是 | 否 | 3.0 行为未知 |
| 92 | role | 是 | 否 | 由 supportVnode 决定是否能够创建 |
| 93 | dnodeNopLoop | 是 | 否 | 2.6 文档中未找到此参数 |
| 94 | keepTimeOffset | 是 | 否 | 2.6 文档中未找到此参数 |
| 95 | rpcTimer | 是 | 否 | 3.0 行为未知 |
| 96 | rpcMaxTime | 是 | 否 | 3.0 行为未知 |
| 97 | rpcForceTcp | 是 | 否 | 默认为 TCP |
| 98 | tcpConnTimeout | 是 | 否 | 3.0 行为未知 |
| 99 | syncCheckInterval | 是 | 否 | 3.0 行为未知 |
| 100 | maxTmrCtrl | 是 | 否 | 3.0 行为未知 |
| 101 | monitorReplica | 是 | 否 | 由 RAFT 协议管理多副本 |
| 102 | smlTagNullName | 是 | 否 | 3.0 行为未知 |
| 103 | keepColumnName | 是 | 否 | 3.0 行为未知 |
| 104 | ratioOfQueryCores | 是 | 否 | 由 线程池 相关配置参数决定 |
| 105 | maxStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 106 | maxFirstStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 107 | retryStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 108 | streamCompDelayRatio | 是 | 否 | 3.0 行为未知 |
| 109 | maxVgroupsPerDb | 是 | 否 | 由 create db 的参数 vgroups 指定实际 vgroups 数量 |
| 110 | maxTablesPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 111 | minTablesPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 112 | tableIncStepPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 113 | cache | 是 | 否 | 由 buffer 代替 cache\*blocks |
| 114 | blocks | 是 | 否 | 由 buffer 代替 cache\*blocks |
| 115 | days | 是 | 否 | 由 create db 的参数 duration 取代 |
| 116 | keep | 是 | 否 | 由 create db 的参数 keep 取代 |
| 117 | minRows | 是 | 否 | 由 create db 的参数 minRows 取代 |
| 118 | maxRows | 是 | 否 | 由 create db 的参数 maxRows 取代 |
| 119 | quorum | 是 | 否 | 由 RAFT 协议决定 |
| 120 | comp | 是 | 否 | 由 create db 的参数 comp 取代 |
| 121 | walLevel | 是 | 否 | 由 create db 的参数 wal_level 取代 |
| 122 | fsync | 是 | 否 | 由 create db 的参数 wal_fsync_period 取代 |
| 123 | replica | 是 | 否 | 由 create db 的参数 replica 取代 |
| 124 | partitions | 是 | 否 | 3.0 行为未知 |
| 125 | update | 是 | 否 | 允许更新部分列 |
| 126 | cachelast | 是 | 否 | 由 create db 的参数 cacheModel 取代 |
| 127 | maxSQLLength | 是 | 否 | SQL 上限为 1MB无需参数控制 |
| 128 | maxWildCardsLength | 是 | 否 | 3.0 行为未知 |
| 129 | maxRegexStringLen | 是 | 否 | 3.0 行为未知 |
| 130 | maxNumOfOrderedRes | 是 | 否 | 3.0 行为未知 |
| 131 | maxConnections | 是 | 否 | 取决于系统配置和系统处理能力,详见后面的 Note |
| 132 | mnodeEqualVnodeNum | 是 | 否 | 3.0 行为未知 |
| 133 | http | 是 | 否 | http 服务由 taosAdapter 提供 |
| 134 | httpEnableRecordSql | 是 | 否 | taosd 不提供 http 服务 |
| 135 | httpMaxThreads | 是 | 否 | taosd 不提供 http 服务 |
| 136 | restfulRowLimit | 是 | 否 | taosd 不提供 http 服务 |
| 137 | httpDbNameMandatory | 是 | 否 | taosd 不提供 http 服务 |
| 138 | httpKeepAlive | 是 | 否 | taosd 不提供 http 服务 |
| 139 | enableRecordSql | 是 | 否 | 3.0 行为未知 |
| 140 | maxBinaryDisplayWidth | 是 | 否 | 3.0 行为未知 |
| 141 | stream | 是 | 否 | 默认启用连续查询 |
| 142 | retrieveBlockingModel | 是 | 否 | 3.0 行为未知 |
| 143 | tsdbMetaCompactRatio | 是 | 否 | 3.0 行为未知 |
| 144 | defaultJSONStrType | 是 | 否 | 3.0 行为未知 |
| 145 | walFlushSize | 是 | 否 | 3.0 行为未知 |
| 146 | keepTimeOffset | 是 | 否 | 3.0 行为未知 |
| 147 | flowctrl | 是 | 否 | 3.0 行为未知 |
| 148 | slaveQuery | 是 | 否 | 3.0 行为未知: slave vnode 是否能够处理查询? |
| 149 | adjustMaster | 是 | 否 | 3.0 行为未知 |
| 150 | topicBinaryLen | 是 | 否 | 3.0 行为未知 |
| 151 | telegrafUseFieldNum | 是 | 否 | 3.0 行为未知 |
| 152 | deadLockKillQuery | 是 | 否 | 3.0 行为未知 |
| 153 | clientMerge | 是 | 否 | 3.0 行为未知 |
| 154 | sdbDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 155 | odbcDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 156 | httpDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 157 | monDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 158 | cqDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 159 | shortcutFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 160 | probeSeconds | 是 | 否 | 3.0 行为未知 |
| 161 | probeKillSeconds | 是 | 否 | 3.0 行为未知 |
| 162 | probeInterval | 是 | 否 | 3.0 行为未知 |
| 163 | lossyColumns | 是 | 否 | 3.0 行为未知 |
| 164 | fPrecision | 是 | 否 | 3.0 行为未知 |
| 165 | dPrecision | 是 | 否 | 3.0 行为未知 |
| 166 | maxRange | 是 | 否 | 3.0 行为未知 |
| 167 | range | 是 | 否 | 3.0 行为未知 |
| 10 | queryPolicy | 否 | 是 | |
| 11 | querySmaOptimize | 否 | 是 | |
| 12 | maxNumOfDistinctRes | 是 | 是 | |
| 15 | countAlwaysReturnValue | 是 | 是 | |
| 16 | dataDir | 是 | 是 | |
| 17 | minimalDataDirGB | 是 | 是 | |
| 18 | supportVnodes | 否 | 是 | |
| 19 | tempDir | 是 | 是 | |
| 20 | minimalTmpDirGB | 是 | 是 | |
| 21 | smlChildTableName | 是 | 是 | |
| 22 | smlTagName | 是 | 是 | |
| 23 | smlDataFormat | 否 | 是 | |
| 24 | statusInterval | 是 | 是 | |
| 25 | logDir | 是 | 是 | |
| 26 | minimalLogDirGB | 是 | 是 | |
| 27 | numOfLogLines | 是 | 是 | |
| 28 | asyncLog | 是 | 是 | |
| 29 | logKeepDays | 是 | 是 | |
| 30 | debugFlag | 是 | 是 | |
| 31 | tmrDebugFlag | 是 | 是 | |
| 32 | uDebugFlag | 是 | 是 | |
| 33 | rpcDebugFlag | 是 | 是 | |
| 34 | jniDebugFlag | 是 | 是 | |
| 35 | qDebugFlag | 是 | 是 | |
| 36 | cDebugFlag | 是 | 是 | |
| 37 | dDebugFlag | 是 | 是 | |
| 38 | vDebugFlag | 是 | 是 | |
| 39 | mDebugFlag | 是 | 是 | |
| 40 | wDebugFlag | 是 | 是 | |
| 41 | sDebugFlag | 是 | 是 | |
| 42 | tsdbDebugFlag | 是 | 是 | |
| 43 | tqDebugFlag | 否 | 是 | |
| 44 | fsDebugFlag | 是 | 是 | |
| 45 | udfDebugFlag | 否 | 是 | |
| 46 | smaDebugFlag | 否 | 是 | |
| 47 | idxDebugFlag | 否 | 是 | |
| 48 | tdbDebugFlag | 否 | 是 | |
| 49 | metaDebugFlag | 否 | 是 | |
| 50 | timezone | 是 | 是 | |
| 51 | locale | 是 | 是 | |
| 52 | charset | 是 | 是 | |
| 53 | udf | 是 | 是 | |
| 54 | enableCoreFile | 是 | 是 | |
## 2.x->3.0 的废弃参数
| # | **参数** | **适用于 2.X ** | **适用于 3.0 ** | 3.0 版本的当前行为 |
| --- | :---------------------: | --------------- | --------------- | ------------------------------------------------- |
| 1 | arbitrator | 是 | 否 | 通过 RAFT 协议选主 |
| 2 | numOfThreadsPerCore | 是 | 否 | 有其它参数设置多种线程池的大小 |
| 3 | numOfMnodes | 是 | 否 | 通过 create mnode 命令动态创建 mnode |
| 4 | vnodeBak | 是 | 否 | 3.0 行为未知 |
| 5 | balance | 是 | 否 | 负载均衡功能由 split/merge vgroups 实现 |
| 6 | balanceInterval | 是 | 否 | 随着 balance 参数失效 |
| 7 | offlineThreshold | 是 | 否 | 3.0 行为未知 |
| 8 | role | 是 | 否 | 由 supportVnode 决定是否能够创建 |
| 9 | dnodeNopLoop | 是 | 否 | 2.6 文档中未找到此参数 |
| 10 | keepTimeOffset | 是 | 否 | 2.6 文档中未找到此参数 |
| 11 | rpcTimer | 是 | 否 | 3.0 行为未知 |
| 12 | rpcMaxTime | 是 | 否 | 3.0 行为未知 |
| 13 | rpcForceTcp | 是 | 否 | 默认为 TCP |
| 14 | tcpConnTimeout | 是 | 否 | 3.0 行为未知 |
| 15 | syncCheckInterval | 是 | 否 | 3.0 行为未知 |
| 16 | maxTmrCtrl | 是 | 否 | 3.0 行为未知 |
| 17 | monitorReplica | 是 | 否 | 由 RAFT 协议管理多副本 |
| 18 | smlTagNullName | 是 | 否 | 3.0 行为未知 |
| 19 | keepColumnName | 是 | 否 | 3.0 行为未知 |
| 20 | ratioOfQueryCores | 是 | 否 | 由 线程池 相关配置参数决定 |
| 21 | maxStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 22 | maxFirstStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 23 | retryStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 24 | streamCompDelayRatio | 是 | 否 | 3.0 行为未知 |
| 25 | maxVgroupsPerDb | 是 | 否 | 由 create db 的参数 vgroups 指定实际 vgroups 数量 |
| 26 | maxTablesPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 27 | minTablesPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 28 | tableIncStepPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 29 | cache | 是 | 否 | 由 buffer 代替 cache\*blocks |
| 30 | blocks | 是 | 否 | 由 buffer 代替 cache\*blocks |
| 31 | days | 是 | 否 | 由 create db 的参数 duration 取代 |
| 32 | keep | 是 | 否 | 由 create db 的参数 keep 取代 |
| 33 | minRows | 是 | 否 | 由 create db 的参数 minRows 取代 |
| 34 | maxRows | 是 | 否 | 由 create db 的参数 maxRows 取代 |
| 35 | quorum | 是 | 否 | 由 RAFT 协议决定 |
| 36 | comp | 是 | 否 | 由 create db 的参数 comp 取代 |
| 37 | walLevel | 是 | 否 | 由 create db 的参数 wal_level 取代 |
| 38 | fsync | 是 | 否 | 由 create db 的参数 wal_fsync_period 取代 |
| 39 | replica | 是 | 否 | 由 create db 的参数 replica 取代 |
| 40 | partitions | 是 | 否 | 3.0 行为未知 |
| 41 | update | 是 | 否 | 允许更新部分列 |
| 42 | cachelast | 是 | 否 | 由 create db 的参数 cacheModel 取代 |
| 43 | maxSQLLength | 是 | 否 | SQL 上限为 1MB无需参数控制 |
| 44 | maxWildCardsLength | 是 | 否 | 3.0 行为未知 |
| 45 | maxRegexStringLen | 是 | 否 | 3.0 行为未知 |
| 46 | maxNumOfOrderedRes | 是 | 否 | 3.0 行为未知 |
| 47 | maxConnections | 是 | 否 | 取决于系统配置和系统处理能力,详见后面的 Note |
| 48 | mnodeEqualVnodeNum | 是 | 否 | 3.0 行为未知 |
| 49 | http | 是 | 否 | http 服务由 taosAdapter 提供 |
| 50 | httpEnableRecordSql | 是 | 否 | taosd 不提供 http 服务 |
| 51 | httpMaxThreads | 是 | 否 | taosd 不提供 http 服务 |
| 52 | restfulRowLimit | 是 | 否 | taosd 不提供 http 服务 |
| 53 | httpDbNameMandatory | 是 | 否 | taosd 不提供 http 服务 |
| 54 | httpKeepAlive | 是 | 否 | taosd 不提供 http 服务 |
| 55 | enableRecordSql | 是 | 否 | 3.0 行为未知 |
| 56 | maxBinaryDisplayWidth | 是 | 否 | 3.0 行为未知 |
| 57 | stream | 是 | 否 | 默认启用连续查询 |
| 58 | retrieveBlockingModel | 是 | 否 | 3.0 行为未知 |
| 59 | tsdbMetaCompactRatio | 是 | 否 | 3.0 行为未知 |
| 60 | defaultJSONStrType | 是 | 否 | 3.0 行为未知 |
| 61 | walFlushSize | 是 | 否 | 3.0 行为未知 |
| 62 | keepTimeOffset | 是 | 否 | 3.0 行为未知 |
| 63 | flowctrl | 是 | 否 | 3.0 行为未知 |
| 64 | slaveQuery | 是 | 否 | 3.0 行为未知: slave vnode 是否能够处理查询? |
| 65 | adjustMaster | 是 | 否 | 3.0 行为未知 |
| 66 | topicBinaryLen | 是 | 否 | 3.0 行为未知 |
| 67 | telegrafUseFieldNum | 是 | 否 | 3.0 行为未知 |
| 68 | deadLockKillQuery | 是 | 否 | 3.0 行为未知 |
| 69 | clientMerge | 是 | 否 | 3.0 行为未知 |
| 70 | sdbDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 71 | odbcDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 72 | httpDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 73 | monDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 74 | cqDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 75 | shortcutFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 76 | probeSeconds | 是 | 否 | 3.0 行为未知 |
| 77 | probeKillSeconds | 是 | 否 | 3.0 行为未知 |
| 78 | probeInterval | 是 | 否 | 3.0 行为未知 |
| 79 | lossyColumns | 是 | 否 | 3.0 行为未知 |
| 80 | fPrecision | 是 | 否 | 3.0 行为未知 |
| 81 | dPrecision | 是 | 否 | 3.0 行为未知 |
| 82 | maxRange | 是 | 否 | 3.0 行为未知 |
| 83 | range | 是 | 否 | 3.0 行为未知 |

Some files were not shown because too many files have changed in this diff.