@@ -17,11 +17,12 @@ include(${TD_SUPPORT_DIR}/cmake.platform)
 include(${TD_SUPPORT_DIR}/cmake.define)
 include(${TD_SUPPORT_DIR}/cmake.options)
 include(${TD_SUPPORT_DIR}/cmake.version)
-include(${TD_SUPPORT_DIR}/cmake.install)

 # contrib
 add_subdirectory(contrib)

+set_property(GLOBAL PROPERTY GLOBAL_DEPENDS_NO_CYCLES OFF)
+
 # api
 add_library(api INTERFACE)
 target_include_directories(api INTERFACE "include/client")

@@ -36,8 +37,7 @@ add_subdirectory(source)
 add_subdirectory(tools)
 add_subdirectory(utils)
 add_subdirectory(examples/c)
+include(${TD_SUPPORT_DIR}/cmake.install)

 # docs
 add_subdirectory(docs/doxgen)

-# tests (TODO)

README-CN.md
@@ -39,9 +39,9 @@ TDengine is an open-source, high-performance, cloud-native time-series database
 # Building

-TDengine can currently be installed and run on Linux and Windows. Applications on any OS can also connect to the taosd server through taosAdapter's RESTful interface. TDengine supports X64/ARM64 CPUs, and will support MIPS64, Alpha64, ARM32, RISC-V and other architectures in the future.
+TDengine can currently be installed and run on Linux, Windows, and macOS. Applications on any OS can also connect to the taosd server through taosAdapter's RESTful interface. TDengine supports X64/ARM64 CPUs, and will support MIPS64, Alpha64, ARM32, RISC-V and other architectures in the future.

-You can choose to install from source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubenetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only covers installing from source.
+You can choose to install from source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only covers installing from source.

 TDengine also provides a set of auxiliary tools, taosTools, which currently contains taosBenchmark (formerly named taosdemo) and taosdump. By default, TDengine is built without taosTools; you can pass `cmake .. -DBUILD_TOOLS=true` when building TDengine to build taosTools as well.
@@ -104,6 +104,12 @@ sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgco
 sudo yum config-manager --set-enabled Powertools
 ```

+### macOS
+
+```
+brew install argp-standalone pkgconfig
+```
+
 ### Set up the Go development environment

 TDengine contains several components developed in Go, such as taosAdapter. Please refer to the official documentation at golang.org to set up your Go development environment.
@@ -210,14 +216,14 @@ cmake .. -G "NMake Makefiles"
 nmake
 ```

-<!-- ### macOS
+### macOS

-Install the Xcode command line tools and cmake. XCode 11.4+ is required on Catalina and Big Sur.
+Install the XCode command line tools and cmake. XCode 11.4+ is required on Catalina and Big Sur.

 ```bash
 mkdir debug && cd debug
 cmake .. && cmake --build .
-``` -->
+```

 # Installing
@@ -263,6 +269,24 @@ nmake install
 sudo make install
 ```

+See [Directory Structure](https://docs.taosdata.com/reference/directory/) to learn more about the directories and files generated on your system.
+
+Installing from source also configures service management for TDengine; you can also choose to [install from a package](https://docs.taosdata.com/get-started/package/) instead.
+
+After a successful installation, you can double-click the TDengine icon in Applications to start the service, or start it from a terminal:
+
+```bash
+launchctl start com.tdengine.taosd
+```
+
+You can then use the TDengine CLI to connect to the TDengine service. In a terminal, enter:
+
+```bash
+taos
+```
+
+If the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, it prints an error message.
+
 ## Quick Run

 If you do not want to run TDengine as a service, you can also run it directly from the terminal: after the build completes, run the following command (on Windows, the generated executable has a .exe suffix, e.g., taosd.exe):

README.md
@@ -19,7 +19,7 @@ English | [简体中文](README-CN.md) | [TDengine Cloud](https://cloud.tdengine
 # What is TDengine?

-TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-seires databases with the following advantages:
+TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:

 - **[High Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.

@@ -41,7 +41,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum
 # Building

-At the moment, TDengine server supports running on Linux and Windows systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
+At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.

 You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
@@ -105,6 +105,12 @@ If the PowerTools installation fails, you can try to use:
 sudo yum config-manager --set-enabled powertools
 ```

+### macOS
+
+```
+brew install argp-standalone pkgconfig
+```
+
 ### Setup golang environment

 TDengine includes a few components developed in Go, such as taosAdapter. Please refer to the golang.org official documentation for golang environment setup.
@@ -213,14 +219,14 @@ cmake .. -G "NMake Makefiles"
 nmake
 ```

-<!-- ### On macOS platform
+### On macOS platform

 Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.

 ```shell
 mkdir debug && cd debug
 cmake .. && cmake --build .
-``` -->
+```

 # Installing
@@ -258,7 +264,7 @@ After building successfully, TDengine can be installed by:
 nmake install
 ```

-<!--
 ## On macOS platform

 After building successfully, TDengine can be installed by:
@@ -266,7 +272,24 @@ After building successfully, TDengine can be installed by:
 ```bash
 sudo make install
 ```

--->
+Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section.
+
+Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) instead.
+
+To start the service after installation, double-click /applications/TDengine to start the program, or in a terminal, use:
+
+```bash
+launchctl start com.tdengine.taosd
+```
+
+Then users can use the TDengine CLI to connect to the TDengine server. In a terminal, use:
+
+```bash
+taos
+```
+
+If the TDengine CLI connects to the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.

 ## Quick Run
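Taken together, the macOS service flow added above can be exercised end to end; a minimal sketch, assuming only the `com.tdengine.taosd` label used in the text:

```bash
launchctl start com.tdengine.taosd   # start the installed service
launchctl list | grep taosd          # a PID in the first column means it is running
launchctl stop com.tdengine.taosd    # stop it again
```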
@@ -45,10 +45,19 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
     ADD_DEFINITIONS("-DDARWIN -Wno-tautological-pointer-compare")

     MESSAGE("Current system processor is ${CMAKE_SYSTEM_PROCESSOR}.")
-    IF (${CMAKE_SYSTEM_PROCESSOR} MATCHES "arm64" OR ${CMAKE_SYSTEM_PROCESSOR} MATCHES "x86_64")
-        MESSAGE("Current system arch is 64")
+    IF (${CMAKE_SYSTEM_PROCESSOR} MATCHES "arm64")
+        MESSAGE("Current system arch is arm64")
         SET(TD_DARWIN_64 TRUE)
+        SET(TD_DARWIN_ARM64 TRUE)
         ADD_DEFINITIONS("-D_TD_DARWIN_64")
+        ADD_DEFINITIONS("-D_TD_DARWIN_ARM64")
+    ENDIF ()
+
+    IF (${CMAKE_SYSTEM_PROCESSOR} MATCHES "x86_64")
+        MESSAGE("Current system arch is x86_64")
+        SET(TD_DARWIN_64 TRUE)
+        SET(TD_DARWIN_X64 TRUE)
+        ADD_DEFINITIONS("-D_TD_DARWIN_64")
+        ADD_DEFINITIONS("-D_TD_DARWIN_X64")
     ENDIF ()

     ADD_DEFINITIONS("-DHAVE_UNISTD_H")
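For illustration, this is roughly how the two Darwin branches above surface at configure time; a sketch, with the printed text following the MESSAGE() calls in this hunk:

```bash
mkdir -p debug && cd debug
cmake ..   # Apple Silicon prints "Current system arch is arm64"
           # Intel Macs print  "Current system arch is x86_64"
```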
@@ -2,7 +2,7 @@
 # taosadapter
 ExternalProject_Add(taosadapter
     GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
-    GIT_TAG be729ab
+    GIT_TAG cc43ef0
     SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
     BINARY_DIR ""
     #BUILD_IN_SOURCE TRUE
@@ -2,7 +2,7 @@
 # taos-tools
 ExternalProject_Add(taos-tools
     GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
-    GIT_TAG 70f5a1c
+    GIT_TAG 2849aa4
     SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
     BINARY_DIR ""
     #BUILD_IN_SOURCE TRUE
@@ -37,6 +37,11 @@ if(${BUILD_WITH_ICONV})
     cat("${TD_SUPPORT_DIR}/iconv_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
 endif()

+# jemalloc
+if(${JEMALLOC_ENABLED})
+    cat("${TD_SUPPORT_DIR}/jemalloc_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
+endif()
+
 # msvc regex
 if(${BUILD_MSVCREGEX})
     cat("${TD_SUPPORT_DIR}/msvcregex_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
@@ -258,6 +263,19 @@ if(${BUILD_PTHREAD})
     target_link_libraries(pthread INTERFACE libpthreadVC3)
 endif()

+# jemalloc
+if(${JEMALLOC_ENABLED})
+    include(ExternalProject)
+    ExternalProject_Add(jemalloc
+        PREFIX "jemalloc"
+        SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/jemalloc
+        BUILD_IN_SOURCE 1
+        CONFIGURE_COMMAND ./autogen.sh COMMAND ./configure --prefix=${CMAKE_BINARY_DIR}/build/ --disable-initial-exec-tls --with-malloc-conf='background_thread:true,metadata_thp:auto'
+        BUILD_COMMAND ${MAKE}
+    )
+    INCLUDE_DIRECTORIES(${CMAKE_BINARY_DIR}/build/include)
+endif()
+
 # crashdump
 if(${BUILD_CRASHDUMP})
     add_executable(dumper "crashdump/dumper/dumper.c")
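A sketch of switching on the `JEMALLOC_ENABLED` path referenced above at configure time; passing the option with `-D` on the command line is an assumption, not something this diff confirms:

```bash
mkdir -p debug && cd debug
cmake .. -DJEMALLOC_ENABLED=true   # assumed spelling of the option gating the block above
make
```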
@@ -23,8 +23,8 @@ The major features are listed below:
 4. [Stream Processing](../develop/stream/): Not only is continuous query supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
 5. [Data Subscription](../develop/tmq/): Applications can subscribe to a table or a set of tables. The API is the same as Kafka, but you can specify filter conditions.
 6. Visualization
-   - Supports seamless integration with [Grafana](../third-party/grafana/) for visualization.
-   - Supports seamless integration with Google Data Studio.
+   - Supports seamless integration with [Grafana](../third-party/grafana/).
+   - Supports seamless integration with [Google Data Studio](../third-party/google-data-studio/).
 7. Cluster
    - Supports [cluster](../deployment/) with the capability of increasing processing power by adding more nodes.
    - Supports [deployment on Kubernetes](../deployment/k8s/).

@@ -33,7 +33,7 @@ The major features are listed below:
    - Provides [monitoring](../operation/monitor) on running instances of TDengine.
    - Provides many ways to [import](../operation/import) and [export](../operation/export) data.
 9. Tools
-   - Provides an interactive [Command-line Interface (CLI)](../reference/taos-shell) for management, maintenance and ad-hoc queries.
+   - Provides an interactive [Command Line Interface (CLI)](../reference/taos-shell) for management, maintenance and ad-hoc queries.
    - Provides a tool [taosBenchmark](../reference/taosbenchmark/) for testing the performance of TDengine.
 10. Programming
    - Provides [connectors](../reference/connector/) for [C/C++](../reference/connector/cpp), [Java](../reference/connector/java), [Python](../reference/connector/python), [Go](../reference/connector/go), [Rust](../reference/connector/rust), [Node.js](../reference/connector/node) and other programming languages.
@@ -11,7 +11,19 @@ This document describes how to install TDengine in a Docker container and perfor
 ## Run TDengine

-If Docker is already installed on your computer, run the following command:
+If Docker is already installed on your computer, pull the latest TDengine Docker container image:
+
+```shell
+docker pull tdengine/tdengine:latest
+```
+
+Or pull the container image of a specific version:
+
+```shell
+docker pull tdengine/tdengine:3.0.1.4
+```
+
+And then run the following command:

 ```shell
 docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
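After the `docker run` above, a quick sanity check might look like the following sketch; filtering by the `tdengine/tdengine` image and opening the bundled CLI are assumptions about a default setup:

```bash
docker ps --filter ancestor=tdengine/tdengine   # confirm the container is up
docker exec -it $(docker ps -q --filter ancestor=tdengine/tdengine) taos   # open the TDengine CLI inside it
```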
@@ -50,7 +62,7 @@ taos>
 After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:

-Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) in a Linux or Windows terminal.
+Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) in a terminal.

 ```bash
 taosBenchmark
@@ -7,7 +7,7 @@ import Tabs from "@theme/Tabs";
 import TabItem from "@theme/TabItem";
 import PkgListV3 from "/components/PkgListV3";

-This document describes how to install TDengine on Linux and Windows and perform queries and inserts.
+This document describes how to install TDengine on Linux/Windows/macOS and perform queries and inserts.

 - The easiest way to explore TDengine is through [TDengine Cloud](http://cloud.tdengine.com).
 - To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker).
@@ -17,7 +17,7 @@ The full package of TDengine includes the TDengine Server (`taosd`), TDengine Cl
 The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ connector.

-The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on 64-bit Windows.
+The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.

 ## Installation
@@ -111,11 +111,18 @@ Note: TDengine only supports Windows Server 2016/2019 and Windows 10/11 on the W
    <PkgListV3 type={3}/>
 2. Run the downloaded package to install TDengine.

+</TabItem>
+<TabItem label="macOS" value="macos">
+
+1. Download the macOS installation package.
+   <PkgListV3 type={7}/>
+2. Run the downloaded package to install TDengine.
+
 </TabItem>
 </Tabs>

 :::info
-For information about TDengine releases, see [Release History](../../releases/tdengine).
+For information about other TDengine releases, check [Release History](../../releases/tdengine).
 :::

 :::note
@@ -178,12 +185,33 @@ The following `systemctl` commands can help you manage TDengine service:
 After the installation is complete, run `C:\TDengine\taosd.exe` to start TDengine Server.

+</TabItem>
+<TabItem label="macOS" value="macos">
+
+After the installation is complete, double-click /applications/TDengine to start the program, or run `launchctl start com.tdengine.taosd` to start TDengine Server.
+
+The following `launchctl` commands can help you manage TDengine service:
+
+- Start TDengine Server: `launchctl start com.tdengine.taosd`
+
+- Stop TDengine Server: `launchctl stop com.tdengine.taosd`
+
+- Check TDengine Server status: `launchctl list | grep taosd`
+
+:::info
+
+- The `launchctl` command does not require _root_ privileges. You don't need to use the `sudo` command.
+- The first column returned by `launchctl list | grep taosd` is the PID of the program; if it is `-`, the TDengine service is not running.
+
+:::
+
 </TabItem>
 </Tabs>

 ## Command Line Interface (CLI)

-You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, you can execute `taos` in the Linux terminal where TDengine is installed, or you can run `taos.exe` in the `C:\TDengine` directory of the Windows terminal where TDengine is installed to start the TDengine command line.
+You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, you can execute `taos` in the Linux/macOS terminal where TDengine is installed, or you can run `taos.exe` in the `C:\TDengine` directory of the Windows terminal where TDengine is installed to start the TDengine command line.

 ```bash
 taos
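What the status check described above looks like in practice; a sketch, with sample output following launchctl's standard PID/status/label columns:

```bash
launchctl list | grep taosd
# running:  12345   0   com.tdengine.taosd
# stopped:  -       0   com.tdengine.taosd
```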
@@ -213,13 +241,13 @@ SELECT * FROM t;
 Query OK, 2 row(s) in set (0.003128s)
 ```

-You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on either Linux or Windows machines. For more information, see [TDengine CLI](../../reference/taos-shell/).
+You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on any of these machines. For more information, see [TDengine CLI](../../reference/taos-shell/).

 ## Test data insert performance

 After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:

-Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) in a Linux or Windows terminal.
+Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) in a terminal.

 ```bash
 taosBenchmark
@@ -3,7 +3,7 @@ title: Get Started
 description: This article describes how to install TDengine and test its performance.
 ---

-You can install and run TDengine on Linux and Windows machines as well as Docker containers. You can also deploy TDengine as a managed service with TDengine Cloud.
+You can install and run TDengine on Linux/Windows/macOS machines as well as Docker containers. You can also deploy TDengine as a managed service with TDengine Cloud.

 The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).
@@ -138,9 +138,9 @@ Node.js connector provides different ways of establishing connections by providi
 1. Install Node.js Native Connector

 ```
 npm install @tdengine/client
 ```

 :::note
 It's recommended to use a Node version between `node-v12.8.0` and `node-v13.0.0`.

@@ -148,9 +148,9 @@ It's recommended to use a Node version between `node-v12.8.0` and `node-v13.0.0`
 2. Install Node.js REST Connector

 ```
 npm install @tdengine/rest
 ```

 </TabItem>
 <TabItem label="C#" value="csharp">
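A combined sketch of the two installs above; the version check mirrors the range recommended in the note:

```bash
node --version                 # ideally between node-v12.8.0 and node-v13.0.0 for the native connector
npm install @tdengine/client   # native connector (needs the TDengine client driver)
npm install @tdengine/rest     # REST connector
```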
@@ -6,8 +6,6 @@ The data model employed by TDengine is similar to that of a relational database.
 Note: before you read this chapter, please make sure you have already read through [Key Concepts](/concept/), since TDengine introduces new concepts like "one table for one [data collection point](/concept/#data-collection-point)" and "[super table](/concept/#super-table-stable)".
-
-
 ## Create Database

 The characteristics of time-series data from different data collection points may be different. Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For e.g. days to keep, number of replicas, data block size, whether data updates are allowed and other configurable parameters would be determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, there are a lot of parameters that can be configured, such as the days to keep data, the number of replicas, the size of the cache, time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in single data file and so on. An example is shown as follows:

@@ -17,10 +15,11 @@ CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 WAL_LEVEL 1;
 ```

 In the above SQL statement:

 - a database named "power" is created
 - the data in it is retained for 365 days, which means that data older than 365 days will be deleted automatically
 - a new data file will be created every 10 days
-- the size of the write cache pool on each vnode is 16 MB
+- the size of the write cache pool on each VNode is 16 MB
 - the number of vgroups is 100
 - WAL is enabled but fsync is disabled. For more details please refer to [Database](/taos-sql/database).
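The same statement can be scripted; a minimal sketch using the TDengine CLI, where `-s` executes one command non-interactively:

```bash
taos -s "CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 WAL_LEVEL 1;"
taos -s "SHOW DATABASES;"   # confirm the new database and its parameters
```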
@@ -34,12 +34,12 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0
 :::note

-- All the data in `tag_set` will be converted to nchar type automatically.
+- All the data in `tag_set` will be converted to NCHAR type automatically.
 - Each value in `field_set` must be self-descriptive for its data type. For example 1.2f32 means a value 1.2 of float type. Without the "f" type suffix, it will be treated as type double.
 - Multiple kinds of precision can be used for the `timestamp` field. Time precision can be from nanosecond (ns) to hour (h).
 - You can configure smlChildTableName in taos.cfg to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpul,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
 - It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur. (smlDataFormat in taos.cfg defaults to false after version 3.0.1.3)
 :::

 For more details please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol)
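To make the note concrete, here is a hedged sketch of writing the example line through taosAdapter's InfluxDB-compatible endpoint; the port, path, credentials, and database name are assumptions based on common defaults:

```bash
# Assumed defaults: taosAdapter on 6041, user root/taosdata, target database "test"
curl -u root:taosdata -X POST \
  "http://localhost:6041/influxdb/v1/write?db=test" \
  --data-binary 'meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249000000'
```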
@@ -67,5 +67,9 @@ For more details please refer to [InfluxDB Line Protocol](https://docs.influxdat
 </Tabs>

 ## Query Examples
-If you want query the data of `location=California.LosAngeles,groupid=2`,here is the query sql:
-select * from `meters.voltage` where location="California.LosAngeles" and groupid=2
+If you want to query the data of `location=California.LosAngeles,groupid=2`, here is the query SQL:
+
+```sql
+SELECT * FROM meters WHERE location = "California.LosAngeles" AND groupid = 2;
+```
@@ -24,15 +24,16 @@ A single line of text is used in OpenTSDB line protocol to represent one row of
 - `metric` will be used as the STable name.
 - `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
 - `value` is a metric which must be a numeric value. The corresponding column name is "value".
-- The last part is the tag set separated by spaces, all tags will be converted to nchar type automatically.
+- The last part is the tag set separated by spaces, all tags will be converted to NCHAR type automatically.

 For example:

 ```txt
 meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
 ```

-- The defult child table name is generated by rules.You can configure smlChildTableName in taos.cfg to specify chile table names, for example, `smlChildTableName=tname`. You can insert `meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
+- The default child table name is generated by rules. You can configure smlChildTableName in taos.cfg to specify child table names, for example, `smlChildTableName=tname`. You can insert `meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
+
 Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.

 ## Examples
@@ -64,10 +65,10 @@ taos> use test;
 Database changed.

 taos> show stables;
-              name              |      created_time       | columns |  tags  | tables |
-============================================================================================
- meters.current                 | 2022-03-30 17:04:10.877 |       2 |      2 |      2 |
- meters.voltage                 | 2022-03-30 17:04:10.882 |       2 |      2 |      2 |
+              name              |
+=================================
+ meters.current                 |
+ meters.voltage                 |
 Query OK, 2 row(s) in set (0.002544s)

 taos> select tbname, * from `meters.current`;
@@ -79,6 +80,11 @@ taos> select tbname, * from `meters.current`;
 t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
 Query OK, 4 row(s) in set (0.005399s)
 ```

 ## Query Examples
-If you want query the data of `location=California.LosAngeles groupid=3`,here is the query sql:
-select * from `meters.voltage` where location="California.LosAngeles" and groupid=3
+If you want to query the data of `location=California.LosAngeles groupid=3`, here is the query SQL:
+
+```sql
+SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 3;
+```
@@ -46,10 +46,10 @@ Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http
 :::note

-- In JSON protocol, strings will be converted to nchar type and numeric values will be converted to double type.
+- In JSON protocol, strings will be converted to NCHAR type and numeric values will be converted to double type.
 - Only data in array format is accepted and so an array must be used even if there is only one row.
-- The defult child table name is generated by rules.You can configure smlChildTableName in taos.cfg to specify chile table names, for example, `smlChildTableName=tname`. You can insert `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
+- The default child table name is generated by rules. You can configure smlChildTableName in taos.cfg to specify child table names, for example, `smlChildTableName=tname`. You can insert `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
 :::

 ## Examples
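As an illustration of the JSON format those notes describe, a hedged sketch of posting one datapoint through taosAdapter; the endpoint path, port, credentials, and database name are all assumptions:

```bash
# Assumed defaults: taosAdapter on 6041, user root/taosdata, target database "test"
curl -u root:taosdata -X POST \
  "http://localhost:6041/opentsdb/v1/put/json/test" \
  -d '[{"metric":"meters.current","timestamp":1648432611250,"value":10.3,"tags":{"location":"California.LosAngeles","groupid":2}}]'
```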
@@ -81,10 +81,10 @@ taos> use test;
 Database changed.

 taos> show stables;
-              name              |      created_time       | columns |  tags  | tables |
-============================================================================================
- meters.current                 | 2022-03-29 16:05:25.193 |       2 |      2 |      1 |
- meters.voltage                 | 2022-03-29 16:05:25.200 |       2 |      2 |      1 |
+              name              |
+=================================
+ meters.current                 |
+ meters.voltage                 |
 Query OK, 2 row(s) in set (0.001954s)

 taos> select * from `meters.current`;
@@ -94,6 +94,11 @@ taos> select * from `meters.current`;
 2022-03-28 09:56:51.250 | 12.600000000 | 2.000000000 | California.SanFrancisco |
 Query OK, 2 row(s) in set (0.004076s)
 ```

 ## Query Examples
-If you want query the data of "tags": {"location": "California.LosAngeles", "groupid": 1},here is the query sql:
-select * from `meters.voltage` where location="California.LosAngeles" and groupid=1
+If you want to query the data of "tags": {"location": "California.LosAngeles", "groupid": 1}, here is the query SQL:
+
+```sql
+SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 1;
+```
@@ -23,9 +23,9 @@ From the perspective of application program, you need to consider:
 3. The distribution of data to be written across tables or sub-tables. Writing to single table in one batch is more efficient than writing to multiple tables in one batch.

 4. Data Writing Protocol.
-   - Prameter binding mode is more efficient than SQL because it doesn't have the cost of parsing SQL.
+   - Parameter binding mode is more efficient than SQL because it doesn't have the cost of parsing SQL.
-   - Writing to known existing tables is more efficient than wirting to uncertain tables in automatic creating mode because the later needs to check whether the table exists or not before actually writing data into it
+   - Writing to known existing tables is more efficient than writing to uncertain tables in automatic creating mode because the latter needs to check whether the table exists or not before actually writing data into it.
-   - Writing in SQL is more efficient than writing in schemaless mode because schemaless writing creats table automatically and may alter table schema
+   - Writing in SQL is more efficient than writing in schemaless mode because schemaless writing creates tables automatically and may alter table schema.

 Application programs need to take care of the above factors and try to take advantage of them. The application program should write to single table in each write batch. The batch size needs to be tuned to a proper value on a specific system. The number of concurrent connections needs to be tuned to a proper value too to achieve the best writing throughput.

@@ -37,7 +37,7 @@ Application programs need to read data from data source then write into TDengine
 2. The speed of data generation from single data source is much higher than the speed of single writing thread. The purpose of message queue in this case is to provide buffer so that data is not lost and multiple writing threads can get data from the buffer.
 3. The data for single table are from multiple data source. In this case the purpose of message queues is to combine the data for single table together to improve the write efficiency.

-If the data source is Kafka, then the appication program is a consumer of Kafka, you can benefit from some kafka features to achieve high performance writing:
+If the data source is Kafka, then the application program is a consumer of Kafka, and you can benefit from some Kafka features to achieve high performance writing:

 1. Put the data for a table in single partition of single topic so that it's easier to put the data for each table together and write in batch
 2. Subscribe multiple topics to accumulate data together.
@@ -56,7 +56,7 @@ This section will introduce the sample programs to demonstrate how to write into
 ### Scenario

-Below are the scenario for the sample programs of high performance wrting.
+Below is the scenario for the sample programs of high performance writing.

 - Application program reads data from data source, the sample program simulates a data source by generating data
 - The speed of single writing thread is much slower than the speed of generating data, so the program starts multiple writing threads while each thread establishes a connection to TDengine and each thread has a message queue of fixed size.

@@ -80,7 +80,7 @@ The sample programs assume the source data is for all the different sub tables i
 | ---------------- | ----------------------------------------------------------------------------------------------------- |
 | FastWriteExample | Main Program |
 | ReadTask | Read data from simulated data source and put into a queue according to the hash value of table name |
-| WriteTask | Read data from Queue, compose a wirte batch and write into TDengine |
+| WriteTask | Read data from Queue, compose a write batch and write into TDengine |
 | MockDataSource | Generate data for some sub tables of super table meters |
 | SQLWriter | WriteTask uses this class to compose SQL, create table automatically, check SQL length and write data |
 | StmtWriter | Write in Parameter binding mode (Not finished yet) |

@@ -95,16 +95,16 @@ The main Program is responsible for:
 1. Create message queues
 2. Start writing threads
 3. Start reading threads
-4. Otuput writing speed every 10 seconds
+4. Output writing speed every 10 seconds

 The main program provides 4 parameters for tuning:

 1. The number of reading threads, default value is 1
-2. The number of writing threads, default alue is 2
+2. The number of writing threads, default value is 2
 3. The total number of tables in the generated data, default value is 1000. These tables are distributed evenly across all writing threads. If the number of tables is very big, it will cost much time to firstly create these tables.
 4. The batch size of single write, default value is 3,000

-The capacity of message queue also impacts performance and can be tuned by modifying program. Normally it's always better to have a larger message queue. A larger message queue means lower possibility of being blocked when enqueueing and higher throughput. But a larger message queue consumes more memory space. The default value used in the sample programs is already big enoug.
+The capacity of the message queue also impacts performance and can be tuned by modifying the program. Normally it's always better to have a larger message queue. A larger message queue means lower possibility of being blocked when enqueueing and higher throughput. But a larger message queue consumes more memory space. The default value used in the sample programs is already big enough.

 ```java
 {{#include docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java}}
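As a rough illustration of those four tuning knobs, launching the sample might look like the sketch below; the class name comes from the table above, while the classpath and the positional argument order are assumptions rather than documented behavior:

```bash
# Hypothetical invocation: 1 reading thread, 2 writing threads, 1000 tables, batch size 3000
java -classpath "target/classes:target/dependency/*" \
  com.taos.example.highvolume.FastWriteExample 1 2 1000 3000
```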
@@ -179,7 +179,7 @@ TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"

 **Launch in IDE**

-1. Clone TDengine repolitory
+1. Clone TDengine repository
 ```
 git clone git@github.com:taosdata/TDengine.git --depth 1
 ```
@@ -282,7 +282,7 @@ Sample programs in Python use multi-process and cross-process message queues.
 | run_read_task Function | Read data and distribute to message queues |
 | MockDataSource Class | Simulate data source, return next 1,000 rows of each table |
 | run_write_task Function | Read as much data as possible from message queue and write in batch |
-| SQLWriter Class | Write in SQL and create table utomatically |
+| SQLWriter Class | Write in SQL and create table automatically |
 | StmtWriter Class | Write in parameter binding mode (not finished yet) |

 <details>

@@ -292,7 +292,7 @@

 1. Monitoring process, initializes database and calculates writing speed
 2. Reading process (n), reads data from data source
-3. Writing process (m), wirtes data into TDengine
+3. Writing process (m), writes data into TDengine

 `main` function provides 5 parameters:

@@ -311,7 +311,7 @@
 <details>
 <summary>run_monitor_process</summary>

-Monitoring process initilizes database and monitoring writing speed.
+Monitoring process initializes the database and monitors writing speed.

 ```python
 {{#include docs/examples/python/fast_write_example.py:monitor}}

@@ -372,7 +372,7 @@ SQLWriter class encapsulates the logic of composing SQL and writing data. Please
 <summary>Launch Sample Program in Python</summary>

-1. Prerequisities
+1. Prerequisites

    - TDengine client driver has been installed
    - Python3 has been installed, with version >= 3.8
@ -3,6 +3,7 @@ title: Developer Guide
|
||||||
---
|
---
|
||||||
|
|
||||||
Before creating an application to process time-series data with TDengine, consider the following:
|
Before creating an application to process time-series data with TDengine, consider the following:
|
||||||
|
|
||||||
1. Choose the method to connect to TDengine. TDengine offers a REST API that can be used with any programming language. It also has connectors for a variety of languages.
|
1. Choose the method to connect to TDengine. TDengine offers a REST API that can be used with any programming language. It also has connectors for a variety of languages.
|
||||||
2. Design the data model based on your own use cases. Consider the main [concepts](/concept/) of TDengine, including "one table per data collection point" and the supertable. Learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, decide whether to create one or more databases, and design a supertable schema that fits your data (see the schema sketch below).
|
2. Design the data model based on your own use cases. Consider the main [concepts](/concept/) of TDengine, including "one table per data collection point" and the supertable. Learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, decide whether to create one or more databases, and design a supertable schema that fits your data (see the schema sketch below).
|
||||||
3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
|
3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
|
||||||
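As a minimal sketch of step 2, a hypothetical smart-meter workload could map to one database and one supertable, with one subtable per meter; the names and columns are assumptions for illustration.

```sql
CREATE DATABASE IF NOT EXISTS power;
-- Columns hold the collected metrics; tags hold the static labels of each collection point.
CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT)
  TAGS (location VARCHAR(64), group_id INT);
```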
|
|
|
@ -49,6 +49,55 @@ The preceding SQL statement can be used in migration scenarios. It returns the C
|
||||||
DESCRIBE [db_name.]stb_name;
|
DESCRIBE [db_name.]stb_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### View tag information for all child tables in the supertable
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> SHOW TABLE TAGS FROM st1;
|
||||||
|
tbname | id | loc |
|
||||||
|
======================================================================
|
||||||
|
st1s1 | 1 | beijing |
|
||||||
|
st1s2 | 2 | shanghai |
|
||||||
|
st1s3 | 3 | guangzhou |
|
||||||
|
Query OK, 3 rows in database (0.004455s)
|
||||||
|
```
|
||||||
|
|
||||||
|
The first column of the returned result set is the subtable name, and the subsequent columns are the tag columns.
|
||||||
|
|
||||||
|
If you already know the name of the tag column, you can use the following statement to get the value of the specified tag column.
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> SELECT DISTINCT TBNAME, id FROM st1;
|
||||||
|
tbname | id |
|
||||||
|
===============================================
|
||||||
|
st1s1 | 1 |
|
||||||
|
st1s2 | 2 |
|
||||||
|
st1s3 | 3 |
|
||||||
|
Query OK, 3 rows in database (0.002891s)
|
||||||
|
```
|
||||||
|
|
||||||
|
| Note that both DISTINCT and TBNAME are required in this SELECT statement; TDengine uses them to optimize the query so that tag values are returned correctly and quickly whether a table holds no data or a large amount of data.
|
||||||
|
|
||||||
|
### View the tag information of a subtable
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> SHOW TAGS FROM st1s1;
|
||||||
|
table_name | db_name | stable_name | tag_name | tag_type | tag_value |
|
||||||
|
============================================================================================================
|
||||||
|
st1s1 | test | st1 | id | INT | 1 |
|
||||||
|
st1s1 | test | st1 | loc | VARCHAR(20) | beijing |
|
||||||
|
Query OK, 2 rows in database (0.003684s)
|
||||||
|
```
|
||||||
|
|
||||||
|
Similarly, you can also use the SELECT statement to query the value of the specified tag column.
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> SELECT DISTINCT TBNAME, id, loc FROM st1s1;
|
||||||
|
tbname | id | loc |
|
||||||
|
==================================================
|
||||||
|
st1s1 | 1 | beijing |
|
||||||
|
Query OK, 1 rows in database (0.001884s)
|
||||||
|
```
|
||||||
|
|
||||||
## Drop STable
|
## Drop STable
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
|
@ -11,7 +11,7 @@ SELECT {DATABASE() | CLIENT_VERSION() | SERVER_VERSION() | SERVER_STATUS() | NOW
|
||||||
SELECT [DISTINCT] select_list
|
SELECT [DISTINCT] select_list
|
||||||
from_clause
|
from_clause
|
||||||
[WHERE condition]
|
[WHERE condition]
|
||||||
[PARTITION BY tag_list]
|
[partition_by_clause]
|
||||||
[window_clause]
|
[window_clause]
|
||||||
[group_by_clause]
|
[group_by_clause]
|
||||||
[order_by_clause]
|
[order_by_clause]
|
||||||
|
@ -52,6 +52,9 @@ window_clause: {
|
||||||
| STATE_WINDOW(col)
|
| STATE_WINDOW(col)
|
||||||
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
|
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
|
||||||
|
|
||||||
|
partition_by_clause:
|
||||||
|
PARTITION BY expr [, expr] ...
|
||||||
|
|
||||||
group_by_clause:
|
group_by_clause:
|
||||||
GROUP BY expr [, expr] ... HAVING condition
|
GROUP BY expr [, expr] ... HAVING condition
|
||||||
|
|
||||||
|
|
|
@ -864,7 +864,7 @@ INTERP(expr)
|
||||||
- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists, an interpolation value will be returned based on the `FILL` parameter.
|
- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists, an interpolation value will be returned based on the `FILL` parameter.
|
||||||
- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input.
|
- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input.
|
||||||
- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified.
|
- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified.
|
||||||
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified `EVERY` parameter.
|
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `EVERY` parameter. The parameter `EVERY` must be an unquoted integer with a time unit of: b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). For example, `EVERY(500a)` will interpolate every 500 milliseconds.
|
||||||
- Interpolation is performed based on `FILL` parameter.
|
- Interpolation is performed based on `FILL` parameter.
|
||||||
- `INTERP` can only be used to interpolate within a single timeline, so it must be used with `partition by tbname` when applied to a STable.
|
- `INTERP` can only be used to interpolate within a single timeline, so it must be used with `partition by tbname` when applied to a STable.
|
||||||
- Pseudo column `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported after version 3.0.1.4).
|
- Pseudo column `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported after version 3.0.1.4).
|
||||||
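Putting these rules together, a hypothetical query (the supertable `meters` and column `current` are assumed here) might look like:

```sql
-- Interpolate current every 500 milliseconds over a one-minute range,
-- per subtable, filling gaps linearly.
SELECT _irowts, INTERP(current) FROM meters
  PARTITION BY tbname
  RANGE('2022-01-01 00:00:00', '2022-01-01 00:01:00')
  EVERY(500a)
  FILL(LINEAR);
```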
|
|
|
@ -136,19 +136,3 @@ The parameters that you can modify through this statement are the same as those
|
||||||
```sql
|
```sql
|
||||||
SHOW LOCAL VARIABLES;
|
SHOW LOCAL VARIABLES;
|
||||||
```
|
```
|
||||||
|
|
||||||
## Combine Vgroups
|
|
||||||
|
|
||||||
```sql
|
|
||||||
MERGE VGROUP vgroup_no1 vgroup_no2;
|
|
||||||
```
|
|
||||||
|
|
||||||
If load and data are not properly balanced among vgroups due to the data in different tim lines having different characteristics, you can combine or separate vgroups.
|
|
||||||
|
|
||||||
## Separate Vgroups
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SPLIT VGROUP vgroup_no;
|
|
||||||
```
|
|
||||||
|
|
||||||
This statement creates a new vgroup and migrates part of the data from the original vgroup to the new vgroup with consistent hashing. During this process, the original vgroup can continue to provide services normally.
|
|
||||||
|
|
|
@ -29,8 +29,8 @@ Provides information about dnodes. Similar to SHOW DNODES.
|
||||||
|
|
||||||
| # | **Column** | **Data Type** | **Description** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :------------: | ------------ | ------------------------- |
|
| --- | :------------: | ------------ | ------------------------- |
|
||||||
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode |
|
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 2 | vnodes | SMALLINT | Maximum number of vnodes on the dnode |
|
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
|
||||||
| 3 | status | BINARY(10) | Current status |
|
| 3 | status | BINARY(10) | Current status |
|
||||||
| 4 | note | BINARY(256) | Reason for going offline or other information |
|
| 4 | note | BINARY(256) | Reason for going offline or other information |
|
||||||
| 5 | id | SMALLINT | Dnode ID |
|
| 5 | id | SMALLINT | Dnode ID |
|
||||||
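As a sketch of the escaping rule noted above, a query that selects such keyword-named columns explicitly would look like:

```sql
-- `vnodes` is a TDengine keyword, so it is backtick-quoted when used as a column name.
SELECT `vnodes`, support_vnodes, status FROM information_schema.ins_dnodes;
```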
|
@ -49,16 +49,6 @@ Provides information about mnodes. Similar to SHOW MNODES.
|
||||||
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
|
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
|
||||||
| 5 | create_time | TIMESTAMP | Creation time |
|
| 5 | create_time | TIMESTAMP | Creation time |
|
||||||
|
|
||||||
## INS_MODULES
|
|
||||||
|
|
||||||
Provides information about modules. Similar to SHOW MODULES.
|
|
||||||
|
|
||||||
| # | **Column** | **Data Type** | **Description** |
|
|
||||||
| --- | :------: | ------------ | ---------- |
|
|
||||||
| 1 | id | SMALLINT | Module ID |
|
|
||||||
| 2 | endpoint | BINARY(134) | Module endpoint |
|
|
||||||
| 3 | module | BINARY(10) | Module status |
|
|
||||||
|
|
||||||
## INS_QNODES
|
## INS_QNODES
|
||||||
|
|
||||||
Provides information about qnodes. Similar to SHOW QNODES.
|
Provides information about qnodes. Similar to SHOW QNODES.
|
||||||
|
@ -88,33 +78,33 @@ Provides information about user-created databases. Similar to SHOW DATABASES.
|
||||||
| 1 | name | BINARY(32) | Database name |
|
| 1 | name | BINARY(32) | Database name |
|
||||||
| 2 | create_time | TIMESTAMP | Creation time |
|
| 2 | create_time | TIMESTAMP | Creation time |
|
||||||
| 3 | ntables | INT | Number of standard tables and subtables (not including supertables) |
|
| 3 | ntables | INT | Number of standard tables and subtables (not including supertables) |
|
||||||
| 4 | vgroups | INT | Number of vgroups |
|
| 4 | vgroups | INT | Number of vgroups. It should be noted that `vgroups` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 6 | replica | INT | Number of replicas |
|
| 6 | replica | INT | Number of replicas. It should be noted that `replica` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 7 | quorum | BINARY(3) | Strong consistency |
|
| 7 | strict | BINARY(3) | Strong consistency. It should be noted that `strict` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 8 | duration | INT | Duration for storage of single files |
|
| 8 | duration | INT | Duration for storage of single files. It should be noted that `duration` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 9 | keep | INT | Data retention period |
|
| 9 | keep | INT | Data retention period. It should be noted that `keep` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 10 | buffer | INT | Write cache size per vnode, in MB |
|
| 10 | buffer | INT | Write cache size per vnode, in MB. It should be noted that `buffer` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 11 | pagesize | INT | Page size for vnode metadata storage engine, in KB |
|
| 11 | pagesize | INT | Page size for vnode metadata storage engine, in KB. It should be noted that `pagesize` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 12 | pages | INT | Number of pages per vnode metadata storage engine |
|
| 12 | pages | INT | Number of pages per vnode metadata storage engine. It should be noted that `pages` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 13 | minrows | INT | Maximum number of records per file block |
|
| 13 | minrows | INT | Minimum number of records per file block. It should be noted that `minrows` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 14 | maxrows | INT | Minimum number of records per file block |
|
| 14 | maxrows | INT | Maximum number of records per file block. It should be noted that `maxrows` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 15 | comp | INT | Compression method |
|
| 15 | comp | INT | Compression method. It should be noted that `comp` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 16 | precision | BINARY(2) | Time precision |
|
| 16 | precision | BINARY(2) | Time precision. It should be noted that `precision` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 17 | status | BINARY(10) | Current database status |
|
| 17 | status | BINARY(10) | Current database status |
|
||||||
| 18 | retention | BINARY (60) | Aggregation interval and retention period |
|
| 18 | retentions | BINARY(60) | Aggregation interval and retention period. It should be noted that `retentions` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 19 | single_stable | BOOL | Whether the database can contain multiple supertables |
|
| 19 | single_stable | BOOL | Whether the database can contain multiple supertables. It should be noted that `single_stable` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 20 | cachemodel | BINARY(60) | Caching method for the newest data |
|
| 20 | cachemodel | BINARY(60) | Caching method for the newest data. It should be noted that `cachemodel` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 21 | cachesize | INT | Memory per vnode used for caching the newest data |
|
| 21 | cachesize | INT | Memory per vnode used for caching the newest data. It should be noted that `cachesize` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 22 | wal_level | INT | WAL level |
|
| 22 | wal_level | INT | WAL level. It should be noted that `wal_level` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 23 | wal_fsync_period | INT | Interval at which WAL is written to disk |
|
| 23 | wal_fsync_period | INT | Interval at which WAL is written to disk. It should be noted that `wal_fsync_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 24 | wal_retention_period | INT | WAL retention period |
|
| 24 | wal_retention_period | INT | WAL retention period. It should be noted that `wal_retention_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 25 | wal_retention_size | INT | Maximum WAL size |
|
| 25 | wal_retention_size | INT | Maximum WAL size. It should be noted that `wal_retention_size` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 26 | wal_roll_period | INT | WAL rotation period |
|
| 26 | wal_roll_period | INT | WAL rotation period. It should be noted that `wal_roll_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 27 | wal_segment_size | BIGINT | WAL file size |
|
| 27 | wal_segment_size | BIGINT | WAL file size. It should be noted that `wal_segment_size` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 28 | stt_trigger | SMALLINT | The threshold for number of files to trigger file merging |
|
| 28 | stt_trigger | SMALLINT | The threshold for number of files to trigger file merging. It should be noted that `stt_trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 29 | table_prefix | SMALLINT | The prefix length in the table name that is ignored when distributing table to vnode based on table name |
|
| 29 | table_prefix | SMALLINT | The prefix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_prefix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 30 | table_suffix | SMALLINT | The suffix length in the table name that is ignored when distributing table to vnode based on table name |
|
| 30 | table_suffix | SMALLINT | The suffix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_suffix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 31 | tsdb_pagesize | INT | The page size for internal storage engine, its unit is KB |
|
| 31 | tsdb_pagesize | INT | The page size for internal storage engine, its unit is KB. It should be noted that `tsdb_pagesize` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
|
|
||||||
## INS_FUNCTIONS
|
## INS_FUNCTIONS
|
||||||
|
|
||||||
|
@ -123,8 +113,8 @@ Provides information about user-defined functions.
|
||||||
| # | **Column** | **Data Type** | **Description** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :---------: | ------------ | -------------- |
|
| --- | :---------: | ------------ | -------------- |
|
||||||
| 1 | name | BINARY(64) | Function name |
|
| 1 | name | BINARY(64) | Function name |
|
||||||
| 2 | comment | BINARY(255) | Function description |
|
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 3 | aggregate | INT | Whether the UDF is an aggregate function |
|
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 4 | output_type | BINARY(31) | Output data type |
|
| 4 | output_type | BINARY(31) | Output data type |
|
||||||
| 5 | create_time | TIMESTAMP | Creation time |
|
| 5 | create_time | TIMESTAMP | Creation time |
|
||||||
| 6 | code_len | INT | Length of the source code |
|
| 6 | code_len | INT | Length of the source code |
|
||||||
|
@ -153,12 +143,12 @@ Provides information about supertables.
|
||||||
| 2 | db_name | BINARY(64) | Database of the supertable |
|
| 2 | db_name | BINARY(64) | Database of the supertable |
|
||||||
| 3 | create_time | TIMESTAMP | Creation time |
|
| 3 | create_time | TIMESTAMP | Creation time |
|
||||||
| 4 | columns | INT | Number of columns |
|
| 4 | columns | INT | Number of columns |
|
||||||
| 5 | tags | INT | Number of tags |
|
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 6 | last_update | TIMESTAMP | Last updated time |
|
| 6 | last_update | TIMESTAMP | Last updated time |
|
||||||
| 7 | table_comment | BINARY(1024) | Table description |
|
| 7 | table_comment | BINARY(1024) | Table description |
|
||||||
| 8 | watermark | BINARY(64) | Window closing time |
|
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results |
|
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 10 | rollup | BINARY(128) | Rollup aggregate function |
|
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
|
|
||||||
## INS_TABLES
|
## INS_TABLES
|
||||||
|
|
||||||
|
@ -173,7 +163,7 @@ Provides information about standard tables and subtables.
|
||||||
| 5 | stable_name | BINARY(192) | Supertable name |
|
| 5 | stable_name | BINARY(192) | Supertable name |
|
||||||
| 6 | uid | BIGINT | Table ID |
|
| 6 | uid | BIGINT | Table ID |
|
||||||
| 7 | vgroup_id | INT | Vgroup ID |
|
| 7 | vgroup_id | INT | Vgroup ID |
|
||||||
| 8 | ttl | INT | Table time-to-live |
|
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 9 | table_comment | BINARY(1024) | Table description |
|
| 9 | table_comment | BINARY(1024) | Table description |
|
||||||
| 10 | type | BINARY(20) | Table type |
|
| 10 | type | BINARY(20) | Table type |
|
||||||
|
|
||||||
|
@ -206,13 +196,13 @@ Provides information about TDengine Enterprise Edition permissions.
|
||||||
| --- | :---------: | ------------ | -------------------------------------------------- |
|
| --- | :---------: | ------------ | -------------------------------------------------- |
|
||||||
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
|
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
|
||||||
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
|
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
|
||||||
| 3 | dnodes | BINARY(10) | Dnodes included in license |
|
| 3 | dnodes | BINARY(10) | Dnodes included in license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 4 | streams | BINARY(10) | Streams included in license |
|
| 4 | streams | BINARY(10) | Streams included in license. It should be noted that `streams` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 5 | users | BINARY(10) | Users included in license |
|
| 5 | users | BINARY(10) | Users included in license. It should be noted that `users` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 6 | streams | BINARY(10) | Accounts included in license |
|
| 6 | accounts | BINARY(10) | Accounts included in license. It should be noted that `accounts` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 7 | storage | BINARY(21) | Storage space included in license |
|
| 7 | storage | BINARY(21) | Storage space included in license. It should be noted that `storage` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 8 | connections | BINARY(21) | Client connections included in license |
|
| 8 | connections | BINARY(21) | Client connections included in license. It should be noted that `connections` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 9 | databases | BINARY(11) | Databases included in license |
|
| 9 | databases | BINARY(11) | Databases included in license. It should be noted that `databases` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
|
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
|
||||||
| 11 | querytime | BINARY(9) | Total query time specified in license |
|
| 11 | querytime | BINARY(9) | Total query time specified in license |
|
||||||
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
|
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
|
||||||
|
@ -227,7 +217,7 @@ Provides information about vgroups.
|
||||||
| --- | :-------: | ------------ | ------------------------------------------------------ |
|
| --- | :-------: | ------------ | ------------------------------------------------------ |
|
||||||
| 1 | vgroup_id | INT | Vgroup ID |
|
| 1 | vgroup_id | INT | Vgroup ID |
|
||||||
| 2 | db_name | BINARY(32) | Database name |
|
| 2 | db_name | BINARY(32) | Database name |
|
||||||
| 3 | tables | INT | Tables in vgroup |
|
| 3 | tables | INT | Tables in vgroup. It should be noted that `tables` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 4 | status | BINARY(10) | Vgroup status |
|
| 4 | status | BINARY(10) | Vgroup status |
|
||||||
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
|
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
|
||||||
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
|
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
|
||||||
|
@ -246,7 +236,7 @@ Provides system configuration information.
|
||||||
| # | **Column** | **Data Type** | **Description** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :------: | ------------ | ------------ |
|
| --- | :------: | ------------ | ------------ |
|
||||||
| 1 | name | BINARY(32) | Parameter |
|
| 1 | name | BINARY(32) | Parameter |
|
||||||
| 2 | value | BINARY(64) | Value |
|
| 2 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
|
|
||||||
## INS_DNODE_VARIABLES
|
## INS_DNODE_VARIABLES
|
||||||
|
|
||||||
|
@ -256,7 +246,7 @@ Provides dnode configuration information.
|
||||||
| --- | :------: | ------------ | ------------ |
|
| --- | :------: | ------------ | ------------ |
|
||||||
| 1 | dnode_id | INT | Dnode ID |
|
| 1 | dnode_id | INT | Dnode ID |
|
||||||
| 2 | name | BINARY(32) | Parameter |
|
| 2 | name | BINARY(32) | Parameter |
|
||||||
| 3 | value | BINARY(64) | Value |
|
| 3 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
|
|
||||||
## INS_TOPICS
|
## INS_TOPICS
|
||||||
|
|
||||||
|
@ -287,5 +277,5 @@ Provides dnode configuration information.
|
||||||
| 5 | source_db | BINARY(64) | Source database |
|
| 5 | source_db | BINARY(64) | Source database |
|
||||||
| 6 | target_db | BINARY(64) | Target database |
|
| 6 | target_db | BINARY(64) | Target database |
|
||||||
| 7 | target_table | BINARY(192) | Target table |
|
| 7 | target_table | BINARY(192) | Target table |
|
||||||
| 8 | watermark | BIGINT | Watermark (see stream processing documentation) |
|
| 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation) |
|
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||||
|
|
|
@ -13,14 +13,6 @@ SHOW APPS;
|
||||||
|
|
||||||
Shows all clients (such as applications) that connect to the cluster.
|
Shows all clients (such as applications) that connect to the cluster.
|
||||||
|
|
||||||
## SHOW BNODES
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW BNODES;
|
|
||||||
```
|
|
||||||
|
|
||||||
Shows information about backup nodes (bnodes) in the system.
|
|
||||||
|
|
||||||
## SHOW CLUSTER
|
## SHOW CLUSTER
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
|
@ -128,14 +120,6 @@ SHOW MNODES;
|
||||||
|
|
||||||
Shows information about mnodes in the system.
|
Shows information about mnodes in the system.
|
||||||
|
|
||||||
## SHOW MODULES
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW MODULES;
|
|
||||||
```
|
|
||||||
|
|
||||||
Shows information about modules installed in the system.
|
|
||||||
|
|
||||||
## SHOW QNODES
|
## SHOW QNODES
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
|
@ -154,14 +138,6 @@ Shows information about the storage space allowed by the license.
|
||||||
|
|
||||||
Note: TDengine Enterprise Edition only.
|
Note: TDengine Enterprise Edition only.
|
||||||
|
|
||||||
## SHOW SNODES
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW SNODES;
|
|
||||||
```
|
|
||||||
|
|
||||||
Shows information about stream processing nodes (snodes) in the system.
|
|
||||||
|
|
||||||
## SHOW STABLES
|
## SHOW STABLES
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
|
|
|
@ -16,10 +16,10 @@ You can use the SHOW CONNECTIONS statement to find the conn_id.
|
||||||
## Terminate a Query
|
## Terminate a Query
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW QUERY query_id;
|
KILL QUERY kill_id;
|
||||||
```
|
```
|
||||||
|
|
||||||
You can use the SHOW QUERIES statement to find the query_id.
|
You can use the SHOW QUERIES statement to find the kill_id.
|
||||||
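A minimal sketch of the workflow:

```sql
SHOW QUERIES;         -- the result set includes a kill_id column
KILL QUERY kill_id;   -- replace kill_id with the value reported by SHOW QUERIES
```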
|
|
||||||
## Terminate a Transaction
|
## Terminate a Transaction
|
||||||
|
|
||||||
|
|
|
@ -6,7 +6,7 @@ title: Problem Diagnostics
|
||||||
|
|
||||||
When a TDengine client is unable to access a TDengine server, the network connection between the client side and the server side must be checked to find the root cause and resolve problems.
|
When a TDengine client is unable to access a TDengine server, the network connection between the client side and the server side must be checked to find the root cause and resolve problems.
|
||||||
|
|
||||||
Diagnostics for network connections can be executed between Linux and Linux or between Linux and Windows.
|
Diagnostics for network connections can be executed between any combination of Linux, Windows, and macOS hosts.
|
||||||
|
|
||||||
Diagnostic steps:
|
Diagnostic steps:
|
||||||
|
|
||||||
|
|
|
@ -13,11 +13,13 @@ After TDengine server or client installation, `taos.h` is located at
|
||||||
|
|
||||||
- Linux:`/usr/local/taos/include`
|
- Linux:`/usr/local/taos/include`
|
||||||
- Windows:`C:\TDengine\include`
|
- Windows:`C:\TDengine\include`
|
||||||
|
- macOS:`/usr/local/include`
|
||||||
|
|
||||||
The dynamic libraries for the TDengine client driver are located at:
|
The dynamic libraries for the TDengine client driver are located at:
|
||||||
|
|
||||||
- Linux: `/usr/local/taos/driver/libtaos.so`
|
- Linux: `/usr/local/taos/driver/libtaos.so`
|
||||||
- Windows: `C:\TDengine\taos.dll`
|
- Windows: `C:\TDengine\taos.dll`
|
||||||
|
- macOS: `/usr/local/lib/libtaos.dylib`
|
||||||
|
|
||||||
## Supported platforms
|
## Supported platforms
|
||||||
|
|
||||||
|
@ -119,7 +121,7 @@ This section shows sample code for standard access methods to TDengine clusters
|
||||||
|
|
||||||
:::info
|
:::info
|
||||||
More example code and downloads are available at [GitHub](https://github.com/taosdata/TDengine/tree/develop/examples/c).
|
More example code and downloads are available at [GitHub](https://github.com/taosdata/TDengine/tree/develop/examples/c).
|
||||||
You can find it in the installation directory under the `examples/c` path. This directory has a makefile and can be compiled under Linux by executing `make` directly.
|
You can find it in the installation directory under the `examples/c` path. This directory has a makefile and can be compiled under Linux/macOS by executing `make` directly.
|
||||||
**Hint:** When compiling in an ARM environment, please remove `-msse4.2` from the makefile. This option is only supported on the x64/x86 hardware platforms.
|
**Hint:** When compiling in an ARM environment, please remove `-msse4.2` from the makefile. This option is only supported on the x64/x86 hardware platforms.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
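Outside of the bundled makefile, a single example can also be compiled by linking against the client driver directly; the file name below and the default install paths are assumptions.

```bash
# Assumes the client driver is installed in the default location.
gcc -o connect_example connect_example.c -ltaos
./connect_example
```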
|
|
|
@ -109,7 +109,7 @@ TDengine's JDBC URL specification format is:
|
||||||
|
|
||||||
For establishing connections, native connections differ slightly from REST connections.
|
For establishing connections, native connections differ slightly from REST connections.
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="native connection">
|
<TabItem value="native" label="native connection">
|
||||||
|
|
||||||
```java
|
```java
|
||||||
|
@ -120,13 +120,13 @@ Connection conn = DriverManager.getConnection(jdbcUrl);
|
||||||
|
|
||||||
In the above example, TSDBDriver, which uses a JDBC native connection, establishes a connection to a hostname `taosdemo.com`, port `6030` (the default port for TDengine), and a database named `test`. In this URL, the user name `user` is specified as `root`, and the `password` is `taosdata`.
|
In the above example, TSDBDriver, which uses a JDBC native connection, establishes a connection to a hostname `taosdemo.com`, port `6030` (the default port for TDengine), and a database named `test`. In this URL, the user name `user` is specified as `root`, and the `password` is `taosdata`.
|
||||||
|
|
||||||
Note: With JDBC native connections, taos-jdbcdriver relies on the client driver (`libtaos.so` on Linux; `taos.dll` on Windows).
|
Note: With JDBC native connections, taos-jdbcdriver relies on the client driver (`libtaos.so` on Linux; `taos.dll` on Windows; `libtaos.dylib` on macOS).
|
||||||
|
|
||||||
The configuration parameters in the URL are as follows:
|
The configuration parameters in the URL are as follows:
|
||||||
|
|
||||||
- user: Log in to the TDengine username. The default value is 'root'.
|
- user: Log in to the TDengine username. The default value is 'root'.
|
||||||
- password: User login password, the default value is 'taosdata'.
|
- password: User login password, the default value is 'taosdata'.
|
||||||
- cfgdir: client configuration file directory path, default '/etc/taos' on Linux OS, 'C:/TDengine/cfg' on Windows OS.
|
- cfgdir: client configuration file directory path, default '/etc/taos' on Linux OS, 'C:/TDengine/cfg' on Windows OS, '/etc/taos' on macOS.
|
||||||
- charset: The character set used by the client, the default value is the system character set.
|
- charset: The character set used by the client, the default value is the system character set.
|
||||||
- locale: Client locale, by default, use the system's current locale.
|
- locale: Client locale, by default, use the system's current locale.
|
||||||
- timezone: The time zone used by the client, the default value is the system's current time zone.
|
- timezone: The time zone used by the client, the default value is the system's current time zone.
|
||||||
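Combining these parameters, a hypothetical URL-based native connection might look like this sketch (host and credential values are illustrative placeholders):

```java
// Minimal sketch: every parameter value below is an illustrative default.
String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test"
        + "?user=root&password=taosdata&charset=UTF-8&locale=en_US.UTF-8&timezone=UTC-8";
Connection conn = DriverManager.getConnection(jdbcUrl);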
|
@ -172,7 +172,7 @@ In the above example, JDBC uses the client's configuration file to establish a c
|
||||||
|
|
||||||
In TDengine, as long as one node in firstEp and secondEp is valid, the connection to the cluster can be established normally.
|
In TDengine, as long as one node in firstEp and secondEp is valid, the connection to the cluster can be established normally.
|
||||||
|
|
||||||
The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located, the default path is `/etc/taos/taos.cfg` on Linux, and the default path is `C://TDengine/cfg/taos.cfg` on Windows.
|
The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located: the default path is `/etc/taos/taos.cfg` on Linux, `C:/TDengine/cfg/taos.cfg` on Windows, and `/etc/taos/taos.cfg` on macOS.
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
<TabItem value="rest" label="REST connection">
|
<TabItem value="rest" label="REST connection">
|
||||||
|
@ -261,7 +261,7 @@ The configuration parameters in properties are as follows.
|
||||||
- TSDBDriver.PROPERTY_KEY_PASSWORD: user login password, default value 'taosdata'.
|
- TSDBDriver.PROPERTY_KEY_PASSWORD: user login password, default value 'taosdata'.
|
||||||
- TSDBDriver.PROPERTY_KEY_BATCH_LOAD: true: pull the result set in batch when executing query; false: pull the result set row by row. The default value is: false.
|
- TSDBDriver.PROPERTY_KEY_BATCH_LOAD: true: pull the result set in batch when executing query; false: pull the result set row by row. The default value is: false.
|
||||||
- TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true: when executing executeBatch of Statement, if there is a SQL execution failure in the middle, continue to execute the following sql. false: no longer execute any statement after the failed SQL. The default value is: false.
|
- TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true: when executing executeBatch of Statement, if there is a SQL execution failure in the middle, continue to execute the following sql. false: no longer execute any statement after the failed SQL. The default value is: false.
|
||||||
- TSDBDriver.PROPERTY_KEY_CONFIG_DIR: only works when using JDBC native connection. Client configuration file directory path, default value `/etc/taos` on Linux OS, default value `C:/TDengine/cfg` on Windows OS.
|
- TSDBDriver.PROPERTY_KEY_CONFIG_DIR: only works when using JDBC native connection. Client configuration file directory path, default value `/etc/taos` on Linux OS, default value `C:/TDengine/cfg` on Windows OS, default value `/etc/taos` on macOS.
|
||||||
- TSDBDriver.PROPERTY_KEY_CHARSET: the character set used by the client; the default value is the system character set.
|
- TSDBDriver.PROPERTY_KEY_CHARSET: the character set used by the client; the default value is the system character set.
|
||||||
- TSDBDriver.PROPERTY_KEY_LOCALE: this only takes effect when using JDBC native connection. Client language environment, the default value is system current locale.
|
- TSDBDriver.PROPERTY_KEY_LOCALE: this only takes effect when using JDBC native connection. Client language environment, the default value is system current locale.
|
||||||
- TSDBDriver.PROPERTY_KEY_TIME_ZONE: only takes effect when using JDBC native connection. The time zone used by the client; the default value is the system's current time zone.
|
- TSDBDriver.PROPERTY_KEY_TIME_ZONE: only takes effect when using JDBC native connection. The time zone used by the client; the default value is the system's current time zone.
|
||||||
|
@ -896,7 +896,7 @@ The source code of the sample application is under `TDengine/examples/JDBC`:
|
||||||
|
|
||||||
**Cause**: The program did not find the dependent native library `taos`.
|
**Cause**: The program did not find the dependent native library `taos`.
|
||||||
|
|
||||||
**Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory, on Linux the following soft link will be created `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` will work.
|
**Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory; on Linux, creating the following soft link will work: `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so`; on macOS the soft link is `/usr/local/lib/libtaos.dylib`.
|
||||||
|
|
||||||
3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
|
3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
|
||||||
|
|
||||||
|
|
|
@ -113,7 +113,7 @@ username:password@protocol(address)/dbname?param=value
|
||||||
```
|
```
|
||||||
### Connecting via connector
|
### Connecting via connector
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="native connection">
|
<TabItem value="native" label="native connection">
|
||||||
|
|
||||||
_taosSql_ implements Go's `database/sql/driver` interface via cgo. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply importing the driver.
|
_taosSql_ implements Go's `database/sql/driver` interface via cgo. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply importing the driver.
|
||||||
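A minimal native-connection sketch using this driver; the DSN values are placeholders that follow the `username:password@protocol(address)/dbname` format above.

```go
package main

import (
	"database/sql"

	_ "github.com/taosdata/driver-go/v3/taosSql" // registers the "taosSql" driver
)

func main() {
	db, err := sql.Open("taosSql", "root:taosdata@tcp(localhost:6030)/")
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```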
|
|
|
@ -55,16 +55,6 @@ taos = "*"
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
|
||||||
<TabItem value="native" label="native connection only">
|
|
||||||
|
|
||||||
In `cargo.toml`, add [taos][taos] and enable the native feature:
|
|
||||||
|
|
||||||
```toml
|
|
||||||
[dependencies]
|
|
||||||
taos = { version = "*", default-features = false, features = ["native"] }
|
|
||||||
```
|
|
||||||
|
|
||||||
</TabItem>
|
|
||||||
<TabItem value="rest" label="Websocket only">
|
<TabItem value="rest" label="Websocket only">
|
||||||
|
|
||||||
In `cargo.toml`, add [taos][taos] and enable the ws feature:
|
In `cargo.toml`, add [taos][taos] and enable the ws feature:
|
||||||
|
@ -75,6 +65,18 @@ taos = { version = "*", default-features = false, features = ["ws"] }
|
||||||
```
|
```
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="native" label="native connection only">
|
||||||
|
|
||||||
|
In `cargo.toml`, add [taos][taos] and enable the native feature:
|
||||||
|
|
||||||
|
```toml
|
||||||
|
[dependencies]
|
||||||
|
taos = { version = "*", default-features = false, features = ["native"] }
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
## Establishing a connection
|
## Establishing a connection
|
||||||
|
|
|
@ -32,7 +32,7 @@ We recommend using the latest version of `taospy`, regardless of the version of
|
||||||
|
|
||||||
### Preparation
|
### Preparation
|
||||||
|
|
||||||
1. Install Python. Python >= 3.6 is recommended. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
|
1. Install Python. Python >= 3.7 is recommended. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
|
||||||
2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
|
2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
|
||||||
If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
|
If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
|
||||||
|
|
||||||
|
@ -80,7 +80,7 @@ pip3 install git+https://github.com/taosdata/taos-connector-python.git
|
||||||
|
|
||||||
### Verify
|
### Verify
|
||||||
|
|
||||||
<Tabs groupId="connect" default="native">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="native connection">
|
<TabItem value="native" label="native connection">
|
||||||
|
|
||||||
For native connection, you need to verify that both the client driver and the Python connector itself are installed correctly. The client driver and Python connector have been installed properly if you can successfully import the `taos` module. In the Python Interactive Shell, you can type:
|
For native connection, you need to verify that both the client driver and the Python connector itself are installed correctly. The client driver and Python connector have been installed properly if you can successfully import the `taos` module. In the Python Interactive Shell, you can type:
|
||||||
|
@ -118,10 +118,10 @@ Requirement already satisfied: taospy in c:\users\username\appdata\local\program
|
||||||
|
|
||||||
Before establishing a connection with the connector, we recommend testing the connectivity of the local TDengine CLI to the TDengine cluster.
|
Before establishing a connection with the connector, we recommend testing the connectivity of the local TDengine CLI to the TDengine cluster.
|
||||||
|
|
||||||
<Tabs>
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="native connection">
|
<TabItem value="native" label="native connection">
|
||||||
|
|
||||||
Ensure that the TDengine instance is up and that the FQDN of the machines in the cluster (the FQDN defaults to hostname if you are starting a standalone version) can be resolved locally, by testing with the `ping` command.
|
Ensure that the TDengine instance is up and that the FQDN of the machines in the cluster (the FQDN defaults to hostname if you are starting a stand-alone version) can be resolved locally, by testing with the `ping` command.
|
||||||
|
|
||||||
```
|
```
|
||||||
ping <FQDN>
|
ping <FQDN>
|
||||||
|
@ -173,7 +173,7 @@ If the test is successful, it will output the server version information, e.g.
|
||||||
|
|
||||||
The following example code assumes that TDengine is installed locally and that the default configuration is used for both FQDN and serverPort.
|
The following example code assumes that TDengine is installed locally and that the default configuration is used for both FQDN and serverPort.
|
||||||
|
|
||||||
<Tabs>
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="native connection" groupId="connect">
|
<TabItem value="native" label="native connection" groupId="connect">
|
||||||
|
|
||||||
```python
|
```python
|
||||||
|
@ -186,7 +186,7 @@ All arguments of the `connect()` function are optional keyword arguments. The fo
|
||||||
- `user` : The TDengine user name. The default value is `root`.
|
- `user` : The TDengine user name. The default value is `root`.
|
||||||
- `password` : TDengine user password. The default value is `taosdata`.
|
- `password` : TDengine user password. The default value is `taosdata`.
|
||||||
- `port` : The starting port of the data node to connect to, i.e., the serverPort configuration. The default value is 6030, which will only take effect if the host parameter is provided.
|
- `port` : The starting port of the data node to connect to, i.e., the serverPort configuration. The default value is 6030, which will only take effect if the host parameter is provided.
|
||||||
- `config` : The path to the client configuration file. On Windows systems, the default is `C:\TDengine\cfg`. The default is `/etc/taos/` on Linux systems.
|
- `config` : The path to the client configuration file. On Windows systems, the default is `C:\TDengine\cfg`. The default is `/etc/taos/` on Linux/macOS.
|
||||||
- `timezone` : The timezone used to convert the TIMESTAMP data in the query results to python `datetime` objects. The default is the local timezone.
|
- `timezone` : The timezone used to convert the TIMESTAMP data in the query results to python `datetime` objects. The default is the local timezone.
|
||||||
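For example, a minimal native connection with explicit keyword arguments (the values shown are the documented defaults) might be:

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", port=6030)
print(conn.server_info)  # prints the server version string
conn.close()
```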
|
|
||||||
:::warning
|
:::warning
|
||||||
|
@ -219,7 +219,7 @@ All arguments to the `connect()` function are optional keyword arguments. The fo
|
||||||
|
|
||||||
### Basic Usage
|
### Basic Usage
|
||||||
|
|
||||||
<Tabs default="native" groupId="connect">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="native connection">
|
<TabItem value="native" label="native connection">
|
||||||
|
|
||||||
##### TaosConnection class
|
##### TaosConnection class
|
||||||
|
@ -289,7 +289,7 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
|
||||||
|
|
||||||
### Used with pandas
|
### Used with pandas
|
||||||
|
|
||||||
<Tabs default="native" groupId="connect">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="native connection">
|
<TabItem value="native" label="native connection">
|
||||||
|
|
||||||
```python
|
```python
|
||||||
|
|
|
@ -85,7 +85,7 @@ If using ARM64 Node.js on Windows 10 ARM, you must add "Visual C++ compilers and
|
||||||
|
|
||||||
### Install via npm
|
### Install via npm
|
||||||
|
|
||||||
<Tabs defaultValue="install_native">
|
<Tabs defaultValue="install_rest">
|
||||||
<TabItem value="install_native" label="Install native connector">
|
<TabItem value="install_native" label="Install native connector">
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
@ -124,7 +124,7 @@ node nodejsChecker.js host=localhost
|
||||||
|
|
||||||
Please choose one of the connectors.
|
Please choose one of the connectors.
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="native connection">
|
<TabItem value="native" label="native connection">
|
||||||
|
|
||||||
Install and import the `@tdengine/client` package.
|
Install and import the `@tdengine/client` package.
|
||||||
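A minimal sketch with the native connector; the host value is a placeholder.

```javascript
const taos = require("@tdengine/client");

const conn = taos.connect({ host: "localhost" });
const cursor = conn.cursor(); // SQL statements are issued through the cursor
```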
|
|
|
@ -97,7 +97,7 @@ dotnet add exmaple.csproj reference src/TDengine.csproj
|
||||||
## Establish a Connection
|
## Establish a Connection
|
||||||
|
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="Native Connection">
|
<TabItem value="native" label="Native Connection">
|
||||||
|
|
||||||
|
@ -173,7 +173,7 @@ ws://localhost:6041/test
|
||||||
|
|
||||||
#### SQL Write
|
#### SQL Write
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="Native Connection">
|
<TabItem value="native" label="Native Connection">
|
||||||
|
|
||||||
|
@ -204,7 +204,7 @@ ws://localhost:6041/test
|
||||||
|
|
||||||
#### Parameter Binding
|
#### Parameter Binding
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="Native Connection">
|
<TabItem value="native" label="Native Connection">
|
||||||
|
|
||||||
|
@ -227,7 +227,7 @@ ws://localhost:6041/test
|
||||||
|
|
||||||
#### Synchronous Query
|
#### Synchronous Query
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="Native Connection">
|
<TabItem value="native" label="Native Connection">
|
||||||
|
|
||||||
|
|
|
@ -13,11 +13,13 @@ After TDengine client or server is installed, `taos.h` is located at:
|
||||||
|
|
||||||
- Linux:`/usr/local/taos/include`
|
- Linux:`/usr/local/taos/include`
|
||||||
- Windows:`C:\TDengine\include`
|
- Windows:`C:\TDengine\include`
|
||||||
|
- macOS:`/usr/local/include`
|
||||||
|
|
||||||
TDengine client driver is located at:
|
TDengine client driver is located at:
|
||||||
|
|
||||||
- Linux: `/usr/local/taos/driver/libtaos.so`
|
- Linux: `/usr/local/taos/driver/libtaos.so`
|
||||||
- Windows: `C:\TDengine\taos.dll`
|
- Windows: `C:\TDengine\taos.dll`
|
||||||
|
- macOS:`/usr/local/lib/libtaos.dylib`
|
||||||
|
|
||||||
## Supported Platforms
|
## Supported Platforms
|
||||||
|
|
||||||
|
|
|
@ -6,5 +6,6 @@ Since the TDengine client driver is written in C, using the native connection re
|
||||||
|
|
||||||
- libtaos.so: After successful installation of TDengine on a Linux system, the dependent Linux version of the client driver `libtaos.so` file will be automatically linked to `/usr/lib/libtaos.so`, which is included in the Linux scannable path and does not need to be specified separately.
|
- libtaos.so: After successful installation of TDengine on a Linux system, the dependent Linux version of the client driver `libtaos.so` file will be automatically linked to `/usr/lib/libtaos.so`, which is included in the Linux scannable path and does not need to be specified separately.
|
||||||
- taos.dll: After installing the client on Windows, the dependent Windows version of the client driver taos.dll file will be automatically copied to the system default search path C:/Windows/System32, again without the need to specify it separately.
|
- taos.dll: After installing the client on Windows, the dependent Windows version of the client driver taos.dll file will be automatically copied to the system default search path C:/Windows/System32, again without the need to specify it separately.
|
||||||
|
- libtaos.dylib: After successful installation of TDengine on a mac system, the dependent macOS version of the client driver `libtaos.dylib` file will be automatically linked to `/usr/local/lib/libtaos.dylib`, which is included in the macOS scannable path and does not need to be specified separately.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
|
@ -4,11 +4,11 @@ Execute TDengine CLI program `taos` directly from the Linux shell to connect to
|
||||||
$ taos
|
$ taos
|
||||||
|
|
||||||
taos> show databases;
|
taos> show databases;
|
||||||
name | create_time | vgroups | ntables | replica | strict | duration | keep | buffer | pagesize | pages | minrows | maxrows | comp | precision | status | retention | single_stable | cachemodel | cachesize | wal_level | wal_fsync_period | wal_retention_period | wal_retention_size | wal_roll_period | wal_seg_size |
|
name |
|
||||||
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================
|
=================================
|
||||||
information_schema | NULL | NULL | 14 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
|
information_schema |
|
||||||
performance_schema | NULL | NULL | 3 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
|
performance_schema |
|
||||||
db | 2022-08-04 14:14:49.385 | 2 | 4 | 1 | off | 14400m | 5254560m,5254560m,5254560m | 96 | 4 | 256 | 100 | 4096 | 2 | ms | ready | NULL | false | none | 1 | 1 | 3000 | 0 | 0 | 0 | 0 |
|
db |
|
||||||
Query OK, 3 rows in database (0.019154s)
|
Query OK, 3 rows in database (0.019154s)
|
||||||
|
|
||||||
taos>
|
taos>
|
||||||
|
|
|
@ -2,12 +2,11 @@ Go to the `C:\TDengine` directory from `cmd` and execute TDengine CLI program `t
|
||||||
|
|
||||||
```text
|
```text
|
||||||
taos> show databases;
|
taos> show databases;
|
||||||
name | create_time | vgroups | ntables | replica | strict | duration | keep | buffer | pagesize | pages | minrows | maxrows | comp | precision | status | retention | single_stable | cachemodel | cachesize | wal_level | wal_fsync_period | wal_retention_period | wal_retention_size | wal_roll_period | wal_seg_size |
|
name |
|
||||||
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================
|
=================================
|
||||||
information_schema | NULL | NULL | 14 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
|
information_schema |
|
||||||
performance_schema | NULL | NULL | 3 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
|
performance_schema |
|
||||||
test | 2022-08-04 16:46:40.506 | 2 | 0 | 1 | off | 14400m | 5256000m,5256000m,5256000m | 96 | 4 | 256 |
|
test |
|
||||||
100 | 4096 | 2 | ms | ready | NULL | false | none | 1 | 1 | 3000 | 0 | 0 | 0 | 0 |
|
|
||||||
Query OK, 3 rows in database (0.123000s)
|
Query OK, 3 rows in database (0.123000s)
|
||||||
|
|
||||||
taos>
|
taos>
|
||||||
|
|
|
@@ -8,13 +8,15 @@ TDengine provides a rich set of APIs (application development interface).

 ## Supported platforms

-Currently, TDengine's native interface connectors can support platforms such as x64 and ARM hardware platforms and Linux and Windows development environments. The comparison matrix is as follows.
+Currently, TDengine's native interface connectors support x64 and ARM hardware platforms and Linux/Windows/macOS development environments. The comparison matrix is as follows.

 | **CPU** | **OS** | **Java** | **Python** | **Go** | **Node.js** | **C#** | **Rust** | C/C++ |
 | -------------- | --------- | -------- | ---------- | ------ | ----------- | ------ | -------- | ----- |
 | **X86 64bit** | **Linux** | ● | ● | ● | ● | ● | ● | ● |
 | **X86 64bit** | **Win64** | ● | ● | ● | ● | ● | ● | ● |
+| **X86 64bit** | **macOS** | ○ | ● | ● | ○ | ○ | ● | ● |
 | **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
+| **ARM64** | **macOS** | ○ | ● | ● | ○ | ○ | ● | ● |

 Where ● means the official test verification passed, ○ means the unofficial test verification passed, and -- means no assurance.
@@ -197,6 +197,7 @@ Support InfluxDB query parameters as follows.
 - `p` TDengine password

 Note: InfluxDB token authorization is not supported at present. Only Basic authorization and query parameter validation are supported.
+Example: `curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"`

 ### OpenTSDB
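Since query-parameter validation is accepted alongside Basic authorization, the same write can be expressed with the `u` and `p` query parameters instead of `--user`; a sketch reusing the endpoint and credentials from the example above:

```bash
curl --request POST \
  "http://127.0.0.1:6041/influxdb/v1/write?db=test&u=root&p=taosdata" \
  --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
```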
@@ -5,7 +5,7 @@ toc_max_heading_level: 4
 description: "taosBenchmark (once called taosdemo) is a tool for testing the performance of TDengine."
 ---

-## Introduction
+# Introduction

 taosBenchmark (formerly taosdemo) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can be configured to generate user-defined databases, supertables, subtables, and the time-series data to populate these for performance benchmarking. taosBenchmark is highly configurable, and some of the configurations include the time interval for inserting data, the number of working threads, and the capability to insert disordered data. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.
@@ -23,7 +23,7 @@ There are two ways to install taosBenchmark:

 taosBenchmark needs to be executed from the terminal of the operating system; it supports two configuration methods: [Command-line arguments](#command-line-arguments-in-detail) and [JSON configuration file](#configuration-file-parameters-in-detail). These two methods are mutually exclusive. Users can use `-f <json file>` to specify a configuration file. When running taosBenchmark with command-line arguments to control its behavior, users should use other parameters for configuration, but not the `-f` parameter. In addition, taosBenchmark offers a special way of running without parameters.

-taosBenchmark supports the complete performance testing of TDengine by providing functionally to write, query, and subscribe. These three functions are mutually exclusive, users can only select one of them each time taosBenchmark runs. The query and subscribe functionalities are only configurable using a json configuration file by specifying the parameter `filetype`, while write can be performed through both the command-line and a configuration file. If you want to test the performance of queries or data subscriptionm configure taosBenchmark with the configuration file. You can modify the value of the `filetype` parameter to specify the function that you want to test.
+taosBenchmark supports complete performance testing of TDengine by providing the functionality to write, query, and subscribe. These three functions are mutually exclusive; users can only select one of them each time taosBenchmark runs. The query and subscribe functionalities are only configurable using a JSON configuration file by specifying the parameter `filetype`, while write can be performed through both the command line and a configuration file. If you want to test the performance of queries or data subscription, configure taosBenchmark with the configuration file. You can modify the value of the `filetype` parameter to specify the function that you want to test.

 **Make sure that the TDengine cluster is running correctly before running taosBenchmark.**
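A minimal sketch of the configuration-file method described above: the top-level connection keys mirror the command-line options, and `filetype` selects the function under test. Values here are illustrative, and the full schema is described in the parameter sections of this chapter:

```json
{
  "filetype": "query",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "databases": "test"
}
```

Such a file would then be passed to the tool as `taosBenchmark -f query.json`.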
@@ -231,7 +231,7 @@ The parameters related to database creation are configured in `dbinfo` in the json configuration file

 - **name**: specify the name of the database.

-- **drop**: indicate whether to delete the database before inserting. The default is true.
+- **drop**: indicate whether to delete the database before inserting. The value can be 'yes' or 'no'; 'no' means the existing database is kept. The default is to drop.

 #### Stream processing related configuration parameters
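A `dbinfo` sketch using the two parameters above (remaining keys omitted; values illustrative):

```json
"dbinfo": {
  "name": "test",
  "drop": "yes"
}
```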
@@ -334,13 +334,13 @@ The configuration parameters for specifying super table tag columns and data columns

 - **name**: The name of the column. If used together with count, e.g. "name": "current", "count": 3, then the names of the 3 columns are current, current_2, current_3.

-- **min**: The minimum value of the column/label of the data type.
+- **min**: The minimum value of the column/label of the data type. The generated value will be greater than or equal to the minimum value.

-- **max**: The maximum value of the column/label of the data type.
+- **max**: The maximum value of the column/label of the data type. The generated value will be less than the maximum value.

 - **values**: The value field of the nchar/binary column/label, which will be chosen randomly from the values.

-- **sma**: Insert the column into the BSMA. Enter `yes` or `no`. The default is `no`.
+- **sma**: Insert the column into the SMA. Enter `yes` or `no`. The default is `no`.

 #### Insertion behavior configuration parameters
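A sketch of a column specification exercising the fields above; the `type` key and the surrounding JSON shape are assumptions based on how this chapter describes super table columns, and the values are illustrative:

```json
"columns": [
  { "type": "FLOAT", "name": "current", "count": 3, "min": 0, "max": 100, "sma": "yes" },
  { "type": "NCHAR", "name": "location", "values": ["beijing", "shanghai"] }
]
```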
@@ -12,7 +12,7 @@ If executed on the TDengine server-side, there is no need for additional installation

 ## Execution

-To access the TDengine CLI, you can execute `taos` command-line utility from a Linux terminal or Windows terminal.
+To access the TDengine CLI, you can execute the `taos` command-line utility from a terminal.

 ```bash
 taos
@@ -5,28 +5,28 @@ description: "List of platforms supported by TDengine server, client, and connectors"

 ## List of supported platforms for TDengine server

-| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** |
+| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **macOS** |
-| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- |
+| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | --------- |
-| X64 | ● | ● | ● | ● |
+| X64 | ● | ● | ● | ● | ● |
-| ARM64 | | | ● | |
+| ARM64 | | | ● | | ● |

 Note: ● means officially tested and verified, ○ means unofficially tested and verified.

 ## List of supported platforms for TDengine clients and connectors

-TDengine's connector can support a wide range of platforms, including X64/X86/ARM64/ARM32/MIPS/Alpha hardware platforms and Linux/Win64/Win32 development environments.
+TDengine's connectors support a wide range of platforms, including X64/X86/ARM64/ARM32/MIPS/Alpha hardware platforms and Linux/Win64/Win32/macOS development environments.

 The comparison matrix is as follows.

-| **CPU** | **X64 64bit** | **X64 64bit** | **ARM64** |
+| **CPU** | **X64 64bit** | **X64 64bit** | **ARM64** | **X64 64bit** | **ARM64** |
-| ----------- | ------------- | ------------- | --------- |
+| ----------- | ------------- | ------------- | --------- | ------------- | --------- |
-| **OS** | **Linux** | **Win64** | **Linux** |
+| **OS** | **Linux** | **Win64** | **Linux** | **macOS** | **macOS** |
-| **C/C++** | ● | ● | ● |
+| **C/C++** | ● | ● | ● | ● | ● |
-| **JDBC** | ● | ● | ● |
+| **JDBC** | ● | ● | ● | ○ | ○ |
-| **Python** | ● | ● | ● |
+| **Python** | ● | ● | ● | ● | ● |
-| **Go** | ● | ● | ● |
+| **Go** | ● | ● | ● | ● | ● |
-| **NodeJs** | ● | ● | ● |
+| **NodeJs** | ● | ● | ● | ○ | ○ |
-| **C#** | ● | ● | ○ |
+| **C#** | ● | ● | ○ | ○ | ○ |
-| **RESTful** | ● | ● | ● |
+| **RESTful** | ● | ● | ● | ● | ● |

 Note: ● means the official test is verified, ○ means the unofficial test is verified, and -- means not verified.
@@ -25,10 +25,11 @@ The TDengine client taos can be executed in this container to access TDengine using the following command
 $ docker exec -it tdengine taos

 taos> show databases;
-name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+name |
-====================================================================================================================================================================================================================================================================================
+=================================
-log | 2022-01-17 13:57:22.270 | 10 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
+information_schema |
-Query OK, 1 row(s) in set (0.002843s)
+performance_schema |
+Query OK, 2 row(s) in set (0.002843s)
 ```

 The TDengine server running in the container uses the container's hostname to establish a connection. Using TDengine CLI or various connectors (such as JDBC-JNI) to access the TDengine inside the container from outside the container is more complicated. So the above is the simplest way to access the TDengine service in the container and is suitable for some simple scenarios. Please refer to the next section if you want to access the TDengine service in the container from outside the container using TDengine CLI or various connectors for complex scenarios.
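For context, a container named `tdengine` like the one used above is typically started along these lines; the image tag and port mappings are illustrative, so adjust them to your release:

```bash
docker run -d --name tdengine \
  -p 6030:6030 -p 6041:6041 \
  -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp \
  tdengine/tdengine
```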
@@ -205,7 +205,7 @@ The parameters described in this document by the effect that they have on the system
 :::info
 To handle data insertion and queries from multiple timezones, Unix Timestamps are used and stored in TDengine. Timestamps generated at the same moment in any timezone are identical as Unix timestamps. Note that the conversion to Unix timestamps is done on the client side. To make sure the time on the client side can be converted to a Unix timestamp correctly, the timezone must be set properly.

-On Linux system, TDengine clients automatically obtain timezone from the host. Alternatively, the timezone can be configured explicitly in configuration file `taos.cfg` like below. For example:
+On Linux/macOS, TDengine clients automatically obtain the timezone from the host. Alternatively, the timezone can be configured explicitly in the configuration file `taos.cfg`. For example:

 ```
 timezone UTC-8
@@ -248,9 +248,9 @@ To avoid the problems of using time strings, Unix timestamp can be used directly
 :::info
 A specific type "nchar" is provided in TDengine to store non-ASCII characters such as Chinese, Japanese, and Korean. Characters to be stored in nchar type are first encoded in UCS4-LE before being sent to the server side. Note that the correct encoding is determined by the user. To store non-ASCII characters correctly, the encoding format of the client side needs to be set properly.

-The characters input on the client side are encoded using the default system encoding, which is UTF-8 on Linux, or GB18030 or GBK on some systems in Chinese, POSIX in docker, CP936 on Windows in Chinese. The encoding of the operating system in use must be set correctly so that the characters in nchar type can be converted to UCS4-LE.
+The characters input on the client side are encoded using the default system encoding, which is UTF-8 on Linux/macOS, GB18030 or GBK on some Chinese-language systems, POSIX in Docker, and CP936 on Chinese-language Windows. The encoding of the operating system in use must be set correctly so that the characters in nchar type can be converted to UCS4-LE.

-The locale definition standard on Linux is: <Language\>\_<Region\>.<charset\>, for example, in "zh_CN.UTF-8", "zh" means Chinese, "CN" means China mainland, "UTF-8" means charset. The charset indicates how to display the characters. On Linux and Mac OSX, the charset can be set by locale in the system. On Windows system another configuration parameter `charset` must be used to configure charset because the locale used on Windows is not POSIX standard. Of course, `charset` can also be used on Linux to specify the charset.
+The locale definition standard on Linux/macOS is <Language\>\_<Region\>.<charset\>; for example, in "zh_CN.UTF-8", "zh" means Chinese, "CN" means China mainland, and "UTF-8" is the charset. The charset indicates how characters are displayed. On Linux/macOS, the charset can be set through the system locale. On Windows, another configuration parameter, `charset`, must be used to configure the charset because the locale used on Windows is not POSIX-standard. Of course, `charset` can also be used on Linux/macOS to specify the charset.

 :::
@@ -263,9 +263,9 @@ The locale definition standard on Linux is: <Language\>\_<Region\>.<charset\>
 | Default Value | charset set in the system |

 :::info
-On Linux, if `charset` is not set in `taos.cfg`, when `taos` is started, the charset is obtained from system locale. If obtaining charset from system locale fails, `taos` would fail to start.
+On Linux/macOS, if `charset` is not set in `taos.cfg`, the charset is obtained from the system locale when `taos` starts. If obtaining the charset from the system locale fails, `taos` fails to start.

-So on Linux system, if system locale is set properly, it's not necessary to set `charset` in `taos.cfg`. For example:
+So on Linux/macOS, if the system locale is set properly, it's not necessary to set `charset` in `taos.cfg`. For example:

 ```
 locale zh_CN.UTF-8
@@ -279,7 +279,7 @@ charset CP936

 Refer to the documentation for your operating system before changing the charset.

-On a Linux system, if the charset contained in `locale` is not consistent with that set by `charset`, the later setting in the configuration file takes precedence.
+On Linux/macOS, if the charset contained in `locale` is not consistent with that set by `charset`, the later setting in the configuration file takes precedence.

 ```
 locale zh_CN.UTF-8
@@ -675,7 +675,7 @@ To prevent system resources from being exhausted by multiple concurrent streams
 | Meaning | Whether to generate core file when server crashes |
 | Value Range | 0: false, 1: true |
 | Default Value | 1 |
-| Note | The core file is generated under root directory `systemctl start taosd` is used to start, or under the working directory if `taosd` is started directly on Linux Shell. |
+| Note | The core file is generated under the root directory when `systemctl start taosd`/`launchctl start com.tdengine.taosd` is used to start the server, or under the working directory if `taosd` is started directly from a Linux/macOS shell. |

 ### udf
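As a sketch, assuming the `taos.cfg` parameter this table documents is named `enableCoreFile` (the parameter name is an assumption; check your `taos.cfg` template), the setting would look like:

```
# taos.cfg -- parameter name assumed to be enableCoreFile
enableCoreFile 1
```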
@@ -51,5 +51,6 @@ port: 8125
 Start StatsD after adding the following (assuming the config file is modified to config.js)

 ```
+npm install
 node stats.js config.js &
 ```
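A minimal `config.js` sketch consistent with the steps above, using StatsD's repeater backend to forward received metrics to taosAdapter; the host and port 6044 are assumptions based on taosAdapter defaults, so verify them against your deployment:

```js
{
  port: 8125,
  backends: ["./backends/repeater"],
  // forward everything received on 8125 to taosAdapter (host/port assumed)
  repeater: [{ host: "127.0.0.1", port: 6044 }]
}
```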
@@ -30,21 +30,20 @@ After restarting Prometheus, you can refer to the following example to verify the data

 ```
 taos> show databases;
-name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+name |
-====================================================================================================================================================================================================================================================================================
+=================================
-test | 2022-04-12 08:07:58.756 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+information_schema |
-log | 2022-04-20 07:19:50.260 | 2 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+performance_schema |
-prometheus_data | 2022-04-20 07:21:09.202 | 158 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+prometheus_data |
-db | 2022-04-15 06:37:08.512 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+Query OK, 3 row(s) in set (0.000585s)
-Query OK, 4 row(s) in set (0.000585s)

 taos> use prometheus_data;
 Database changed.

 taos> show stables;
-name | created_time | columns | tags | tables |
+name |
-============================================================================================
+=================================
-metrics | 2022-04-20 07:21:09.209 | 2 | 1 | 1389 |
+metrics |
 Query OK, 1 row(s) in set (0.000487s)

 taos> select * from metrics limit 10;
@@ -89,3 +88,7 @@ VALUE TIMESTAMP

 ```

+:::note
+
+- TDengine will automatically create unique IDs for sub-table names according to its naming rule.
+
+:::
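The Prometheus side of this verification is a `remote_write`/`remote_read` pair in `prometheus.yml` pointing at taosAdapter; a sketch, where the URL paths, port 6041 and the `prometheus_data` database follow the example above and the credentials are the defaults used throughout these docs:

```yaml
remote_write:
  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
    basic_auth:
      username: root
      password: taosdata

remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
    basic_auth:
      username: root
      password: taosdata
```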
@@ -15,6 +15,7 @@ To write Telegraf data to TDengine requires the following preparations.
 - The TDengine cluster is deployed and functioning properly
 - taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
 - Telegraf has been installed. Please refer to the [official documentation](https://docs.influxdata.com/telegraf/v1.22/install/) for Telegraf installation.
+- Telegraf collects the running status measurements of the current system by default. You can enable [input plugins](https://docs.influxdata.com/telegraf/v1.22/plugins/) to let Telegraf collect data in [other formats](https://docs.influxdata.com/telegraf/v1.24/data_formats/input/) and forward it to TDengine.

 ## Configuration steps
 <Telegraf />
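The `<Telegraf />` configuration step above amounts to adding an HTTP output that posts InfluxDB line protocol to taosAdapter; a sketch of the relevant `telegraf.conf` section, where the endpoint and credentials follow the defaults used in these docs and the `telegraf` database matches the verification below:

```toml
[[outputs.http]]
  url = "http://127.0.0.1:6041/influxdb/v1/write?db=telegraf"
  method = "POST"
  timeout = "5s"
  username = "root"
  password = "taosdata"
  data_format = "influx"
```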
@@ -31,26 +32,27 @@ Use TDengine CLI to verify Telegraf correctly writing data to TDengine and read

 ```
 taos> show databases;
-name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+name |
-====================================================================================================================================================================================================================================================================================
+=================================
-telegraf | 2022-04-20 08:47:53.488 | 22 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+information_schema |
-log | 2022-04-20 07:19:50.260 | 9 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+performance_schema |
-Query OK, 2 row(s) in set (0.002401s)
+telegraf |
+Query OK, 3 rows in database (0.010568s)

 taos> use telegraf;
 Database changed.

 taos> show stables;
-name | created_time | columns | tags | tables |
+name |
-============================================================================================
+=================================
-swap | 2022-04-20 08:47:53.532 | 7 | 1 | 1 |
+swap |
-cpu | 2022-04-20 08:48:03.488 | 11 | 2 | 5 |
+cpu |
-system | 2022-04-20 08:47:53.512 | 8 | 1 | 1 |
+system |
-diskio | 2022-04-20 08:47:53.550 | 12 | 2 | 15 |
+diskio |
-kernel | 2022-04-20 08:47:53.503 | 6 | 1 | 1 |
+kernel |
-mem | 2022-04-20 08:47:53.521 | 35 | 1 | 1 |
+mem |
-processes | 2022-04-20 08:47:53.555 | 12 | 1 | 1 |
+processes |
-disk | 2022-04-20 08:47:53.541 | 8 | 5 | 2 |
+disk |
 Query OK, 8 row(s) in set (0.000521s)

 taos> select * from telegraf.system limit 10;
@@ -65,3 +67,11 @@ taos> select * from telegraf.system limit 10;

 Query OK, 3 row(s) in set (0.013269s)
 ```

+:::note
+
+- TDengine takes InfluxDB-format data and creates unique IDs for table names according to its naming rule.
+  You can configure the `smlChildTableName` parameter to generate table names from a specified tag; the inserted data must then carry that tag.
+  For example, add `smlChildTableName=tname` to the taos.cfg file and insert the data `st,tname=cpu1,t1=4 c1=3 1626006833639000000`; the created table name will be cpu1. If multiple lines have the same tname but different tag_set values, the tag_set of the first line is used to create the table automatically and those of the other lines are ignored. Please refer to [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol).
+
+:::
@@ -32,28 +32,29 @@ Use the TDengine CLI to verify that collectd's data is written to TDengine and can be read out

 ```
 taos> show databases;
-name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+name |
-====================================================================================================================================================================================================================================================================================
+=================================
-collectd | 2022-04-20 09:27:45.460 | 95 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+information_schema |
-log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+performance_schema |
-Query OK, 2 row(s) in set (0.003266s)
+collectd |
+Query OK, 3 row(s) in set (0.003266s)

 taos> use collectd;
 Database changed.

 taos> show stables;
-name | created_time | columns | tags | tables |
+name |
-============================================================================================
+=================================
-load_1 | 2022-04-20 09:27:45.492 | 2 | 2 | 1 |
+load_1 |
-memory_value | 2022-04-20 09:27:45.463 | 2 | 3 | 6 |
+memory_value |
-df_value | 2022-04-20 09:27:45.463 | 2 | 4 | 25 |
+df_value |
-load_2 | 2022-04-20 09:27:45.501 | 2 | 2 | 1 |
+load_2 |
-load_0 | 2022-04-20 09:27:45.485 | 2 | 2 | 1 |
+load_0 |
-interface_1 | 2022-04-20 09:27:45.488 | 2 | 3 | 12 |
+interface_1 |
-irq_value | 2022-04-20 09:27:45.476 | 2 | 3 | 31 |
+irq_value |
-interface_0 | 2022-04-20 09:27:45.480 | 2 | 3 | 12 |
+interface_0 |
-entropy_value | 2022-04-20 09:27:45.473 | 2 | 2 | 1 |
+entropy_value |
-swap_value | 2022-04-20 09:27:45.477 | 2 | 3 | 5 |
+swap_value |
 Query OK, 10 row(s) in set (0.002236s)

 taos> select * from collectd.memory_value limit 10;
@@ -72,3 +73,7 @@ taos> select * from collectd.memory_value limit 10;

 Query OK, 10 row(s) in set (0.010348s)
 ```

+:::note
+
+- TDengine will automatically create unique IDs for sub-table names according to its naming rule.
+
+:::
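For reference, the collectd side of this setup points the `network` plugin at taosAdapter; a sketch of the relevant `collectd.conf` lines, where the host and port 6045 are assumptions based on taosAdapter's documented collectd defaults:

```
LoadPlugin network
<Plugin network>
  Server "127.0.0.1" "6045"
</Plugin>
```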
@@ -26,7 +26,7 @@ Start StatsD:
 ```
 $ node stats.js config.js &
 [1] 8546
-$ 20 Apr 09:54:41 - [8546] reading config file: exampleConfig.js
+$ 20 Apr 09:54:41 - [8546] reading config file: config.js
 20 Apr 09:54:41 - server is up INFO
 ```
@@ -40,19 +40,20 @@ Use the TDengine CLI to verify that StatsD data is written to TDengine and can be read out

 ```
 taos> show databases;
-name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+name |
-====================================================================================================================================================================================================================================================================================
+=================================
-log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+information_schema |
-statsd | 2022-04-20 09:54:51.220 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+performance_schema |
-Query OK, 2 row(s) in set (0.003142s)
+statsd |
+Query OK, 3 row(s) in set (0.003142s)

 taos> use statsd;
 Database changed.

 taos> show stables;
-name | created_time | columns | tags | tables |
+name |
-============================================================================================
+=================================
-foo | 2022-04-20 09:54:51.234 | 2 | 1 | 1 |
+foo |
 Query OK, 1 row(s) in set (0.002161s)

 taos> select * from foo;
@@ -63,3 +64,8 @@ Query OK, 1 row(s) in set (0.004179s)

 taos>
 ```

+:::note
+
+- TDengine will automatically create unique IDs for sub-table names according to its naming rule.
+
+:::
@@ -36,39 +36,45 @@ After waiting about 10 seconds, use the TDengine CLI to query TDengine to verify the data

 ```
 taos> show databases;
-name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+name |
-====================================================================================================================================================================================================================================================================================
+=================================
-log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+information_schema |
-icinga2 | 2022-04-20 12:11:39.697 | 13 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+performance_schema |
-Query OK, 2 row(s) in set (0.001867s)
+icinga2 |
+Query OK, 3 row(s) in set (0.001867s)

 taos> use icinga2;
 Database changed.

 taos> show stables;
-name | created_time | columns | tags | tables |
+name |
-============================================================================================
+=================================
-icinga.service.users.state_... | 2022-04-20 12:11:39.726 | 2 | 1 | 1 |
+icinga.service.users.state_... |
-icinga.service.users.acknow... | 2022-04-20 12:11:39.756 | 2 | 1 | 1 |
+icinga.service.users.acknow... |
-icinga.service.procs.downti... | 2022-04-20 12:11:44.541 | 2 | 1 | 1 |
+icinga.service.procs.downti... |
-icinga.service.users.users | 2022-04-20 12:11:39.770 | 2 | 1 | 1 |
+icinga.service.users.users |
-icinga.service.procs.procs_min | 2022-04-20 12:11:44.599 | 2 | 1 | 1 |
+icinga.service.procs.procs_min |
-icinga.service.users.users_min | 2022-04-20 12:11:39.809 | 2 | 1 | 1 |
+icinga.service.users.users_min |
-icinga.check.max_check_atte... | 2022-04-20 12:11:39.847 | 2 | 3 | 2 |
+icinga.check.max_check_atte... |
-icinga.service.procs.state_... | 2022-04-20 12:11:44.522 | 2 | 1 | 1 |
+icinga.service.procs.state_... |
-icinga.service.procs.procs_... | 2022-04-20 12:11:44.576 | 2 | 1 | 1 |
+icinga.service.procs.procs_... |
-icinga.service.users.users_... | 2022-04-20 12:11:39.796 | 2 | 1 | 1 |
+icinga.service.users.users_... |
-icinga.check.latency | 2022-04-20 12:11:39.869 | 2 | 3 | 2 |
+icinga.check.latency |
-icinga.service.procs.procs_... | 2022-04-20 12:11:44.588 | 2 | 1 | 1 |
+icinga.service.procs.procs_... |
-icinga.service.users.downti... | 2022-04-20 12:11:39.746 | 2 | 1 | 1 |
+icinga.service.users.downti... |
-icinga.service.users.users_... | 2022-04-20 12:11:39.783 | 2 | 1 | 1 |
+icinga.service.users.users_... |
-icinga.service.users.reachable | 2022-04-20 12:11:39.736 | 2 | 1 | 1 |
+icinga.service.users.reachable |
-icinga.service.procs.procs | 2022-04-20 12:11:44.565 | 2 | 1 | 1 |
+icinga.service.procs.procs |
-icinga.service.procs.acknow... | 2022-04-20 12:11:44.554 | 2 | 1 | 1 |
+icinga.service.procs.acknow... |
-icinga.service.procs.state | 2022-04-20 12:11:44.509 | 2 | 1 | 1 |
+icinga.service.procs.state |
-icinga.service.procs.reachable | 2022-04-20 12:11:44.532 | 2 | 1 | 1 |
+icinga.service.procs.reachable |
-icinga.check.current_attempt | 2022-04-20 12:11:39.825 | 2 | 3 | 2 |
+icinga.check.current_attempt |
-icinga.check.execution_time | 2022-04-20 12:11:39.898 | 2 | 3 | 2 |
+icinga.check.execution_time |
-icinga.service.users.state | 2022-04-20 12:11:39.704 | 2 | 1 | 1 |
+icinga.service.users.state |
 Query OK, 22 row(s) in set (0.002317s)
 ```

+:::note
+
+- TDengine will automatically create unique IDs for sub-table names according to its naming rule.
+
+:::
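The icinga2 side of this verification uses icinga2's OpenTSDB writer feature pointed at taosAdapter; a sketch, where port 6048 is the taosAdapter OpenTSDB-telnet port conventionally used for icinga2 in these docs (verify against your taosAdapter configuration):

```
object OpenTsdbWriter "opentsdb" {
  host = "127.0.0.1"
  port = 6048
}
```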
@@ -33,35 +33,41 @@ Wait for a few seconds and then use the TDengine CLI to query whether the corresponding database and super tables were created

 ```
 taos> show databases;
-name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+name |
-====================================================================================================================================================================================================================================================================================
+=================================
-tcollector | 2022-04-20 12:44:49.604 | 88 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+information_schema |
-log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+performance_schema |
-Query OK, 2 row(s) in set (0.002679s)
+tcollector |
+Query OK, 3 rows in database (0.001647s)

 taos> use tcollector;
 Database changed.

 taos> show stables;
-name | created_time | columns | tags | tables |
+name |
-============================================================================================
+=================================
-proc.meminfo.hugepages_rsvd | 2022-04-20 12:44:53.945 | 2 | 1 | 1 |
+proc.meminfo.hugepages_rsvd |
-proc.meminfo.directmap1g | 2022-04-20 12:44:54.110 | 2 | 1 | 1 |
+proc.meminfo.directmap1g |
-proc.meminfo.vmallocchunk | 2022-04-20 12:44:53.724 | 2 | 1 | 1 |
+proc.meminfo.vmallocchunk |
-proc.meminfo.hugepagesize | 2022-04-20 12:44:54.004 | 2 | 1 | 1 |
+proc.meminfo.hugepagesize |
-tcollector.reader.lines_dro... | 2022-04-20 12:44:49.675 | 2 | 1 | 1 |
+tcollector.reader.lines_dro... |
-proc.meminfo.sunreclaim | 2022-04-20 12:44:53.437 | 2 | 1 | 1 |
+proc.meminfo.sunreclaim |
-proc.stat.ctxt | 2022-04-20 12:44:55.363 | 2 | 1 | 1 |
+proc.stat.ctxt |
-proc.meminfo.swaptotal | 2022-04-20 12:44:53.158 | 2 | 1 | 1 |
+proc.meminfo.swaptotal |
-proc.uptime.total | 2022-04-20 12:44:52.813 | 2 | 1 | 1 |
+proc.uptime.total |
-tcollector.collector.lines_... | 2022-04-20 12:44:49.895 | 2 | 2 | 51 |
+tcollector.collector.lines_... |
-proc.meminfo.vmallocused | 2022-04-20 12:44:53.704 | 2 | 1 | 1 |
+proc.meminfo.vmallocused |
-proc.meminfo.memavailable | 2022-04-20 12:44:52.939 | 2 | 1 | 1 |
+proc.meminfo.memavailable |
-sys.numa.foreign_allocs | 2022-04-20 12:44:57.929 | 2 | 2 | 1 |
+sys.numa.foreign_allocs |
-proc.meminfo.committed_as | 2022-04-20 12:44:53.639 | 2 | 1 | 1 |
+proc.meminfo.committed_as |
-proc.vmstat.pswpin | 2022-04-20 12:44:54.177 | 2 | 1 | 1 |
+proc.vmstat.pswpin |
-proc.meminfo.cmafree | 2022-04-20 12:44:53.865 | 2 | 1 | 1 |
+proc.meminfo.cmafree |
-proc.meminfo.mapped | 2022-04-20 12:44:53.349 | 2 | 1 | 1 |
+proc.meminfo.mapped |
-proc.vmstat.pgmajfault | 2022-04-20 12:44:54.251 | 2 | 1 | 1 |
+proc.vmstat.pgmajfault |
 ...
 ```

+:::note
+
+- TDengine will automatically create unique IDs for sub-table names according to its naming rule.
+
+:::
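tcollector itself is simply pointed at taosAdapter instead of an OpenTSDB server; a sketch of the launch, assuming your tcollector version accepts host/port options and that port 6049 is the taosAdapter OpenTSDB-telnet port conventionally used for tcollector in these docs (verify both before use):

```bash
# from the tcollector source directory (path is an assumption)
python ./tcollector.py --host 127.0.0.1 --port 6049
```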
@@ -5,7 +5,7 @@ title: Architecture

 ## Cluster and Primary Logic Unit

-The design of TDengine is based on the assumption that any hardware or software system is not 100% reliable and that no single node can provide sufficient computing and storage resources to process massive data. Therefore, since day one, TDengine has been designed as a natively distributed system, with high-reliability architecture. Hardware failure or software failure of a single, or even multiple servers will not affect the availability and reliability of the system. At the same time, through node virtualization and automatic load-balancing technology, TDengine can make the most efficient use of computing and storage resources in heterogeneous clusters to reduce hardware resource needs, significantly.
+The design of TDengine is based on the assumption that no single hardware or software system is 100% reliable and that no single node can provide sufficient computing and storage resources to process massive data. Therefore, since day one, TDengine has been designed as a natively distributed system with a high-reliability architecture that can be scaled out easily. Hardware or software failure of a single, or even multiple, servers will not affect the availability and reliability of the system. At the same time, through node virtualization and automatic load-balancing technology, TDengine can make the most efficient use of computing and storage resources in heterogeneous clusters to significantly reduce hardware resource needs.

 ### Primary Logic Unit
@ -15,44 +15,50 @@ Logical structure diagram of TDengine's distributed architecture is as follows:
|
||||||
|
|
||||||
<center> Figure 1: TDengine architecture diagram </center>
|
<center> Figure 1: TDengine architecture diagram </center>
|
||||||
|
|
||||||
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
|
A complete TDengine system runs on one or more physical nodes. Logically, a complete system includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TDengine client driver (TAOSC). The following is a brief introduction to each logical unit.
|
||||||
|
|
||||||
**Physical node (pnode)**: A pnode is a computer that runs independently and has its own computing, storage and network capabilities. It can be a physical machine, virtual machine, or Docker container installed with OS. The physical node is identified by its configured FQDN (Fully Qualified Domain Name). TDengine relies entirely on FQDN for network communication. If you don't know about FQDN, please check [wikipedia](https://en.wikipedia.org/wiki/Fully_qualified_domain_name).
|
**Physical node (pnode)**: A pnode is a computer that runs independently and has its own computing, storage and network capabilities. It can be a physical machine, virtual machine, or Docker container installed with OS. The physical node is identified by its configured FQDN (Fully Qualified Domain Name). TDengine relies entirely on FQDN for network communication. If you don't know about FQDN, please check [wikipedia](https://en.wikipedia.org/wiki/Fully_qualified_domain_name).
|
||||||
|
|
||||||
**Data node (dnode):** A dnode is a running instance of the TDengine server-side execution code taosd on a physical node (pnode). A working system must have at least one data node. A dnode contains zero to multiple logical virtual nodes (VNODE) and zero or at most one logical management node (mnode). The unique identification of a dnode in the system is determined by the instance's End Point (EP). EP is a combination of FQDN (Fully Qualified Domain Name) of the physical node where the dnode is located and the network port number (Port) configured by the system. By configuring different ports, a physical node (a physical machine, virtual machine or container) can run multiple instances or have multiple data nodes.
|
**Data node (dnode):** A dnode is a running instance of the TDengine server `taosd` on a physical node (pnode). A working system must have at least one data node. A dnode contains zero to multiple virtual nodes (VNODE) and zero or at most one management node (mnode). The unique identification of a dnode in the system is determined by the instance's End Point (EP). EP is a combination of FQDN (Fully Qualified Domain Name) of the physical node where the dnode is located and the network port number (Port) configured by the system. By configuring different ports, a physical node (a physical machine, virtual machine or container) can run multiple instances or have multiple data nodes.
|
||||||
|
|
||||||
**Virtual node (vnode)**: To better support data sharding, load balancing and prevent data from overheating or skewing, data nodes are virtualized into multiple virtual nodes (vnode, V2, V3, V4, etc. in the figure). Each vnode is a relatively independent work unit, which is the basic unit of time-series data storage and has independent running threads, memory space and persistent storage path. A vnode contains a certain number of tables (data collection points). When a new table is created, the system checks whether a new vnode needs to be created. The number of vnodes that can be created on a data node depends on the capacity of the hardware of the physical node where the data node is located. A vnode belongs to only one DB, but a DB can have multiple vnodes. In addition to the stored time-series data, a vnode also stores the schema and tag values of the included tables. A virtual node is uniquely identified in the system by the EP of the data node and the VGroup ID to which it belongs and is created and managed by the management node.
|
**Virtual node (vnode)**: To better support data sharding, load balancing and prevent data from overheating or skewing, data nodes are virtualized into multiple virtual nodes (vnode, V2, V3, V4, etc. in the figure). Each vnode is a relatively independent work unit, which is the basic unit of time-series data storage and has independent running threads, memory space and persistent storage path. A vnode contains a certain number of tables (data collection points). When a database is created, some vnodes are created for the database. The number of vnodes that can be created on a specific dnode depends on the available system resources. Each vnode must belong to a single DB, while each DB can have multiple vnodes. Each vnodes stores time series data plus the schema, tags of the tables hosted by it. A vnode is identified by the EP of the dnode it belongs to and the unique ID of the vgruop it belongs to. Vgroups are created and managed by mnode.
|
||||||
|
|
||||||
**Management node (mnode)**: A virtual logical unit responsible for monitoring and maintaining the running status of all data nodes and load balancing among nodes (M in the figure). At the same time, the management node is also responsible for the storage and management of metadata (including users, databases, tables, static tags, etc.), so it is also called Meta Node. Multiple (up to 3) mnodes can be configured in a TDengine cluster, and they are automatically constructed into a virtual management node group (M0, M1, M2 in the figure). The leader/follower mechanism is adopted for the mnode group and the data synchronization is carried out in a strongly consistent way. Any data update operation can only be executed on the leader. The creation of mnode cluster is completed automatically by the system without manual intervention. There is at most one mnode on each dnode, which is uniquely identified by the EP of the data node to which it belongs. Each dnode automatically obtains the EP of the dnode where all mnodes in the whole cluster are located, through internal messaging interaction.
|
**Management node (mnode)**: A virtual logical unit (M in the figure) responsible for monitoring and maintaining the running status of all data nodes and load balancing among nodes. At the same time, the management node is also responsible for the storage and management of metadata (including users, databases, tables, static tags, etc.), so it is also called Meta Node. Multiple (up to 3) mnodes can be configured in a TDengine cluster, and they are automatically constructed into a virtual management node group (M0, M1, M2 in the figure). mnode adopts RAFT protocol to guarantee high data availability and high data reliability. Any data operation can only be performed through the Leader in the RAFT group. The first mnode in the mnode RAFT group is created automatically when the first dnode of the cluster is deployed. Other two follower mnodes need to be created through SQL command in TDengine CLI. There can be at most one mnode in a single dnode, and the mnode is identified by the EP of the dnode where it's located. Each dnode can communicate with each other to automatically get the EP of all mnodes.
|
||||||
|
|
||||||
**Virtual node group (VGroup)**: Vnodes on different data nodes can form a virtual node group to ensure the high availability of the system. The virtual node group is managed in a leader/follower mechanism. Write operations can only be performed on the leader vnode, and then replicated to follower vnodes, thus ensuring that one single replica of data is copied on multiple physical nodes. The number of virtual nodes in a vgroup equals the number of data replicas. If the number of replicas of a DB is N, the system must have at least N data nodes. The number of replicas can be specified by the parameter `“replica”` when creating a DB, and the default is 1. Using the multi-replication feature of TDengine, the same high data reliability can be achieved without the need for expensive storage devices such as disk arrays. Virtual node groups are created and managed by the management node, and the management node assigns a system unique ID, aka VGroup ID. If two virtual nodes have the same vnode group ID, it means that they belong to the same group and the data is backed up to each other. The number of virtual nodes in a virtual node group can be dynamically changed, allowing only one, that is, no data replication. VGroup ID is never changed. Even if a virtual node group is deleted, its ID will not be reused.
|
**Computation node (qnode)**: A virtual logical unit (Q in the figure) responsible for executing query and computing tasks including the `show` commands based on system built-in tables. There can be multiple qnodes configured in a TDengine cluster to share the query and computing tasks. A qnode is not coupled with a specific database, that means each qnode can execute the query tasks for multiple databases in parallel. There can be at most one qnode in a single dnode, and the qnode is identified by the EP of the dnode. TDengine client driver can get the list of qnodes through the communication with mnode. If there is no qnode available in the system, query and computing tasks are executed by vnodes. When a query task is executed, according to the execution plan, one or more qnodes may be scheduled by the scheduler to execute the task. qnode can get data from vnode, and send the execution result to other qnodes for further processing. With introducing qnodes, TDengine achieves the separation between storage and computing.
|
||||||
|
|
||||||
**TAOSC**: TAOSC is the driver provided by TDengine to applications. It is responsible for dealing with the interaction between application and cluster, and provides the native interface for the C/C++ language. It is also embedded in the JDBC, C #, Python, Go, Node.js language connection libraries. Applications interact with the whole cluster through TAOSC instead of directly connecting to data nodes in the cluster. This module is responsible for obtaining and caching metadata; forwarding requests for insertion, query, etc. to the correct data node; when returning the results to the application, TAOSC also needs to be responsible for the final level of aggregation, sorting, filtering and other operations. For JDBC, C/C++/C#/Python/Go/Node.js interfaces, this module runs on the physical node where the application is located. At the same time, in order to support the fully distributed RESTful interface, TAOSC has a running instance on each dnode of TDengine cluster.
|
**Stream Processing node (snode)**: A virtual logical unit (S in the figure) responsible for stream processing tasks is introduced in TDengine. There can be multiple snodes configured in a TDengine cluster to share the burden of stream processing tasks. snode is not coupled with a specific stream, that means a single snode can execute the tasks of multiple streams. There can be at most one snode in a single dnode, it's identified by the EP of the dnode. mnode schedules available snodes to perform the stream processing tasks. If there is no snode available in the system, stream processing tasks are executed in vnodes.
|
||||||
|
|
||||||
|
**Virtual node group (VGroup)**: Vnodes on different data nodes can form a virtual node group to ensure the high availability of the system. The virtual node group is managed using RAFT protocol. Write operations can only be performed on the leader vnode, and then replicated to follower vnodes, thus ensuring that one single replica of data is copied on multiple physical nodes. The number of virtual nodes in a vgroup equals the number of data replicas. If the number of replicas of a DB is N, the system must have at least N data nodes. The number of replicas can be specified by the parameter `replica` when creating a DB, and the default is 1. Using the multiple replication feature of TDengine, the same high data reliability can be achieved without the need for expensive storage devices such as disk arrays. Virtual node groups are created and managed by the management node, and the management node assigns a system unique ID, aka VGroup ID, to each vgroup. Virtual nodes with the same vnode group ID belong to the same vgroup. If `replica` is set to 1, it means no data replication. The number of replication for a database can be dynamically changed to 3 for high data reliability. Even if a virtual node group is deleted, its ID will not be reused.
|
||||||
|
|
||||||
|
**TDengine client driver**: TAOSC is the abbreviation for TDengine client driver provided by TDengine to applications. It is responsible for dealing with the interaction between applications and the cluster, and provides the native interface for the C/C++ language. It is also embedded in the JDBC, C #, Python, Go, Node.js language connection libraries. Applications interact with the whole cluster through TDengine client driver instead of directly connecting to data nodes in the cluster. This module is responsible for obtaining and caching metadata; forwarding requests for insertion, query, etc, to the correct data node; when returning the results to the application, TAOSC also needs to be responsible for the final aggregation, sorting, filtering and other operations. For JDBC, C/C++/C#/Python/Go/Node.js interfaces, this module runs on the physical node where the application is located. Another critical component in TDengine product, named `taosAdapter` which provides fully distributed RESTful interface, also invokes TDengine client driver to communicate with TDengine cluster.
|
||||||
|
|
||||||
### Node Communication
|
### Node Communication
**Communication mode**: Communication among the data nodes of a TDengine cluster, and between the client driver and each data node, is carried out over TCP. TDengine automatically compresses/decompresses data and signs/authenticates packets according to the configuration and the data packets.
**FQDN configuration:** A data node may have one or more FQDNs, which can be specified with the parameter `fqdn` in the system configuration file `taos.cfg`. If it is not specified, TDengine automatically uses the hostname of the computer as its FQDN. An IP address can also be used as the value of `fqdn`, but this is not recommended because IP addresses may change, and once that happens the whole TDengine cluster stops working. The end point (EP) of a data node is composed of its FQDN and port number. For FQDNs to work, the DNS service must be running, or the hosts files on the nodes must be configured properly.
**Port configuration**: The port of a data node is configured with the parameter `serverPort` in `taos.cfg`.
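As a minimal sketch of the parameters discussed here and in the cluster-joining steps below, a `taos.cfg` might contain the following (the host names and port are assumed examples):

```
# FQDN of this dnode; defaults to the hostname if omitted
fqdn        h1.taosdata.com
# port this dnode listens on
serverPort  6030
# EPs of existing cluster members, consulted when joining a cluster
firstEp     h1.taosdata.com:6030
secondEp    h2.taosdata.com:6030
```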
**Cluster external connection**: A TDengine cluster can accommodate a single data node, several, or even thousands. An application only needs to initiate a connection to any data node in the cluster. The network parameter required for the connection is the End Point (FQDN plus configured port number) of a data node. When starting the TDengine CLI `taos`, the FQDN of the data node can be specified through the option `-h`, and the configured port number through `-p`. If the port is not specified, the TDengine configuration parameter `serverPort` is used.
**Inter-cluster communication**: Data nodes connect with each other through TCP. When a data node starts, it will obtain the EP of the dnode where the mnode is located, and then establish a connection with the mnode to exchange information. There are three steps to obtain EP information of the mnode:
1. Check whether the `dnode.json` file exists; if it does not exist or cannot be opened normally, skip to the second step;
2. Check the system configuration file `taos.cfg` to obtain the node configuration parameters `firstEp` and `secondEp` (the nodes specified by these two parameters can be normal nodes without an mnode; in this case, the node will be redirected to the mnode when it connects). If these two configuration parameters are missing from `taos.cfg` or are invalid, skip to the third step;
3. Set your own EP as a mnode EP and run it independently.
After obtaining the mnode EP list, the data node initiates a connection. It joins the working cluster once the connection is established. If the connection fails, it tries the next item in the mnode EP list. If all attempts fail, the dnode sleeps for a few seconds and tries again.
**Create MNODE**: The management node (mnode) in TDengine is a logical node without a dedicated process; it runs inside a dnode, which is a real operating-system process. So which data node becomes a management node? This is determined automatically by the system without any manual intervention. The principle is as follows: when the first dnode in the cluster starts, it automatically becomes an mnode; additional mnodes must be created with SQL in the TDengine CLI.
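A sketch of creating an additional mnode, assuming the target dnode's ID is 2 (the ID is an assumed example):

```sql
-- run an additional mnode on the dnode whose ID is 2
CREATE MNODE ON DNODE 2;
```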
**Add new data nodes:** After the first data node starts successfully, the system can begin to work. There are two steps to add a new data node into the cluster.
- Step 1: Connect to an existing working data node using the TDengine CLI, then add the End Point of the new data node with the command `create dnode`, as shown in the sketch after these steps.
- Step 2: In the system configuration file `taos.cfg` of the new data node, set the `firstEp` and `secondEp` parameters to the EPs of any two data nodes in the existing cluster. If there is only one existing data node in the system, omit the parameter `secondEp`. Please refer to the user tutorial for detailed steps. In this way, the cluster is established step by step.
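A sketch of step 1, assuming the new data node's EP is `h2.taosdata.com:6030`:

```sql
-- register the new data node with the cluster
CREATE DNODE "h2.taosdata.com:6030";
```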
**Redirection**: Both dnodes and TAOSC initiate their first connection to the mnode. The mnode is created and maintained automatically by the system, so the user does not know which dnode is running it; TDengine only requires a connection to any working dnode in the system. Because every running dnode maintains the current mnode EP List, when it receives a connection request from a newly started dnode or from TAOSC and is not an mnode itself, it replies to the initiator with the mnode EP List. After receiving this list, TAOSC or the newly started dnode retries the connection against an mnode. When the mnode EP List changes, each data node quickly obtains the latest list through inter-node messaging and notifies TAOSC.
### A Typical Data Writing Process
<center> Figure 2: Typical process of TDengine </center>
1. The application initiates a request to insert data through JDBC or other APIs (a sketch of such a request follows this list).
2. TAOSC checks its cache for the vgroup info of the database into which the data is to be inserted. If it is cached, go straight to step 4; otherwise, TAOSC sends a get-metadata request to the mnode.
3. The mnode returns the vgroup info of the database to TAOSC. The vgroup info describes the distribution of the database's vgroups, including the vnode IDs and the End Points of the dnodes hosting them (if the number of replicas is N, there are N groups of End Points). If TAOSC does not receive a response from the mnode for a long time and there are multiple mnodes, TAOSC sends the request to the next mnode.
4. TAOSC checks whether the metadata for the table to be inserted is in its cache. If yes, skip to step 6; otherwise TAOSC sends a request to the corresponding vnode to get the metadata for the table.
5. The vnode returns the table's metadata, including its schema, to TAOSC.
6. TAOSC initiates an insert request to the leader vnode of the table.
7. After the vnode inserts the data, it replies to TAOSC, indicating that the insertion succeeded. If TAOSC does not get a response from the vnode for a long time, TAOSC treats this node as offline. In this case, if there are multiple replicas of the database being written, TAOSC issues the insert request to the next vnode in the vgroup.
8. TAOSC notifies APP that writing is successful.
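As a minimal sketch of the request issued in step 1, assuming table `d1001` (used in the query examples later in this document) has a timestamp column and one `current` column (the schema is an assumption):

```sql
-- a hypothetical insert request sent by the application through TAOSC
INSERT INTO d1001 VALUES ('2022-07-14 10:40:00.034', 10.2);
```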
For step 2: when TAOSC starts, it does not know the End Point of the mnode, so it directly initiates a request to the configured serving End Point of the cluster. If the dnode that receives the request does not have an mnode running, it replies with the mnode EP list, so that TAOSC re-issues the metadata request to the EP of an mnode.

For steps 4 and 6: without cached information, TAOSC cannot recognize the leader in the virtual node group, so it assumes that the first vnode is the leader and sends the request to it. If that vnode is not the leader, it replies to TAOSC with the actual leader, and TAOSC then sends the request to the true leader. Once a successful insertion response is obtained, TAOSC caches the leader's information for further use.

The above flow describes the process of inserting data. The processes of querying and computing are similar. TAOSC encapsulates and hides all these complicated processes, making them transparent to applications.

Through the TAOSC caching mechanism, the mnode needs to be accessed only when a table is accessed for the first time, so the mnode does not become a system bottleneck. However, because schema and vgroups may change (for example, due to load balancing), TAOSC interacts with the mnode regularly to automatically update its cache.
### Storage Model
The data stored by TDengine includes collected time-series data, metadata related to databases and tables, tag data, etc. All of this data is divided into three parts:

- Time-series data: stored in vnodes and composed of data, head and last files. The amount of time-series data is normally huge, and the query load depends on the application scenario. Out-of-order writing is allowed. By adopting the model of **one table for each data collection point**, the data of a given time period is stored continuously, and writing to a single table is a simple append operation. Multiple records can be read at one time, ensuring the best performance for both insert and query operations on a single data collection point.
- Table metadata: includes tags and table schema and is stored in the meta file of each vnode. CRUD operations are supported on table metadata. There is one record per table, so the amount of metadata depends on the number of tables. Table metadata is cached in an LRU fashion, and tag data is indexed. TDengine can run multiple metadata queries in parallel. As long as memory is sufficient, all metadata is held in memory for quick access, and filtering on tens of millions of tags can finish within a few milliseconds. Even when memory is insufficient, TDengine can still perform high-speed queries over tens of millions of tables.
- Database metadata: stored in the mnode and includes system node, user, DB, table schema and other information. The four standard operations of create, delete, update and read are supported. The amount of this data is not large and can be stored in memory, and the number of queries is small because of client caching. Even though TDengine uses centralized storage management for this data, there is no performance bottleneck because of the architecture.
Compared with the typical NoSQL storage model, TDengine stores tag data and time-series data completely separately. This has two major advantages:

- It reduces the redundancy of tag data storage significantly. General NoSQL databases or time-series databases adopt K-V (key-value) storage, in which the key includes a timestamp, a device ID and various tags. Each record carries these duplicated tags, so much storage space is wasted. Moreover, if the application needs to add, modify or delete tags on historical data, it has to traverse the data and rewrite it all, which is an extremely expensive operation.
- It aggregates data efficiently between multiple tables: when aggregating data across tables, TDengine first finds the tables which satisfy the filtering conditions, and then finds the corresponding data blocks of those tables. This greatly reduces the data sets to be scanned, which in turn improves aggregation efficiency. Moreover, tag data is managed and maintained in a full-memory structure, and tag queries over tens of millions of tags can return in milliseconds.
### Data Sharding
The meta data of each table (including schema, tags, etc.) is also stored in vnode.
### Data Partitioning
In addition to vnode sharding, TDengine partitions time-series data by time range. Each data file contains time-series data for only one time range, whose length is determined by the database configuration parameter `duration`. Partitioning by time range also makes it easy to implement data retention policies efficiently: once a data file exceeds the specified number of days (database configuration parameter `keep`), it is automatically deleted. Moreover, different time ranges can be stored in different paths and on different storage media to facilitate tiered storage, so that cold and hot data live on different media and storage costs are reduced significantly.
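A sketch of these two parameters, assuming a database `power` that keeps data for one year in 10-day files:

```sql
-- each data file covers 10 days; data older than 365 days is purged
CREATE DATABASE power DURATION 10 KEEP 365;
```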
In general, **TDengine splits big data by vnode and time range in two dimensions** to manage the data efficiently with horizontal scalability.
## Data Writing and Replication Process
TDengine uses the RAFT protocol to replicate data. If a database has N replicas, a virtual node group has N virtual nodes, where N can be either 1 or 3. In each vnode group, only one vnode is the leader and all others are followers. When an application writes a new record to the system, only the leader vnode can accept the write request. If a follower vnode receives a write request, the system notifies TAOSC to redirect the request to the leader.
### Leader vnode Writing Process
The leader vnode uses the following writing process:

<center> Figure 3: TDengine Leader writing process </center>
1. The leader vnode receives the application's data insertion request, verifies it, and moves to the next step;
2. The leader vnode writes the original request packet into the database log file (WAL). If the database configuration parameter `wal_level` is set to 1, the vnode does not invoke fsync; if `wal_level` is set to 2, fsync is invoked according to the database parameter `wal_fsync_period` (see the sketch after these steps);
3. If there are multiple replicas, the leader vnode forwards the data packet to the follower vnodes in the same virtual node group; the forwarded packet carries a version number along with the data;
4. The leader vnode writes the data into memory and adds the record to the skip list;
5. Leader vnode returns a confirmation message to the application, indicating a successful write.
6. If any of steps 2, 3 or 4 fails, the error is returned directly to the application.
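A sketch of the WAL parameters mentioned in step 2, assuming a database named `power`:

```sql
-- write the WAL and fsync it every 3000 ms; WAL_LEVEL 1 would skip fsync
CREATE DATABASE power WAL_LEVEL 2 WAL_FSYNC_PERIOD 3000;
```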
### Follower vnode Writing Process
For a follower vnode, the write process is as follows:

<center> Figure 4: TDengine Follower Writing Process </center>
1. Follower vnode receives a data insertion request forwarded by Leader vnode;
2. The behavior of `wal_level` and `wal_fsync_period` in a follower vnode is the same as in the leader vnode.
3. Write the data into memory and add the record to the skip list.
Compared with the leader vnode, a follower vnode has no forwarding or reply-confirmation steps, but writing into memory and the WAL is exactly the same.
### Leader/follower Selection
Vnode maintains a version number. When memory data is persisted, the version number is also persisted. For each data update operation, whether it is time-series data or metadata, this version number will be increased by one.
When a vnode starts, its role (leader or follower) is uncertain, and its data is in an unsynchronized state. It must establish TCP connections with the other vnodes in the virtual node group and exchange status, including version numbers and its own role. Through this exchange, the system carries out leader selection according to the standard RAFT protocol.
### Synchronous Replication
For scenarios with strong data-consistency requirements, asynchronous data replication is not sufficient, because there is a small probability of data loss. So TDengine provides a synchronous replication mechanism for users to choose. When creating a database, in addition to specifying the number of replicas with the parameter `replica`, the user also specifies the parameter `strict`. If `strict` is set to 1, the leader vnode returns success to the client only after more than half of the follower vnodes have confirmed that the data has been replicated to them. If follower vnodes go offline and the leader cannot obtain confirmation from more than half of the followers, the leader vnode returns failure to the client.
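A sketch, assuming a database `power` with three replicas and strict (synchronous) replication; the exact value form accepted by `strict` may differ between versions:

```sql
-- the leader confirms a write only after a majority of followers acknowledge it
CREATE DATABASE power REPLICA 3 STRICT 1;
```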
With synchronous replication, the system performance will decrease and latency will increase. Because metadata needs strong consistency, the default policy for data replication between mnodes is synchronous mode.
## Caching and Persistence
### Caching
TDengine adopts a time-driven cache management strategy (First-In-First-Out, FIFO), also known as a write-driven cache management mechanism. This strategy differs from the read-driven caching mode (Least-Recently-Used, LRU): it directly puts the most recently written data into the system buffer, and when the buffer reaches a threshold, the earliest data is written to disk in batches. Generally speaking, for IoT data, users are most concerned about the most recently generated data, that is, the current state. TDengine takes full advantage of this characteristic by keeping the most recently arrived (current-state) data in the buffer.
TDengine provides millisecond-level data collecting capability to users through query functions. Putting the recently arrived data directly in the buffer can respond to users' analysis query for the latest piece or batch of data more quickly, and provide faster database query response capability as a whole. In this sense, **TDengine can be used as a data cache by setting appropriate configuration parameters without deploying Redis or other additional cache systems**. This can significantly simplify the system architecture and reduce operational costs. It should be noted that after TDengine is restarted, the buffer of the system will be emptied, the previously cached data will be written to disk in batches, and the previously cached data will not be reloaded into the buffer. In this sense, TDengine's cache differs from proprietary key-value cache systems.
Each vnode has its own independent memory, composed of multiple memory blocks of fixed size, and the memory of different vnodes is completely isolated. When writing data, similar to writing a log, data is appended sequentially to memory, while each vnode maintains its own skip list for quick lookup. When more than one third of the memory blocks are used, the data is persisted to disk storage, and subsequent write operations are carried out in a new memory block. By this design, one third of the memory blocks in a vnode keep the latest data, achieving the purpose of caching and quick search. The number of memory blocks of a vnode is determined by the database configuration parameter `buffer`.
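A sketch of the cache-related parameter, assuming a database `power` with a 256 MB write buffer per vnode:

```sql
-- in-memory write buffer allocated per vnode, in MB
CREATE DATABASE power BUFFER 256;
```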
### Persistent Storage
TDengine uses a data-driven method to write data from the buffer to hard disk for persistent storage. When the cached data in a vnode reaches a certain amount, TDengine starts a disk-writing thread to write the cached data into persistent storage so that subsequent writes are not blocked. TDengine opens a new database log file when the data is written to disk, and deletes the old log file after successful persistence, to avoid unlimited log growth.
To make full use of the characteristics of time-series data, TDengine splits the data stored in persistent storage by vnode into multiple files, each of which saves data for only a fixed number of days, determined by the database configuration parameter `duration`. Thus, for the given start and end dates of a query, the data files to open can be located immediately without any index, which greatly speeds up read operations.
For time-series data, there is generally a retention policy, determined by the database configuration parameter `keep`. Data files exceeding this number of days are automatically deleted by the system to free up storage space.
Given the `duration` and `keep` parameters, the total number of data files in a vnode is: round up of (keep/duration + 1). For example, with `keep 365` and `duration 10`, a vnode holds about 38 data files. The total number of data files should be neither too large nor too small; 10 to 100 is appropriate. Based on this principle, a reasonable `duration` can be set. In the current version, the parameter `keep` can be modified, but the parameter `duration` cannot be modified once it is set.
In each data file, the data of a table is stored in blocks. A table can have one or more data file blocks. In a file block, data is stored in columns, occupying a continuous storage space, thus greatly improving the reading speed. The size of file block is determined by the system parameter `maxRows` (the maximum number of records per block), and the default value is 4096. This value should not be too large or too small. If it is too large, data location for queries will take a longer time. If it is too small, the index of data block is too large, and the compression efficiency will be low with slower reading speed.
Each data file (with a .data postfix) has a corresponding index file (with a .head postfix). The index file has summary information of a data block for each table, recording the offset of each data block in the data file, start and end time of data and other information which allows the system to locate the data to be found very quickly. Each data file also has a corresponding last file (with a .last postfix), which is designed to prevent data block fragmentation when written in disk. If the number of written records from a table does not reach the system configuration parameter `minRows` (minimum number of records per block), it will be stored in the last file first. At the next write operation to the disk, the newly written records will be merged with the records in last file and then written into data file.
When data is written to disk, the system decides whether to compress the data based on the database configuration parameter `comp`. TDengine provides three compression options: no compression, one-stage compression and two-stage compression, corresponding to comp values of 0, 1 and 2 respectively. One-stage compression is carried out according to the type of data. Compression algorithms include delta-delta coding, simple 8B method, zig-zag coding, LZ4 and other algorithms. Two-stage compression is based on one-stage compression and compressed by general compression algorithm, which has higher compression ratio.
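A sketch of the compression setting, assuming a database `power` with two-stage compression:

```sql
-- COMP: 0 = no compression, 1 = one-stage, 2 = two-stage compression
CREATE DATABASE power COMP 2;
```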
### Tiered Storage
Note: Tiered Storage is only supported in Enterprise Edition.
## Data Query
TDengine provides a variety of query processing functions for tables and STables. In addition to common aggregation queries, TDengine also provides window queries and statistical aggregation functions for time-series data. Query processing in TDengine needs the collaboration of client, vnode, qnode and mnode. A complex aggregate query on a super table may need multiple vnodes and multiple qnodes to share the query and computing tasks.
### Query Process
1. The TDengine client driver `taosc` parses the SQL statement and generates an abstract syntax tree (AST), then checks and verifies the AST against metadata. During this stage, the metadata management module in `taosc` (the Catalog) requests the metadata of the involved databases and tables from the mnode and vnodes.
2. After verification passes, `taosc` generates a distributed query plan and optimizes it.
3. `taosc` schedules the tasks according to the configured query policy; a query sub-task may be scheduled to a vnode or a qnode based on data locality and system load. Note that both vnodes and qnodes are logical execution units; the physical execution node is the dnode (data node).
4. When a dnode receives a query request, it identifies which vnode or qnode the request targets and forwards it to the query execution queue of that vnode or qnode.
5. The query execution thread of the vnode or qnode establishes the fundamental query execution context, executes the query, and notifies the client once partial result data is available.
6. The TDengine client driver `taosc` then initiates next-level query tasks or simply fetches the result.
### Aggregation by Time Axis, Downsampling, Interpolation
Time-series data is different from ordinary data in that each record has a timestamp. So aggregating data by timestamps on the time axis is an important and distinct feature of time-series databases compared with common databases. It is similar to the window query of stream computing engines.
The keyword `interval` is introduced into TDengine to split fixed length time windows on the time axis. The data is aggregated based on time windows, and the data within time window ranges is aggregated as needed. For example:
```sql
select count(*) from d1001 interval(1h) fill(prev);
```
When query results must be produced continuously, missing data in a given time range would otherwise leave that range absent from the results. TDengine therefore provides interpolation of the aggregation result by time window, using the `fill` keyword. For example:
```sql
SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14 23:59:59' INTERVAL(1h) FILL(PREV);
```
For the data collected by device D1001, the number of records per hour is counted. If there is no data in a certain hour, statistical data of the previous hour is returned. TDengine provides forward interpolation (prev), linear interpolation (linear), NULL value filling (NULL), and specific value filling (value).
### Multi-table Aggregation Query
TDengine creates a separate table for each data collection point, but in practical applications it is often necessary to aggregate data from different data collection points. To perform such aggregations efficiently, TDengine introduces the concept of the STable (super table). An STable represents a specific type of data collection point: it is a set of tables that all share the same schema, while each table carries its own static tags. There can be multiple tags, which can be added, deleted and modified at any time. Applications can aggregate or run statistics over all or a subset of the tables under an STable by specifying tag filters, which greatly simplifies application development. The process for aggregation across multiple tables is shown in the following figure:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
<center> Figure 5: Diagram of multi-table aggregation query </center>
|
<center> Figure 5: Diagram of multi-table aggregation query </center>
|
||||||
|
|
||||||
1. Application sends a query condition to system;
|
1. Client requests the metadata for the database and tables from mnode
|
||||||
2. TAOSC sends the STable name to Meta Node(management node);
|
2. mnode returns the requested metadata
|
||||||
3. Management node sends the vnode list owned by the STable back to TAOSC;
|
3. Client sends query requests to every vnode of the STable
|
||||||
4. TAOSC sends the computing request together with tag filters to multiple data nodes corresponding to these vnodes;
|
4. Each vnode performs query locally, and returns the query response to client
|
||||||
5. Each vnode first finds the set of tables within its own node that meet the tag filters from memory, then scans the stored time-series data, completes corresponding aggregation calculations, and returns result to TAOSC;
|
5. Client sends query request to aggregation node, i.e. qnode
|
||||||
6. TAOSC finally aggregates the results returned by multiple data nodes and send them back to application.
|
6. qnode requests the query result data from the vnodes involved
|
||||||
|
7. Each vnode returns its local query result data
|
||||||
|
8. qnode aggregates the result and returns the final result to the client
|
||||||
|
|
||||||
Since TDengine stores tag data and time-series data separately in vnode, by filtering tag data in memory, the set of tables that need to participate in aggregation operation is first found, which reduces the volume of data to be scanned and improves aggregation speed. At the same time, because the data is distributed in multiple vnodes/dnodes, the aggregation operation is carried out concurrently in multiple vnodes, which further improves the aggregation speed. Aggregation functions for ordinary tables and most operations are applicable to STables. The syntax is exactly the same. Please see TDengine SQL for details.
|
Since TDengine stores tag data and time-series data separately in vnode, filtering tag data in memory and finding the set of tables that need to participate in the aggregation operation can reduce the volume of data to be scanned and improve aggregation speed. At the same time, because the data is distributed in multiple vnodes/dnodes, the aggregation operation is carried out concurrently in multiple vnodes, which further improves the aggregation speed. Aggregation functions and most operations for ordinary tables are applicable to STables. The syntax is exactly the same. Please see TDengine SQL for details.
|
||||||
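As a sketch of such a multi-table aggregation (assuming the `meters` STable with `location` and `groupId` tags used in the examples throughout this documentation), a tag-filtered aggregation could be written as:

```sql
-- Only member tables whose location tag matches the filter participate in the aggregation
SELECT AVG(current), MAX(voltage) FROM meters
WHERE location = "California.SanFrancisco"
GROUP BY groupId;
```

The statement reads exactly like an aggregation over an ordinary table; the tag filter in the WHERE clause alone determines which member tables are scanned.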
|
|
||||||
### Precomputation
|
### Precomputation
|
||||||
|
|
||||||
|
|
|
@ -55,14 +55,16 @@ This error indicates that the client could not connect to the server. Perform th
|
||||||
|
|
||||||
7. If you are using the Python, Java, Go, Rust, C#, or Node.js connector on Linux to connect to the server, verify that `libtaos.so` is in the `/usr/local/taos/driver` directory and `/usr/local/taos/driver` is in the `LD_LIBRARY_PATH` environment variable.
|
7. If you are using the Python, Java, Go, Rust, C#, or Node.js connector on Linux to connect to the server, verify that `libtaos.so` is in the `/usr/local/taos/driver` directory and `/usr/local/taos/driver` is in the `LD_LIBRARY_PATH` environment variable.
|
||||||
|
|
||||||
8. If you are using Windows, verify that `C:\TDengine\driver\taos.dll` is in the `PATH` environment variable. If possible, move `taos.dll` to the `C:\Windows\System32` directory.
|
8. If you are using macOS, verify that `libtaos.dylib` is in the `/usr/local/lib` directory and `/usr/local/lib` is in the `LD_LIBRARY_PATH` environment variable.
|
||||||
|
|
||||||
9. On Linux systems, you can use the `nc` tool to check whether a port is accessible:
|
9. If you are using Windows, verify that `C:\TDengine\driver\taos.dll` is in the `PATH` environment variable. If possible, move `taos.dll` to the `C:\Windows\System32` directory.
|
||||||
|
|
||||||
|
10. On Linux/macOS, you can use the `nc` tool to check whether a port is accessible:
|
||||||
- To check whether a UDP port is open, run `nc -vuz {hostIP} {port}`.
|
- To check whether a UDP port is open, run `nc -vuz {hostIP} {port}`.
|
||||||
- To check whether a TCP port on the server side is open, run `nc -l {port}`.
|
- To check whether a TCP port on the server side is open, run `nc -l {port}`.
|
||||||
- To check whether a TCP port on client side is open, run `nc {hostIP} {port}`.
|
- To check whether a TCP port on client side is open, run `nc {hostIP} {port}`.
|
||||||
|
|
||||||
10. On Windows systems, you can run `Test-NetConnection -ComputerName {fqdn} -Port {port}` in PowerShell to check whether a port on the server side is accessible.
|
On Windows systems, you can run `Test-NetConnection -ComputerName {fqdn} -Port {port}` in PowerShell to check whether a port on the server side is accessible.
|
||||||
|
|
||||||
11. You can also use the TDengine CLI to diagnose network issues. For more information, see [Problem Diagnostics](https://docs.tdengine.com/operation/diagnose/).
|
11. You can also use the TDengine CLI to diagnose network issues. For more information, see [Problem Diagnostics](https://docs.tdengine.com/operation/diagnose/).
|
||||||
|
|
||||||
|
|
|
@ -6,6 +6,10 @@ description: TDengine release history, Release Notes and download links.
|
||||||
|
|
||||||
import Release from "/components/ReleaseV3";
|
import Release from "/components/ReleaseV3";
|
||||||
|
|
||||||
|
## 3.0.1.4
|
||||||
|
|
||||||
|
<Release type="tdengine" version="3.0.1.4" />
|
||||||
|
|
||||||
## 3.0.1.3
|
## 3.0.1.3
|
||||||
|
|
||||||
<Release type="tdengine" version="3.0.1.3" />
|
<Release type="tdengine" version="3.0.1.3" />
|
||||||
|
|
|
@ -6,6 +6,10 @@ description: taosTools release history, Release Notes, download links.
|
||||||
|
|
||||||
import Release from "/components/ReleaseV3";
|
import Release from "/components/ReleaseV3";
|
||||||
|
|
||||||
|
## 2.2.4
|
||||||
|
|
||||||
|
<Release type="tools" version="2.2.4" />
|
||||||
|
|
||||||
## 2.2.3
|
## 2.2.3
|
||||||
|
|
||||||
<Release type="tools" version="2.2.3" />
|
<Release type="tools" version="2.2.3" />
|
||||||
|
|
|
@ -184,22 +184,54 @@ void tmq_commit_cb_print(tmq_t* tmq, int32_t code, void* param) {
|
||||||
tmq_t* build_consumer() {
|
tmq_t* build_consumer() {
|
||||||
tmq_conf_res_t code;
|
tmq_conf_res_t code;
|
||||||
tmq_conf_t* conf = tmq_conf_new();
|
tmq_conf_t* conf = tmq_conf_new();
|
||||||
|
|
||||||
code = tmq_conf_set(conf, "enable.auto.commit", "true");
|
code = tmq_conf_set(conf, "enable.auto.commit", "true");
|
||||||
if (TMQ_CONF_OK != code) return NULL;
|
if (TMQ_CONF_OK != code) {
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
code = tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
|
code = tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
|
||||||
if (TMQ_CONF_OK != code) return NULL;
|
if (TMQ_CONF_OK != code) {
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
code = tmq_conf_set(conf, "group.id", "cgrpName");
|
code = tmq_conf_set(conf, "group.id", "cgrpName");
|
||||||
if (TMQ_CONF_OK != code) return NULL;
|
if (TMQ_CONF_OK != code) {
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
code = tmq_conf_set(conf, "client.id", "user defined name");
|
code = tmq_conf_set(conf, "client.id", "user defined name");
|
||||||
if (TMQ_CONF_OK != code) return NULL;
|
if (TMQ_CONF_OK != code) {
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
code = tmq_conf_set(conf, "td.connect.user", "root");
|
code = tmq_conf_set(conf, "td.connect.user", "root");
|
||||||
if (TMQ_CONF_OK != code) return NULL;
|
if (TMQ_CONF_OK != code) {
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
code = tmq_conf_set(conf, "td.connect.pass", "taosdata");
|
code = tmq_conf_set(conf, "td.connect.pass", "taosdata");
|
||||||
if (TMQ_CONF_OK != code) return NULL;
|
if (TMQ_CONF_OK != code) {
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
code = tmq_conf_set(conf, "auto.offset.reset", "earliest");
|
code = tmq_conf_set(conf, "auto.offset.reset", "earliest");
|
||||||
if (TMQ_CONF_OK != code) return NULL;
|
if (TMQ_CONF_OK != code) {
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
code = tmq_conf_set(conf, "experimental.snapshot.enable", "false");
|
code = tmq_conf_set(conf, "experimental.snapshot.enable", "false");
|
||||||
if (TMQ_CONF_OK != code) return NULL;
|
if (TMQ_CONF_OK != code) {
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
|
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
|
||||||
|
|
||||||
|
|
|
@ -8,7 +8,19 @@ description: 使用 Docker 快速体验 TDengine 的高效写入和查询
|
||||||
|
|
||||||
## 启动 TDengine
|
## 启动 TDengine
|
||||||
|
|
||||||
如果已经安装了 Docker,只需执行下面的命令:
|
如果已经安装了 Docker,首先拉取最新的 TDengine 容器镜像:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
docker pull tdengine/tdengine:latest
|
||||||
|
```
|
||||||
|
|
||||||
|
或者指定版本的容器镜像:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
docker pull tdengine/tdengine:3.0.1.4
|
||||||
|
```
|
||||||
|
|
||||||
|
然后只需执行下面的命令:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
|
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
|
||||||
|
@ -46,7 +58,7 @@ taos>
|
||||||
|
|
||||||
可以使用 TDengine 的自带工具 taosBenchmark 快速体验 TDengine 的写入速度。
|
可以使用 TDengine 的自带工具 taosBenchmark 快速体验 TDengine 的写入速度。
|
||||||
|
|
||||||
启动 TDengine 的服务,在 Linux 或 Windows 终端执行 `taosBenchmark`(曾命名为 `taosdemo`):
|
启动 TDengine 的服务,在终端执行 `taosBenchmark`(曾命名为 `taosdemo`):
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ taosBenchmark
|
$ taosBenchmark
|
||||||
|
|
|
@ -10,11 +10,11 @@ import PkgListV3 from "/components/PkgListV3";
|
||||||
|
|
||||||
您可以[用 Docker 立即体验](../../get-started/docker/) TDengine。如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装。
|
您可以[用 Docker 立即体验](../../get-started/docker/) TDengine。如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装。
|
||||||
|
|
||||||
TDengine 完整的软件包包括服务端(taosd)、应用驱动(taosc)、用于与第三方系统对接并提供 RESTful 接口的 taosAdapter、命令行程序(CLI,taos)和一些工具软件。目前 taosAdapter 仅在 Linux 系统上安装和运行,后续将支持 Windows、macOS 等系统。TDengine 除了提供多种语言的连接器之外,还通过 [taosAdapter](../../reference/taosadapter/) 提供 [RESTful 接口](../../connector/rest-api/)。
|
TDengine 完整的软件包包括服务端(taosd)、应用驱动(taosc)、用于与第三方系统对接并提供 RESTful 接口的 taosAdapter、命令行程序(CLI,taos)和一些工具软件。目前 taosdump、TDinsight 仅在 Linux 系统上安装和运行,后续将支持 Windows、macOS 等系统。TDengine 除了提供多种语言的连接器之外,还通过 [taosAdapter](../../reference/taosadapter/) 提供 [RESTful 接口](../../connector/rest-api/)。
|
||||||
|
|
||||||
为方便使用,标准的服务端安装包包含了 taosd、taosAdapter、taosc、taos、taosdump、taosBenchmark、TDinsight 安装脚本和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 Lite 版本的安装包。
|
为方便使用,标准的服务端安装包包含了 taosd、taosAdapter、taosc、taos、taosdump、taosBenchmark、TDinsight 安装脚本和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 Lite 版本的安装包。
|
||||||
|
|
||||||
在 Linux 系统上,TDengine 社区版提供 Deb 和 RPM 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 Deb 支持 Debian/Ubuntu 及其衍生系统,RPM 支持 CentOS/RHEL/SUSE 及其衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。需要注意的是,RPM 和 Deb 包不含 `taosdump` 和 TDinsight 安装脚本,这些工具需要通过安装 taosTool 包获得。TDengine 也提供 Windows x64 平台的安装包。
|
在 Linux 系统上,TDengine 社区版提供 Deb 和 RPM 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 Deb 支持 Debian/Ubuntu 及其衍生系统,RPM 支持 CentOS/RHEL/SUSE 及其衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。需要注意的是,RPM 和 Deb 包不含 `taosdump` 和 TDinsight 安装脚本,这些工具需要通过安装 taosTools 包获得。TDengine 也提供 Windows x64 平台和 macOS x64/m1 平台的安装包。
|
||||||
|
|
||||||
## 安装
|
## 安装
|
||||||
|
|
||||||
|
@ -110,6 +110,13 @@ apt-get 方式只适用于 Debian 或 Ubuntu 系统。
|
||||||
<PkgListV3 type={3}/>
|
<PkgListV3 type={3}/>
|
||||||
2. 运行可执行程序来安装 TDengine。
|
2. 运行可执行程序来安装 TDengine。
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem label="macOS 安装" value="macos">
|
||||||
|
|
||||||
|
1. 从列表中下载获得 pkg 安装程序;
|
||||||
|
<PkgListV3 type={7}/>
|
||||||
|
2. 运行可执行程序来安装 TDengine。
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
|
@ -177,12 +184,33 @@ Active: inactive (dead)
|
||||||
|
|
||||||
安装后,在 `C:\TDengine` 目录下,运行 `taosd.exe` 来启动 TDengine 服务进程。
|
安装后,在 `C:\TDengine` 目录下,运行 `taosd.exe` 来启动 TDengine 服务进程。
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="macOS 系统" value="macos">
|
||||||
|
|
||||||
|
安装后,在应用程序目录下,双击 TDengine 图标来启动程序,也可以运行 `launchctl start com.tdengine.taosd` 来启动 TDengine 服务进程。
|
||||||
|
|
||||||
|
如下 `launchctl` 命令可以帮助你管理 TDengine 服务:
|
||||||
|
|
||||||
|
- 启动服务进程:`launchctl start com.tdengine.taosd`
|
||||||
|
|
||||||
|
- 停止服务进程:`launchctl stop com.tdengine.taosd`
|
||||||
|
|
||||||
|
- 查看服务状态:`launchctl list | grep taosd`
|
||||||
|
|
||||||
|
:::info
|
||||||
|
|
||||||
|
- `launchctl` 命令不需要管理员权限,请不要在前面加 `sudo`。
|
||||||
|
- `launchctl list | grep taosd` 指令返回的第一个内容是程序的 PID,若为 `-` 则说明 TDengine 服务未运行。
|
||||||
|
|
||||||
|
:::
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
## TDengine 命令行(CLI)
|
## TDengine 命令行(CLI)
|
||||||
|
|
||||||
为便于检查 TDengine 的状态,执行数据库(Database)的各种即席(Ad Hoc)查询,TDengine 提供一命令行应用程序(以下简称为 TDengine CLI)taos。要进入 TDengine 命令行,您只要在安装有 TDengine 的 Linux 终端执行 `taos` 即可,也可以在安装有 TDengine 的 Windows 终端的 C:\TDengine 目录下,运行 taos.exe 来启动 TDengine 命令行。
|
为便于检查 TDengine 的状态,执行数据库(Database)的各种即席(Ad Hoc)查询,TDengine 提供一命令行应用程序(以下简称为 TDengine CLI)taos。要进入 TDengine 命令行,您只要在安装有 TDengine 的 Linux、macOS 终端执行 `taos` 即可,也可以在安装有 TDengine 的 Windows 终端的 C:\TDengine 目录下,运行 taos.exe 来启动 TDengine 命令行。
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
taos
|
taos
|
||||||
|
@ -212,13 +240,13 @@ SELECT * FROM t;
|
||||||
Query OK, 2 row(s) in set (0.003128s)
|
Query OK, 2 row(s) in set (0.003128s)
|
||||||
```
|
```
|
||||||
|
|
||||||
除执行 SQL 语句外,系统管理员还可以从 TDengine CLI 进行检查系统运行状态、添加删除用户账号等操作。TDengine CLI 连同应用驱动也可以独立安装在 Linux 或 Windows 机器上运行,更多细节请参考 [TDengine 命令行](../../reference/taos-shell/)。
|
除执行 SQL 语句外,系统管理员还可以从 TDengine CLI 进行检查系统运行状态、添加删除用户账号等操作。TDengine CLI 连同应用驱动也可以独立安装在机器上运行,更多细节请参考 [TDengine 命令行](../../reference/taos-shell/)。
|
||||||
|
|
||||||
## 使用 taosBenchmark 体验写入速度
|
## 使用 taosBenchmark 体验写入速度
|
||||||
|
|
||||||
可以使用 TDengine 的自带工具 taosBenchmark 快速体验 TDengine 的写入速度。
|
可以使用 TDengine 的自带工具 taosBenchmark 快速体验 TDengine 的写入速度。
|
||||||
|
|
||||||
启动 TDengine 的服务,在 Linux 或 Windows 终端执行 `taosBenchmark`(曾命名为 `taosdemo`):
|
启动 TDengine 服务,然后在终端执行 `taosBenchmark`(曾命名为 `taosdemo`):
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ taosBenchmark
|
$ taosBenchmark
|
||||||
|
@ -249,7 +277,7 @@ SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
|
||||||
查询 location = "California.SanFrancisco" 的记录总条数:
|
查询 location = "California.SanFrancisco" 的记录总条数:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT COUNT(*) FROM test.meters WHERE location = "Calaifornia.SanFrancisco";
|
SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
|
||||||
```
|
```
|
||||||
|
|
||||||
查询 groupId = 10 的所有记录的平均值、最大值、最小值等:
|
查询 groupId = 10 的所有记录的平均值、最大值、最小值等:
|
||||||
|
|
|
@ -32,7 +32,7 @@ TDengine 提供了丰富的应用程序开发接口,为了便于用户快速
|
||||||
|
|
||||||
关键不同点在于:
|
关键不同点在于:
|
||||||
|
|
||||||
1. 使用 REST 连接,用户无需安装客户端驱动程序 taosc,具有跨平台易用的优势,但性能要下降 30%左右。
|
1. 使用 REST 连接,用户无需安装客户端驱动程序 taosc,具有跨平台易用的优势,但性能要下降 30% 左右。
|
||||||
2. 使用原生连接可以体验 TDengine 的全部功能,如[参数绑定接口](../../connector/cpp/#参数绑定-api)、[订阅](../../connector/cpp/#订阅和消费-api)等等。
|
2. 使用原生连接可以体验 TDengine 的全部功能,如[参数绑定接口](../../connector/cpp/#参数绑定-api)、[订阅](../../connector/cpp/#订阅和消费-api)等等。
|
||||||
|
|
||||||
## 安装客户端驱动 taosc
|
## 安装客户端驱动 taosc
|
||||||
|
@ -68,7 +68,7 @@ TDengine 提供了丰富的应用程序开发接口,为了便于用户快速
|
||||||
<Tabs groupId="lang">
|
<Tabs groupId="lang">
|
||||||
<TabItem label="Java" value="java">
|
<TabItem label="Java" value="java">
|
||||||
|
|
||||||
如果使用 maven 管理项目,只需在 pom.xml 中加入以下依赖。
|
如果使用 Maven 管理项目,只需在 pom.xml 中加入以下依赖。
|
||||||
|
|
||||||
```xml
|
```xml
|
||||||
<dependency>
|
<dependency>
|
||||||
|
@ -107,7 +107,7 @@ require github.com/taosdata/driver-go/v3 latest
|
||||||
```
|
```
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
driver-go 使用 cgo 封装了 taosc 的 API。cgo 需要使用 gcc 编译 C 的源码。因此需要确保你的系统上有 gcc。
|
driver-go 使用 cgo 封装了 taosc 的 API。cgo 需要使用 GCC 编译 C 的源码。因此需要确保你的系统上有 GCC。
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
|
@ -137,9 +137,9 @@ Node.js 连接器通过不同的包提供不同的连接方式。
|
||||||
|
|
||||||
1. 安装 Node.js 原生连接器
|
1. 安装 Node.js 原生连接器
|
||||||
|
|
||||||
```
|
```
|
||||||
npm install @tdengine/client
|
npm install @tdengine/client
|
||||||
```
|
```
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
推荐 Node 版本大于等于 `node-v12.8.0` 小于 `node-v13.0.0`
|
推荐 Node 版本大于等于 `node-v12.8.0` 小于 `node-v13.0.0`
|
||||||
|
@ -147,9 +147,9 @@ Node.js 连接器通过不同的包提供不同的连接方式。
|
||||||
|
|
||||||
2. 安装 Node.js REST 连接器
|
2. 安装 Node.js REST 连接器
|
||||||
|
|
||||||
```
|
```
|
||||||
npm install @tdengine/rest
|
npm install @tdengine/rest
|
||||||
```
|
```
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
<TabItem label="C#" value="csharp">
|
<TabItem label="C#" value="csharp">
|
||||||
|
|
|
@ -10,13 +10,13 @@ TDengine 采用类关系型数据模型,需要建库、建表。因此对于
|
||||||
|
|
||||||
## 创建库
|
## 创建库
|
||||||
|
|
||||||
不同类型的数据采集点往往具有不同的数据特征,包括数据采集频率的高低,数据保留时间的长短,副本的数目,数据块的大小,是否允许更新数据等等。为了在各种场景下 TDengine 都能最大效率的工作,TDengine 建议将不同数据特征的表创建在不同的库里,因为每个库可以配置不同的存储策略。创建一个库时,除 SQL 标准的选项外,还可以指定保留时长、副本数、缓存大小、时间精度、文件块里最大最小记录条数、是否压缩、一个数据文件覆盖的天数等多种参数。比如:
|
不同类型的数据采集点往往具有不同的数据特征,包括数据采集频率的高低,数据保留时间的长短,副本的数目,数据块的大小,是否允许更新数据等等。为了在各种场景下 TDengine 都能以最大效率工作,TDengine 建议将不同数据特征的表创建在不同的库里,因为每个库可以配置不同的存储策略。创建一个库时,除 SQL 标准的选项外,还可以指定保留时长、副本数、缓存大小、时间精度、文件块里最大最小记录条数、是否压缩、一个数据文件覆盖的天数等多种参数。比如:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 WAL_LEVEL 1;
|
CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 WAL_LEVEL 1;
|
||||||
```
|
```
|
||||||
|
|
||||||
上述语句将创建一个名为 power 的库,这个库的数据将保留 365 天(超过 365 天将被自动删除),每 10 天一个数据文件,每个 VNODE 的写入内存池的大小为 16 MB,对该数据库入会写 WAL 但不执行 FSYNC。详细的语法及参数请见 [数据库管理](/taos-sql/database) 章节。
|
上述语句将创建一个名为 power 的库,这个库的数据将保留 365 天(超过 365 天将被自动删除),每 10 天一个数据文件,每个 VNode 的写入内存池的大小为 16 MB,对该数据库入会写 WAL 但不执行 FSYNC。详细的语法及参数请见 [数据库管理](/taos-sql/database) 章节。
|
||||||
|
|
||||||
创建库之后,需要使用 SQL 命令 `USE` 将当前库切换过来,例如:
|
创建库之后,需要使用 SQL 命令 `USE` 将当前库切换过来,例如:
|
||||||
|
|
||||||
|
@ -35,39 +35,39 @@ USE power;
|
||||||
|
|
||||||
## 创建超级表
|
## 创建超级表
|
||||||
|
|
||||||
一个物联网系统,往往存在多种类型的设备,比如对于电网,存在智能电表、变压器、母线、开关等等。为便于多表之间的聚合,使用 TDengine, 需要对每个类型的数据采集点创建一个超级表。以[表 1](/tdinternal/arch#model_table1) 中的智能电表为例,可以使用如下的 SQL 命令创建超级表:
|
一个物联网系统,往往存在多种类型的设备,比如对于电网,存在智能电表、变压器、母线、开关等等。为便于多表之间的聚合,使用 TDengine, 需要对每个类型的数据采集点创建一个超级表。以 [表 1](/concept) 中的智能电表为例,可以使用如下的 SQL 命令创建超级表:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
|
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
|
||||||
```
|
```
|
||||||
|
|
||||||
与创建普通表一样,创建超级表时,需要提供表名(示例中为 meters),表结构 Schema,即数据列的定义。第一列必须为时间戳(示例中为 ts),其他列为采集的物理量(示例中为 current, voltage, phase),数据类型可以为整型、浮点型、字符串等。除此之外,还需要提供标签的 schema (示例中为 location, groupId),标签的数据类型可以为整型、浮点型、字符串等。采集点的静态属性往往可以作为标签,比如采集点的地理位置、设备型号、设备组 ID、管理员 ID 等等。标签的 schema 可以事后增加、删除、修改。具体定义以及细节请见 [TDengine SQL 的超级表管理](/taos-sql/stable) 章节。
|
与创建普通表一样,创建超级表时,需要提供表名(示例中为 meters),表结构 Schema,即数据列的定义。第一列必须为时间戳(示例中为 ts),其他列为采集的物理量(示例中为 current, voltage, phase),数据类型可以为整型、浮点型、字符串等。除此之外,还需要提供标签的 Schema (示例中为 location, groupId),标签的数据类型可以为整型、浮点型、字符串等。采集点的静态属性往往可以作为标签,比如采集点的地理位置、设备型号、设备组 ID、管理员 ID 等等。标签的 Schema 可以事后增加、删除、修改。具体定义以及细节请见 [TDengine SQL 的超级表管理](/taos-sql/stable) 章节。
|
||||||
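下面给出一个事后修改标签 Schema 的示意 SQL(其中 `model` 是为演示假设新增的标签名):

```sql
-- 为超级表 meters 新增一个标签
ALTER STABLE meters ADD TAG model NCHAR(32);
-- 不再需要时可以删除该标签
ALTER STABLE meters DROP TAG model;
```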
|
|
||||||
每一种类型的数据采集点需要建立一个超级表,因此一个物联网系统,往往会有多个超级表。对于电网,我们就需要对智能电表、变压器、母线、开关等都建立一个超级表。在物联网中,一个设备就可能有多个数据采集点(比如一台风力发电的风机,有的采集点采集电流、电压等电参数,有的采集点采集温度、湿度、风向等环境参数),这个时候,对这一类型的设备,需要建立多张超级表。
|
每一种类型的数据采集点需要建立一个超级表,因此一个物联网系统,往往会有多个超级表。对于电网,我们就需要对智能电表、变压器、母线、开关等都建立一个超级表。在物联网中,一个设备就可能有多个数据采集点(比如一台风力发电的风机,有的采集点采集电流、电压等电参数,有的采集点采集温度、湿度、风向等环境参数),这个时候,对这一类型的设备,需要建立多张超级表。
|
||||||
|
|
||||||
一张超级表最多容许 4096 列,如果一个采集点采集的物理量个数超过 4096,需要建多张超级表来处理。一个系统可以有多个 DB,一个 DB 里可以有一到多个超级表。
|
一张超级表最多容许 4096 列,如果一个采集点采集的物理量个数超过 4096,需要建多张超级表来处理。一个系统可以有多个 Database,一个 Database 里可以有一到多个超级表。
|
||||||
|
|
||||||
## 创建表
|
## 创建表
|
||||||
|
|
||||||
TDengine 对每个数据采集点需要独立建表。与标准的关系型数据库一样,一张表有表名,Schema,但除此之外,还可以带有一到多个标签。创建时,需要使用超级表做模板,同时指定标签的具体值。以[表 1](/tdinternal/arch#model_table1)中的智能电表为例,可以使用如下的 SQL 命令建表:
|
TDengine 对每个数据采集点需要独立建表。与标准的关系型数据库一样,一张表有表名,Schema,但除此之外,还可以带有一到多个标签。创建时,需要使用超级表做模板,同时指定标签的具体值。以 [表 1](/concept) 中的智能电表为例,可以使用如下的 SQL 命令建表:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
|
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
|
||||||
```
|
```
|
||||||
|
|
||||||
其中 d1001 是表名,meters 是超级表的表名,后面紧跟标签 Location 的具体标签值 "California.SanFrancisco",标签 groupId 的具体标签值 2。虽然在创建表时,需要指定标签值,但可以事后修改。详细细则请见 [TDengine SQL 的表管理](/taos-sql/table) 章节。
|
其中 d1001 是表名,meters 是超级表的表名,后面紧跟标签 Location 的具体标签值为 "California.SanFrancisco",标签 groupId 的具体标签值为 2。虽然在创建表时,需要指定标签值,但可以事后修改。详细细则请见 [TDengine SQL 的表管理](/taos-sql/table) 章节。
|
||||||
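例如,事后修改子表标签值可以参考如下示意 SQL(新标签值 3 仅为演示假设):

```sql
-- 将子表 d1001 的 groupId 标签值改为 3
ALTER TABLE d1001 SET TAG groupId = 3;
```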
|
|
||||||
TDengine 建议将数据采集点的全局唯一 ID 作为表名(比如设备序列号)。但对于有的场景,并没有唯一的 ID,可以将多个 ID 组合成一个唯一的 ID。不建议将具有唯一性的 ID 作为标签值。
|
TDengine 建议将数据采集点的全局唯一 ID 作为表名(比如设备序列号)。但对于有的场景,并没有唯一的 ID,可以将多个 ID 组合成一个唯一的 ID。不建议将具有唯一性的 ID 作为标签值。
|
||||||
|
|
||||||
### 自动建表
|
### 自动建表
|
||||||
|
|
||||||
在某些特殊场景中,用户在写数据时并不确定某个数据采集点的表是否存在,此时可在写入数据时使用自动建表语法来创建不存在的表,若该表已存在则不会建立新表且后面的 USING 语句被忽略。比如:
|
在某些特殊场景中,用户在写数据时并不确定某个数据采集点的表是否存在,此时可在写入数据时使用自动建表语法来创建不存在的表,若该表已存在则不会建立新表且后面的 USING 语句被忽略。比如:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, 10.2, 219, 0.32);
|
INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (NOW, 10.2, 219, 0.32);
|
||||||
```
|
```
|
||||||
|
|
||||||
上述 SQL 语句将记录`(now, 10.2, 219, 0.32)`插入表 d1001。如果表 d1001 还未创建,则使用超级表 meters 做模板自动创建,同时打上标签值 `"California.SanFrancisco", 2`。
|
上述 SQL 语句将记录`(NOW, 10.2, 219, 0.32)`插入表 d1001。如果表 d1001 还未创建,则使用超级表 meters 做模板自动创建,同时打上标签值 `"California.SanFrancisco", 2`。
|
||||||
|
|
||||||
关于自动建表的详细语法请参见 [插入记录时自动建表](/taos-sql/insert#插入记录时自动建表) 章节。
|
关于自动建表的详细语法请参见 [插入记录时自动建表](/taos-sql/insert#插入记录时自动建表) 章节。
|
||||||
|
|
||||||
|
|
|
@ -53,15 +53,15 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,
|
||||||
|
|
||||||
:::info
|
:::info
|
||||||
|
|
||||||
- 要提高写入效率,需要批量写入。一般来说一批写入的记录条数越多,插入效率就越高。但一条记录不能超过 48K,一条 SQL 语句总长度不能超过 1M 。
|
- 要提高写入效率,需要批量写入。一般来说一批写入的记录条数越多,插入效率就越高。但一条记录不能超过 48KB,一条 SQL 语句总长度不能超过 1MB。多表批量写入的写法可参考本提示块后的示例。
|
||||||
- TDengine 支持多线程同时写入,要进一步提高写入速度,一个客户端需要打开多个同时写。但线程数达到一定数量后,无法再提高,甚至还会下降,因为线程频繁切换,会带来额外开销,合适的线程数量与服务端的处理能力,服务端的具体配置,数据库的参数,数据定义的 Schema,写入数据的 Batch Size 等很多因素相关。一般来说,服务端和客户端处理能力越强,所能支持的并发写入的线程可以越多;数据库配置时的 vgroups 越多(但仍然要在服务端的处理能力以内)则所能支持的并发写入越多;数据定义的 Schema 越简单,所能支持的并发写入越多。
|
- TDengine 支持多线程同时写入,要进一步提高写入速度,一个客户端需要打开多个同时写。但线程数达到一定数量后,无法再提高,甚至还会下降,因为线程频繁切换,会带来额外开销,合适的线程数量与服务端的处理能力,服务端的具体配置,数据库的参数,数据定义的 Schema,写入数据的 Batch Size 等很多因素相关。一般来说,服务端和客户端处理能力越强,所能支持的并发写入的线程可以越多;数据库配置时的 vgroups 参数值越多(但仍然要在服务端的处理能力以内)则所能支持的并发写入越多;数据定义的 Schema 越简单,所能支持的并发写入越多。
|
||||||
|
|
||||||
:::
|
:::
|
||||||
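下面是一条批量写入多张子表的示意 SQL(时间戳与数值均为演示假设):

```sql
-- 一条 INSERT 语句同时向 d1001 和 d1002 写入多条记录
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33)
            d1002 VALUES (1538548685500, 11.8, 221, 0.28);
```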
|
|
||||||
:::warning
|
:::warning
|
||||||
|
|
||||||
- 对同一张表,如果新插入记录的时间戳已经存在,则指定了新值的列会用新值覆盖旧值,而没有指定新值的列则不受影响。
|
- 对同一张表,如果新插入记录的时间戳已经存在,则指定了新值的列会用新值覆盖旧值,而没有指定新值的列则不受影响。
|
||||||
- 写入的数据的时间戳必须大于当前时间减去配置参数 keep 的时间。如果 keep 配置为 3650 天,那么无法写入比 3650 天还早的数据。写入数据的时间戳也不能大于当前时间加配置参数 duration。如果 duration 为 2,那么无法写入比当前时间还晚 2 天的数据。
|
- 写入的数据的时间戳必须大于当前时间减去数据库配置参数 KEEP 的时间。如果 KEEP 配置为 3650 天,那么无法写入比 3650 天还早的数据。写入数据的时间戳也不能大于当前时间加配置参数 DURATION。如果 DURATION 为 2,那么无法写入比当前时间还晚 2 天的数据。KEEP 与 DURATION 的设置方式见本提示块后的示例。
|
||||||
|
|
||||||
:::
|
:::
|
||||||
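KEEP 与 DURATION 均为建库时指定的数据库参数,下面是一个示意(参数取值仅为演示假设):

```sql
-- 数据保留 3650 天,每 10 天一个数据文件
CREATE DATABASE power KEEP 3650 DURATION 10;
```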
|
|
||||||
|
@ -99,7 +99,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,
|
||||||
:::note
|
:::note
|
||||||
|
|
||||||
1. 无论 RESTful 方式建立连接还是本地驱动方式建立连接,以上示例代码都能正常工作。
|
1. 无论 RESTful 方式建立连接还是本地驱动方式建立连接,以上示例代码都能正常工作。
|
||||||
2. 唯一需要注意的是:由于 RESTful 接口无状态, 不能使用 `use db` 语句来切换数据库, 所以在上面示例中使用了`dbName.tbName`指定表名。
|
2. 唯一需要注意的是:由于 RESTful 接口无状态, 不能使用 `USE db;` 语句来切换数据库, 所以在上面示例中使用了`dbName.tbName`指定表名,带库名前缀的写法可参考本提示块后的示例。
|
||||||
|
|
||||||
:::
|
:::
|
||||||
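下面是带库名前缀写法的示意 SQL(假设库名为 test、表为前文创建的 d1001):

```sql
-- RESTful 接口无状态,SQL 中直接以 库名.表名 的形式指定表
INSERT INTO test.d1001 VALUES (NOW, 10.2, 219, 0.32);
SELECT * FROM test.d1001;
```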
|
|
||||||
|
|
|
@ -34,13 +34,13 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
|
||||||
- tag_set 中的所有的数据自动转化为 nchar 数据类型;
|
- tag_set 中的所有的数据自动转化为 NCHAR 数据类型;
|
||||||
- field_set 中的每个数据项都需要对自身的数据类型进行描述, 比如 1.2f32 代表 float 类型的数值 1.2, 如果不带类型后缀会被当作 double 处理;
|
- field_set 中的每个数据项都需要对自身的数据类型进行描述, 比如 1.2f32 代表 FLOAT 类型的数值 1.2, 如果不带类型后缀会被当作 DOUBLE 处理;
|
||||||
- timestamp 支持多种时间精度。写入数据的时候需要用参数指定时间精度,支持从小时到纳秒的 6 种时间精度。
|
- timestamp 支持多种时间精度。写入数据的时候需要用参数指定时间精度,支持从小时到纳秒的 6 种时间精度。
|
||||||
- 为了提高写入的效率,默认假设同一个超级表中 field_set 的顺序是一样的(第一条数据包含所有的 field,后面的数据按照这个顺序),如果顺序不一样,需要配置参数 smlDataFormat 为 false,否则,数据写入按照相同顺序写入,库中数据会异常。(3.0.1.3之后的版本 smlDataFormat 默认为 false) [TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议)
|
- 为了提高写入的效率,默认假设同一个超级表中 field_set 的顺序是一样的(第一条数据包含所有的 field,后面的数据按照这个顺序),如果顺序不一样,需要配置参数 smlDataFormat 为 false,否则,数据写入按照相同顺序写入,库中数据会异常。(3.0.1.3 之后的版本 smlDataFormat 默认为 false) [TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议)
|
||||||
- 默认生产的子表名是根据规则生成的唯一ID值。为了让用户可以指定生成的表名,可以通过在taos.cfg里配置 smlChildTableName 参数来指定。
|
- 默认生成的子表名是根据规则生成的唯一 ID 值。为了让用户可以指定生成的表名,可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定。
|
||||||
举例如下:配置 smlChildTableName=tname 插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为 cpu1(注意:如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。[TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议)
|
举例如下:配置 smlChildTableName=tname 插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为 cpu1(注意:如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。[TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议)
|
||||||
:::
|
:::
|
||||||
|
|
||||||
要了解更多可参考:[InfluxDB Line 协议官方文档](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) 和 [TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议)
|
要了解更多可参考:[InfluxDB Line 协议官方文档](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) 和 [TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议)
|
||||||
|
|
||||||
|
@ -67,6 +67,12 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
## 查询示例
|
## SQL 查询示例
|
||||||
比如查询 location=California.LosAngeles,groupid=2 子表的数据可以通过如下sql:
|
|
||||||
select * from meters where location=California.LosAngeles and groupid=2
|
`meters` 是插入数据的超级表名。
|
||||||
|
|
||||||
|
可以通过超级表的 TAG 来过滤数据,比如查询 `location=California.LosAngeles,groupid=2` 可以通过如下 SQL:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT * FROM meters WHERE location = "California.LosAngeles" AND groupid = 2;
|
||||||
|
```
|
||||||
|
|
|
@ -21,10 +21,10 @@ OpenTSDB 行协议同样采用一行字符串来表示一行数据。OpenTSDB
|
||||||
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
|
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
|
||||||
```
|
```
|
||||||
|
|
||||||
- metric 将作为超级表名。
|
- metric 将作为超级表名;
|
||||||
- timestamp 本行数据对应的时间戳。根据时间戳的长度自动识别时间精度。支持秒和毫秒两种时间精度
|
- timestamp 本行数据对应的时间戳。根据时间戳的长度自动识别时间精度。支持秒和毫秒两种时间精度;
|
||||||
- value 度量值,必须为一个数值。对应的列名是 “_value”。
|
- value 度量值,必须为一个数值。对应的列名是 “\_value”;
|
||||||
- 最后一部分是标签集, 用空格分隔不同标签, 所有标签自动转化为 nchar 数据类型;
|
- 最后一部分是标签集, 用空格分隔不同标签, 所有标签自动转化为 NCHAR 数据类型。
|
||||||
|
|
||||||
例如:
|
例如:
|
||||||
|
|
||||||
|
@ -32,9 +32,9 @@ OpenTSDB 行协议同样采用一行字符串来表示一行数据。OpenTSDB
|
||||||
meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
|
meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
|
||||||
```
|
```
|
||||||
|
|
||||||
- 默认生产的子表名是根据规则生成的唯一ID值。为了让用户可以指定生成的表名,可以通过在taos.cfg里配置 smlChildTableName 参数来指定。
|
- 默认生成的子表名是根据规则生成的唯一 ID 值。为了让用户可以指定生成的表名,可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定。
|
||||||
举例如下:配置 smlChildTableName=tname 插入数据为 meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3 则创建的表名为 cpu1(注意:如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。
|
举例如下:配置 smlChildTableName=tname 插入数据为 meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3 则创建的表名为 cpu1(注意:如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。
|
||||||
参考[OpenTSDB Telnet API 文档](http://opentsdb.net/docs/build/html/api_telnet/put.html)。
|
参考 [OpenTSDB Telnet API 文档](http://opentsdb.net/docs/build/html/api_telnet/put.html)。
|
||||||
|
|
||||||
## 示例代码
|
## 示例代码
|
||||||
|
|
||||||
|
@ -62,17 +62,17 @@ meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
|
||||||
以上示例代码会自动创建 2 个超级表, 每个超级表有 4 条数据。
|
以上示例代码会自动创建 2 个超级表, 每个超级表有 4 条数据。
|
||||||
|
|
||||||
```cmd
|
```cmd
|
||||||
taos> use test;
|
taos> USE test;
|
||||||
Database changed.
|
Database changed.
|
||||||
|
|
||||||
taos> show stables;
|
taos> SHOW STABLES;
|
||||||
name | created_time | columns | tags | tables |
|
name |
|
||||||
============================================================================================
|
=================================
|
||||||
meters.current | 2022-03-30 17:04:10.877 | 2 | 2 | 2 |
|
meters.current |
|
||||||
meters.voltage | 2022-03-30 17:04:10.882 | 2 | 2 | 2 |
|
meters.voltage |
|
||||||
Query OK, 2 row(s) in set (0.002544s)
|
Query OK, 2 row(s) in set (0.002544s)
|
||||||
|
|
||||||
taos> select tbname, * from `meters.current`;
|
taos> SELECT TBNAME, * FROM `meters.current`;
|
||||||
tbname | _ts | _value | groupid | location |
|
tbname | _ts | _value | groupid | location |
|
||||||
==================================================================================================================================
|
==================================================================================================================================
|
||||||
t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | California.LosAngeles |
|
t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | California.LosAngeles |
|
||||||
|
@ -81,6 +81,13 @@ taos> select tbname, * from `meters.current`;
|
||||||
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
|
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
|
||||||
Query OK, 4 row(s) in set (0.005399s)
|
Query OK, 4 row(s) in set (0.005399s)
|
||||||
```
|
```
|
||||||
## 查询示例:
|
|
||||||
想要查询 location=California.LosAngeles groupid=3 的数据,可以通过如下sql:
|
## SQL 查询示例
|
||||||
select * from `meters.voltage` where location="California.LosAngeles" and groupid=3
|
|
||||||
|
`meters.current` 是插入数据的超级表名。
|
||||||
|
|
||||||
|
可以通过超级表的 TAG 来过滤数据,比如查询 `location=California.LosAngeles groupid=3` 可以通过如下 SQL:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 3;
|
||||||
|
```
|
||||||
|
|
|
@ -46,11 +46,11 @@ OpenTSDB JSON 格式协议采用一个 JSON 字符串表示一行或多行数据
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
|
||||||
- 对于 JSON 格式协议,TDengine 并不会自动把所有标签转成 nchar 类型, 字符串将将转为 nchar 类型, 数值将同样转换为 double 类型。
|
- 对于 JSON 格式协议,TDengine 并不会自动把所有标签转成 NCHAR 类型, 字符串将转为 NCHAR 类型, 数值将同样转换为 DOUBLE 类型。
|
||||||
- TDengine 只接收 JSON **数组格式**的字符串,即使一行数据也需要转换成数组形式。
|
- TDengine 只接收 JSON **数组格式**的字符串,即使一行数据也需要转换成数组形式。
|
||||||
- 默认生产的子表名是根据规则生成的唯一ID值。为了让用户可以指定生成的表名,可以通过在taos.cfg里配置 smlChildTableName 参数来指定。
|
- 默认生成的子表名是根据规则生成的唯一 ID 值。为了让用户可以指定生成的表名,可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定。
|
||||||
举例如下:配置 smlChildTableName=tname 插入数据为 "tags": { "host": "web02","dc": "lga","tname":"cpu1"} 则创建的表名为 cpu1,注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。
|
举例如下:配置 smlChildTableName=tname 插入数据为 `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}` 则创建的表名为 cpu1(注意:如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## 示例代码
|
## 示例代码
|
||||||
|
|
||||||
|
@ -78,17 +78,17 @@ OpenTSDB JSON 格式协议采用一个 JSON 字符串表示一行或多行数据
|
||||||
以上示例代码会自动创建 2 个超级表, 每个超级表有 2 条数据。
|
以上示例代码会自动创建 2 个超级表, 每个超级表有 2 条数据。
|
||||||
|
|
||||||
```cmd
|
```cmd
|
||||||
taos> use test;
|
taos> USE test;
|
||||||
Database changed.
|
Database changed.
|
||||||
|
|
||||||
taos> show stables;
|
taos> SHOW STABLES;
|
||||||
name | created_time | columns | tags | tables |
|
name |
|
||||||
============================================================================================
|
=================================
|
||||||
meters.current | 2022-03-29 16:05:25.193 | 2 | 2 | 1 |
|
meters.current |
|
||||||
meters.voltage | 2022-03-29 16:05:25.200 | 2 | 2 | 1 |
|
meters.voltage |
|
||||||
Query OK, 2 row(s) in set (0.001954s)
|
Query OK, 2 row(s) in set (0.001954s)
|
||||||
|
|
||||||
taos> select * from `meters.current`;
|
taos> SELECT * FROM `meters.current`;
|
||||||
_ts | _value | groupid | location |
|
_ts | _value | groupid | location |
|
||||||
===================================================================================================================
|
===================================================================================================================
|
||||||
2022-03-28 09:56:51.249 | 10.300000000 | 2.000000000 | California.SanFrancisco |
|
2022-03-28 09:56:51.249 | 10.300000000 | 2.000000000 | California.SanFrancisco |
|
||||||
|
@ -96,6 +96,12 @@ taos> select * from `meters.current`;
|
||||||
Query OK, 2 row(s) in set (0.004076s)
|
Query OK, 2 row(s) in set (0.004076s)
|
||||||
```
|
```
|
||||||
|
|
||||||
## 查询示例
|
## SQL 查询示例
|
||||||
想要查询"tags": {"location": "California.LosAngeles", "groupid": 1} 的数据,可以通过如下sql:
|
|
||||||
select * from `meters.voltage` where location="California.LosAngeles" and groupid=1
|
`meters.voltage` 是插入数据的超级表名。
|
||||||
|
|
||||||
|
可以通过超级表的 TAG 来过滤数据,比如查询 `location=California.LosAngeles groupid=1` 可以通过如下 SQL:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT * FROM `meters.voltage` WHERE location = "California.LosAngeles" AND groupid = 1;
|
||||||
|
```
|
||||||
|
|
|
@ -11,11 +11,11 @@ import TabItem from "@theme/TabItem";
|
||||||
|
|
||||||
从客户端程序的角度来说,高效写入数据要考虑以下几个因素:
|
从客户端程序的角度来说,高效写入数据要考虑以下几个因素:
|
||||||
|
|
||||||
1. 单次写入的数据量。一般来讲,每批次写入的数据量越大越高效(但超过一定阈值其优势会消失)。使用 SQL 写入 TDengine 时,尽量在一条 SQL 中拼接更多数据。目前,TDengine 支持的一条 SQL 的最大长度为 1,048,576(1M)个字符。
|
1. 单次写入的数据量。一般来讲,每批次写入的数据量越大越高效(但超过一定阈值其优势会消失)。使用 SQL 写入 TDengine 时,尽量在一条 SQL 中拼接更多数据。目前,TDengine 支持的一条 SQL 的最大长度为 1,048,576(1MB)个字符
|
||||||
2. 并发连接数。一般来讲,同时写入数据的并发连接数越多写入越高效(但超过一定阈值反而会下降,取决于服务端处理能力)。
|
2. 并发连接数。一般来讲,同时写入数据的并发连接数越多写入越高效(但超过一定阈值反而会下降,取决于服务端处理能力)
|
||||||
3. 数据在不同表(或子表)之间的分布,即要写入数据的相邻性。一般来说,每批次只向同一张表(或子表)写入数据比向多张表(或子表)写入数据要更高效;
|
3. 数据在不同表(或子表)之间的分布,即要写入数据的相邻性。一般来说,每批次只向同一张表(或子表)写入数据比向多张表(或子表)写入数据要更高效
|
||||||
4. 写入方式。一般来讲:
|
4. 写入方式。一般来讲:
|
||||||
- 参数绑定写入比 SQL 写入更高效。因参数绑定方式避免了 SQL 解析。(但增加了 C 接口的调用次数,对于连接器也有性能损耗)。
|
- 参数绑定写入比 SQL 写入更高效。因参数绑定方式避免了 SQL 解析。(但增加了 C 接口的调用次数,对于连接器也有性能损耗)
|
||||||
- SQL 写入不自动建表比自动建表更高效。因自动建表要频繁检查表是否存在
|
- SQL 写入不自动建表比自动建表更高效。因自动建表要频繁检查表是否存在
|
||||||
- SQL 写入比无模式写入更高效。因无模式写入会自动建表且支持动态更改表结构
|
- SQL 写入比无模式写入更高效。因无模式写入会自动建表且支持动态更改表结构
|
||||||
|
|
||||||
|
@ -34,7 +34,7 @@ import TabItem from "@theme/TabItem";
|
||||||
1. 将同一张表的数据写到同一个 Topic 的同一个 Partition,增加数据的相邻性
|
1. 将同一张表的数据写到同一个 Topic 的同一个 Partition,增加数据的相邻性
|
||||||
2. 通过订阅多个 Topic 实现数据汇聚
|
2. 通过订阅多个 Topic 实现数据汇聚
|
||||||
3. 通过增加 Consumer 线程数增加写入的并发度
|
3. 通过增加 Consumer 线程数增加写入的并发度
|
||||||
4. 通过增加每次 fetch 的最大数据量来增加单次写入的最大数据量
|
4. 通过增加每次 Fetch 的最大数据量来增加单次写入的最大数据量
|
||||||
|
|
||||||
### 服务器配置的角度 {#setting-view}
|
### 服务器配置的角度 {#setting-view}
|
||||||
|
|
||||||
|
@ -59,7 +59,7 @@ import TabItem from "@theme/TabItem";
|
||||||
|
|
||||||
这一部分是针对以上场景的示例代码。对于其它场景高效写入原理相同,不过代码需要适当修改。
|
这一部分是针对以上场景的示例代码。对于其它场景高效写入原理相同,不过代码需要适当修改。
|
||||||
|
|
||||||
本示例代码假设源数据属于同一张超级表(meters)的不同子表。程序在开始写入数据之前已经在 test 库创建了这个超级表。对于子表,将根据收到的数据,由应用程序自动创建。如果实际场景是多个超级表,只需修改写任务自动建表的代码。
|
本示例代码假设源数据属于同一张超级表(meters)的不同子表。程序在开始写入数据之前已经在 test 库创建了这个超级表。对于子表,将根据收到的数据,由应用程序自动创建。如果实际场景是多个超级表,只需修改写任务自动建表的代码。
|
||||||
|
|
||||||
<Tabs defaultValue="java" groupId="lang">
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
<TabItem label="Java" value="java">
|
<TabItem label="Java" value="java">
|
||||||
|
@ -69,14 +69,13 @@ import TabItem from "@theme/TabItem";
|
||||||
| 类名 | 功能说明 |
|
| 类名 | 功能说明 |
|
||||||
| ---------------- | --------------------------------------------------------------------------- |
|
| ---------------- | --------------------------------------------------------------------------- |
|
||||||
| FastWriteExample | 主程序 |
|
| FastWriteExample | 主程序 |
|
||||||
| ReadTask | 从模拟源中读取数据,将表名经过 hash 后得到 Queue 的 index,写入对应的 Queue |
|
| ReadTask | 从模拟源中读取数据,将表名经过 Hash 后得到 Queue 的 Index,写入对应的 Queue |
|
||||||
| WriteTask | 从 Queue 中获取数据,组成一个 Batch,写入 TDengine |
|
| WriteTask | 从 Queue 中获取数据,组成一个 Batch,写入 TDengine |
|
||||||
| MockDataSource | 模拟生成一定数量 meters 子表的数据 |
|
| MockDataSource | 模拟生成一定数量 meters 子表的数据 |
|
||||||
| SQLWriter | WriteTask 依赖这个类完成 SQL 拼接、自动建表、 SQL 写入、SQL 长度检查 |
|
| SQLWriter | WriteTask 依赖这个类完成 SQL 拼接、自动建表、 SQL 写入、SQL 长度检查 |
|
||||||
| StmtWriter | 实现参数绑定方式批量写入(暂未完成) |
|
| StmtWriter | 实现参数绑定方式批量写入(暂未完成) |
|
||||||
| DataBaseMonitor | 统计写入速度,并每隔 10 秒把当前写入速度打印到控制台 |
|
| DataBaseMonitor | 统计写入速度,并每隔 10 秒把当前写入速度打印到控制台 |
|
||||||
|
|
||||||
|
|
||||||
以下是各类的完整代码和更详细的功能说明。
|
以下是各类的完整代码和更详细的功能说明。
|
||||||
|
|
||||||
<details>
|
<details>
|
||||||
|
@ -92,10 +91,10 @@ import TabItem from "@theme/TabItem";
|
||||||
|
|
||||||
1. 读线程个数。默认为 1。
|
1. 读线程个数。默认为 1。
|
||||||
2. 写线程个数。默认为 3。
|
2. 写线程个数。默认为 3。
|
||||||
3. 模拟生成的总表数。默认为 1000。将会平分给各个读线程。如果总表数较大,建表需要花费较长,开始统计的写入速度可能较慢。
|
3. 模拟生成的总表数。默认为 1,000。将会平分给各个读线程。如果总表数较大,建表需要花费较长时间,开始统计的写入速度可能较慢。
|
||||||
4. 每批最多写入记录数量。默认为 3000。
|
4. 每批最多写入记录数量。默认为 3,000。
|
||||||
|
|
||||||
队列容量(taskQueueCapacity)也是与性能有关的参数,可通过修改程序调节。一般来讲,队列容量越大,入队被阻塞的概率越小,队列的吞吐量越大,但是内存占用也会越大。 示例程序默认值已经设置地足够大。
|
队列容量(taskQueueCapacity)也是与性能有关的参数,可通过修改程序调节。一般来讲,队列容量越大,入队被阻塞的概率越小,队列的吞吐量越大,但是内存占用也会越大。示例程序默认值已经设置得足够大。
|
||||||
|
|
||||||
```java
|
```java
|
||||||
{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java}}
|
{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java}}
|
||||||
|
@ -208,14 +207,14 @@ TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
|
||||||
|
|
||||||
以上使用的是本地部署 TDengine Server 时默认的 JDBC URL。你需要根据自己的实际情况更改。
|
以上使用的是本地部署 TDengine Server 时默认的 JDBC URL。你需要根据自己的实际情况更改。
|
||||||
|
|
||||||
5. 用 java 命令启动示例程序,命令模板:
|
5. 用 Java 命令启动示例程序,命令模板:
|
||||||
|
|
||||||
```
|
```
|
||||||
java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample <read_thread_count> <write_thread_count> <total_table_count> <max_batch_size>
|
java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample <read_thread_count> <write_thread_count> <total_table_count> <max_batch_size>
|
||||||
```
|
```
|
||||||
|
|
||||||
6. 结束测试程序。测试程序不会自动结束,在获取到当前配置下稳定的写入速度后,按 <kbd>CTRL</kbd> + <kbd>C</kbd> 结束程序。
|
6. 结束测试程序。测试程序不会自动结束,在获取到当前配置下稳定的写入速度后,按 <kbd>CTRL</kbd> + <kbd>C</kbd> 结束程序。
|
||||||
下面是一次实际运行的日志输出,机器配置 16核 + 64G + 固态硬盘。
|
下面是一次实际运行的日志输出,机器配置 16 核 + 64G + 固态硬盘。
|
||||||
|
|
||||||
```
|
```
|
||||||
root@vm85$ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample 2 12
|
root@vm85$ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample 2 12
|
||||||
|
@ -271,12 +270,11 @@ Python 示例程序中采用了多进程的架构,并使用了跨进程的消
|
||||||
| main 函数 | 程序入口, 创建各个子进程和消息队列 |
|
| main 函数 | 程序入口, 创建各个子进程和消息队列 |
|
||||||
| run_monitor_process 函数 | 创建数据库,超级表,统计写入速度并定时打印到控制台 |
|
| run_monitor_process 函数 | 创建数据库,超级表,统计写入速度并定时打印到控制台 |
|
||||||
| run_read_task 函数 | 读进程主要逻辑,负责从其它数据系统读数据,并分发数据到为之分配的队列 |
|
| run_read_task 函数 | 读进程主要逻辑,负责从其它数据系统读数据,并分发数据到为之分配的队列 |
|
||||||
| MockDataSource 类 | 模拟数据源, 实现迭代器接口,每次批量返回每张表的接下来 1000 条数据 |
|
| MockDataSource 类 | 模拟数据源, 实现迭代器接口,每次批量返回每张表的接下来 1,000 条数据 |
|
||||||
| run_write_task 函数 | 写进程主要逻辑。每次从队列中取出尽量多的数据,并批量写入 |
|
| run_write_task 函数 | 写进程主要逻辑。每次从队列中取出尽量多的数据,并批量写入 |
|
||||||
| SQLWriter类 | SQL 写入和自动建表 |
|
| SQLWriter 类 | SQL 写入和自动建表 |
|
||||||
| StmtWriter 类 | 实现参数绑定方式批量写入(暂未完成) |
|
| StmtWriter 类 | 实现参数绑定方式批量写入(暂未完成) |
|
||||||
|
|
||||||
|
|
||||||
<details>
|
<details>
|
||||||
<summary>main 函数</summary>
|
<summary>main 函数</summary>
|
||||||
|
|
||||||
|
@ -290,9 +288,9 @@ main 函数可以接收 5 个启动参数,依次是:
|
||||||
|
|
||||||
1. 读任务(进程)数, 默认为 1
|
1. 读任务(进程)数, 默认为 1
|
||||||
2. 写任务(进程)数, 默认为 1
|
2. 写任务(进程)数, 默认为 1
|
||||||
3. 模拟生成的总表数,默认为 1000
|
3. 模拟生成的总表数,默认为 1,000
|
||||||
4. 队列大小(单位字节),默认为 1000000
|
4. 队列大小(单位字节),默认为 1,000,000
|
||||||
5. 每批最多写入记录数量, 默认为 3000
|
5. 每批最多写入记录数量, 默认为 3,000
|
||||||
|
|
||||||
```python
|
```python
|
||||||
{{#include docs/examples/python/fast_write_example.py:main}}
|
{{#include docs/examples/python/fast_write_example.py:main}}
|
||||||
|
@ -348,7 +346,7 @@ main 函数可以接收 5 个启动参数,依次是:
|
||||||
|
|
||||||
<details>
|
<details>
|
||||||
|
|
||||||
SQLWriter 类封装了拼 SQL 和写数据的逻辑。所有的表都没有提前创建,而是在发生表不存在错误的时候,再以超级表为模板批量建表,然后重新执行 INSERT 语句。对于其它错误会记录当时执行的 SQL, 以便排查错误和故障恢复。这个类也对 SQL 是否超过最大长度限制做了检查,根据 TDengine 3.0 的限制由输入参数 maxSQLLength 传入了支持的最大 SQL 长度,即 1048576 。
|
SQLWriter 类封装了拼 SQL 和写数据的逻辑。所有的表都没有提前创建,而是在发生表不存在错误的时候,再以超级表为模板批量建表,然后重新执行 INSERT 语句。对于其它错误会记录当时执行的 SQL, 以便排查错误和故障恢复。这个类也对 SQL 是否超过最大长度限制做了检查,根据 TDengine 3.0 的限制由输入参数 maxSQLLength 传入了支持的最大 SQL 长度,即 1,048,576。
|
||||||
|
|
||||||
<summary>SQLWriter</summary>
|
<summary>SQLWriter</summary>
|
||||||
|
|
||||||
|
@ -384,7 +382,7 @@ SQLWriter 类封装了拼 SQL 和写数据的逻辑。所有的表都没有提
|
||||||
python3 fast_write_example.py <READ_TASK_COUNT> <WRITE_TASK_COUNT> <TABLE_COUNT> <QUEUE_SIZE> <MAX_BATCH_SIZE>
|
python3 fast_write_example.py <READ_TASK_COUNT> <WRITE_TASK_COUNT> <TABLE_COUNT> <QUEUE_SIZE> <MAX_BATCH_SIZE>
|
||||||
```
|
```
|
||||||
|
|
||||||
下面是一次实际运行的输出, 机器配置 16核 + 64G + 固态硬盘。
|
下面是一次实际运行的输出, 机器配置 16 核 + 64G + 固态硬盘。
|
||||||
|
|
||||||
```
|
```
|
||||||
root@vm85$ python3 fast_write_example.py 8 8
|
root@vm85$ python3 fast_write_example.py 8 8
|
||||||
|
@ -432,5 +430,3 @@ SQLWriter 类封装了拼 SQL 和写数据的逻辑。所有的表都没有提
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -70,7 +70,7 @@ insert into d1004 values("2018-10-03 14:38:06.500", 11.50000, 221, 0.35000);
|
||||||
### 查询以观察结果
|
### 查询以观察结果
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select start, end, max_current from current_stream_output_stb;
|
taos> select start, wend, max_current from current_stream_output_stb;
|
||||||
start | wend | max_current |
|
start | wend | max_current |
|
||||||
===========================================================================
|
===========================================================================
|
||||||
2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 |
|
2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 |
|
||||||
|
|
|
@ -4,19 +4,20 @@ sidebar_label: 开发指南
|
||||||
description: 让开发者能够快速上手的指南
|
description: 让开发者能够快速上手的指南
|
||||||
---
|
---
|
||||||
|
|
||||||
开发一个应用,如果你准备采用TDengine作为时序数据处理的工具,那么有如下几个事情要做:
|
开发一个应用,如果你准备采用 TDengine 作为时序数据处理的工具,那么有如下几个事情要做:
|
||||||
1. 确定应用到TDengine的链接方式。无论你使用何种编程语言,你总可以使用REST接口, 但也可以使用每种编程语言独有的连接器方便的进行链接。
|
|
||||||
|
1. 确定应用到 TDengine 的连接方式。无论你使用何种编程语言,你总是可以使用 REST 接口, 但也可以使用每种编程语言独有的连接器进行方便的连接。
|
||||||
2. 根据自己的应用场景,确定数据模型。根据数据特征,决定建立一个还是多个库;分清静态标签、采集量,建立正确的超级表,建立子表。
|
2. 根据自己的应用场景,确定数据模型。根据数据特征,决定建立一个还是多个库;分清静态标签、采集量,建立正确的超级表,建立子表。
|
||||||
3. 决定插入数据的方式。TDengine支持使用标准的SQL写入,但同时也支持schemaless模式写入,这样不用手工建表,可以将数据直接写入。
|
3. 决定插入数据的方式。TDengine 支持使用标准的 SQL 写入,但同时也支持 Schemaless 模式写入,这样不用手工建表,可以将数据直接写入。
|
||||||
4. 根据业务要求,看需要撰写哪些SQL查询语句。
|
4. 根据业务要求,看需要撰写哪些 SQL 查询语句。
|
||||||
5. 如果你要基于时序数据做轻量级的实时统计分析,包括各种监测看板,那么建议你采用 TDengine 3.0 的流式计算功能,而不用额外部署 Spark, Flink 等复杂的流式计算系统。
|
5. 如果你要基于时序数据做轻量级的实时统计分析,包括各种监测看板,那么建议你采用 TDengine 3.0 的流式计算功能,而不用额外部署 Spark, Flink 等复杂的流式计算系统。
|
||||||
6. 如果你的应用有模块需要消费插入的数据,希望有新的数据插入时,就能获取通知,那么建议你采用TDengine提供的数据订阅功能,而无需专门部署Kafka或其他消息队列软件。
|
6. 如果你的应用有模块需要消费插入的数据,希望有新的数据插入时,就能获取通知,那么建议你采用 TDengine 提供的数据订阅功能,而无需专门部署 Kafka 或其他消息队列软件。
|
||||||
7. 在很多场景下(如车辆管理),应用需要获取每个数据采集点的最新状态,那么建议你采用TDengine的cache功能,而不用单独部署Redis等缓存软件。
|
7. 在很多场景下(如车辆管理),应用需要获取每个数据采集点的最新状态,那么建议你采用 TDengine 的 Cache 功能,而不用单独部署 Redis 等缓存软件。
|
||||||
8. 如果你发现TDengine的函数无法满足你的要求,那么你可以使用用户自定义函数来解决问题。
|
8. 如果你发现 TDengine 的函数无法满足你的要求,那么你可以使用用户自定义函数(UDF)来解决问题。
|
||||||
|
|
||||||
本部分内容就是按照上述的顺序组织的。为便于理解,TDengine为每个功能为每个支持的编程语言都提供了示例代码。如果你希望深入了解SQL的使用,需要查看[SQL手册](/taos-sql/)。如果想更深入地了解各连接器的使用,请阅读[连接器参考指南](../connector/)。如果还希望想将TDengine与第三方系统集成起来,比如Grafana, 请参考[第三方工具](../third-party/)。
|
本部分内容就是按照上述顺序组织的。为便于理解,TDengine 为每个功能和每个支持的编程语言都提供了示例代码。如果你希望深入了解 SQL 的使用,需要查看 [SQL 手册](/taos-sql/)。如果想更深入地了解各连接器的使用,请阅读[连接器参考指南](../connector/)。如果还希望将 TDengine 与第三方系统集成起来,比如 Grafana, 请参考[第三方工具](../third-party/)。
|
||||||
|
|
||||||
如果在开发过程中遇到任何问题,请点击每个页面下方的["反馈问题"](https://github.com/taosdata/TDengine/issues/new/choose), 在GitHub上直接递交issue。
|
如果在开发过程中遇到任何问题,请点击每个页面下方的["反馈问题"](https://github.com/taosdata/TDengine/issues/new/choose), 在 GitHub 上直接递交 Issue。
|
||||||
|
|
||||||
```mdx-code-block
|
```mdx-code-block
|
||||||
import DocCardList from '@theme/DocCardList';
|
import DocCardList from '@theme/DocCardList';
|
||||||
|
|
|
@ -74,7 +74,7 @@ http://<fqdn>:<port>/rest/sql/[db_name]
|
||||||
|
|
||||||
参数说明:
|
参数说明:
|
||||||
|
|
||||||
- fqnd: 集群中的任一台主机 FQDN 或 IP 地址。
|
- fqdn: 集群中的任一台主机 FQDN 或 IP 地址。
|
||||||
- port: 配置文件中 httpPort 配置项,缺省为 6041。
|
- port: 配置文件中 httpPort 配置项,缺省为 6041。
|
||||||
- db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。
|
- db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。
|
||||||
|
|
||||||
|
|
|
@ -13,11 +13,13 @@ TDengine 服务端或客户端安装后,`taos.h` 位于:
|
||||||
|
|
||||||
- Linux:`/usr/local/taos/include`
|
- Linux:`/usr/local/taos/include`
|
||||||
- Windows:`C:\TDengine\include`
|
- Windows:`C:\TDengine\include`
|
||||||
|
- macOS:`/usr/local/include`
|
||||||
|
|
||||||
TDengine 客户端驱动的动态库位于:
|
TDengine 客户端驱动的动态库位于:
|
||||||
|
|
||||||
- Linux: `/usr/local/taos/driver/libtaos.so`
|
- Linux: `/usr/local/taos/driver/libtaos.so`
|
||||||
- Windows: `C:\TDengine\taos.dll`
|
- Windows: `C:\TDengine\taos.dll`
|
||||||
|
- macOS: `/usr/local/lib/libtaos.dylib`
|
||||||
|
|
||||||
## 支持的平台
|
## 支持的平台
|
||||||
|
|
||||||
|
@ -119,7 +121,7 @@ TDengine 客户端驱动的安装请参考 [安装指南](../#安装步骤)
|
||||||
|
|
||||||
:::info
|
:::info
|
||||||
更多示例代码及下载请见 [GitHub](https://github.com/taosdata/TDengine/tree/develop/examples/c)。
|
更多示例代码及下载请见 [GitHub](https://github.com/taosdata/TDengine/tree/develop/examples/c)。
|
||||||
也可以在安装目录下的 `examples/c` 路径下找到。 该目录下有 makefile,在 Linux 环境下,直接执行 make 就可以编译得到执行文件。
|
也可以在安装目录下的 `examples/c` 路径下找到。 该目录下有 makefile,在 Linux/macOS 环境下,直接执行 make 就可以编译得到执行文件。
|
||||||
**提示:**在 ARM 环境下编译时,请将 makefile 中的 `-msse4.2` 去掉,这个选项只有在 x64/x86 硬件平台上才能支持。
|
**提示:**在 ARM 环境下编译时,请将 makefile 中的 `-msse4.2` 去掉,这个选项只有在 x64/x86 硬件平台上才能支持。
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
|
@ -109,7 +109,7 @@ TDengine 的 JDBC URL 规范格式为:
|
||||||
|
|
||||||
对于建立连接,原生连接与 REST 连接有细微不同。
|
对于建立连接,原生连接与 REST 连接有细微不同。
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
```java
|
```java
|
||||||
|
@ -120,7 +120,7 @@ Connection conn = DriverManager.getConnection(jdbcUrl);
|
||||||
|
|
||||||
以上示例,使用了 JDBC 原生连接的 TSDBDriver,建立了到 hostname 为 taosdemo.com,端口为 6030(TDengine 的默认端口),数据库名为 test 的连接。这个 URL 中指定用户名(user)为 root,密码(password)为 taosdata。
|
以上示例,使用了 JDBC 原生连接的 TSDBDriver,建立了到 hostname 为 taosdemo.com,端口为 6030(TDengine 的默认端口),数据库名为 test 的连接。这个 URL 中指定用户名(user)为 root,密码(password)为 taosdata。
|
||||||
|
|
||||||
**注意**:使用 JDBC 原生连接,taos-jdbcdriver 需要依赖客户端驱动(Linux 下是 libtaos.so;Windows 下是 taos.dll)。
|
**注意**:使用 JDBC 原生连接,taos-jdbcdriver 需要依赖客户端驱动(Linux 下是 libtaos.so;Windows 下是 taos.dll;macOS 下是 libtaos.dylib)。
|
||||||
|
|
||||||
url 中的配置参数如下:
|
url 中的配置参数如下:
|
||||||
|
|
||||||
|
@ -375,7 +375,7 @@ public class ParameterBindingDemo {
|
||||||
|
|
||||||
private static final String host = "127.0.0.1";
|
private static final String host = "127.0.0.1";
|
||||||
private static final Random random = new Random(System.currentTimeMillis());
|
private static final Random random = new Random(System.currentTimeMillis());
|
||||||
private static final int BINARY_COLUMN_SIZE = 20;
|
private static final int BINARY_COLUMN_SIZE = 30;
|
||||||
private static final String[] schemaList = {
|
private static final String[] schemaList = {
|
||||||
"create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)",
|
"create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)",
|
||||||
"create table stable2(ts timestamp, f1 float, f2 double) tags(t1 float, t2 double)",
|
"create table stable2(ts timestamp, f1 float, f2 double) tags(t1 float, t2 double)",
|
||||||
|
@ -898,7 +898,7 @@ public static void main(String[] args) throws Exception {
|
||||||
|
|
||||||
**原因**:程序没有找到依赖的本地函数库 taos。
|
**原因**:程序没有找到依赖的本地函数库 taos。
|
||||||
|
|
||||||
**解决方法**:Windows 下可以将 C:\TDengine\driver\taos.dll 拷贝到 C:\Windows\System32\ 目录下,Linux 下将建立如下软链 `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` 即可。
|
**解决方法**:Windows 下可以将 C:\TDengine\driver\taos.dll 拷贝到 C:\Windows\System32\ 目录下,Linux 下将建立如下软链 `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` 即可,macOS 下需要建立软链 `ln -s /usr/local/lib/libtaos.dylib`。
|
||||||
|
|
||||||
3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
|
3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
|
||||||
|
|
||||||
|
|
|
@ -114,7 +114,7 @@ username:password@protocol(address)/dbname?param=value
|
||||||
```
|
```
|
||||||
### 使用连接器进行连接
|
### 使用连接器进行连接
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
_taosSql_ 通过 cgo 实现了 Go 的 `database/sql/driver` 接口。只需要引入驱动就可以使用 [`database/sql`](https://golang.org/pkg/database/sql/) 的接口。
|
_taosSql_ 通过 cgo 实现了 Go 的 `database/sql/driver` 接口。只需要引入驱动就可以使用 [`database/sql`](https://golang.org/pkg/database/sql/) 的接口。
|
||||||
|
|
|
@ -55,16 +55,6 @@ taos = "*"
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
|
||||||
<TabItem value="native" label="仅原生连接">
|
|
||||||
|
|
||||||
在 `Cargo.toml` 文件中添加 [taos][taos],并启用 `native` 特性:
|
|
||||||
|
|
||||||
```toml
|
|
||||||
[dependencies]
|
|
||||||
taos = { version = "*", default-features = false, features = ["native"] }
|
|
||||||
```
|
|
||||||
|
|
||||||
</TabItem>
|
|
||||||
<TabItem value="rest" label="仅 Websocket">
|
<TabItem value="rest" label="仅 Websocket">
|
||||||
|
|
||||||
在 `Cargo.toml` 文件中添加 [taos][taos],并启用 `ws` 特性。
|
在 `Cargo.toml` 文件中添加 [taos][taos],并启用 `ws` 特性。
|
||||||
|
@ -74,6 +64,17 @@ taos = { version = "*", default-features = false, features = ["native"] }
|
||||||
taos = { version = "*", default-features = false, features = ["ws"] }
|
taos = { version = "*", default-features = false, features = ["ws"] }
|
||||||
```
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="native" label="仅原生连接">
|
||||||
|
|
||||||
|
在 `Cargo.toml` 文件中添加 [taos][taos],并启用 `native` 特性:
|
||||||
|
|
||||||
|
```toml
|
||||||
|
[dependencies]
|
||||||
|
taos = { version = "*", default-features = false, features = ["native"] }
|
||||||
|
```
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
|
|
|
@ -1,7 +1,7 @@
|
||||||
---
|
---
|
||||||
sidebar_label: Python
|
sidebar_label: Python
|
||||||
title: TDengine Python Connector
|
title: TDengine Python Connector
|
||||||
description: "taospy 是 TDengine 的官方 Python 连接器。taospy 提供了丰富的 API, 使得 Python 应用可以很方便地使用 TDengine。tasopy 对 TDengine 的原生接口和 REST 接口都进行了封装, 分别对应 tasopy 的两个子模块:tasos 和 taosrest。除了对原生接口和 REST 接口的封装,taospy 还提供了符合 Python 数据访问规范(PEP 249)的编程接口。这使得 taospy 和很多第三方工具集成变得简单,比如 SQLAlchemy 和 pandas"
|
description: "taospy 是 TDengine 的官方 Python 连接器。taospy 提供了丰富的 API, 使得 Python 应用可以很方便地使用 TDengine。tasopy 对 TDengine 的原生接口和 REST 接口都进行了封装, 分别对应 tasopy 的两个子模块:taos 和 taosrest。除了对原生接口和 REST 接口的封装,taospy 还提供了符合 Python 数据访问规范(PEP 249)的编程接口。这使得 taospy 和很多第三方工具集成变得简单,比如 SQLAlchemy 和 pandas"
|
||||||
---
|
---
|
||||||
|
|
||||||
import Tabs from "@theme/Tabs";
|
import Tabs from "@theme/Tabs";
|
||||||
|
@ -25,15 +25,15 @@ Python 连接器的源码托管在 [GitHub](https://github.com/taosdata/taos-con
|
||||||
|
|
||||||
## 支持的功能
|
## 支持的功能
|
||||||
|
|
||||||
- 原生连接支持 TDeingine 的所有核心功能, 包括: 连接管理、执行 SQL、参数绑定、订阅、无模式写入(schemaless)。
|
- 原生连接支持 TDengine 的所有核心功能, 包括: 连接管理、执行 SQL、参数绑定、订阅、无模式写入(schemaless)。
|
||||||
- REST 连接支持的功能包括:连接管理、执行 SQL。 (通过执行 SQL 可以: 管理数据库、管理表和超级表、写入数据、查询数据、创建连续查询等)。
|
- REST 连接支持的功能包括:连接管理、执行 SQL。 (通过执行 SQL 可以: 管理数据库、管理表和超级表、写入数据、查询数据、创建连续查询等)。
|
||||||
|
|
||||||
## 安装
|
## 安装
|
||||||
|
|
||||||
### 准备
|
### 准备
|
||||||
|
|
||||||
1. 安装 Python。建议使用 Python >= 3.6。如果系统上还没有 Python 可参考 [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) 安装。
|
1. 安装 Python。建议使用 Python >= 3.7。如果系统上还没有 Python 可参考 [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) 安装。
|
||||||
2. 安装 [pip](https://pypi.org/project/pip/)。大部分情况下 Python 的安装包都自带了 pip 工具, 如果没有请参考 [pip docuemntation](https://pip.pypa.io/en/stable/installation/) 安装。
|
2. 安装 [pip](https://pypi.org/project/pip/)。大部分情况下 Python 的安装包都自带了 pip 工具, 如果没有请参考 [pip documentation](https://pip.pypa.io/en/stable/installation/) 安装。
|
||||||
3. 如果使用原生连接,还需[安装客户端驱动](../#安装客户端驱动)。客户端软件包含了 TDengine 客户端动态链接库(libtaos.so 或 taos.dll) 和 TDengine CLI。
|
3. 如果使用原生连接,还需[安装客户端驱动](../#安装客户端驱动)。客户端软件包含了 TDengine 客户端动态链接库(libtaos.so 或 taos.dll) 和 TDengine CLI。
|
||||||
|
|
||||||
### 使用 pip 安装
|
### 使用 pip 安装
|
||||||
|
@ -80,7 +80,7 @@ pip3 install git+https://github.com/taosdata/taos-connector-python.git
|
||||||
|
|
||||||
### 安装验证
|
### 安装验证
|
||||||
|
|
||||||
<Tabs groupId="connect" default="native">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
对于原生连接,需要验证客户端驱动和 Python 连接器本身是否都正确安装。如果能成功导入 `taos` 模块,则说明已经正确安装了客户端驱动和 Python 连接器。可在 Python 交互式 Shell 中输入:
|
对于原生连接,需要验证客户端驱动和 Python 连接器本身是否都正确安装。如果能成功导入 `taos` 模块,则说明已经正确安装了客户端驱动和 Python 连接器。可在 Python 交互式 Shell 中输入:
|
||||||
|
@ -118,7 +118,7 @@ Requirement already satisfied: taospy in c:\users\username\appdata\local\program
|
||||||
|
|
||||||
在用连接器建立连接之前,建议先测试本地 TDengine CLI 到 TDengine 集群的连通性。
|
在用连接器建立连接之前,建议先测试本地 TDengine CLI 到 TDengine 集群的连通性。
|
||||||
|
|
||||||
<Tabs>
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
请确保 TDengine 集群已经启动, 且集群中机器的 FQDN (如果启动的是单机版,FQDN 默认为 hostname)在本机能够解析, 可用 `ping` 命令进行测试:
|
请确保 TDengine 集群已经启动, 且集群中机器的 FQDN (如果启动的是单机版,FQDN 默认为 hostname)在本机能够解析, 可用 `ping` 命令进行测试:
|
||||||
|
@ -173,7 +173,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
|
||||||
|
|
||||||
以下示例代码假设 TDengine 安装在本机, 且 FQDN 和 serverPort 都使用了默认配置。
|
以下示例代码假设 TDengine 安装在本机, 且 FQDN 和 serverPort 都使用了默认配置。
|
||||||
|
|
||||||
<Tabs>
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="原生连接" groupId="connect">
|
<TabItem value="native" label="原生连接" groupId="connect">
|
||||||
|
|
||||||
```python
|
```python
|
||||||
|
@ -186,7 +186,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
|
||||||
- `user` :TDengine 用户名。 默认值是 root。
|
- `user` :TDengine 用户名。 默认值是 root。
|
||||||
- `password` : TDengine 用户密码。 默认值是 taosdata。
|
- `password` : TDengine 用户密码。 默认值是 taosdata。
|
||||||
- `port` : 要连接的数据节点的起始端口,即 serverPort 配置。默认值是 6030。只有在提供了 host 参数的时候,这个参数才生效。
|
- `port` : 要连接的数据节点的起始端口,即 serverPort 配置。默认值是 6030。只有在提供了 host 参数的时候,这个参数才生效。
|
||||||
- `config` : 客户端配置文件路径。 在 Windows 系统上默认是 `C:\TDengine\cfg`。 在 Linux 系统上默认是 `/etc/taos/`。
|
- `config` : 客户端配置文件路径。 在 Windows 系统上默认是 `C:\TDengine\cfg`。 在 Linux/macOS 系统上默认是 `/etc/taos/`。
|
||||||
- `timezone` : 查询结果中 TIMESTAMP 类型的数据,转换为 python 的 datetime 对象时使用的时区。默认为本地时区。
|
- `timezone` : 查询结果中 TIMESTAMP 类型的数据,转换为 python 的 datetime 对象时使用的时区。默认为本地时区。
|
||||||
|
|
||||||
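结合上述参数,下面给出一个建立原生连接并做简单验证的最小示意(示意代码,假设服务端运行在本机且使用默认配置):

```python
import taos

# 以下均为可选的关键字参数,取值为本文假设的默认配置
conn = taos.connect(host="localhost",
                    user="root",
                    password="taosdata",
                    port=6030)
cursor = conn.cursor()
cursor.execute("SELECT SERVER_VERSION()")
print(cursor.fetchall())  # 输出服务端版本号,说明连接参数有效
cursor.close()
conn.close()
```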
:::warning
|
:::warning
|
||||||
|
@ -208,8 +208,8 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
|
||||||
`connect()` 函数的所有参数都是可选的关键字参数。下面是连接参数的具体说明:
|
`connect()` 函数的所有参数都是可选的关键字参数。下面是连接参数的具体说明:
|
||||||
|
|
||||||
- `url`: taosAdapter REST 服务的 URL。默认是 <http://localhost:6041>。
|
- `url`: taosAdapter REST 服务的 URL。默认是 <http://localhost:6041>。
|
||||||
- `user`: TDenigne 用户名。默认是 root。
|
- `user`: TDengine 用户名。默认是 root。
|
||||||
- `password`: TDeingine 用户密码。默认是 taosdata。
|
- `password`: TDengine 用户密码。默认是 taosdata。
|
||||||
- `timeout`: HTTP 请求超时时间。单位为秒。默认为 `socket._GLOBAL_DEFAULT_TIMEOUT`。 一般无需配置。
|
- `timeout`: HTTP 请求超时时间。单位为秒。默认为 `socket._GLOBAL_DEFAULT_TIMEOUT`。 一般无需配置。
|
||||||
|
|
||||||
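结合上述参数,下面给出一个建立 REST 连接的最小示意(示意代码,假设 taosAdapter 运行在本机默认端口 6041,timeout 一般无需配置,此处仅作演示):

```python
import taosrest

# url、user、password 取本文给出的默认值
conn = taosrest.connect(url="http://localhost:6041",
                        user="root",
                        password="taosdata",
                        timeout=30)
res = conn.query("SELECT SERVER_VERSION()")
print(res.data)  # 输出服务端版本号
```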
</TabItem>
|
</TabItem>
|
||||||
|
@ -219,7 +219,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
|
||||||
|
|
||||||
### 基本使用
|
### 基本使用
|
||||||
|
|
||||||
<Tabs default="native" groupId="connect">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
##### TaosConnection 类的使用
|
##### TaosConnection 类的使用
|
||||||
|
@ -289,7 +289,7 @@ TaosCursor 类使用原生连接进行写入、查询操作。在客户端多线
|
||||||
|
|
||||||
### 与 pandas 一起使用
|
### 与 pandas 一起使用
|
||||||
|
|
||||||
<Tabs default="native" groupId="connect">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
```python
|
```python
|
||||||
|
|
|
@ -85,7 +85,7 @@ REST 连接器支持所有能运行 Node.js 的平台。
|
||||||
|
|
||||||
### 使用 npm 安装
|
### 使用 npm 安装
|
||||||
|
|
||||||
<Tabs defaultValue="install_native">
|
<Tabs defaultValue="install_rest">
|
||||||
<TabItem value="install_native" label="安装原生连接器">
|
<TabItem value="install_native" label="安装原生连接器">
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
@ -124,7 +124,7 @@ node nodejsChecker.js host=localhost
|
||||||
|
|
||||||
请选择使用一种连接器。
|
请选择使用一种连接器。
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
安装并引用 `@tdengine/client` 包。
|
安装并引用 `@tdengine/client` 包。
|
||||||
|
|
|
@ -35,7 +35,7 @@ import CSAsyncQuery from "../07-develop/04-query-data/_cs_async.mdx"
|
||||||
|
|
||||||
## 支持的功能特性
|
## 支持的功能特性
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
|
@ -96,7 +96,7 @@ dotnet add exmaple.csproj reference src/TDengine.csproj
|
||||||
|
|
||||||
## 建立连接
|
## 建立连接
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
|
@ -171,7 +171,7 @@ namespace TDengineExample
|
||||||
|
|
||||||
#### SQL 写入
|
#### SQL 写入
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
|
@ -203,7 +203,7 @@ namespace TDengineExample
|
||||||
|
|
||||||
#### 参数绑定
|
#### 参数绑定
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
|
@ -227,7 +227,7 @@ namespace TDengineExample
|
||||||
|
|
||||||
#### 同步查询
|
#### 同步查询
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="rest">
|
||||||
|
|
||||||
<TabItem value="native" label="原生连接">
|
<TabItem value="native" label="原生连接">
|
||||||
|
|
||||||
|
|
|
@ -13,11 +13,13 @@ TDengine 服务端或客户端安装后,`taos.h` 位于:
|
||||||
|
|
||||||
- Linux:`/usr/local/taos/include`
|
- Linux:`/usr/local/taos/include`
|
||||||
- Windows:`C:\TDengine\include`
|
- Windows:`C:\TDengine\include`
|
||||||
|
- macOS:`/usr/local/include`
|
||||||
|
|
||||||
TDengine 客户端驱动的动态库位于:
|
TDengine 客户端驱动的动态库位于:
|
||||||
|
|
||||||
- Linux: `/usr/local/taos/driver/libtaos.so`
|
- Linux: `/usr/local/taos/driver/libtaos.so`
|
||||||
- Windows: `C:\TDengine\taos.dll`
|
- Windows: `C:\TDengine\taos.dll`
|
||||||
|
- macOS:`/usr/local/lib/libtaos.dylib`
|
||||||
|
|
||||||
## 支持的平台
|
## 支持的平台
|
||||||
|
|
||||||
|
|
|
@ -6,5 +6,6 @@
|
||||||
|
|
||||||
- libtaos.so: 在 Linux 系统中成功安装 TDengine 后,依赖的 Linux 版客户端驱动 libtaos.so 文件会被自动拷贝至 /usr/lib/libtaos.so,该目录包含在 Linux 自动扫描路径上,无需单独指定。
|
- libtaos.so: 在 Linux 系统中成功安装 TDengine 后,依赖的 Linux 版客户端驱动 libtaos.so 文件会被自动拷贝至 /usr/lib/libtaos.so,该目录包含在 Linux 自动扫描路径上,无需单独指定。
|
||||||
- taos.dll: 在 Windows 系统中安装完客户端之后,依赖的 Windows 版客户端驱动 taos.dll 文件会自动拷贝到系统默认搜索路径 C:/Windows/System32 下,同样无需单独指定。
|
- taos.dll: 在 Windows 系统中安装完客户端之后,依赖的 Windows 版客户端驱动 taos.dll 文件会自动拷贝到系统默认搜索路径 C:/Windows/System32 下,同样无需单独指定。
|
||||||
|
- libtaos.dylib: 在 macOS 系统中成功安装 TDengine 后,依赖的 macOS 版客户端驱动 libtaos.dylib 文件会被自动拷贝至 /usr/local/lib/libtaos.dylib,该目录包含在 macOS 自动扫描路径上,无需单独指定。
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
|
@ -4,11 +4,11 @@
|
||||||
$ taos
|
$ taos
|
||||||
|
|
||||||
taos> show databases;
|
taos> show databases;
|
||||||
name | create_time | vgroups | ntables | replica | strict | duration | keep | buffer | pagesize | pages | minrows | maxrows | comp | precision | status | retention | single_stable | cachemodel | cachesize | wal_level | wal_fsync_period | wal_retention_period | wal_retention_size | wal_roll_period | wal_seg_size |
|
name |
|
||||||
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================
|
=================================
|
||||||
information_schema | NULL | NULL | 14 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
|
information_schema |
|
||||||
performance_schema | NULL | NULL | 3 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
|
performance_schema |
|
||||||
db | 2022-08-04 14:14:49.385 | 2 | 4 | 1 | off | 14400m | 5254560m,5254560m,5254560m | 96 | 4 | 256 | 100 | 4096 | 2 | ms | ready | NULL | false | none | 1 | 1 | 3000 | 0 | 0 | 0 | 0 |
|
db |
|
||||||
Query OK, 3 rows in database (0.019154s)
|
Query OK, 3 rows in database (0.019154s)
|
||||||
|
|
||||||
taos>
|
taos>
|
||||||
|
|
|
@ -39,7 +39,7 @@ CREATE DATABASE db_name PRECISION 'ns';
|
||||||
| 11 | TINYINT | 1 | 单字节整型,范围 [-128, 127] |
|
| 11 | TINYINT | 1 | 单字节整型,范围 [-128, 127] |
|
||||||
| 12 | TINYINT UNSIGNED | 1 | 无符号单字节整型,范围 [0, 255] |
|
| 12 | TINYINT UNSIGNED | 1 | 无符号单字节整型,范围 [0, 255] |
|
||||||
| 13 | BOOL | 1 | 布尔型,{true, false} |
|
| 13 | BOOL | 1 | 布尔型,{true, false} |
|
||||||
| 14 | NCHAR | 自定义 | 记录包含多字节字符在内的字符串,如中文字符。每个 NCHAR 字符占用 4 字节的存储空间。字符串两端使用单引号引用,字符串内的单引号需用转义字符 `\'`。NCHAR 使用时须指定字符串大小,类型为 NCHAR(10) 的列表示此列的字符串最多存储 10 个 NCHAR 字符,会固定占用 40 字节的空间。如果用户字符串长度超出声明长度,将会报错。 |
|
| 14 | NCHAR | 自定义 | 记录包含多字节字符在内的字符串,如中文字符。每个 NCHAR 字符占用 4 字节的存储空间。字符串两端使用单引号引用,字符串内的单引号需用转义字符 `\'`。NCHAR 使用时须指定字符串大小,类型为 NCHAR(10) 的列表示此列的字符串最多存储 10 个 NCHAR 字符。如果用户字符串长度超出声明长度,将会报错。 |
|
||||||
| 15 | JSON | | JSON 数据类型, 只有 Tag 可以是 JSON 格式 |
|
| 15 | JSON | | JSON 数据类型, 只有 Tag 可以是 JSON 格式 |
|
||||||
| 16 | VARCHAR | 自定义 | BINARY 类型的别名 |
|
| 16 | VARCHAR | 自定义 | BINARY 类型的别名 |
|
||||||
|
|
||||||
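以 NCHAR 类型的长度限制为例,下面用 Python 连接器做一个简单示意(示意代码,数据库名 test 为假设值,假设本机已启动 TDengine 且可建立原生连接):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
conn.execute("CREATE DATABASE IF NOT EXISTS test")
# NCHAR(10) 最多存储 10 个 NCHAR 字符
conn.execute("CREATE TABLE IF NOT EXISTS test.t (ts TIMESTAMP, name NCHAR(10))")
conn.execute("INSERT INTO test.t VALUES (NOW, '时序数据')")  # 4 个字符,未超长
# 若写入超过 10 个字符的字符串,服务端将报错
conn.close()
```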
|
|
|
@ -50,6 +50,56 @@ SHOW CREATE STABLE stb_name;
|
||||||
DESCRIBE [db_name.]stb_name;
|
DESCRIBE [db_name.]stb_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### 获取超级表中所有子表的标签信息
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> SHOW TABLE TAGS FROM st1;
|
||||||
|
tbname | id | loc |
|
||||||
|
======================================================================
|
||||||
|
st1s1 | 1 | beijing |
|
||||||
|
st1s2 | 2 | shanghai |
|
||||||
|
st1s3 | 3 | guangzhou |
|
||||||
|
Query OK, 3 rows in database (0.004455s)
|
||||||
|
```
|
||||||
|
|
||||||
|
返回结果集的第一列为子表名,后续列为标签列。
|
||||||
|
|
||||||
|
如果已经知道标签列的名称,可以使用下面的语句来获取指定标签列的值。
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> SELECT DISTINCT TBNAME, id FROM st1;
|
||||||
|
tbname | id |
|
||||||
|
===============================================
|
||||||
|
st1s1 | 1 |
|
||||||
|
st1s2 | 2 |
|
||||||
|
st1s3 | 3 |
|
||||||
|
Query OK, 3 rows in database (0.002891s)
|
||||||
|
```
|
||||||
|
|
||||||
|
需要注意,SELECT 语句中的 DISTINCT 和 TBNAME 都是必不可少的,TDengine 会根据它们对语句进行优化,使之在没有数据或数据非常多的情况下都可以正确并快速地返回标签值。
|
||||||
|
|
||||||
|
### 获取某个子表的标签信息
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> SHOW TAGS FROM st1s1;
|
||||||
|
table_name | db_name | stable_name | tag_name | tag_type | tag_value |
|
||||||
|
============================================================================================================
|
||||||
|
st1s1 | test | st1 | id | INT | 1 |
|
||||||
|
st1s1 | test | st1 | loc | VARCHAR(20) | beijing |
|
||||||
|
Query OK, 2 rows in database (0.003684s)
|
||||||
|
```
|
||||||
|
|
||||||
|
同样的,也可以用 SELECT 语句来查询指定标签列的值。
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> SELECT DISTINCT TBNAME, id, loc FROM st1s1;
|
||||||
|
tbname | id | loc |
|
||||||
|
==================================================
|
||||||
|
st1s1 | 1 | beijing |
|
||||||
|
Query OK, 1 rows in database (0.001884s)
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
## 删除超级表
|
## 删除超级表
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
|
@ -12,7 +12,7 @@ SELECT {DATABASE() | CLIENT_VERSION() | SERVER_VERSION() | SERVER_STATUS() | NOW
|
||||||
SELECT [DISTINCT] select_list
|
SELECT [DISTINCT] select_list
|
||||||
from_clause
|
from_clause
|
||||||
[WHERE condition]
|
[WHERE condition]
|
||||||
[PARTITION BY tag_list]
|
[partition_by_clause]
|
||||||
[window_clause]
|
[window_clause]
|
||||||
[group_by_clause]
|
[group_by_clause]
|
||||||
[order_by_clause]
|
[order_by_clause]
|
||||||
|
@ -53,6 +53,9 @@ window_clause: {
|
||||||
| STATE_WINDOW(col)
|
| STATE_WINDOW(col)
|
||||||
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
|
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
|
||||||
|
|
||||||
|
partition_by_clause:
|
||||||
|
PARTITION BY expr [, expr] ...
|
||||||
|
|
||||||
group_by_clause:
|
group_by_clause:
|
||||||
GROUP BY expr [, expr] ... HAVING condition
|
GROUP BY expr [, expr] ... HAVING condition
|
||||||
|
|
||||||
|
|
|
@ -340,7 +340,7 @@ LTRIM(expr)
|
||||||
#### RTRIM
|
#### RTRIM
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
LTRIM(expr)
|
RTRIM(expr)
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:返回清除右边空格后的字符串。
|
**功能说明**:返回清除右边空格后的字符串。
|
||||||
|
|
|
@ -4,9 +4,9 @@ title: 特色查询
|
||||||
description: TDengine 提供的时序数据特有的查询功能
|
description: TDengine 提供的时序数据特有的查询功能
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 是专为时序数据而研发的大数据平台,存储和计算都针对时序数据的特定进行了量身定制,在支持标准 SQL 的基础之上,还提供了一系列贴合时序业务场景的特色查询语法,极大的方便时序场景的应用开发。
|
TDengine 在支持标准 SQL 的基础之上,还提供了一系列满足时序业务场景需求的特色查询语法,这些语法能够为时序场景的应用开发带来极大的便利。
|
||||||
|
|
||||||
TDengine 提供的特色查询包括数据切分查询和窗口切分查询。
|
TDengine 提供的特色查询包括数据切分查询和时间窗口切分查询。
|
||||||
|
|
||||||
## 数据切分查询
|
## 数据切分查询
|
||||||
|
|
||||||
|
@ -31,7 +31,7 @@ select max(current) from meters partition by location interval(10m)
|
||||||
|
|
||||||
## 窗口切分查询
|
## 窗口切分查询
|
||||||
|
|
||||||
TDengine 支持按时间段窗口切分方式进行聚合结果查询,比如温度传感器每秒采集一次数据,但需查询每隔 10 分钟的温度平均值。这种场景下可以使用窗口子句来获得需要的查询结果。窗口子句用于针对查询的数据集合按照窗口切分成为查询子集并进行聚合,窗口包含时间窗口(time window)、状态窗口(status window)、会话窗口(session window)三种窗口。其中时间窗口又可划分为滑动时间窗口和翻转时间窗口。窗口切分查询语法如下:
|
TDengine 支持按时间窗口切分方式进行聚合结果查询,比如温度传感器每秒采集一次数据,但需查询每隔 10 分钟的温度平均值。这种场景下可以使用窗口子句来获得需要的查询结果。窗口子句用于针对查询的数据集合按照窗口切分成为查询子集并进行聚合,窗口包含时间窗口(time window)、状态窗口(status window)、会话窗口(session window)三种窗口。其中时间窗口又可划分为滑动时间窗口和翻转时间窗口。窗口切分查询语法如下:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT select_list FROM tb_name
|
SELECT select_list FROM tb_name
|
||||||
|
@ -132,6 +132,10 @@ SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 S
|
||||||
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
|
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### 时间戳伪列
|
||||||
|
|
||||||
|
窗口聚合查询结果中,如果 SQL 语句中没有指定输出查询结果中的时间戳列,那么最终结果中不会自动包含窗口的时间列信息。如果需要在结果中输出聚合结果所对应的时间窗口信息,需要在 SELECT 子句中使用时间戳相关的伪列: 时间窗口起始时间 (\_WSTART), 时间窗口结束时间 (\_WEND), 时间窗口持续时间 (\_WDURATION), 以及查询整体窗口相关的伪列: 查询窗口起始时间(\_QSTART) 和查询窗口结束时间(\_QEND)。需要注意的是时间窗口起始时间和结束时间均是闭区间,时间窗口持续时间是数据当前时间分辨率下的数值。例如,如果当前数据库的时间分辨率是毫秒,那么结果中 500 就表示当前时间窗口的持续时间是 500毫秒 (500 ms)。
|
||||||
|
|
||||||
### 示例
|
### 示例
|
||||||
|
|
||||||
智能电表的建表语句如下:
|
智能电表的建表语句如下:
|
||||||
|
@ -143,8 +147,10 @@ CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS
|
||||||
针对智能电表采集的数据,以 10 分钟为一个阶段,计算过去 24 小时的电流数据的平均值、最大值、电流的中位数。如果没有计算值,用前一个非 NULL 值填充。使用的查询语句如下:
|
针对智能电表采集的数据,以 10 分钟为一个阶段,计算过去 24 小时的电流数据的平均值、最大值、电流的中位数。如果没有计算值,用前一个非 NULL 值填充。使用的查询语句如下:
|
||||||
|
|
||||||
```
|
```
|
||||||
SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
|
SELECT _WSTART, _WEND, AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
|
||||||
WHERE ts>=NOW-1d and ts<=now
|
WHERE ts>=NOW-1d and ts<=now
|
||||||
INTERVAL(10m)
|
INTERVAL(10m)
|
||||||
FILL(PREV);
|
FILL(PREV);
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -137,19 +137,3 @@ local_option: {
|
||||||
```sql
|
```sql
|
||||||
SHOW LOCAL VARIABLES;
|
SHOW LOCAL VARIABLES;
|
||||||
```
|
```
|
||||||
|
|
||||||
## 合并 vgroup
|
|
||||||
|
|
||||||
```sql
|
|
||||||
MERGE VGROUP vgroup_no1 vgroup_no2;
|
|
||||||
```
|
|
||||||
|
|
||||||
如果在系统实际运行一段时间后,因为不同时间线的数据特征不同导致在 vgroups 之间的数据和负载分布不均衡,可以通过合并或拆分 vgroups 的方式逐步实现负载均衡。
|
|
||||||
|
|
||||||
## 拆分 vgroup
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SPLIT VGROUP vgroup_no;
|
|
||||||
```
|
|
||||||
|
|
||||||
会创建一个新的 vgroup,并将指定 vgroup 中的数据按照一致性 HASH 迁移一部分到新的 vgroup 中。此过程中,原 vgroup 可以正常提供读写服务。
|
|
||||||
|
|
|
@ -30,7 +30,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **列名** | **数据类型** | **说明** |
|
||||||
| --- | :------------: | ------------ | ------------------------- |
|
| --- | :------------: | ------------ | ------------------------- |
|
||||||
| 1 | vnodes | SMALLINT | dnode 中的实际 vnode 个数 |
|
| 1 | vnodes | SMALLINT | dnode 中的实际 vnode 个数。需要注意,`vnodes` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 2 | support_vnodes | SMALLINT | 最多支持的 vnode 个数 |
|
| 2 | support_vnodes | SMALLINT | 最多支持的 vnode 个数 |
|
||||||
| 3 | status | BINARY(10) | 当前状态 |
|
| 3 | status | BINARY(10) | 当前状态 |
|
||||||
| 4 | note | BINARY(256) | 离线原因等信息 |
|
| 4 | note | BINARY(256) | 离线原因等信息 |
|
||||||
|
@ -50,16 +50,6 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| 4 | role_time | TIMESTAMP | 成为当前角色的时间 |
|
| 4 | role_time | TIMESTAMP | 成为当前角色的时间 |
|
||||||
| 5 | create_time | TIMESTAMP | 创建时间 |
|
| 5 | create_time | TIMESTAMP | 创建时间 |
|
||||||
|
|
||||||
## INS_MODULES
|
|
||||||
|
|
||||||
提供组件的相关信息。也可以使用 SHOW MODULES 来查询这些信息
|
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
|
||||||
| --- | :------: | ------------ | ---------- |
|
|
||||||
| 1 | id | SMALLINT | module id |
|
|
||||||
| 2 | endpoint | BINARY(134) | 组件的地址 |
|
|
||||||
| 3 | module | BINARY(10) | 组件状态 |
|
|
||||||
|
|
||||||
## INS_QNODES
|
## INS_QNODES
|
||||||
|
|
||||||
当前系统中 QNODE 的信息。也可以使用 SHOW QNODES 来查询这些信息。
|
当前系统中 QNODE 的信息。也可以使用 SHOW QNODES 来查询这些信息。
|
||||||
|
@ -89,33 +79,33 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| 1 | name | BINARY(32) | 数据库名 |
|
| 1 | name | BINARY(32) | 数据库名 |
|
||||||
| 2 | create_time | TIMESTAMP | 创建时间 |
|
| 2 | create_time | TIMESTAMP | 创建时间 |
|
||||||
| 3 | ntables | INT | 数据库中表的数量,包含子表和普通表但不包含超级表 |
|
| 3 | ntables | INT | 数据库中表的数量,包含子表和普通表但不包含超级表 |
|
||||||
| 4 | vgroups | INT | 数据库中有多少个 vgroup |
|
| 4 | vgroups | INT | 数据库中有多少个 vgroup。需要注意,`vgroups` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 6 | replica | INT | 副本数 |
|
| 6 | replica | INT | 副本数。需要注意,`replica` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 7 | quorum | BINARY(3) | 强一致性 |
|
| 7 | strict | BINARY(3) | 强一致性。需要注意,`strict` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 8 | duration | INT | 单文件存储数据的时间跨度 |
|
| 8 | duration | INT | 单文件存储数据的时间跨度。需要注意,`duration` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 9 | keep | INT | 数据保留时长 |
|
| 9 | keep | INT | 数据保留时长。需要注意,`keep` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 10 | buffer | INT | 每个 vnode 写缓存的内存块大小,单位 MB |
|
| 10 | buffer | INT | 每个 vnode 写缓存的内存块大小,单位 MB。需要注意,`buffer` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 11 | pagesize | INT | 每个 VNODE 中元数据存储引擎的页大小,单位为 KB |
|
| 11 | pagesize | INT | 每个 VNODE 中元数据存储引擎的页大小,单位为 KB。需要注意,`pagesize` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 12 | pages | INT | 每个 vnode 元数据存储引擎的缓存页个数 |
|
| 12 | pages | INT | 每个 vnode 元数据存储引擎的缓存页个数。需要注意,`pages` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 13 | minrows | INT | 文件块中记录的最大条数 |
|
| 13 | minrows | INT | 文件块中记录的最小条数。需要注意,`minrows` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 14 | maxrows | INT | 文件块中记录的最小条数 |
|
| 14 | maxrows | INT | 文件块中记录的最大条数。需要注意,`maxrows` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 15 | comp | INT | 数据压缩方式 |
|
| 15 | comp | INT | 数据压缩方式。需要注意,`comp` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 16 | precision | BINARY(2) | 时间分辨率 |
|
| 16 | precision | BINARY(2) | 时间分辨率。需要注意,`precision` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 17 | status | BINARY(10) | 数据库状态 |
|
| 17 | status | BINARY(10) | 数据库状态 |
|
||||||
| 18 | retention | BINARY (60) | 数据的聚合周期和保存时长 |
|
| 18 | retentions | BINARY (60) | 数据的聚合周期和保存时长。需要注意,`retentions` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 19 | single_stable | BOOL | 表示此数据库中是否只可以创建一个超级表 |
|
| 19 | single_stable | BOOL | 表示此数据库中是否只可以创建一个超级表。需要注意,`single_stable` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 20 | cachemodel | BINARY(60) | 表示是否在内存中缓存子表的最近数据 |
|
| 20 | cachemodel | BINARY(60) | 表示是否在内存中缓存子表的最近数据。需要注意,`cachemodel` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 21 | cachesize | INT | 表示每个 vnode 中用于缓存子表最近数据的内存大小 |
|
| 21 | cachesize | INT | 表示每个 vnode 中用于缓存子表最近数据的内存大小。需要注意,`cachesize` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 22 | wal_level | INT | WAL 级别 |
|
| 22 | wal_level | INT | WAL 级别。需要注意,`wal_level` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 23 | wal_fsync_period | INT | 数据落盘周期 |
|
| 23 | wal_fsync_period | INT | 数据落盘周期。需要注意,`wal_fsync_period` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 24 | wal_retention_period | INT | WAL 的保存时长 |
|
| 24 | wal_retention_period | INT | WAL 的保存时长。需要注意,`wal_retention_period` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 25 | wal_retention_size | INT | WAL 的保存上限 |
|
| 25 | wal_retention_size | INT | WAL 的保存上限。需要注意,`wal_retention_size` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 26 | wal_roll_period | INT | wal 文件切换时长 |
|
| 26 | wal_roll_period | INT | wal 文件切换时长。需要注意,`wal_roll_period` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 27 | wal_segment_size | BIGINT | wal 单个文件大小 |
|
| 27 | wal_segment_size | BIGINT | wal 单个文件大小。需要注意,`wal_segment_size` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 28 | stt_trigger | SMALLINT | 触发文件合并的落盘文件的个数 |
|
| 28 | stt_trigger | SMALLINT | 触发文件合并的落盘文件的个数。需要注意,`stt_trigger` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 29 | table_prefix | SMALLINT | 内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的前缀的长度 |
|
| 29 | table_prefix | SMALLINT | 内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的前缀的长度。需要注意,`table_prefix` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 30 | table_suffix | SMALLINT | 内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的后缀的长度 |
|
| 30 | table_suffix | SMALLINT | 内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的后缀的长度。需要注意,`table_suffix` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 31 | tsdb_pagesize | INT | 时序数据存储引擎中的页大小 |
|
| 31 | tsdb_pagesize | INT | 时序数据存储引擎中的页大小。需要注意,`tsdb_pagesize` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
|
|
||||||
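上表中多处提到关键字列名需用反引号转义,下面用 Python 连接器给出一个查询示意(示意代码,连接参数为假设的默认配置):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
cursor = conn.cursor()
# `vgroups`、`replica` 等为 TDengine 关键字,作为列名时需用反引号转义
cursor.execute("SELECT name, `vgroups`, `replica` FROM information_schema.ins_databases")
for row in cursor:
    print(row)
cursor.close()
conn.close()
```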
## INS_FUNCTIONS
|
## INS_FUNCTIONS
|
||||||
|
|
||||||
|
@ -124,8 +114,8 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **列名** | **数据类型** | **说明** |
|
||||||
| --- | :---------: | ------------ | -------------- |
|
| --- | :---------: | ------------ | -------------- |
|
||||||
| 1 | name | BINARY(64) | 函数名 |
|
| 1 | name | BINARY(64) | 函数名 |
|
||||||
| 2 | comment | BINARY(255) | 补充说明 |
|
| 2 | comment | BINARY(255) | 补充说明。需要注意,`comment` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 3 | aggregate | INT | 是否为聚合函数 |
|
| 3 | aggregate | INT | 是否为聚合函数。需要注意,`aggregate` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 4 | output_type | BINARY(31) | 输出类型 |
|
| 4 | output_type | BINARY(31) | 输出类型 |
|
||||||
| 5 | create_time | TIMESTAMP | 创建时间 |
|
| 5 | create_time | TIMESTAMP | 创建时间 |
|
||||||
| 6 | code_len | INT | 代码长度 |
|
| 6 | code_len | INT | 代码长度 |
|
||||||
|
@ -154,12 +144,12 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| 2 | db_name | BINARY(64) | 超级表所在的数据库的名称 |
|
| 2 | db_name | BINARY(64) | 超级表所在的数据库的名称 |
|
||||||
| 3 | create_time | TIMESTAMP | 创建时间 |
|
| 3 | create_time | TIMESTAMP | 创建时间 |
|
||||||
| 4 | columns | INT | 列数目 |
|
| 4 | columns | INT | 列数目 |
|
||||||
| 5 | tags | INT | 标签数目 |
|
| 5 | tags | INT | 标签数目。需要注意,`tags` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 6 | last_update | TIMESTAMP | 最后更新时间 |
|
| 6 | last_update | TIMESTAMP | 最后更新时间 |
|
||||||
| 7 | table_comment | BINARY(1024) | 表注释 |
|
| 7 | table_comment | BINARY(1024) | 表注释 |
|
||||||
| 8 | watermark | BINARY(64) | 窗口的关闭时间 |
|
| 8 | watermark | BINARY(64) | 窗口的关闭时间。需要注意,`watermark` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 9 | max_delay | BINARY(64) | 推送计算结果的最大延迟 |
|
| 9 | max_delay | BINARY(64) | 推送计算结果的最大延迟。需要注意,`max_delay` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 10 | rollup | BINARY(128) | rollup 聚合函数 |
|
| 10 | rollup | BINARY(128) | rollup 聚合函数。需要注意,`rollup` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
|
|
||||||
## INS_TABLES
|
## INS_TABLES
|
||||||
|
|
||||||
|
@ -174,7 +164,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| 5 | stable_name | BINARY(192) | 所属的超级表表名 |
|
| 5 | stable_name | BINARY(192) | 所属的超级表表名 |
|
||||||
| 6 | uid | BIGINT | 表 id |
|
| 6 | uid | BIGINT | 表 id |
|
||||||
| 7 | vgroup_id | INT | vgroup id |
|
| 7 | vgroup_id | INT | vgroup id |
|
||||||
| 8 | ttl | INT | 表的生命周期 |
|
| 8 | ttl | INT | 表的生命周期。需要注意,`ttl` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 9 | table_comment | BINARY(1024) | 表注释 |
|
| 9 | table_comment | BINARY(1024) | 表注释 |
|
||||||
| 10 | type | BINARY(20) | 表类型 |
|
| 10 | type | BINARY(20) | 表类型 |
|
||||||
|
|
||||||
|
@ -207,13 +197,13 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| --- | :---------: | ------------ | -------------------------------------------------- |
|
| --- | :---------: | ------------ | -------------------------------------------------- |
|
||||||
| 1 | version | BINARY(9) | 企业版授权说明:official(官方授权的)/trial(试用的) |
|
| 1 | version | BINARY(9) | 企业版授权说明:official(官方授权的)/trial(试用的) |
|
||||||
| 2 | cpu_cores | BINARY(9) | 授权使用的 CPU 核心数量 |
|
| 2 | cpu_cores | BINARY(9) | 授权使用的 CPU 核心数量 |
|
||||||
| 3 | dnodes | BINARY(10) | 授权使用的 dnode 节点数量 |
|
| 3 | dnodes | BINARY(10) | 授权使用的 dnode 节点数量。需要注意,`dnodes` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 4 | streams | BINARY(10) | 授权创建的流数量 |
|
| 4 | streams | BINARY(10) | 授权创建的流数量。需要注意,`streams` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 5 | users | BINARY(10) | 授权创建的用户数量 |
|
| 5 | users | BINARY(10) | 授权创建的用户数量。需要注意,`users` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 6 | accounts | BINARY(10) | 授权创建的帐户数量 |
|
| 6 | accounts | BINARY(10) | 授权创建的帐户数量。需要注意,`accounts` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 7 | storage | BINARY(21) | 授权使用的存储空间大小 |
|
| 7 | storage | BINARY(21) | 授权使用的存储空间大小。需要注意,`storage` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 8 | connections | BINARY(21) | 授权使用的客户端连接数量 |
|
| 8 | connections | BINARY(21) | 授权使用的客户端连接数量。需要注意,`connections` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 9 | databases | BINARY(11) | 授权使用的数据库数量 |
|
| 9 | databases | BINARY(11) | 授权使用的数据库数量。需要注意,`databases` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 10 | speed | BINARY(9) | 授权使用的数据点每秒写入数量 |
|
| 10 | speed | BINARY(9) | 授权使用的数据点每秒写入数量 |
|
||||||
| 11 | querytime | BINARY(9) | 授权使用的查询总时长 |
|
| 11 | querytime | BINARY(9) | 授权使用的查询总时长 |
|
||||||
| 12 | timeseries | BINARY(21) | 授权使用的测点数量 |
|
| 12 | timeseries | BINARY(21) | 授权使用的测点数量 |
|
||||||
|
@ -228,7 +218,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| --- | :-------: | ------------ | ------------------------------------------------------ |
|
| --- | :-------: | ------------ | ------------------------------------------------------ |
|
||||||
| 1 | vgroup_id | INT | vgroup id |
|
| 1 | vgroup_id | INT | vgroup id |
|
||||||
| 2 | db_name | BINARY(32) | 数据库名 |
|
| 2 | db_name | BINARY(32) | 数据库名 |
|
||||||
| 3 | tables | INT | 此 vgroup 内有多少表 |
|
| 3 | tables | INT | 此 vgroup 内有多少表。需要注意,`tables` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 4 | status | BINARY(10) | 此 vgroup 的状态 |
|
| 4 | status | BINARY(10) | 此 vgroup 的状态 |
|
||||||
| 5 | v1_dnode | INT | 第一个成员所在的 dnode 的 id |
|
| 5 | v1_dnode | INT | 第一个成员所在的 dnode 的 id |
|
||||||
| 6 | v1_status | BINARY(10) | 第一个成员的状态 |
|
| 6 | v1_status | BINARY(10) | 第一个成员的状态 |
|
||||||
|
@ -247,7 +237,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **列名** | **数据类型** | **说明** |
|
||||||
| --- | :------: | ------------ | ------------ |
|
| --- | :------: | ------------ | ------------ |
|
||||||
| 1 | name | BINARY(32) | 配置项名称 |
|
| 1 | name | BINARY(32) | 配置项名称 |
|
||||||
| 2 | value | BINARY(64) | 该配置项的值 |
|
| 2 | value | BINARY(64) | 该配置项的值。需要注意,`value` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
|
|
||||||
## INS_DNODE_VARIABLES
|
## INS_DNODE_VARIABLES
|
||||||
|
|
||||||
|
@ -257,7 +247,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| --- | :------: | ------------ | ------------ |
|
| --- | :------: | ------------ | ------------ |
|
||||||
| 1 | dnode_id | INT | dnode 的 ID |
|
| 1 | dnode_id | INT | dnode 的 ID |
|
||||||
| 2 | name | BINARY(32) | 配置项名称 |
|
| 2 | name | BINARY(32) | 配置项名称 |
|
||||||
| 3 | value | BINARY(64) | 该配置项的值 |
|
| 3 | value | BINARY(64) | 该配置项的值。需要注意,`value` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
|
|
||||||
## INS_TOPICS
|
## INS_TOPICS
|
||||||
|
|
||||||
|
@ -288,5 +278,5 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
|
||||||
| 5 | source_db | BINARY(64) | 源数据库 |
|
| 5 | source_db | BINARY(64) | 源数据库 |
|
||||||
| 6 | target_db | BINARY(64) | 目的数据库 |
|
| 6 | target_db | BINARY(64) | 目的数据库 |
|
||||||
| 7 | target_table | BINARY(192) | 流计算写入的目标表 |
|
| 7 | target_table | BINARY(192) | 流计算写入的目标表 |
|
||||||
| 8 | watermark | BIGINT | watermark,详见 SQL 手册流式计算 |
|
| 8 | watermark | BIGINT | watermark,详见 SQL 手册流式计算。需要注意,`watermark` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
| 9 | trigger | INT | 计算结果推送模式,详见 SQL 手册流式计算 |
|
| 9 | trigger | INT | 计算结果推送模式,详见 SQL 手册流式计算。需要注意,`trigger` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
|
||||||
|
|
|
@ -14,14 +14,6 @@ SHOW APPS;
|
||||||
|
|
||||||
显示接入集群的应用(客户端)信息。
|
显示接入集群的应用(客户端)信息。
|
||||||
|
|
||||||
## SHOW BNODES
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW BNODES;
|
|
||||||
```
|
|
||||||
|
|
||||||
显示当前系统中存在的 BNODE (backup node, 即备份节点)的信息。
|
|
||||||
|
|
||||||
## SHOW CLUSTER
|
## SHOW CLUSTER
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
|
@ -129,14 +121,6 @@ SHOW MNODES;
|
||||||
|
|
||||||
显示当前系统中 MNODE 的信息。
|
显示当前系统中 MNODE 的信息。
|
||||||
|
|
||||||
## SHOW MODULES
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW MODULES;
|
|
||||||
```
|
|
||||||
|
|
||||||
显示当前系统中所安装的组件的信息。
|
|
||||||
|
|
||||||
## SHOW QNODES
|
## SHOW QNODES
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
|
@ -153,15 +137,7 @@ SHOW SCORES;
|
||||||
|
|
||||||
显示系统被许可授权的容量的信息。
|
显示系统被许可授权的容量的信息。
|
||||||
|
|
||||||
注:企业版独有
|
注:企业版独有。
|
||||||
|
|
||||||
## SHOW SNODES
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW SNODES;
|
|
||||||
```
|
|
||||||
|
|
||||||
显示当前系统中 SNODE (流计算节点)的信息。
|
|
||||||
|
|
||||||
## SHOW STABLES
|
## SHOW STABLES
|
||||||
|
|
||||||
|
|
|
@ -17,10 +17,10 @@ conn_id 可以通过 `SHOW CONNECTIONS` 获取。
|
||||||
## 终止查询
|
## 终止查询
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW QUERY query_id;
|
KILL QUERY kill_id;
|
||||||
```
|
```
|
||||||
|
|
||||||
query_id 可以通过 `SHOW QUERIES` 获取。
|
kill_id 可以通过 `SHOW QUERIES` 获取。
|
||||||
|
|
||||||
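下面给出一个先查询再终止的操作示意(Python 示意代码,其中 kill_id 的取值 '168:28' 为假设值,应以 SHOW QUERIES 的实际返回为准):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
cursor = conn.cursor()
cursor.execute("SHOW QUERIES")  # 返回结果中包含正在执行的查询的 kill_id
for row in cursor:
    print(row)
cursor.execute("KILL QUERY '168:28'")  # '168:28' 为假设的 kill_id
cursor.close()
conn.close()
```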
## 终止事务
|
## 终止事务
|
||||||
|
|
||||||
|
|
|
@ -94,3 +94,10 @@ description: "TDengine 3.0 版本的语法变更说明"
|
||||||
| 9 | SAMPLE | 增强 | 可以直接用于超级表了。没有PARTITION BY时,超级表的数据会被合并成一条时间线。
|
| 9 | SAMPLE | 增强 | 可以直接用于超级表了。没有PARTITION BY时,超级表的数据会被合并成一条时间线。
|
||||||
| 10 | STATECOUNT | 增强 | 可以直接用于超级表了。没有PARTITION BY时,超级表的数据会被合并成一条时间线。
|
| 10 | STATECOUNT | 增强 | 可以直接用于超级表了。没有PARTITION BY时,超级表的数据会被合并成一条时间线。
|
||||||
| 11 | STATEDURATION | 增强 | 可以直接用于超级表了。没有PARTITION BY时,超级表的数据会被合并成一条时间线。
|
| 11 | STATEDURATION | 增强 | 可以直接用于超级表了。没有PARTITION BY时,超级表的数据会被合并成一条时间线。
|
||||||
|
|
||||||
|
|
||||||
|
## SCHEMALESS 变更
|
||||||
|
|
||||||
|
| # | **元素** | **<div style={{width: 60}}>差异性</div>** | **说明** |
|
||||||
|
| - | :------- | :-------- | :------- |
|
||||||
|
| 1 | 主键ts 变更为 _ts | 变更 | schemaless自动建的列名用 _ 开头,不同于2.x。
|
||||||
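下面用 taospy 的无模式写入接口示意该变更(示意代码,数据库名 test 与示例数据均为假设,假设本机可建立原生连接):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
conn.execute("CREATE DATABASE IF NOT EXISTS test")
conn.select_db("test")
# 行协议写入,schemaless 自动建表
lines = ["meters,location=beijing current=10.3 1665539200000000000"]
conn.schemaless_insert(lines, taos.SmlProtocol.LINE_PROTOCOL, taos.SmlPrecision.NANO_SECONDS)
# 自动建表的时间戳主键列名为 _ts,而非 2.x 中的 ts
conn.close()
```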
|
|
|
@ -189,7 +189,7 @@ AllowWebSockets
|
||||||
/influxdb/v1/write
|
/influxdb/v1/write
|
||||||
```
|
```
|
||||||
|
|
||||||
支持 InfluxDB 查询参数如下:
|
支持 InfluxDB 参数如下:
|
||||||
|
|
||||||
- `db` 指定 TDengine 使用的数据库名
|
- `db` 指定 TDengine 使用的数据库名
|
||||||
- `precision` TDengine 使用的时间精度
|
- `precision` TDengine 使用的时间精度
|
||||||
|
@ -197,7 +197,7 @@ AllowWebSockets
|
||||||
- `p` TDengine 密码
|
- `p` TDengine 密码
|
||||||
|
|
||||||
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
|
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
|
||||||
|
示例: curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
|
||||||
### OpenTSDB
|
### OpenTSDB
|
||||||
|
|
||||||
您可以使用任何支持 http 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下:
|
您可以使用任何支持 http 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下:
|
||||||
|
|
|
@ -231,7 +231,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
|
||||||
|
|
||||||
- **name** : 数据库名。
|
- **name** : 数据库名。
|
||||||
|
|
||||||
- **drop** : 插入前是否删除数据库,默认为 true。
|
- **drop** : 插入前是否删除数据库,可选项为 "yes" 或者 "no",为 "no" 时不删除也不重新创建数据库。默认删除。
|
||||||
|
|
||||||
#### 流式计算相关配置参数
|
#### 流式计算相关配置参数
|
||||||
|
|
||||||
|
@ -334,13 +334,13 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
|
||||||
|
|
||||||
- **name** : 列的名字,若与 count 同时使用,比如 "name":"current", "count":3, 则 3 个列的名字分别为 current, current_2, current_3。
|
- **name** : 列的名字,若与 count 同时使用,比如 "name":"current", "count":3, 则 3 个列的名字分别为 current, current_2, current_3。
|
||||||
|
|
||||||
- **min** : 数据类型的 列/标签 的最小值。
|
- **min** : 数据类型的 列/标签 的最小值。生成的值将大于或等于最小值。
|
||||||
|
|
||||||
- **max** : 数据类型的 列/标签 的最大值。
|
- **max** : 数据类型的 列/标签 的最大值。生成的值将小于最大值。
|
||||||
|
|
||||||
- **values** : nchar/binary 列/标签的值域,将从值中随机选择。
|
- **values** : nchar/binary 列/标签的值域,将从值中随机选择。
|
||||||
|
|
||||||
- **sma**: 将该列加入bsma中,值为 "yes" 或者 "no",默认为 "no"。
|
- **sma**: 将该列加入 SMA 中,值为 "yes" 或者 "no",默认为 "no"。
|
||||||
|
|
||||||
#### 插入行为配置参数
|
#### 插入行为配置参数
|
||||||
|
|
||||||
|
|
|
@ -12,7 +12,7 @@ TDengine 命令行程序(以下简称 TDengine CLI)是用户操作 TDengine
|
||||||
|
|
||||||
## 执行
|
## 执行
|
||||||
|
|
||||||
要进入 TDengine CLI,您只要在 Linux 终端或 Windows 终端执行 `taos` 即可。
|
要进入 TDengine CLI,您只要在终端执行 `taos` 即可。
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
taos
|
taos
|
||||||
|
|
|
@ -5,29 +5,30 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
|
||||||
|
|
||||||
## TDengine 服务端支持的平台列表
|
## TDengine 服务端支持的平台列表
|
||||||
|
|
||||||
| | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** |
|
| | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **macOS** |
|
||||||
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------------ | ----------------- | ---------------- |
|
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------------ | ----------------- | ---------------- | --------- |
|
||||||
| X64 | ● | ● | ● | ● | ● | ● | ● |
|
| X64 | ● | ● | ● | ● | ● | ● | ● | ● |
|
||||||
| 树莓派 ARM64 | | | ● | | | | |
|
| 树莓派 ARM64 | | | ● | | | | | |
|
||||||
| 华为云 ARM64 | | | | ● | | | |
|
| 华为云 ARM64 | | | | ● | | | | |
|
||||||
|
| M1 | | | | | | | | ● |
|
||||||
|
|
||||||
注: ● 表示经过官方测试验证, ○ 表示非官方测试验证。
|
注: ● 表示经过官方测试验证, ○ 表示非官方测试验证。
|
||||||
|
|
||||||
## TDengine 客户端和连接器支持的平台列表
|
## TDengine 客户端和连接器支持的平台列表
|
||||||
|
|
||||||
目前 TDengine 的连接器可支持的平台广泛,目前包括:X64/X86/ARM64/ARM32/MIPS/Alpha 等硬件平台,以及 Linux/Win64/Win32 等开发环境。
|
目前 TDengine 的连接器可支持的平台广泛,包括:X64/X86/ARM64/ARM32/MIPS/Alpha 等硬件平台,以及 Linux/Win64/Win32/macOS 等开发环境。
|
||||||
|
|
||||||
对照矩阵如下:
|
对照矩阵如下:
|
||||||
|
|
||||||
| **CPU** | **X64 64bit** | **X64 64bit** | **ARM64** |
|
| **CPU** | **X64 64bit** | **X64 64bit** | **ARM64** | **X64 64bit** | **ARM64** |
|
||||||
| ----------- | ------------- | ------------- | --------- |
|
| ----------- | ------------- | ------------- | --------- | ------------- | --------- |
|
||||||
| **OS** | **Linux** | **Win64** | **Linux** |
|
| **OS** | **Linux** | **Win64** | **Linux** | **macOS** | **macOS** |
|
||||||
| **C/C++** | ● | ● | ● |
|
| **C/C++** | ● | ● | ● | ● | ● |
|
||||||
| **JDBC** | ● | ● | ● |
|
| **JDBC** | ● | ● | ● | ○ | ○ |
|
||||||
| **Python** | ● | ● | ● |
|
| **Python** | ● | ● | ● | ● | ● |
|
||||||
| **Go** | ● | ● | ● |
|
| **Go** | ● | ● | ● | ● | ● |
|
||||||
| **NodeJs** | ● | ● | ● |
|
| **NodeJs** | ● | ● | ● | ○ | ○ |
|
||||||
| **C#** | ● | ● | ○ |
|
| **C#** | ● | ● | ○ | ○ | ○ |
|
||||||
| **RESTful** | ● | ● | ● |
|
| **RESTful** | ● | ● | ● | ● | ● |
|
||||||
|
|
||||||
注:● 表示官方测试验证通过,○ 表示非官方测试验证通过,-- 表示未经验证。
|
注:● 表示官方测试验证通过,○ 表示非官方测试验证通过,-- 表示未经验证。
|
||||||
|
|
|
@ -205,7 +205,7 @@ taos --dump-config
|
||||||
:::info
|
:::info
|
||||||
为应对多时区的数据写入和查询问题,TDengine 采用 Unix 时间戳(Unix Timestamp)来记录和存储时间戳。Unix 时间戳的特点决定了任一时刻不论在任何时区,产生的时间戳均一致。需要注意的是,Unix 时间戳是在客户端完成转换和记录。为了确保客户端其他形式的时间转换为正确的 Unix 时间戳,需要设置正确的时区。
|
为应对多时区的数据写入和查询问题,TDengine 采用 Unix 时间戳(Unix Timestamp)来记录和存储时间戳。Unix 时间戳的特点决定了任一时刻不论在任何时区,产生的时间戳均一致。需要注意的是,Unix 时间戳是在客户端完成转换和记录。为了确保客户端其他形式的时间转换为正确的 Unix 时间戳,需要设置正确的时区。
|
||||||
|
|
||||||
在 Linux 系统中,客户端会自动读取系统设置的时区信息。用户也可以采用多种方式在配置文件设置时区。例如:
|
在 Linux/macOS 中,客户端会自动读取系统设置的时区信息。用户也可以采用多种方式在配置文件设置时区。例如:
|
||||||
|
|
||||||
```
|
```
|
||||||
timezone UTC-8
|
timezone UTC-8
|
||||||
|
@ -248,9 +248,9 @@ SELECT count(*) FROM table_name WHERE TS<1554984068000;
|
||||||
:::info
|
:::info
|
||||||
TDengine 为存储中文、日文、韩文等非 ASCII 编码的宽字符,提供一种专门的字段类型 nchar。写入 nchar 字段的数据将统一采用 UCS4-LE 格式进行编码并发送到服务器。需要注意的是,编码正确性是客户端来保证。因此,如果用户想要正常使用 nchar 字段来存储诸如中文、日文、韩文等非 ASCII 字符,需要正确设置客户端的编码格式。
|
TDengine 为存储中文、日文、韩文等非 ASCII 编码的宽字符,提供一种专门的字段类型 nchar。写入 nchar 字段的数据将统一采用 UCS4-LE 格式进行编码并发送到服务器。需要注意的是,编码正确性是客户端来保证。因此,如果用户想要正常使用 nchar 字段来存储诸如中文、日文、韩文等非 ASCII 字符,需要正确设置客户端的编码格式。
|
||||||
|
|
||||||
客户端的输入的字符均采用操作系统当前默认的编码格式,在 Linux 系统上多为 UTF-8,部分中文系统编码则可能是 GB18030 或 GBK 等。在 docker 环境中默认的编码是 POSIX。在中文版 Windows 系统中,编码则是 CP936。客户端需要确保正确设置自己所使用的字符集,即客户端运行的操作系统当前编码字符集,才能保证 nchar 中的数据正确转换为 UCS4-LE 编码格式。
|
客户端输入的字符均采用操作系统当前默认的编码格式,在 Linux/macOS 系统上多为 UTF-8,部分中文系统编码则可能是 GB18030 或 GBK 等。在 docker 环境中默认的编码是 POSIX。在中文版 Windows 系统中,编码则是 CP936。客户端需要确保正确设置自己所使用的字符集,即客户端运行的操作系统当前编码字符集,才能保证 nchar 中的数据正确转换为 UCS4-LE 编码格式。
|
||||||
|
|
||||||
在 Linux 中 locale 的命名规则为: <语言>\_<地区>.<字符集编码> 如:zh_CN.UTF-8,zh 代表中文,CN 代表大陆地区,UTF-8 表示字符集。字符集编码为客户端正确解析本地字符串提供编码转换的说明。Linux 系统与 Mac OSX 系统可以通过设置 locale 来确定系统的字符编码,由于 Windows 使用的 locale 中不是 POSIX 标准的 locale 格式,因此在 Windows 下需要采用另一个配置参数 charset 来指定字符编码。在 Linux 系统中也可以使用 charset 来指定字符编码。
|
在 Linux/macOS 中 locale 的命名规则为: <语言>\_<地区>.<字符集编码> 如:zh_CN.UTF-8,zh 代表中文,CN 代表大陆地区,UTF-8 表示字符集。字符集编码为客户端正确解析本地字符串提供编码转换的说明。Linux/macOS 可以通过设置 locale 来确定系统的字符编码,由于 Windows 使用的 locale 中不是 POSIX 标准的 locale 格式,因此在 Windows 下需要采用另一个配置参数 charset 来指定字符编码。在 Linux/macOS 中也可以使用 charset 来指定字符编码。
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
|
@ -263,9 +263,9 @@ TDengine 为存储中文、日文、韩文等非 ASCII 编码的宽字符,提
|
||||||
| 缺省值 | 系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过 API 设置 |
|
| 缺省值 | 系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过 API 设置 |
|
||||||
|
|
||||||
:::info
|
:::info
|
||||||
如果配置文件中不设置 charset,在 Linux 系统中,taos 在启动时候,自动读取系统当前的 locale 信息,并从 locale 信息中解析提取 charset 编码格式。如果自动读取 locale 信息失败,则尝试读取 charset 配置,如果读取 charset 配置也失败,则中断启动过程。
|
如果配置文件中不设置 charset,在 Linux/macOS 中,taos 在启动时候,自动读取系统当前的 locale 信息,并从 locale 信息中解析提取 charset 编码格式。如果自动读取 locale 信息失败,则尝试读取 charset 配置,如果读取 charset 配置也失败,则中断启动过程。
|
||||||
|
|
||||||
在 Linux 系统中,locale 信息包含了字符编码信息,因此正确设置了 Linux 系统 locale 以后可以不用再单独设置 charset。例如:
|
在 Linux/macOS 中,locale 信息包含了字符编码信息,因此正确设置了 Linux/macOS 的 locale 以后可以不用再单独设置 charset。例如:
|
||||||
|
|
||||||
```
|
```
|
||||||
locale zh_CN.UTF-8
|
locale zh_CN.UTF-8
|
||||||
|
@ -279,7 +279,7 @@ charset CP936
|
||||||
|
|
||||||
如果需要调整字符编码,请查阅当前操作系统使用的编码,并在配置文件中正确设置。
|
如果需要调整字符编码,请查阅当前操作系统使用的编码,并在配置文件中正确设置。
|
||||||
|
|
||||||
在 Linux 系统中,如果用户同时设置了 locale 和字符集编码 charset,并且 locale 和 charset 的不一致,后设置的值将覆盖前面设置的值。
|
在 Linux/macOS 中,如果用户同时设置了 locale 和字符集编码 charset,并且 locale 和 charset 不一致,后设置的值将覆盖前面设置的值。
|
||||||
|
|
||||||
```
|
```
|
||||||
locale zh_CN.UTF-8
|
locale zh_CN.UTF-8
|
||||||
|
|