Merge branch '3.0' into feat/sangshuduo/TD-14141-update-taostools-for3.0

This commit is contained in:
Shuduo Sang 2022-08-26 14:09:14 +08:00
commit 4e8a187f7a
241 changed files with 6227 additions and 5894 deletions


@ -303,14 +303,14 @@ Query OK, 2 row(s) in set (0.001700s)
TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, RESTful and others, to facilitate rapid application development:
- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp)
- [Python](https://docs.taosdata.com/reference/connector/python/)
- [Go](https://docs.taosdata.com/reference/connector/go/)
- [Node.js](https://docs.taosdata.com/reference/connector/node/)
- [Rust](https://docs.taosdata.com/reference/connector/rust/)
- [C#](https://docs.taosdata.com/reference/connector/csharp/)
- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
- [Java](https://docs.taosdata.com/connector/java/)
- [C/C++](https://docs.taosdata.com/connector/cpp/)
- [Python](https://docs.taosdata.com/connector/python/)
- [Go](https://docs.taosdata.com/connector/go/)
- [Node.js](https://docs.taosdata.com/connector/node/)
- [Rust](https://docs.taosdata.com/connector/rust/)
- [C#](https://docs.taosdata.com/connector/csharp/)
- [RESTful API](https://docs.taosdata.com/connector/rest-api/)
# Become a Community Contributor


@ -19,29 +19,29 @@ English | [简体中文](README-CN.md) | We are hiring, check [here](https://tde
# What is TDengine
TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/what-is-a-time-series-database/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
- **High-Performance**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
- **Simplified Solution**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
- **Cloud Native**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
- **Ease of Use**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
- **[Ease of Use](https://docs.tdengine.com/get-started/docker/)**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
- **Easy Data Analytics**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- **Open Source**: TDengine's core modules, including its cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including its cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
# Documentation
For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.taosdata.com) ([TDengine 文档](https://docs.taosdata.com))
For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
# Building
At the moment, TDengine server supports running on Linux and Windows systems. Any OS application can also choose the RESTful interface of taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
At the moment, TDengine server supports running on Linux and Windows systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubernetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source.
You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were once part of TDengine. By default, building TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to compile them together with TDengine.
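A build that includes these tools might look as follows; this is a minimal sketch, and generator options or paths may differ on your platform:

```bash
# Clone TDengine and build it together with taosBenchmark and taosdump
git clone https://github.com/taosdata/TDengine.git
cd TDengine
mkdir -p debug && cd debug
cmake .. -DBUILD_TOOLS=true   # include taosTools in the build
make
```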
@ -256,6 +256,7 @@ After building successfully, TDengine can be installed by:
nmake install
```
<!--
## On macOS platform
After building successfully, TDengine can be installed by:
@ -263,6 +264,7 @@ After building successfully, TDengine can be installed by:
```bash
sudo make install
```
-->
## Quick Run
@ -304,14 +306,14 @@ Query OK, 2 row(s) in set (0.001700s)
TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
- [Python](https://docs.taosdata.com/reference/connector/python/)
- [Go](https://docs.taosdata.com/reference/connector/go/)
- [Node.js](https://docs.taosdata.com/reference/connector/node/)
- [Rust](https://docs.taosdata.com/reference/connector/rust/)
- [C#](https://docs.taosdata.com/reference/connector/csharp/)
- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
- [Java](https://docs.tdengine.com/reference/connector/java/)
- [C/C++](https://docs.tdengine.com/reference/connector/cpp/)
- [Python](https://docs.tdengine.com/reference/connector/python/)
- [Go](https://docs.tdengine.com/reference/connector/go/)
- [Node.js](https://docs.tdengine.com/reference/connector/node/)
- [Rust](https://docs.tdengine.com/reference/connector/rust/)
- [C#](https://docs.tdengine.com/reference/connector/csharp/)
- [RESTful API](https://docs.tdengine.com/reference/rest-api/)
# Contribute to TDengine


@ -2,8 +2,6 @@ cmake_minimum_required(VERSION 3.0)
set(CMAKE_VERBOSE_MAKEFILE OFF)
SET(BUILD_SHARED_LIBS "OFF")
#set output directory
SET(LIBRARY_OUTPUT_PATH ${PROJECT_BINARY_DIR}/build/lib)
SET(EXECUTABLE_OUTPUT_PATH ${PROJECT_BINARY_DIR}/build/bin)
@ -103,6 +101,9 @@ IF (TD_WINDOWS)
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COMMON_FLAGS}")
ELSE ()
IF (${TD_DARWIN})
set(CMAKE_MACOSX_RPATH 0)
ENDIF ()
IF (${COVER} MATCHES "true")
MESSAGE(STATUS "Test coverage mode, add extra flags")
SET(GCC_COVERAGE_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage")


@ -1,3 +1,19 @@
SET(PREPARE_ENV_CMD "prepare_env_cmd")
SET(PREPARE_ENV_TARGET "prepare_env_target")
ADD_CUSTOM_COMMAND(OUTPUT ${PREPARE_ENV_CMD}
POST_BUILD
COMMAND echo "make test directory"
DEPENDS taosd
COMMAND ${CMAKE_COMMAND} -E make_directory ${TD_TESTS_OUTPUT_DIR}/cfg/
COMMAND ${CMAKE_COMMAND} -E make_directory ${TD_TESTS_OUTPUT_DIR}/log/
COMMAND ${CMAKE_COMMAND} -E make_directory ${TD_TESTS_OUTPUT_DIR}/data/
COMMAND ${CMAKE_COMMAND} -E echo dataDir ${TD_TESTS_OUTPUT_DIR}/data > ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
COMMAND ${CMAKE_COMMAND} -E echo logDir ${TD_TESTS_OUTPUT_DIR}/log >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
COMMAND ${CMAKE_COMMAND} -E echo charset UTF-8 >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
COMMAND ${CMAKE_COMMAND} -E echo monitor 0 >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
COMMENT "prepare taosd environment")
ADD_CUSTOM_TARGET(${PREPARE_ENV_TARGET} ALL WORKING_DIRECTORY ${TD_EXECUTABLE_OUTPUT_PATH} DEPENDS ${PREPARE_ENV_CMD})
IF (TD_LINUX)
SET(TD_MAKE_INSTALL_SH "${TD_SOURCE_DIR}/packaging/tools/make_install.sh")
INSTALL(CODE "MESSAGE(\"make install script: ${TD_MAKE_INSTALL_SH}\")")


@ -90,6 +90,12 @@ ELSE ()
ENDIF ()
ENDIF ()
option(
BUILD_SHARED_LIBS
""
OFF
)
option(
RUST_BINDINGS
"If build with rust-bindings"


@ -104,15 +104,15 @@ Each row contains the device ID, time stamp, collected metrics (current, voltage
## Metric
Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc., which change with time, and the data type can be integer, float, Boolean, or strings. As time goes by, the amount of collected metric data stored increases.
Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc., which change with time, and the data type can be integer, float, Boolean, or strings. As time goes by, the amount of collected metric data stored increases. In the smart meters example, current, voltage and phase are the metrics.
## Label/Tag
Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time.
Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time. In the meters example, `location` and `groupid` are the tags.
## Data Collection Point
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points.
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. In the smart meters example, d1001, d1002, d1003, and d1004 are the data collection points.
## Table
@ -137,7 +137,7 @@ The design of one table for one data collection point will require a huge number
STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**.
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. In the smart meters example, we can create a super table named `meters`.
## Subtable
@ -156,7 +156,9 @@ The relationship between a STable and the subtables created based on this STable
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which reduces the number of data sets to be scanned which in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type.
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. In the smart meters example, we can create subtables like d1001, d1002, d1003, and d1004 under super table meters.
To better understand the data model using metrics, tags, supertables and subtables, please refer to the diagram below, which demonstrates the data model of the smart meters example. ![Meters Data Model Diagram](./supertable.webp)
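As a sketch of how this model could be expressed in SQL (column and tag definitions follow the smart meters example; adjust them to your own schema):

```bash
# Create the super table, then a subtable for one data collection point
taos -s "CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupid INT);"
taos -s "CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);"
```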
## Database

[New binary file: supertable.webp (33 KiB), the Meters Data Model Diagram referenced above]


@ -16,7 +16,7 @@ import CDemo from "./_sub_c.mdx";
TDengine provides data subscription and consumption interfaces similar to those of message queue products. These interfaces make it easier for applications to obtain data written to TDengine in real time and to process data in the order that events occurred. This simplifies your time-series data processing systems and reduces your costs because it is no longer necessary to deploy a message queue product such as Kafka.
To use TDengine data subscription, you define topics like in Kafka. However, a topic in TDengine is based on query conditions for an existing supertable, standard table, or subtable - in other words, a SELECT statement. You can use SQL to filter data by tag, table name, column, or expression and then perform a scalar function or user-defined function on the data. Aggregate functions are not supported. This gives TDengine data subscription more flexibility than similar products. The granularity of data can be controlled on demand by applications, while filtering and preprocessing are handled by TDengine instead of the application layer. This implementation reduces the amount of data transmitted and the complexity of applications.
To use TDengine data subscription, you define topics like in Kafka. However, a topic in TDengine is based on query conditions for an existing supertable, table, or subtable - in other words, a SELECT statement. You can use SQL to filter data by tag, table name, column, or expression and then perform a scalar function or user-defined function on the data. Aggregate functions are not supported. This gives TDengine data subscription more flexibility than similar products. The granularity of data can be controlled on demand by applications, while filtering and preprocessing are handled by TDengine instead of the application layer. This implementation reduces the amount of data transmitted and the complexity of applications.
By subscribing to a topic, a consumer can obtain the latest data in that topic in real time. Multiple consumers can be formed into a consumer group that consumes messages together. Consumer groups enable faster speed through multi-threaded, distributed data consumption. Note that consumers in different groups that are subscribed to the same topic do not consume messages together. A single consumer can subscribe to multiple topics. If the data in a supertable is sharded across multiple vnodes, consumer groups can consume it much more efficiently than single consumers. TDengine also includes an acknowledgement mechanism that ensures at-least-once delivery in complicated environments where machines may crash or restart.
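As an illustration, a topic over the smart meters supertable from earlier examples might be created like this (a sketch; the topic name is hypothetical):

```bash
# A topic is defined by a SELECT statement; subscribed consumers receive matching rows
taos -s "CREATE TOPIC topic_meters AS SELECT ts, current, voltage FROM meters WHERE voltage > 200;"
```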


@ -20,11 +20,11 @@ In theory, larger cache sizes are always better. However, at a certain point, it
## Read Cache
When you create a database, you can configure whether the latest data from every subtable is cached. To do so, set the *cachelast* parameter as follows:
- 0: Caching is disabled.
- 1: The latest row of data in each subtable is cached. This option significantly improves the performance of the `LAST_ROW` function.
- 2: The latest non-null value in each column of each subtable is cached. This option significantly improves the performance of the `LAST` function in normal situations, such as WHERE, ORDER BY, GROUP BY, and INTERVAL statements.
- 3: Rows and columns are both cached. This option is equivalent to simultaneously enabling options 1 and 2.
When you create a database, you can configure whether the latest data from every subtable is cached. To do so, set the *cachemodel* parameter as follows:
- none: Caching is disabled.
- last_row: The latest row of data in each subtable is cached. This option significantly improves the performance of the `LAST_ROW` function.
- last_value: The latest non-null value in each column of each subtable is cached. This option significantly improves the performance of the `LAST` function in normal situations, such as WHERE, ORDER BY, GROUP BY, and INTERVAL statements.
- both: Rows and columns are both cached. This option is equivalent to simultaneously enabling the last_row and last_value options.
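For example, caching could be enabled when the database is created, roughly as follows (a sketch; the database name is hypothetical):

```bash
# Cache both the latest row and the latest non-null column values per subtable
taos -s "CREATE DATABASE power CACHEMODEL 'both';"
```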
## Metadata Cache


@ -170,71 +170,21 @@ taoscfg:
# number of replications, for cluster only
TAOS_REPLICA: "1"
# number of days per DB file
# TAOS_DAYS: "10"
# number of days to keep DB file, default is 10 years.
#TAOS_KEEP: "3650"
# cache block size (Mbyte)
#TAOS_CACHE: "16"
# number of cache blocks per vnode
#TAOS_BLOCKS: "6"
# minimum rows of records in file block
#TAOS_MIN_ROWS: "100"
# maximum rows of records in file block
#TAOS_MAX_ROWS: "4096"
#
# TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core
#TAOS_NUM_OF_THREADS_PER_CORE: "1.0"
# TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
#TAOS_NUM_OF_RPC_THREADS: "2"
#
# TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
#TAOS_NUM_OF_COMMIT_THREADS: "4"
#
# TAOS_RATIO_OF_QUERY_CORES:
# the proportion of total CPU cores available for query processing
# 2.0: the query threads will be set to double of the CPU cores.
# 1.0: all CPU cores are available for query processing [default].
# 0.5: only half of the CPU cores are available for query.
# 0.0: only one core available.
#TAOS_RATIO_OF_QUERY_CORES: "1.0"
#
# TAOS_KEEP_COLUMN_NAME:
# the last_row/first/last aggregator will not change the original column name in the result fields
#TAOS_KEEP_COLUMN_NAME: "0"
# enable/disable backuping vnode directory when removing vnode
#TAOS_VNODE_BAK: "1"
# enable/disable installation / usage report
#TAOS_TELEMETRY_REPORTING: "1"
# enable/disable load balancing
#TAOS_BALANCE: "1"
# max timer control blocks
#TAOS_MAX_TMR_CTRL: "512"
# time interval of system monitor, seconds
#TAOS_MONITOR_INTERVAL: "30"
# number of seconds allowed for a dnode to be offline, for cluster only
#TAOS_OFFLINE_THRESHOLD: "8640000"
# RPC re-try timer, millisecond
#TAOS_RPC_TIMER: "1000"
# RPC maximum time for ack, seconds.
#TAOS_RPC_MAX_TIME: "600"
# time interval of dnode status reporting to mnode, seconds, for cluster only
#TAOS_STATUS_INTERVAL: "1"
@ -245,37 +195,7 @@ taoscfg:
#TAOS_MIN_SLIDING_TIME: "10"
# minimum time window, milli-second
#TAOS_MIN_INTERVAL_TIME: "10"
# maximum delay before launching a stream computation, milli-second
#TAOS_MAX_STREAM_COMP_DELAY: "20000"
# maximum delay before launching a stream computation for the first time, milli-second
#TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000"
# retry delay when a stream computation fails, milli-second
#TAOS_RETRY_STREAM_COMP_DELAY: "10"
# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9
#TAOS_STREAM_COMP_DELAY_RATIO: "0.1"
# max number of vgroups per db, 0 means configured automatically
#TAOS_MAX_VGROUPS_PER_DB: "0"
# max number of tables per vnode
#TAOS_MAX_TABLES_PER_VNODE: "1000000"
# the number of acknowledgments required for successful data writing
#TAOS_QUORUM: "1"
# enable/disable compression
#TAOS_COMP: "2"
# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
#TAOS_WAL_LEVEL: "1"
# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away
#TAOS_FSYNC: "3000"
#TAOS_MIN_INTERVAL_TIME: "1"
# the compressed rpc message, option:
# -1 (no compression)
@ -283,17 +203,8 @@ taoscfg:
# > 0 (rpc message body which larger than this value will be compressed)
#TAOS_COMPRESS_MSG_SIZE: "-1"
# max length of an SQL
#TAOS_MAX_SQL_LENGTH: "1048576"
# the maximum number of records allowed for super table time sorting
#TAOS_MAX_NUM_OF_ORDERED_RES: "100000"
# max number of connections allowed in dnode
#TAOS_MAX_SHELL_CONNS: "5000"
# max number of connections allowed in client
#TAOS_MAX_CONNECTIONS: "5000"
#TAOS_MAX_SHELL_CONNS: "50000"
# stop writing logs when the disk size of the log folder is less than this value
#TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
@ -313,21 +224,8 @@ taoscfg:
# enable/disable system monitor
#TAOS_MONITOR: "1"
# enable/disable recording the SQL statements via restful interface
#TAOS_HTTP_ENABLE_RECORD_SQL: "0"
# number of threads used to process http requests
#TAOS_HTTP_MAX_THREADS: "2"
# maximum number of rows returned by the restful interface
#TAOS_RESTFUL_ROW_LIMIT: "10240"
# The following parameter is used to limit the maximum number of lines in log files.
# max number of lines per log filters
# numOfLogLines 10000000
# enable/disable async log
#TAOS_ASYNC_LOG: "0"
#TAOS_ASYNC_LOG: "1"
#
# time of keeping log files, days
@ -344,25 +242,8 @@ taoscfg:
# debug flag for all log type, take effect when non-zero value
#TAOS_DEBUG_FLAG: "143"
# enable/disable recording the SQL in taos client
#TAOS_ENABLE_RECORD_SQL: "0"
# generate core file when service crash
#TAOS_ENABLE_CORE_FILE: "1"
# maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
#TAOS_MAX_BINARY_DISPLAY_WIDTH: "30"
# enable/disable stream (continuous query)
#TAOS_STREAM: "1"
# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode
#TAOS_RETRIEVE_BLOCKING_MODEL: "0"
# the maximum allowed query buffer size in MB during query processing for each data node
# -1 no limit (default)
# 0 no query allowed, queries are disabled
#TAOS_QUERY_BUFFER_SIZE: "-1"
```
## Scaling Out


@ -11,7 +11,7 @@ When using TDengine to store and query data, the most important part of the data
- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
- Internal function `now` can be used to get the current timestamp on the client side
- The current timestamp of the client side is applied when `now` is used to insert data
- Epoch Time: timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from 1970-01-01 00:00:00.000 (UTC/GMT)
- Epoch Time: timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00.
- Add/subtract operations can be carried out on timestamps. For example `now-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.
Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.
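A minimal sketch of such a statement, with `db_name` as a placeholder:

```bash
taos -s "CREATE DATABASE db_name PRECISION 'ns';"
```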


@ -3,7 +3,7 @@ sidebar_label: SHOW Statement
title: SHOW Statement for Metadata
---
In addition to running SELECT statements on INFORMATION_SCHEMA, you can also use SHOW to obtain system metadata, information, and status.
The `SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`.
## SHOW ACCOUNTS


@ -15,6 +15,27 @@ About details of installing TDengine, please refer to [Installation Guide](../../
## Uninstall
<Tabs>
<TabItem label="Uninstall apt-get" value="aptremove">
Apt-get package of TDengine can be uninstalled as below:
```bash
$ sudo apt-get remove tdengine
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
tdengine
0 upgraded, 0 newly installed, 1 to remove and 18 not upgraded.
After this operation, 68.3 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 135625 files and directories currently installed.)
Removing tdengine (3.0.0.0) ...
TDengine is removed successfully!
```
</TabItem>
<TabItem label="Uninstall Deb" value="debuninst">
Deb package of TDengine can be uninstalled as below:


@ -1,40 +1,32 @@
---
sidebar_label: Resource Planning
title: Resource Planning
---
It is important to plan computing and storage resources if using TDengine to build an IoT, time-series or Big Data platform. This chapter describes how to plan the CPU, memory and disk resources required.
## Memory Requirement of Server Side
## Server Memory Requirements
By default, the number of vgroups created for each database is the same as the number of CPU cores. This can be configured by the parameter `maxVgroupsPerDb`. Each vnode in a vgroup stores one replica. Each vnode consumes a fixed amount of memory, i.e. `blocks` \* `cache`. In addition, some memory is required for tag values associated with each table. A fixed amount of memory is required for each cluster. So, the memory required for each DB can be calculated using the formula below:
Each database creates a fixed number of vgroups. This number is 2 by default and can be configured with the `vgroups` parameter. The number of replicas can be controlled with the `replica` parameter. Each replica requires one vnode per vgroup. Altogether, the memory required by each database depends on the following configuration options:
- vgroups
- replica
- buffer
- pages
- pagesize
- cachesize
For more information, see [Database](../../taos-sql/database).
The memory required by a database is therefore greater than or equal to:
```
Database Memory Size = maxVgroupsPerDb * replica * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
vgroups * replica * (buffer + pages * pagesize + cachesize)
```
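As a rough worked example, assume one database with the default `vgroups` of 2 and `replica` of 1, and hypothetical per-vnode settings of buffer = 96 MB, pages = 256, pagesize = 4 KB, and cachesize = 1 MB:

```
2 * 1 * (96 MB + 256 * 4 KB + 1 MB) = 2 * 98 MB = 196 MB
```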
For example, assuming the default value of `maxVgroupPerDB` is 64, the default value of `cache` is 16M, the default value of `blocks` is 6, there are 100,000 tables in a DB, the replica number is 1, total length of tag values is 256 bytes, the total memory required for this DB is: 64 \* 1 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 6792M.
However, note that this requirement is spread over all dnodes in the cluster, not on a single physical machine. The physical servers that run dnodes meet the requirement together. If a cluster has multiple databases, the memory required increases accordingly. In complex environments where dnodes were added after initial deployment in response to increasing resource requirements, load may not be balanced among the original dnodes and newer dnodes. In this situation, the actual status of your dnodes is more important than theoretical calculations.
In the real operation of TDengine, we are more concerned about the memory used by each TDengine server process `taosd`.
```
taosd_memory = vnode_memory + mnode_memory + query_memory
```
In the above formula:
1. "vnode_memory" of a `taosd` process is the memory used by all vnodes hosted by this `taosd` process. It can be roughly calculated by firstly adding up the total memory of all DBs whose memory usage can be derived according to the formula for Database Memory Size, mentioned above, then dividing by number of dnodes and multiplying the number of replicas.
```
vnode_memory = (sum(Database Memory Size) / number_of_dnodes) * replica
```
2. "mnode_memory" of a `taosd` process is the memory consumed by a mnode. If there is one (and only one) mnode hosted in a `taosd` process, the memory consumed by "mnode" is "0.2KB \* the total number of tables in the cluster".
3. "query_memory" is the memory used when processing query requests. Each ongoing query consumes at least "0.2 KB \* total number of involved tables".
Please note that the above formulas can only be used to estimate the minimum memory requirement, instead of maximum memory usage. In a real production environment, it's better to reserve some redundancy beyond the estimated minimum memory requirement. If memory is abundant, it's suggested to increase the value of parameter `blocks` to speed up data insertion and data query.
## Memory Requirement of Client Side
## Client Memory Requirements
For the client programs using TDengine client driver `taosc` to connect to the server side, there is a memory requirement as well.
@ -56,10 +48,10 @@ So, at least 3GB needs to be reserved for such a client.
The CPU resources required depend on two aspects:
- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests in one second, while each insertion request can have multiple rows. The difference in computing resource consumed, between inserting 1 row at a time, and inserting 10 rows at a time is very small. So, the more rows that can be inserted at one time, the higher the efficiency. Inserting in batch also imposes requirements on the client side which needs to cache rows to insert in batch once the number of cached rows reaches a threshold.
- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests in one second, while each insertion request can have multiple rows. The difference in computing resource consumed, between inserting 1 row at a time, and inserting 10 rows at a time is very small. So, the more rows that can be inserted at one time, the higher the efficiency. If each insert request contains more than 200 records, a single core can process more than 1 million records per second. Inserting in batch also imposes requirements on the client side which needs to cache rows to insert in batch once the number of cached rows reaches a threshold (see the sketch after this section).
- **Data Query** High efficiency query is provided in TDengine, but it's hard to estimate the CPU resource required because the queries used in different use cases and the frequency of queries vary significantly. It can only be verified with the query statements, query frequency, data size to be queried, and other requirements provided by users.
In short, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. In real operation, it's suggested to control CPU usage below 50%. If this threshold is exceeded, it's a reminder for system operator to add more nodes in the cluster to expand resources.
In short, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. If possible, ensure that CPU usage remains below 50%. If this threshold is exceeded, it's a reminder for system operator to add more nodes in the cluster to expand resources.
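To illustrate the batch-insertion point above, several rows and even several subtables can be written in one request; this is a sketch using hypothetical subtables from the smart meters example:

```bash
# One INSERT carrying multiple rows costs little more than a single-row insert
taos -s "INSERT INTO d1001 VALUES ('2022-08-26 10:00:00.000', 10.3, 219, 0.31) ('2022-08-26 10:00:01.000', 10.2, 220, 0.23) \
         d1002 VALUES ('2022-08-26 10:00:00.000', 10.3, 218, 0.25);"
```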
## Disk Requirement
@ -77,6 +69,6 @@ To increase performance, multiple disks can be setup for parallel data reading o
## Number of Hosts
A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulae mentioned previously. Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily.
A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulae mentioned previously. If the number of data replicas is not 1, the required resources are multiplied by the number of replicas.
**Quick Estimation for CPU, Memory and Disk** Please refer to [Resource Estimate](https://www.taosdata.com/config/config.html).
Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily.


@ -1,6 +1,5 @@
---
sidebar_label: Fault Tolerance
title: Fault Tolerance & Disaster Recovery
title: Fault Tolerance and Disaster Recovery
---
## Fault Tolerance
@ -11,22 +10,21 @@ When a data block is received by TDengine, the original data block is first writ
There are 2 configuration parameters related to WAL:
- walLevel
- 0: wal is disabled
- 1: wal is enabled without fsync
- 2: wal is enabled with fsync
- fsync: This parameter is only valid when walLevel is set to 2. It specifies the interval, in milliseconds, of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written.
- wal_level: Specifies the WAL level. 1 indicates that WAL is enabled but fsync is disabled. 2 indicates that WAL and fsync are both enabled. The default value is 1.
- wal_fsync_period: This parameter is only valid when wal_level is set to 2. It specifies the interval, in milliseconds, of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written.
To achieve absolutely no data loss, walLevel should be set to 2 and fsync should be set to 1. There is a performance penalty to the data ingestion rate. However, if the concurrent data insertion threads on the client side can reach a big enough number, for example 50, the data ingestion performance will be still good enough. Our verification shows that the drop is only 30% when fsync is set to 3,000 milliseconds.
To achieve absolutely no data loss, set wal_level to 2 and wal_fsync_period to 0. There is a performance penalty to the data ingestion rate. However, if the concurrent data insertion threads on the client side can reach a big enough number, for example 50, the data ingestion performance will be still good enough. Our verification shows that the drop is only 30% when wal_fsync_period is set to 3000 milliseconds.
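For example, a database configured for no data loss might be created as follows (a sketch; the database name is hypothetical):

```bash
# wal_level 2 enables fsync; wal_fsync_period 0 invokes fsync on every WAL write
taos -s "CREATE DATABASE power WAL_LEVEL 2 WAL_FSYNC_PERIOD 0;"
```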
## Disaster Recovery
TDengine uses replication to provide high availability and disaster recovery capability.
TDengine uses replication to provide high availability.
A TDengine cluster is managed by mnode. To ensure the high availability of mnode, multiple replicas can be configured by the system parameter `numOfMnodes`. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
A TDengine cluster is managed by mnodes. You can configure up to three mnodes to ensure high availability. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, parameter `replica` is used to configure the number of replications. To achieve high availability, `replica` needs to be higher than 1.
The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` is used to specify the number of replicas. To achieve high availability, set `replica` to 3.
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create a table.
As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
Alternatively, you can use taosX to synchronize the data from one TDengine cluster to another cluster in a remote location. For more information, see [taosX](../../reference/taosX).


@ -13,110 +13,59 @@ Diagnostic steps
1. If the port range to be diagnosed is being occupied by a `taosd` server process, please stop `taosd` first.
2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by `-P` parameter with the role of "server".
3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send a testing package to the specified server and port.
-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000. Please note that the package length must be the same in the above 2 commands executed on server side and client side respectively.
-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000.
Please note that the package length must be the same in the above 2 commands executed on server side and client side respectively.
Output of the server side for the example is below:
```bash
# taos -n server -P 6000
12/21 14:50:13.522509 0x7f536f455200 UTL work as server, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000
12/21 14:50:13.522659 0x7f5352242700 UTL TCP server at port:6000 is listening
12/21 14:50:13.522727 0x7f5351240700 UTL TCP server at port:6001 is listening
# taos -n server -P 6030 -l 1000
network test server is initialized, port:6030
request is received, size:1000
request is received, size:1000
...
...
...
12/21 14:50:13.523954 0x7f5342fed700 UTL TCP server at port:6011 is listening
12/21 14:50:13.523989 0x7f53437ee700 UTL UDP server at port:6010 is listening
12/21 14:50:13.524019 0x7f53427ec700 UTL UDP server at port:6011 is listening
12/21 14:50:22.192849 0x7f5352242700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6000
12/21 14:50:22.192993 0x7f5352242700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6000
12/21 14:50:22.237082 0x7f5351a41700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6000
12/21 14:50:22.237203 0x7f5351a41700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6000
12/21 14:50:22.237450 0x7f5351240700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6001
12/21 14:50:22.237576 0x7f5351240700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6001
12/21 14:50:22.281038 0x7f5350a3f700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6001
12/21 14:50:22.281141 0x7f5350a3f700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6001
...
...
...
12/21 14:50:22.677443 0x7f5342fed700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6011
12/21 14:50:22.677576 0x7f5342fed700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6011
12/21 14:50:22.721144 0x7f53427ec700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6011
12/21 14:50:22.721261 0x7f53427ec700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6011
request is received, size:1000
request is received, size:1000
```
Output of the client side for the example is below:
```bash
# taos -n client -h 172.27.0.7 -P 6000
12/21 14:50:22.192434 0x7fc95d859200 UTL work as client, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000
taos -n client -h v3s2 -P 6030 -l 1000
network test client is initialized, the server is v3s2:6030
request is sent, size:1000
response is received, size:1000
request is sent, size:1000
response is received, size:1000
...
...
...
request is sent, size:1000
response is received, size:1000
request is sent, size:1000
response is received, size:1000
12/21 14:50:22.192472 0x7fc95d859200 UTL server ip:172.27.0.7 is resolved from host:172.27.0.7
12/21 14:50:22.236869 0x7fc95d859200 UTL successed to test TCP port:6000
12/21 14:50:22.237215 0x7fc95d859200 UTL successed to test UDP port:6000
...
...
...
12/21 14:50:22.676891 0x7fc95d859200 UTL successed to test TCP port:6010
12/21 14:50:22.677240 0x7fc95d859200 UTL successed to test UDP port:6010
12/21 14:50:22.720893 0x7fc95d859200 UTL successed to test TCP port:6011
12/21 14:50:22.721274 0x7fc95d859200 UTL successed to test UDP port:6011
total succ: 100/100 cost: 16.23 ms speed: 5.87 MB/s
```
The output needs to be checked carefully for the system operator to find the root cause and resolve the problem.
## Startup Status and RPC Diagnostic
`taos -n startup -h <fqdn of server>` can be used to check the startup status of a `taosd` process. This is a common task which should be performed by a system operator, especially in the case of a cluster, to determine whether `taosd` has been started successfully.
`taos -n rpc -h <fqdn of server>` can be used to check whether the port of a started `taosd` can be accessed or not. If `taosd` process doesn't respond or is working abnormally, this command can be used to initiate a rpc communication with the specified fqdn to determine whether it's a network problem or whether `taosd` is abnormal.
## Sync and Arbitrator Diagnostic
```bash
taos -n sync -P 6040 -h <fqdn of server>
taos -n sync -P 6042 -h <fqdn of server>
```
The above commands can be executed in a Linux shell to check whether the port for sync is working well and whether the sync module on the server side is working well. Additionally, `-P 6042` is used to check whether the arbitrator is configured properly and is working well.
## Network Speed Diagnostic
`taos -n speed -h <fqdn of server> -P 6030 -N 10 -l 10000000 -S TCP`
From version 2.2.0.0 onwards, the above command can be executed in a Linux shell to test network speed. The command sends uncompressed packages to a running `taosd` server process or a simulated server process started by `taos -n server` to test the network speed. The parameters that can be used when testing network speed are as below:
-n: When set to "speed", it means testing network speed.
-h: The FQDN or IP of the server process to be connected to; if not set, the FQDN configured in `taos.cfg` is used.
-P: The port of the server process to connect to, the default value is 6030.
-N: The number of packages that will be sent in the test, range is [1,10000], default value is 100.
-l: The size of each package in bytes, range is [1024, 1024 \* 1024 \* 1024], default value is 1024.
-S: The type of network packages to send, can be either TCP or UDP, default value is TCP.
## FQDN Resolution Diagnostic
`taos -n fqdn -h <fqdn of server>`
From version 2.2.0.0 onward, the above command can be executed in a Linux shell to test the resolution speed of FQDN. It can be used to try to resolve a FQDN to an IP address and record the time spent in this process. The parameters that can be used for this purpose are as below:
-n: When set to "fqdn", it means testing the speed of resolving FQDN.
-h: The FQDN to be resolved. If not set, the `FQDN` parameter in `taos.cfg` is used by default.
## Server Log
The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively.
Once this parameter is set to 135 or 143, the log file grows very quickly especially when there is a huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily and so on the server side, important information is stored in a different place from other logs.
- The log at level of INFO, WARNING and ERROR is stored in `taosinfo` so that it is easy to find important information
- The log at level of DEBUG (135) and TRACE (143) and other information not handled by `taosinfo` are stored in `taosdlog`
Once this parameter is set to 135 or 143, the log file grows very quickly especially when there is a huge volume of data insertion and data query requests. Ensure that the disk drive on which logs are stored has sufficient space.
## Client Log
An independent log file, named as "taoslog+<seq num\>" is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131 and only logs at level of INFO/ERROR/WARNING are recorded. As stated above, for debugging and tracing, it needs to be changed to 135 or 143 respectively, so that logs at DEBUG or TRACE level can be recorded.
An independent log file, named as "taoslog+<seq num\>" is generated for each client program, i.e. a client process. The parameter `debugFlag` is used to control the log level. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively.
The default value of `debugFlag` is also 131 and only logs at level of INFO/ERROR/WARNING are recorded. As stated above, for debugging and tracing, it needs to be changed to 135 or 143 respectively, so that logs at DEBUG or TRACE level can be recorded.
The maximum length of a single log file is controlled by parameter `numOfLogLines` and only 2 log files are kept for each `taosd` server process.
Log files are written in an async way to minimize the workload on disk, but the trade off for performance is that a few log lines may be lost in some extreme conditions.
Log files are written in an async way to minimize the workload on disk, but the trade off for performance is that a few log lines may be lost in some extreme conditions. You can configure asynclog to 0 when needed for troubleshooting purposes to ensure that no log information is lost.
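On Linux these parameters could be adjusted roughly as follows for troubleshooting; this sketch assumes a default installation with the configuration file at /etc/taos/taos.cfg:

```bash
# Raise the server log level to DEBUG and switch to synchronous logging
echo "debugFlag 135" | sudo tee -a /etc/taos/taos.cfg
echo "asyncLog 0" | sudo tee -a /etc/taos/taos.cfg
sudo systemctl restart taosd   # restart so the changes take effect
```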


@ -10,7 +10,7 @@ One difference from the native connector is that the REST interface is stateless
## Installation
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol.
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol. The REST interface is provided by [taosAdapter](../taosadapter). To use the REST interface, you must ensure that `taosAdapter` is running properly.
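A quick way to check that taosAdapter is reachable is to send a SQL statement over HTTP (assuming the default port 6041 and the default root credentials):

```bash
# taosAdapter executes the statement and returns the result as JSON
curl -u root:taosdata -d "show databases;" http://localhost:6041/rest/sql
```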
## Verification


@ -1,5 +1,4 @@
---
sidebar_position: 1
sidebar_label: C/C++
title: C/C++ Connector
---

View File

@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 2
sidebar_label: Java
title: TDengine Java Connector
description: The TDengine Java Connector is implemented on the standard JDBC API and provides native and REST connectors.


@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 4
sidebar_label: Go
title: TDengine Go Connector
---


@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 5
sidebar_label: Rust
title: TDengine Rust Connector
---


@ -1,5 +1,4 @@
---
sidebar_position: 3
sidebar_label: Python
title: TDengine Python Connector
description: "taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. tasopy wraps both the native and REST interfaces of TDengine, corresponding to the two submodules of tasopy: taos and taosrest. In addition to wrapping the native and REST interfaces, taospy also provides a programming interface that conforms to the Python Data Access Specification (PEP 249), making it easy to integrate taospy with many third-party tools, such as SQLAlchemy and pandas."


@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 6
sidebar_label: Node.js
title: TDengine Node.js Connector
---


@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 7
sidebar_label: C#
title: C# Connector
---


@ -1,6 +1,5 @@
---
sidebar_position: 1
sidebar_label: PHP (community contribution)
sidebar_label: PHP
title: PHP Connector
---

File diff suppressed because it is too large.


@ -3,7 +3,8 @@ title: Schemaless Writing
description: "The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data, as it is written to the interface."
---
In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. To provide the flexibility needed in such cases and in a rapidly changing IoT landscape, TDengine provides a series of interfaces for the schemaless writing method. These interfaces eliminate the need to create super tables and subtables in advance by automatically creating the storage structure corresponding to the data as the data is written to the interface. When necessary, schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. Schemaless writing automatically creates storage structures for your data as it is being written to TDengine, so that you do not need to create supertables in advance. When necessary, schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
The schemaless writing method creates super tables and their corresponding subtables. These are completely indistinguishable from the super tables and subtables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and they lack readability.
@ -19,12 +20,12 @@ With the following formatting conventions, schemaless writing uses a single stri
measurement,tag_set field_set timestamp
```
where :
where:
- measurement will be used as the data table name. It will be separated from tag_set by a comma.
- tag_set will be used as tag data in the format `<tag_key>=<tag_value>,<tag_key>=<tag_value>`, i.e. multiple tags' data can be separated by a comma. It is separated from field_set by space.
- field_set will be used as normal column data in the format of `<field_key>=<field_value>,<field_key>=<field_value>`, again using a comma to separate multiple normal columns of data. It is separated from the timestamp by a space.
- The timestamp is the primary key corresponding to the data in this row.
- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`. Enter a space between `tag_set` and `field_set`.
- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`. Enter a space between `field_set` and `timestamp`.
- `timestamp` is the primary key timestamp corresponding to this row of data.
All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes (").
@ -37,16 +38,18 @@ In the schemaless writing data line protocol, each data item in the field_set ne
| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
| -------- | -------- | ------------ | -------------- |
| 1 | none or f64 | double | 8 |
| 2 | f32 | float | 4 |
| 3 | i8/u8 | TinyInt/UTinyInt | 1 |
| 4 | i16/u16 | SmallInt/USmallInt | 2 |
| 5 | i32/u32 | Int/UInt | 4 |
| 6 | i64/i/u64/u | Bigint/Bigint/UBigint/UBigint | 8 |
| 1 | None or f64 | double | 8 |
| 2 | f32 | float | 4 |
| 3 | i8/u8 | TinyInt/UTinyInt | 1 |
| 4 | i16/u16 | SmallInt/USmallInt | 2 |
| 5 | i32/u32 | Int/UInt | 4 |
| 6 | i64/i/u64/u | BigInt/BigInt/UBigInt/UBigInt | 8 |
- `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types.
For example, the following data rows indicate that the t1 label is "3" (NCHAR), the t2 label is "4" (NCHAR), and the t3 label is "t3" to the super table named `st` labeled "t3" (NCHAR), write c1 column as 3 (BIGINT), c2 column as false (BOOL), c3 column is "passit" (BINARY), c4 column is 4 (DOUBLE), and the primary key timestamp is 1626006833639000000 in one row.
For example, the following data row indicates that, for the supertable named `st` with tags t1 = "3" (NCHAR), t2 = "4" (NCHAR), and t3 = "t3" (NCHAR), one row is written where column c1 is 3 (BIGINT), c2 is false (BOOL), c3 is "passit" (BINARY), c4 is 4 (DOUBLE), and the primary key timestamp is 1626006833639000000.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
@ -65,18 +68,22 @@ Schemaless writes process row data according to the following principles.
```
Note that tag_key1 and tag_key2 are not in the original order in which the user entered the tags, but are sorted in ascending order by tag name. Therefore, tag_key1 is not necessarily the first tag entered in the line protocol.
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has.
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t_" is a fixed prefix that every table generated by this mapping relationship has.
You can configure smlChildTableName to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created (see the sketch after this list). Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
If the subtable obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the subtable name determined in steps 1 or 2.
3. If the subtable obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the subtable name determined in steps 1 or 2.
4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL.
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL.
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
7. Errors encountered throughout the processing will interrupt the writing process and return an error code.
8. In order to improve the efficiency of writing, it is assumed by default that the order of the fields in the same Super is the same (the first data contains all fields, and the following data is in this order). If the order is different, the parameter smlDataFormat needs to be configured to be false. Otherwise, the data is written in the same order, and the data in the library will be abnormal.
8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat to false. Otherwise, data will be written out of order and a database error will occur.
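To make the subtable-name rule in item 1 concrete, here is a minimal Python sketch of the idea: sort the tag names, build the combined string, and prefix its MD5 digest with "t_". The exact byte layout that TDengine hashes internally (separators, inclusion of the measurement, encoding) is an assumption here, so the output is illustrative and may not match the names TDengine actually generates.

```python
import hashlib

def child_table_name(measurement: str, tags: dict) -> str:
    # Tag names are sorted in ascending string order, not input order.
    sorted_tags = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    combined = f"{measurement},{sorted_tags}"  # assumed layout of the hashed string
    # "t_" is the fixed prefix; md5_val is the hash of the combined string.
    return "t_" + hashlib.md5(combined.encode("utf-8")).hexdigest()

print(child_table_name("st", {"t2": "4", "t1": "3", "t3": "t3"}))
```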
:::tip
All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed 16k bytes. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed 16KB. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
:::
## Time resolution recognition
@ -85,75 +92,74 @@ Three specified modes are supported in the schemaless writing process, as follow
| **Serial** | **Value** | **Description** |
| -------- | ------------------- | ------------------------------- |
| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol |
| 2 | SML_TELNET_PROTOCOL | OpenTSDB Text Line Protocol |
| 3 | SML_JSON_PROTOCOL | JSON protocol format |
| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol |
| 2 | SML_TELNET_PROTOCOL | OpenTSDB file protocol |
| 3 | SML_JSON_PROTOCOL | OpenTSDB JSON protocol |
In the SML_LINE_PROTOCOL parsing mode, the user is required to specify the time resolution of the input timestamp. The available time resolutions are shown in the following table.
In InfluxDB line protocol mode, you must specify the precision of the input timestamp. Valid precisions are described in the following table.
| **Serial Number** | **Time Resolution Definition** | **Meaning** |
| **No.** | **Precision** | **Description** |
| -------- | --------------------------------- | -------------- |
| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | Not defined (invalid) |
| 2 | TSDB_SML_TIMESTAMP_HOURS | hour |
| 3 | TSDB_SML_TIMESTAMP_MINUTES | MINUTES
| 4 | TSDB_SML_TIMESTAMP_SECONDS | SECONDS
| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | milliseconds
| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | microseconds
| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | nanoseconds |
| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | Not defined (invalid) |
| 2 | TSDB_SML_TIMESTAMP_HOURS | Hours |
| 3 | TSDB_SML_TIMESTAMP_MINUTES | Minutes |
| 4 | TSDB_SML_TIMESTAMP_SECONDS | Seconds |
| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | Milliseconds |
| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | Microseconds |
| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | Nanoseconds |
In SML_TELNET_PROTOCOL and SML_JSON_PROTOCOL modes, the time precision is determined based on the length of the timestamp (in the same way as the OpenTSDB standard operation), and the user-specified time resolution is ignored at this point.
In OpenTSDB file and JSON protocol modes, the precision of the timestamp is determined from its length in the standard OpenTSDB manner. User input is ignored.
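As a rough illustration of length-based inference, the sketch below distinguishes second-level from millisecond-level timestamps by digit count, which is how OpenTSDB-style inputs are commonly interpreted (10 digits for seconds, 13 for milliseconds, for current-era dates). This is a simplified sketch, not TDengine's actual parser.

```python
def infer_precision(ts: int) -> str:
    # OpenTSDB-style convention: 10-digit timestamps are seconds,
    # 13-digit timestamps are milliseconds (for dates in the current era).
    return "seconds" if len(str(abs(ts))) <= 10 else "milliseconds"

print(infer_precision(1626006833))     # seconds
print(infer_precision(1626006833639))  # milliseconds
```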
## Data schema mapping rules
## Data Model Mapping
This section describes how data for line protocols are mapped to data with a schema. The data measurement in each line protocol is mapped as follows:
- The tag name in tag_set is the name of the tag in the data schema
- The name in field_set is the column's name.
The following data is used as an example to illustrate the mapping rules.
This section describes how data in line protocol is mapped to a schema. The data measurement in each line is mapped to a supertable name. The tag name in tag_set is the tag name in the schema, and the name in field_set is the column name in the schema. The following example shows how data is mapped:
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
The row data mapping generates a super table: `st`, which contains three labels of type NCHAR: t1, t2, t3. Five data columns are ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), c4 (bigint). The mapping becomes the following SQL statement.
This row is mapped to a supertable: `st` contains three NCHAR tags: t1, t2, and t3. Five columns are created: ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), and c4 (double). The following SQL statement is generated:
```json
create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 double) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
```
## Data schema change handling
## Processing Schema Changes
This section describes the impact on the data schema for different line protocol data writing cases.
This section describes the impact on the schema caused by different data being written.
When writing to an explicitly identified field type using the line protocol, subsequent changes to the field's type definition will result in an explicit data schema error, i.e., will trigger a write API report error. As shown below, the
If you use line protocol to write a field with an explicit data type and then later change that field's type, a schema error will occur. This triggers an error on the write API. This is shown as follows:
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
```
The data type mapping in the first row defines column c4 as DOUBLE, but the data in the second row is declared as BIGINT by the numeric suffix, which triggers a parsing error with schemaless writing.
The first row defines c4 as a double. However, in the second row, the suffix indicates that the value of c4 is a bigint. This causes schemaless writing to throw an error.
If the line protocol before the column declares the data column as BINARY, the subsequent one requires a longer binary length, which triggers a super table schema change.
If the data written to a binary column exceeds the defined length of the column, the length of the column is automatically increased. This changes the supertable schema, as shown below:
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
```
The first line of the line protocol parsing will declare column c5 is a BINARY(4) field. The second line data write will parse column c5 as a BINARY column. But in the second line, c5's width is 6 so you need to increase the width of the BINARY field to be able to accommodate the new string.
The first row defines c5 as a binary(4), but the second row writes 6 bytes to it. This means that the length of the binary column must be expanded to contain the data.
```json
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
```
The second line of data has an additional column c6 of type BINARY(6) compared to the first row. Then a column c6 of type BINARY(6) is automatically added at this point.
The preceding data includes a new entry, c6, with type binary(6). When this occurs, a new column c6 with type binary(6) is added automatically.
## Write integrity
## Write Integrity
TDengine provides idempotency guarantees for data writing, i.e., you can repeatedly call the API to write data with errors. However, it does not give atomicity guarantees for writing multiple rows of data. During the process of writing numerous rows of data in one batch, some data will be written successfully, and some data will fail.
TDengine guarantees the idempotency of data writes. This means that you can repeatedly call the API to perform write operations with bad data. However, TDengine does not guarantee the atomicity of multi-row writes. In a multi-row write, some data may be written successfully and other data unsuccessfully.
## Error code
## Error Codes
If it is an error in the data itself during the schemaless writing process, the application will get `TSDB_CODE_TSC_LINE_SYNTAX_ERROR` error message, which indicates that the error occurred in writing. The other error codes are consistent with the TDengine and can be obtained via the `taos_errstr()` to get the specific cause of the error.
The `TSDB_CODE_TSC_LINE_SYNTAX_ERROR` error code indicates an error in the data itself during schemaless writing. For all other errors, schemaless writing uses the standard TDengine error codes; you can call `taos_errstr()` to obtain the specific cause of the error.
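For reference, the following sketch shows how a schemaless write and its error handling might look from the Python connector (taospy). The `schemaless_insert` API and the enum names here follow recent taospy releases and are an assumption; check the connector documentation for the exact signatures in your version.

```python
import taos  # taospy, the TDengine Python connector (assumed installed)

conn = taos.connect()  # assumes a local server with default credentials
conn.execute("CREATE DATABASE IF NOT EXISTS test")
conn.select_db("test")

lines = ['st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000']
try:
    conn.schemaless_insert(lines, taos.SmlProtocol.LINE_PROTOCOL,
                           taos.SmlPrecision.NANO_SECONDS)
except Exception as err:
    # A malformed line surfaces as TSDB_CODE_TSC_LINE_SYNTAX_ERROR; other
    # errors carry the standard TDengine message from taos_errstr().
    print("schemaless insert failed:", err)
```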

View File

@ -6,9 +6,7 @@ title: Grafana
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process does not require any code development. And you can visualize the contents of the data tables in TDengine on a dashboard.
You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md).
TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process does not require any code development, and you can visualize the contents of the data tables in TDengine on a dashboard. You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md).
## Prerequisites
@ -65,7 +63,6 @@ Restart Grafana service and open Grafana in web-browser, usually <http://localho
Save the script and type `./install.sh --help` for the full usage of the script.
</TabItem>
<TabItem value="manual" label="Install & Configure Manually">
Follow the installation steps in [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) with the [``grafana-cli`` command-line tool](https://grafana.com/docs/grafana/latest/administration/cli/) for plugin installation.
@ -76,7 +73,7 @@ grafana-cli plugins install tdengine-datasource
sudo -u grafana grafana-cli plugins install tdengine-datasource
```
Alternatively, you can manually download the .zip file from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and unpack it into your grafana plugins directory.
You can also download zip files from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and install manually. The commands are as follows:
```bash
GF_VERSION=3.2.2
@ -131,7 +128,7 @@ docker run -d \
grafana/grafana
```
You can setup a zero-configuration stack for TDengine + Grafana by [docker-compose](https://docs.docker.com/compose/) and [Grafana provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) file
You can set up a zero-configuration stack for TDengine + Grafana using [docker-compose](https://docs.docker.com/compose/) and a [Grafana provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) file:
1. Save the provisioning configuration file to `tdengine.yml`.
@ -196,7 +193,7 @@ Go back to the main interface to create a dashboard and click Add Query to enter
As shown above, select the `TDengine` data source in the `Query` field and enter the corresponding SQL in the query box below to run the query.
- INPUT SQL: enter the statement to be queried (the result set of the SQL statement should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where, from, to and interval are built-in variables of the TDengine plugin, indicating the range and time interval of queries fetched from the Grafana plugin panel. In addition to the built-in variables, custom template variables are also supported.
- INPUT SQL: Enter the desired query (the results being two columns and multiple rows), such as `select _wstart, avg(mem_system) from log.dnodes_info where ts >= $from and ts < $to interval($interval)`. In this statement, $from, $to, and $interval are variables that Grafana replaces with the query time range and interval. In addition to the built-in variables, custom template variables are also supported.
- ALIAS BY: This allows you to set the current query alias.
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement.
@ -208,7 +205,11 @@ Follow the default prompt to query the average system memory usage for the speci
### Importing the Dashboard
You can install TDinsight dashboard in data source configuration page (like `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for TDengine cluster. The dashboard is published in Grafana as [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). Check the [TDinsight User Manual](/reference/tdinsight/) for the details.
You can install the TDinsight dashboard from the data source configuration page (for example, `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for your TDengine cluster. Ensure that you use TDinsight for 3.x.
![TDengine Database Grafana plugin import dashboard](./import_dashboard.webp)
A dashboard for TDengine 2.x has been published on Grafana: [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). Check the [TDinsight User Manual](/reference/tdinsight/) for the details.
For more dashboards using the TDengine data source, [search here in Grafana](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource). Here is a partial list:

View File

@ -1,6 +1,6 @@
---
sidebar_label: StatsD
title: StatsD writing
title: StatsD Writing
---
import StatsD from "../14-reference/_statsd.mdx"
@ -12,8 +12,8 @@ You can write StatsD data to TDengine by simply modifying the configuration file
## Prerequisites
To write StatsD data to TDengine, the following preparations are required.
- The TDengine cluster has been deployed and is working properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
1. The TDengine cluster is deployed and functioning properly
2. taosAdapter is installed and running properly. Please refer to the taosAdapter manual for details.
- StatsD has been installed. To install StatsD, please refer to the [official documentation](https://github.com/statsd/statsd)
## Configuration steps
@ -39,8 +39,12 @@ $ echo "foo:1|c" | nc -u -w0 127.0.0.1 8125
Use the TDengine CLI to verify that StatsD data is written to TDengine and can be read out correctly.
```
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
statsd | 2022-04-20 09:54:51.220 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.003142s)
taos> use statsd;
Database changed.

View File

@ -1,6 +1,6 @@
---
sidebar_label: HiveMQ Broker
title: HiveMQ Broker writing
title: HiveMQ Broker Writing
---
[HiveMQ](https://www.hivemq.com/) is an MQTT broker that provides community and enterprise editions. HiveMQ is mainly for enterprise emerging machine-to-machine M2M communication and internal transport, meeting scalability, ease of management, and security features. HiveMQ provides an open-source plug-in development kit. MQTT data can be saved to TDengine via TDengine extension for HiveMQ. Please refer to the [HiveMQ extension - TDengine documentation](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README_EN.md) for details on how to use it.
[HiveMQ](https://www.hivemq.com/) is an MQTT broker that provides community and enterprise editions. HiveMQ is mainly used for enterprise machine-to-machine (M2M) communication and internal transport, and it offers scalability, ease of management, and security features. HiveMQ provides an open-source plug-in development kit. MQTT data can be saved to TDengine via the TDengine extension for HiveMQ. For more information, see [HiveMQ TDengine Extension](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README_EN.md).

View File

@ -1,114 +1,163 @@
---
sidebar_label: FAQ
title: Frequently Asked Questions
---
## Submit an Issue
If the tips in FAQ don't help much, please submit an issue on [GitHub](https://github.com/taosdata/TDengine) to describe your problem. In your description please include the TDengine version, hardware and OS information, the steps to reproduce the problem and any other relevant information. It would be very helpful if you can package the contents in `/var/log/taos` and `/etc/taos` and upload. These two are the default directories used by TDengine. If you have changed the default directories in your configuration, please package the files in your configured directories. We recommended setting `debugFlag` to 135 in `taos.cfg`, restarting `taosd`, then reproducing the problem and collecting the logs. If you don't want to restart, an alternative way of setting `debugFlag` is executing `alter dnode <dnode_id> debugFlag 135` command in TDengine CLI `taos`. During normal running, however, please make sure `debugFlag` is set to 131.
If your issue could not be resolved by reviewing this documentation, you can submit your issue on GitHub and receive support from the TDengine Team. When you submit an issue, attach the following directories from your TDengine deployment:
1. The directory containing TDengine logs (`/var/log/taos` by default)
2. The directory containing TDengine configuration files (`/etc/taos` by default)
In your GitHub issue, provide the version of TDengine and the operating system and environment for your deployment, the operations that you performed when the issue occurred, and the time of occurrence and affected tables.
To obtain more debugging information, open `taos.cfg` and set the `debugFlag` parameter to `135`. Then restart TDengine Server and reproduce the issue. The debug-level logs generated help the TDengine Team to resolve your issue. If it is not possible to restart TDengine Server, you can run the following command in the TDengine CLI to set the debug flag:
```
alter dnode <dnode_id> 'debugFlag' '135';
```
You can run the `SHOW DNODES` command to determine the dnode ID.
When debugging information is no longer needed, set `debugFlag` to 131.
## Frequently Asked Questions
### 1. How to upgrade to TDengine 2.0 from older version?
### 1. What are the best practices for upgrading a previous version of TDengine to version 3.0?
version 2.x is not compatible with version 1.x. With regard to the configuration and data files, please perform the following steps before upgrading. Please follow data integrity, security, backup and other relevant SOPs, best practices before removing/deleting any data.
TDengine 3.0 is not compatible with the configuration and data files from previous versions. Before upgrading, perform the following steps:
1. Delete configuration files: `sudo rm -rf /etc/taos/taos.cfg`
2. Delete log files: `sudo rm -rf /var/log/taos/`
3. Delete data files if the data doesn't need to be kept: `sudo rm -rf /var/lib/taos/`
4. Install latest 2.x version
5. If the data needs to be kept and migrated to newer version, please contact professional service at TDengine for assistance.
1. Run `sudo rm -rf /etc/taos/taos.cfg` to delete your configuration file.
2. Run `sudo rm -rf /var/log/taos/` to delete your log files.
3. Run `sudo rm -rf /var/lib/taos/` to delete your data files.
4. Install TDengine 3.0.
5. For assistance in migrating data to TDengine 3.0, contact [TDengine Support](https://tdengine.com/support).
### 2. How to handle "Unable to establish connection"
### 2. How can I resolve the "Unable to establish connection" error?
When the client is unable to connect to the server, you can try the following ways to troubleshoot and resolve the problem.
This error indicates that the client could not connect to the server. Perform the following troubleshooting steps:
1. Check the network
1. Check the network.
- Check if the hosts where the client and server are running are accessible to each other, for example by `ping` command.
- Check if the TCP/UDP on port 6030-6042 are open for access if firewall is enabled. If possible, disable the firewall for diagnostics, but please ensure that you are following security and other relevant protocols.
- Check if the FQDN and serverPort are configured correctly in `taos.cfg` used by the server side.
- Check if the `firstEp` is set properly in the `taos.cfg` used by the client side.
- For machines deployed in the cloud, verify that your security group can access ports 6030 and 6031 (TCP and UDP).
- For virtual machines deployed locally, verify that the hosts where the client and server are running are accessible to each other. Do not use localhost as the hostname.
- For machines deployed on a corporate network, verify that your NAT configuration allows the server to respond to the client.
2. Make sure the client version and server version are same.
2. Verify that the client and server are running the same version of TDengine.
3. On server side, check the running status of `taosd` by executing `systemctl status taosd` . If your server is started using another way instead of `systemctl`, use the proper method to check whether the server process is running normally.
3. On the server, run `systemctl status taosd` to verify that taosd is running normally. If taosd is stopped, run `systemctl start taosd`.
4. If using connector of Python, Java, Go, Rust, C#, node.JS on Linux to connect to the server, please make sure `libtaos.so` is in directory `/usr/local/taos/driver` and `/usr/local/taos/driver` is in system lib search environment variable `LD_LIBRARY_PATH`.
4. Verify that the client is configured with the correct FQDN for the server.
5. If using connector on Windows, please make sure `C:\TDengine\driver\taos.dll` is in your system lib search path. We recommend putting `taos.dll` under `C:\Windows\System32`.
5. If the server cannot be reached with the `ping` command, verify that network and DNS or hosts file settings are correct. For a TDengine cluster, the client must be able to ping the FQDN of every node in the cluster.
6. Some advanced network diagnostics tools
6. Verify that your firewall settings allow all hosts in the cluster to communicate on ports 6030 and 6041 (TCP and UDP). You can run `ufw status` (Ubuntu) or `firewall-cmd --list-port` (CentOS) to check the configuration.
- On Linux system tool `nc` can be used to check whether the TCP/UDP can be accessible on a specified port
Check whether a UDP port is open: `nc -vuz {hostIP} {port} `
Check whether a TCP port on server side is open: `nc -l {port}`
Check whether a TCP port on client side is open: `nc {hostIP} {port}`
7. If you are using the Python, Java, Go, Rust, C#, or Node.js connector on Linux to connect to the server, verify that `libtaos.so` is in the `/usr/local/taos/driver` directory and `/usr/local/taos/driver` is in the `LD_LIBRARY_PATH` environment variable.
- On Windows system `Test-NetConnection -ComputerName {fqdn} -Port {port}` on PowerShell can be used to check whether the port on server side is open for access.
8. If you are using Windows, verify that `C:\TDengine\driver\taos.dll` is in the `PATH` environment variable. If possible, move `taos.dll` to the `C:\Windows\System32` directory.
7. TDengine CLI `taos` can also be used to check network, please refer to [TDengine CLI](/reference/taos-shell).
9. On Linux systems, you can use the `nc` tool to check whether a port is accessible:
- To check whether a UDP port is open, run `nc -vuz {hostIP} {port}`.
- To check whether a TCP port on the server side is open, run `nc -l {port}`.
- To check whether a TCP port on client side is open, run `nc {hostIP} {port}`.
### 3. How to handle "Unexpected generic error in RPC" or "Unable to resolve FQDN" ?
10. On Windows systems, you can run `Test-NetConnection -ComputerName {fqdn} -Port {port}` in PowerShell to check whether a port on the server side is accessible.
This error is caused because the FQDN can't be resolved. Please try following ways:
11. You can also use the TDengine CLI to diagnose network issues. For more information, see [Problem Diagnostics](https://docs.tdengine.com/operation/diagnose/).
1. Check whether the FQDN is configured properly on the server side
2. If DSN server is configured in the network, please check whether it works; otherwise, check `/etc/hosts` to see whether the FQDN is configured with correct IP
3. If the network configuration on the server side is OK, try to ping the server from the client side.
4. If TDengine has been used before with an old hostname then the hostname has been changed, please check `/var/lib/taos/taos/dnode/dnodeEps.json`. Before setting up a new TDengine cluster, it's better to cleanup the directories configured.
### 3. How can I resolve the "Unable to resolve FQDN" error?
### 4. "Invalid SQL" is returned even though the Syntax is correct
Clients and dnodes must be able to resolve the FQDN of each required node. You can confirm your configuration as follows:
"Invalid SQL" is returned when the length of SQL statement exceeds maximum allowed length or the syntax is not correct.
1. Verify that the FQDN is configured properly on the server.
2. If your network has a DNS server, verify that it is operational.
3. If your network does not have a DNS server, verify that the FQDNs in the `hosts` file are correct.
4. On the client, use the `ping` command to test your connection to the server. If you cannot ping an FQDN, TDengine cannot reach it.
5. If TDengine has been previously installed and the `hostname` was modified, open `dnode.json` in the `data` folder and verify that the endpoint configuration is correct. The default location of the dnode file is `/var/lib/taos/dnode`. Ensure that you clean up previous installations before reinstalling TDengine.
6. Confirm whether FQDNs are preconfigured in `/etc/hosts` and `/etc/hostname`.
### 5. Whether validation queries are supported?
### 4. What is the most effective way to write data to TDengine?
It's suggested to use a builtin database named as `log` to monitor.
Writing data in batches provides higher efficiency in most situations. You can insert one or more data records into one or more tables in a single SQL statement.
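As an illustration, the single statement below writes several rows to two subtables at once. The sketch assumes the taospy connector and hypothetical subtables d1001 and d1002 with a (ts, current) schema; adapt the names and columns to your own tables.

```python
import taos  # taospy, the TDengine Python connector

conn = taos.connect(database="power")  # hypothetical database and tables
# One SQL statement can carry multiple rows for multiple tables.
affected = conn.execute(
    "INSERT INTO d1001 VALUES ('2022-08-26 10:00:00.000', 10.3) "
    "('2022-08-26 10:00:01.000', 10.4) "
    "d1002 VALUES ('2022-08-26 10:00:00.000', 11.1)"
)
print(affected, "rows written")
```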
<a class="anchor" id="update"></a>
### 5. Why are table names not fully displayed?
### 6. Can I delete a record?
The number of columns in the TDengine CLI terminal display is limited. This can cause table names to be cut off, and if you use an incomplete name in a statement, the "Table does not exist" error will occur. You can increase the display size with the `maxBinaryDisplayWidth` parameter or the SQL statement `set max_binary_display_width`. You can also append `\G` to your SQL statement to bypass this limitation.
From version 2.6.0.0 Enterprise version, deleting data can be supported.
### 6. How can I migrate data?
### 7. How to create a table of over 1024 columns?
In TDengine, the `hostname` uniquely identifies a machine. When you move data files to a new machine, you must configure the new machine to have the same `hostname` as the original machine.
From version 2.1.7.0, at most 4096 columns can be defined for a table.
:::note
### 8. How to improve the efficiency of inserting data?
The data structure of previous versions of TDengine is not compatible with version 3.0. To migrate from TDengine 1.x or 2.x to 3.0, you must export data from your older deployment and import it back into TDengine 3.0.
Inserting data in batch is a good practice. Single SQL statement can insert data for one or multiple tables in batch.
:::
### 9. JDBC Error the executed SQL is not a DML or a DDL
### 7. How can I temporarily change the log level from the TDengine Client?
Please upgrade to latest JDBC driver, for details please refer to [Java Connector](/reference/connector/java)
### 10. Failed to connect with error "invalid timestamp"
The most common reason is that the time setting is not aligned on the client side and the server side. On Linux system, please use `ntpdate` command. On Windows system, please enable automatic sync in system time setting.
### 11. Table name is not shown in full
There is a display width setting in TDengine CLI `taos`. It can be controlled by configuration parameter `maxBinaryDisplayWidth`, or can be set using SQL command `set max_binary_display_width`. A more convenient way is to append `\G` in a SQL command to bypass this limitation.
### 12. How to change log level temporarily?
Below SQL command can be used to adjust log level temporarily
To change the log level for debugging purposes, you can use the following command:
```sql
ALTER LOCAL flag_name flag_value;
ALTER LOCAL local_option
local_option: {
'resetLog'
| 'rpcDebugFlag' value
| 'tmrDebugFlag' value
| 'cDebugFlag' value
| 'uDebugFlag' value
| 'debugFlag' value
}
```
- flag_name can be: debugFlag, cDebugFlag, tmrDebugFlag, uDebugFlag, rpcDebugFlag
- flag_value can be: 131 (INFO/WARNING/ERROR), 135 (plus DEBUG), 143 (plus TRACE)
<a class="anchor" id="timezone"></a>
Use `resetLog` to remove all logs generated on the local client. Use the other parameters to specify a log level for a specific component.
### 13. What to do if go compilation fails?
For each parameter, you can set the value to `131` (error and warning), `135` (error, warning, and debug), or `143` (error, warning, debug, and trace).
From version 2.3.0.0, a new component named `taosAdapter` is introduced. Its' developed in Go. If you want to compile from source code and meet go compilation problems, try to do below steps to resolve Go environment problems.
### 8. Why do TDengine components written in Go fail to compile?
```sh
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
TDengine includes taosAdapter, an independent component written in Go. This component provides the REST API as well as data access for other products such as Prometheus and Telegraf.
When using the develop branch, you must run `git submodule update --init --recursive` to download the taosAdapter repository and then compile it.
TDengine Go components require Go version 1.14 or later.
### 9. How can I query the storage space being used by my data?
The TDengine data files are stored in `/var/lib/taos` by default. Log files are stored in `/var/log/taos`.
To see how much space your data files occupy, run `du -sh /var/lib/taos/vnode --exclude='wal'`. This excludes the write-ahead log (WAL) because its size is relatively fixed while writes are occurring, and it is written to disk and cleared when you shut down TDengine.
If you want to see how much space is occupied by a single database, first determine which vgroup is storing the database by running `show vgroups`. Then check `/var/lib/taos/vnode` for the files associated with the vgroup ID.
### 10. How is timezone information processed for timestamps?
TDengine uses the timezone of the client for timestamps. The server timezone does not affect timestamps. The client converts Unix timestamps in SQL statements to UTC before sending them to the server. When you query data on the server, it provides timestamps in UTC to the client, which converts them to its local time.
Timestamps are processed as follows:
1. The client uses its system timezone unless it has been configured otherwise.
2. A timezone configured in `taos.cfg` takes precedence over the system timezone.
3. A timezone explicitly specified when establishing a connection to TDengine through a connector takes precedence over `taos.cfg` and the system timezone. For example, the Java connector allows you to specify a timezone in the JDBC URL.
4. If you use an RFC 3339 timestamp (2013-04-12T15:52:01.123+08:00), or an ISO 8601 timestamp (2013-04-12T15:52:01.123+0800), the timezone specified in the timestamp is used instead of the timestamps configured using any other method.
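The point of items 1-4 is that a timestamp with an explicit offset pins down a single instant regardless of client settings. The following Python snippet (standard library only, independent of TDengine) shows two representations of the same instant converting to the identical epoch value in milliseconds:

```python
from datetime import datetime

# The same instant written with two different UTC offsets.
t_cst = datetime.fromisoformat("2013-04-12T15:52:01.123+08:00")
t_utc = datetime.fromisoformat("2013-04-12T07:52:01.123+00:00")

# Both yield the identical epoch timestamp in milliseconds.
print(int(t_cst.timestamp() * 1000))  # 1365753121123
print(int(t_utc.timestamp() * 1000))  # 1365753121123
```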
### 11. Which network ports are required by TDengine?
See [serverPort](https://docs.tdengine.com/reference/config/#serverport) in Configuration Parameters.
Note that ports are specified using 6030 as the default first port. If you change this port, all other ports change as well.
### 12. Why do applications such as Grafana fail to connect to TDengine over the REST API?
In TDengine, the REST API is provided by taosAdapter. Ensure that taosAdapter is running before you connect an application to TDengine over the REST API. You can run `systemctl start taosadapter` to start the service.
Note that the log path for taosAdapter must be configured separately. The default path is `/var/log/taos`. You can choose one of eight log levels. The default is `info`. You can set the log level to `panic` to disable log output. You can modify the taosAdapter configuration file to change these settings. The default location is `/etc/taos/taosadapter.toml`.
For more information, see [taosAdapter](https://docs.tdengine.com/reference/taosadapter/).
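As a quick connectivity check, you can issue a SQL statement to taosAdapter over HTTP before wiring up an application such as Grafana. The sketch below assumes the default port 6041 and the default root/taosdata credentials:

```python
import requests

# /rest/sql executes the SQL statement carried in the request body.
resp = requests.post(
    "http://localhost:6041/rest/sql",
    data="show databases",
    auth=("root", "taosdata"),  # default credentials; change in production
)
print(resp.status_code, resp.json())
```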
### 13. How can I resolve out-of-memory (OOM) errors?
OOM errors are thrown by the operating system when its memory, including swap, becomes insufficient and it needs to terminate processes to remain operational. Most OOM errors in TDengine occur for one of the following reasons: free memory is less than the value of `vm.min_free_kbytes` or free memory is less than the size of the request. If TDengine occupies reserved memory, an OOM error can occur even when free memory is sufficient.
TDengine preallocates memory to each vnode. The number of vnodes per database is determined by the `vgroups` parameter, and the amount of memory per vnode is determined by the `buffer` parameter. To prevent OOM errors from occurring, ensure that you prepare sufficient memory on your hosts to support the number of vnodes that your deployment requires. Configure an appropriately sized swap space. If you continue to receive OOM errors, your SQL statements may be querying too much data for your system. TDengine Enterprise Edition includes optimized memory management that increases stability for enterprise customers.
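As a back-of-the-envelope sizing aid, you can estimate the write-buffer memory a database preallocates from these two parameters. This is a rough sketch with assumed example values; actual memory use also includes caches, queries, and other components.

```python
# Rough estimate of preallocated write buffers for one database:
vgroups = 100    # from CREATE DATABASE ... VGROUPS 100 (example value)
buffer_mb = 16   # from CREATE DATABASE ... BUFFER 16 (example value)

print(f"~{vgroups * buffer_mb} MB of write buffer across all vnodes")
```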

View File

@ -1,5 +1,6 @@
---
title: 产品简介
description: 简要介绍 TDengine 的主要功能
toc_max_heading_level: 2
---

View File

@ -1,5 +1,7 @@
---
sidebar_label: 基本概念
title: 数据模型和基本概念
description: TDengine 的数据模型和基本概念
---
为了便于解释基本概念,便于撰写示例程序,整个 TDengine 文档以智能电表作为典型时序数据场景。假设每个智能电表采集电流、电压、相位三个量,有多个智能电表,每个电表有位置 location 和分组 group ID 的静态属性。其采集的数据类似如下的表格:
@ -104,15 +106,15 @@ title: 数据模型和基本概念
## 采集量 (Metric)
采集量是指传感器、设备或其他类型采集点采集的物理量比如电流、电压、温度、压力、GPS 位置等,是随时间变化的,数据类型可以是整型、浮点型、布尔型,也可是字符串。随着时间的推移,存储的采集量的数据量越来越大。
采集量是指传感器、设备或其他类型采集点采集的物理量,比如电流、电压、温度、压力、GPS 位置等,是随时间变化的,数据类型可以是整型、浮点型、布尔型,也可是字符串。随着时间的推移,存储的采集量的数据量越来越大。智能电表示例中的电流、电压、相位就是采集量。
## 标签 (Label/Tag)
标签是指传感器、设备或其他类型采集点的静态属性,不是随时间变化的,比如设备型号、颜色、设备的所在地等,数据类型可以是任何类型。虽然是静态的,但 TDengine 容许用户修改、删除或增加标签值。与采集量不一样的是,随时间的推移,存储的标签的数据量不会有什么变化。
标签是指传感器、设备或其他类型采集点的静态属性,不是随时间变化的,比如设备型号、颜色、设备的所在地等,数据类型可以是任何类型。虽然是静态的,但 TDengine 容许用户修改、删除或增加标签值。与采集量不一样的是,随时间的推移,存储的标签的数据量不会有什么变化。智能电表示例中的 location 与 groupId 就是标签。
## 数据采集点 (Data Collection Point)
数据采集点是指按照预设时间周期或受事件触发采集物理量的硬件或软件。一个数据采集点可以采集一个或多个采集量,**但这些采集量都是同一时刻采集的,具有相同的时间戳**。对于复杂的设备,往往有多个数据采集点,每个数据采集点采集的周期都可能不一样,而且完全独立,不同步。比如对于一台汽车,有数据采集点专门采集 GPS 位置,有数据采集点专门采集发动机状态,有数据采集点专门采集车内的环境,这样一台汽车就有三个数据采集点。
数据采集点是指按照预设时间周期或受事件触发采集物理量的硬件或软件。一个数据采集点可以采集一个或多个采集量,**但这些采集量都是同一时刻采集的,具有相同的时间戳**。对于复杂的设备,往往有多个数据采集点,每个数据采集点采集的周期都可能不一样,而且完全独立,不同步。比如对于一台汽车,有数据采集点专门采集 GPS 位置,有数据采集点专门采集发动机状态,有数据采集点专门采集车内的环境,这样一台汽车就有三个数据采集点。智能电表示例中的 d1001、d1002、d1003、d1004 等就是数据采集点。
## 表 (Table)
@ -131,13 +133,14 @@ TDengine 建议用数据采集点的名字(如上表中的 D1001来做表
对于复杂的设备,比如汽车,它有多个数据采集点,那么就需要为一台汽车建立多张表。
## 超级表 (STable)
由于一个数据采集点一张表,导致表的数量巨增,难以管理,而且应用经常需要做采集点之间的聚合操作,聚合的操作也变得复杂起来。为解决这个问题,TDengine 引入超级表(Super Table,简称为 STable)的概念。
超级表是指某一特定类型的数据采集点的集合。同一类型的数据采集点,其表的结构是完全一样的,但每个表(数据采集点)的静态属性(标签)是不一样的。描述一个超级表(某一特定类型的数据采集点的集合),除需要定义采集量的表结构之外,还需要定义其标签的 schema,标签的数据类型可以是整数、浮点数、字符串,标签可以有多个,可以事后增加、删除或修改。如果整个系统有 N 个不同类型的数据采集点,就需要建立 N 个超级表。
在 TDengine 的设计里,**表用来代表一个具体的数据采集点,超级表用来代表一组相同类型的数据采集点集合**。
在 TDengine 的设计里,**表用来代表一个具体的数据采集点,超级表用来代表一组相同类型的数据采集点集合**。智能电表示例中,我们可以创建一个超级表 meters。
## 子表 (Subtable)
@ -156,7 +159,9 @@ TDengine 建议用数据采集点的名字(如上表中的 D1001来做表
查询既可以在表上进行,也可以在超级表上进行。针对超级表的查询,TDengine 将把所有子表中的数据视为一个整体数据集进行处理,会先把满足标签过滤条件的表从超级表中找出来,然后再扫描这些表的时序数据,进行聚合操作,这样需要扫描的数据集会大幅减少,从而显著提高查询的性能。本质上,TDengine 通过对超级表查询的支持,实现了多个同类数据采集点的高效聚合。
TDengine系统建议给一个数据采集点建表需要通过超级表建表而不是建普通表。
TDengine 系统建议给一个数据采集点建表,需要通过超级表建表,而不是建普通表。在智能电表的示例中,我们可以通过超级表 meters 创建子表 d1001、d1002、d1003、d1004 等。
为了更好地理解超级表与子表的关系,可以参考下面关于智能电表数据模型的示意图。 ![智能电表数据模型示意图](./supertable.webp)
## 库 (database)

Binary file not shown.

After

Width:  |  Height:  |  Size: 33 KiB

View File

@ -1,6 +1,7 @@
---
sidebar_label: Docker
title: 通过 Docker 快速体验 TDengine
description: 使用 Docker 快速体验 TDengine 的高效写入和查询
---
本节首先介绍如何通过 Docker 快速体验 TDengine,然后介绍如何在 Docker 环境下体验 TDengine 的写入和查询功能。如果你不熟悉 Docker,请使用[安装包的方式快速体验](../../get-started/package/)。如果您希望为 TDengine 贡献代码或对内部技术实现感兴趣,请参考 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装。

View File

@ -1,6 +1,7 @@
---
sidebar_label: 安装包
title: 使用安装包立即开始
description: 使用安装包快速体验 TDengine
---
import Tabs from "@theme/Tabs";

View File

@ -1,6 +1,6 @@
---
title: 建立连接
description: "本节介绍如何使用连接器建立与 TDengine 的连接,给出连接器安装、连接的简单说明。"
description: 使用连接器建立与 TDengine 的连接,以及连接器的安装和连接
---
import Tabs from "@theme/Tabs";

View File

@ -1,5 +1,7 @@
---
sidebar_label: 数据建模
title: TDengine 数据建模
description: TDengine 中如何建立数据模型
---
TDengine 采用类关系型数据模型,需要建库、建表。因此对于一个具体的应用场景,需要考虑库、超级表和普通表的设计。本节不讨论细致的语法规则,只介绍概念。

View File

@ -1,5 +1,7 @@
---
sidebar_label: 写入数据
title: 写入数据
description: TDengine 的各种写入方式
---
TDengine 支持多种写入协议,包括 SQL、InfluxDB Line 协议、OpenTSDB Telnet 协议、OpenTSDB JSON 格式协议。数据可以单条插入,也可以批量插入;可以插入一个数据采集点的数据,也可以同时插入多个数据采集点的数据。同时,TDengine 支持多线程插入,支持时间乱序数据插入,也支持历史数据插入。InfluxDB Line 协议、OpenTSDB Telnet 协议和 OpenTSDB JSON 格式协议是 TDengine 支持的三种无模式写入协议。使用无模式方式写入无需提前创建超级表和子表,并且引擎能自适应数据对表结构做调整。

View File

@ -1,4 +1,5 @@
---
sidebar_label: 查询数据
title: 查询数据
description: "主要查询功能,通过连接器执行同步查询和异步查询"
---

View File

@ -20,11 +20,11 @@ create database db0 vgroups 100 buffer 16MB
## 读缓存
在创建数据库时可以选择是否缓存该数据库中每个子表的最新数据。由参数 cachelast 设置,分为三种情况:
- 0: 不缓存
- 1: 缓存子表最近一行数据,这将显著改善 last_row 函数的性能
- 2: 缓存子表每一列最近的非 NULL 值,这将显著改善无特殊影响(比如 WHERE, ORDER BY, GROUP BY, INTERVAL时的 last 函数的性能
- 3: 同时缓存行和列,即等同于上述 cachelast 值为 1 或 2 时的行为同时生效
在创建数据库时可以选择是否缓存该数据库中每个子表的最新数据。由参数 cachemodel 设置,分为四种情况:
- none: 不缓存
- last_row: 缓存子表最近一行数据,这将显著改善 last_row 函数的性能
- last_value: 缓存子表每一列最近的非 NULL 值,这将显著改善无特殊影响(比如 WHERE, ORDER BY, GROUP BY, INTERVAL时的 last 函数的性能
- both: 同时缓存最近的行和列,即等同于上述 cachemodel 值为 last_row 和 last_value 的行为同时生效
## 元数据缓存

View File

@ -1,5 +1,7 @@
---
title: 开发指南
sidebar_label: 开发指南
description: 让开发者能够快速上手的指南
---
开发一个应用,如果你准备采用 TDengine 作为时序数据处理的工具,那么有如下几个事情要做:

View File

@ -1,5 +1,7 @@
---
title: REST API
sidebar_label: REST API
description: 详细介绍 TDengine 提供的 RESTful API.
---
为支持各种不同类型平台的开发,TDengine 提供符合 REST 设计标准的 API,即 REST API。为最大程度降低学习成本,不同于其他数据库 REST API 的设计方法,TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST 连接器的使用参见 [视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。
@ -10,7 +12,7 @@ title: REST API
## 安装
RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安装任何 TDengine 的库,只要客户端的开发语言支持 HTTP 协议即可。
RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安装任何 TDengine 的库,只要客户端的开发语言支持 HTTP 协议即可。TDengine 的 RESTful API 由 [taosAdapter](../../reference/taosadapter) 提供,在使用 RESTful API 之前需要确保 `taosAdapter` 正常运行。
## 验证

View File

@ -1,5 +1,4 @@
---
sidebar_position: 1
sidebar_label: C/C++
title: C/C++ Connector
---

View File

@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 2
sidebar_label: Java
title: TDengine Java Connector
description: TDengine Java 连接器基于标准 JDBC API 实现, 并提供原生连接与 REST连接两种连接器。

View File

@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 5
sidebar_label: Rust
title: TDengine Rust Connector
---

View File

@ -1,5 +1,4 @@
---
sidebar_position: 3
sidebar_label: Python
title: TDengine Python Connector
description: "taospy 是 TDengine 的官方 Python 连接器。taospy 提供了丰富的 API 使得 Python 应用可以很方便地使用 TDengine。tasopy 对 TDengine 的原生接口和 REST 接口都进行了封装, 分别对应 tasopy 的两个子模块tasos 和 taosrest。除了对原生接口和 REST 接口的封装taospy 还提供了符合 Python 数据访问规范(PEP 249)的编程接口。这使得 taospy 和很多第三方工具集成变得简单,比如 SQLAlchemy 和 pandas"

View File

@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 6
sidebar_label: Node.js
title: TDengine Node.js Connector
---

View File

@ -1,6 +1,5 @@
---
toc_max_heading_level: 4
sidebar_position: 7
sidebar_label: C#
title: C# Connector
---

View File

@ -1,6 +1,5 @@
---
sidebar_position: 1
sidebar_label: PHP社区贡献
sidebar_label: PHP
title: PHP Connector
---

View File

@ -1,6 +1,7 @@
---
sidebar_label: 错误码
title: TDengine C/C++ 连接器错误码
description: C/C++ 连接器的错误码列表和详细说明
---
本文中详细列举了在使用 TDengine C/C++ 连接器时客户端可能得到的错误码以及所要采取的相应动作。其它语言的连接器在使用原生连接方式时也会所得到的返回码返回给连接器的调用者。

View File

@ -1,6 +1,7 @@
---
sidebar_label: 手动部署
title: 集群部署和管理
description: 使用命令行工具手动部署 TDengine 集群
---
## 准备工作

View File

@ -1,6 +1,7 @@
---
sidebar_label: Kubernetes
title: 在 Kubernetes 上部署 TDengine 集群
description: 利用 Kubernetes 部署 TDengine 集群的详细指南
---
作为面向云原生架构设计的时序数据库,TDengine 支持 Kubernetes 部署。这里介绍如何使用 YAML 文件一步一步从头创建一个 TDengine 集群,并重点介绍 Kubernetes 环境下 TDengine 的常用操作。

View File

@ -1,6 +1,7 @@
---
sidebar_label: Helm
title: 使用 Helm 部署 TDengine 集群
description: 使用 Helm 部署 TDengine 集群的详细指南
---
Helm 是 Kubernetes 的包管理器,上一节使用 Kubernetes 部署 TDengine 集群的操作已经足够简单,但 Helm 依然可以提供更强大的能力。
@ -171,70 +172,19 @@ taoscfg:
TAOS_REPLICA: "1"
# number of days per DB file
# TAOS_DAYS: "10"
# number of days to keep DB file, default is 10 years.
#TAOS_KEEP: "3650"
# cache block size (Mbyte)
#TAOS_CACHE: "16"
# number of cache blocks per vnode
#TAOS_BLOCKS: "6"
# minimum rows of records in file block
#TAOS_MIN_ROWS: "100"
# maximum rows of records in file block
#TAOS_MAX_ROWS: "4096"
#
# TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core
#TAOS_NUM_OF_THREADS_PER_CORE: "1.0"
# TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
#TAOS_NUM_OF_RPC_THREADS: "2"
#
# TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
#TAOS_NUM_OF_COMMIT_THREADS: "4"
#
# TAOS_RATIO_OF_QUERY_CORES:
# the proportion of total CPU cores available for query processing
# 2.0: the query threads will be set to double of the CPU cores.
# 1.0: all CPU cores are available for query processing [default].
# 0.5: only half of the CPU cores are available for query.
# 0.0: only one core available.
#TAOS_RATIO_OF_QUERY_CORES: "1.0"
#
# TAOS_KEEP_COLUMN_NAME:
# the last_row/first/last aggregator will not change the original column name in the result fields
#TAOS_KEEP_COLUMN_NAME: "0"
# enable/disable backuping vnode directory when removing vnode
#TAOS_VNODE_BAK: "1"
# enable/disable installation / usage report
#TAOS_TELEMETRY_REPORTING: "1"
# enable/disable load balancing
#TAOS_BALANCE: "1"
# max timer control blocks
#TAOS_MAX_TMR_CTRL: "512"
# time interval of system monitor, seconds
#TAOS_MONITOR_INTERVAL: "30"
# number of seconds allowed for a dnode to be offline, for cluster only
#TAOS_OFFLINE_THRESHOLD: "8640000"
# RPC re-try timer, millisecond
#TAOS_RPC_TIMER: "1000"
# RPC maximum time for ack, seconds.
#TAOS_RPC_MAX_TIME: "600"
# time interval of dnode status reporting to mnode, seconds, for cluster only
#TAOS_STATUS_INTERVAL: "1"
@ -245,37 +195,7 @@ taoscfg:
#TAOS_MIN_SLIDING_TIME: "10"
# minimum time window, milli-second
#TAOS_MIN_INTERVAL_TIME: "10"
# maximum delay before launching a stream computation, milli-second
#TAOS_MAX_STREAM_COMP_DELAY: "20000"
# maximum delay before launching a stream computation for the first time, milli-second
#TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000"
# retry delay when a stream computation fails, milli-second
#TAOS_RETRY_STREAM_COMP_DELAY: "10"
# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9
#TAOS_STREAM_COMP_DELAY_RATIO: "0.1"
# max number of vgroups per db, 0 means configured automatically
#TAOS_MAX_VGROUPS_PER_DB: "0"
# max number of tables per vnode
#TAOS_MAX_TABLES_PER_VNODE: "1000000"
# the number of acknowledgments required for successful data writing
#TAOS_QUORUM: "1"
# enable/disable compression
#TAOS_COMP: "2"
# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync
#TAOS_WAL_LEVEL: "1"
# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away
#TAOS_FSYNC: "3000"
#TAOS_MIN_INTERVAL_TIME: "1"
# the compressed rpc message, option:
# -1 (no compression)
@ -283,17 +203,8 @@ taoscfg:
# > 0 (rpc message body which larger than this value will be compressed)
#TAOS_COMPRESS_MSG_SIZE: "-1"
# max length of an SQL
#TAOS_MAX_SQL_LENGTH: "1048576"
# the maximum number of records allowed for super table time sorting
#TAOS_MAX_NUM_OF_ORDERED_RES: "100000"
# max number of connections allowed in dnode
#TAOS_MAX_SHELL_CONNS: "5000"
# max number of connections allowed in client
#TAOS_MAX_CONNECTIONS: "5000"
#TAOS_MAX_SHELL_CONNS: "50000"
# stop writing logs when the disk size of the log folder is less than this value
#TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
@ -313,21 +224,8 @@ taoscfg:
# enable/disable system monitor
#TAOS_MONITOR: "1"
# enable/disable recording the SQL statements via restful interface
#TAOS_HTTP_ENABLE_RECORD_SQL: "0"
# number of threads used to process http requests
#TAOS_HTTP_MAX_THREADS: "2"
# maximum number of rows returned by the restful interface
#TAOS_RESTFUL_ROW_LIMIT: "10240"
# The following parameter is used to limit the maximum number of lines in log files.
# max number of lines per log filters
# numOfLogLines 10000000
# enable/disable async log
#TAOS_ASYNC_LOG: "0"
#TAOS_ASYNC_LOG: "1"
#
# time of keeping log files, days
@ -344,25 +242,8 @@ taoscfg:
# debug flag for all log type, take effect when non-zero value\
#TAOS_DEBUG_FLAG: "143"
# enable/disable recording the SQL in taos client
#TAOS_ENABLE_RECORD_SQL: "0"
# generate core file when service crash
#TAOS_ENABLE_CORE_FILE: "1"
# maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
#TAOS_MAX_BINARY_DISPLAY_WIDTH: "30"
# enable/disable stream (continuous query)
#TAOS_STREAM: "1"
# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode
#TAOS_RETRIEVE_BLOCKING_MODEL: "0"
# the maximum allowed query buffer size in MB during query processing for each data node
# -1 no limit (default)
# 0 no query allowed, queries are disabled
#TAOS_QUERY_BUFFER_SIZE: "-1"
```
## 扩容

View File

@ -1,5 +1,7 @@
---
sidebar_label: 部署集群
title: 部署集群
description: 部署 TDengine 集群的多种方式
---
TDengine 支持集群,提供水平扩展的能力。如果需要获得更高的处理能力,只需要多增加节点即可。TDengine 采用虚拟节点技术,将一个节点虚拟化为多个虚拟节点,以实现负载均衡。同时,TDengine 可以将多个节点上的虚拟节点组成虚拟节点组,通过多副本机制,以保证系统的高可用。TDengine 的集群功能完全开源。

View File

@ -11,7 +11,7 @@ description: "TDengine 支持的数据类型: 时间戳、浮点型、JSON 类
- 时间格式为 `YYYY-MM-DD HH:mm:ss.MS`,默认时间分辨率为毫秒。比如:`2017-08-12 18:25:58.128`
- 内部函数 now 是客户端的当前时间
- 插入记录时,如果时间戳为 now,插入数据时使用提交这条记录的客户端的当前时间
- Epoch Time时间戳也可以是一个长整数表示从格林威治时间 1970-01-01 00:00:00.000 (UTC/GMT) 开始的毫秒数(相应地,如果所在 Database 的时间精度设置为“微秒”,则长整型格式的时间戳含义也就对应于从格林威治时间 1970-01-01 00:00:00.000 (UTC/GMT) 开始的微秒数;纳秒精度逻辑类似。)
- Epoch Time时间戳也可以是一个长整数表示从 UTC 时间 1970-01-01 00:00:00 开始的毫秒数。相应地,如果所在 Database 的时间精度设置为“微秒”,则长整型格式的时间戳含义也就对应于从 UTC 时间 1970-01-01 00:00:00 开始的微秒数;纳秒精度逻辑类似。
- 时间可以加减,比如 now-2h,表明查询时刻向前推 2 个小时(最近 2 小时)。数字后面的时间单位可以是 b(纳秒)、u(微秒)、a(毫秒)、s(秒)、m(分)、h(小时)、d(天)、w(周)。 比如 `select * from t1 where ts > now-2w and ts <= now-1w`,表示查询两周前整整一周的数据。在指定降采样操作(down sampling)的时间窗口(interval)时,时间单位还可以使用 n(自然月)和 y(自然年)。
TDengine 缺省的时间戳精度是毫秒,但通过在 `CREATE DATABASE` 时传递的 PRECISION 参数也可以支持微秒和纳秒。

View File

@ -1,5 +1,7 @@
---
title: 表管理
sidebar_label: 表
description: 对表的各种管理操作
---
## 创建表

View File

@ -1,6 +1,7 @@
---
sidebar_label: 超级表管理
title: 超级表 STable 管理
description: 对超级表的各种管理操作
---
## 创建超级表

View File

@ -1,6 +1,7 @@
---
sidebar_label: 数据写入
title: 数据写入
description: 写入数据的详细语法
---
## 写入语法

View File

@ -1,6 +1,7 @@
---
sidebar_label: 数据查询
title: 数据查询
description: 查询数据的详细语法
---
## 查询语法

View File

@ -1,6 +1,7 @@
---
sidebar_label: 函数
title: 函数
description: TDengine 支持的函数列表
toc_max_heading_level: 4
---

View File

@ -1,6 +1,7 @@
---
sidebar_label: 时序数据特色查询
title: 时序数据特色查询
description: TDengine 提供的时序数据特有的查询功能
---
TDengine 是专为时序数据而研发的大数据平台,存储和计算都针对时序数据的特点进行了量身定制,在支持标准 SQL 的基础之上,还提供了一系列贴合时序业务场景的特色查询语法,极大地方便时序场景的应用开发。

View File

@ -1,6 +1,7 @@
---
sidebar_label: 数据订阅
title: 数据订阅
description: TDengine 消息队列提供的数据订阅功能
---
TDengine 3.0.0.0 开始对消息队列做了大幅的优化和增强,以简化用户的解决方案。

View File

@ -1,6 +1,7 @@
---
sidebar_label: 流式计算
title: 流式计算
description: 流式计算的相关 SQL 的详细语法
---

View File

@ -1,6 +1,7 @@
---
sidebar_label: 运算符
title: 运算符
description: TDengine 支持的所有运算符
---
## 算术运算符

View File

@ -1,6 +1,7 @@
---
sidebar_label: JSON 类型使用说明
title: JSON 类型使用说明
description: 对 JSON 类型如何使用的详细说明
---

View File

@ -1,5 +1,7 @@
---
title: 转义字符说明
sidebar_label: 转义字符
description: TDengine 中使用转义字符的详细规则
---
## 转义字符表

View File

@ -1,6 +1,7 @@
---
sidebar_label: 命名与边界限制
title: 命名与边界限制
description: 合法字符集和命名中的限制规则
---
## 名称命名规则

View File

@ -1,6 +1,7 @@
---
sidebar_label: 保留关键字
title: TDengine 保留关键字
description: TDengine 保留关键字的详细列表
---
## 保留关键字

View File

@ -1,6 +1,7 @@
---
sidebar_label: 集群管理
title: 集群管理
description: 管理集群的 SQL 命令的详细解析
---
组成 TDengine 集群的物理实体是 dnode (data node 的缩写),它是一个运行在操作系统之上的进程。在 dnode 中可以建立负责时序数据存储的 vnode (virtual node),在多节点集群环境下,当某个数据库的 replica 为 3 时,该数据库中的每个 vgroup 由 3 个 vnode 组成;当数据库的 replica 为 1 时,该数据库中的每个 vgroup 由 1 个 vnode 组成。如果要想配置某个数据库为多副本,则集群中的 dnode 数量至少为 3。在 dnode 中还可以创建 mnode (management node),单个集群中最多可以创建三个 mnode。在 TDengine 3.0.0.0 中,为了支持存算分离,引入了一种新的逻辑节点 qnode (query node),qnode 和 vnode 既可以共存在一个 dnode 中,也可以完全分离在不同的 dnode 上。

View File

@ -1,6 +1,7 @@
---
sidebar_label: 元数据
title: 存储元数据的 Information_Schema 数据库
description: Information_Schema 数据库中存储了系统中所有的元数据信息
---
TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数据库元数据、数据库系统信息和状态的访问,例如数据库或表的名称,当前执行的 SQL 语句等。该数据库存储有关 TDengine 维护的所有其他数据库的信息。它包含多个只读表。实际上,这些表都是视图,而不是基表,因此没有与它们关联的文件。所以对这些表只能查询,不能进行 INSERT 等写入操作。`INFORMATION_SCHEMA` 数据库旨在以一种更一致的方式来提供对 TDengine 支持的各种 SHOW 语句(如 SHOW TABLES、SHOW DATABASES)所提供的信息的访问。与 SHOW 语句相比,使用 SELECT ... FROM INFORMATION_SCHEMA.tablename 具有以下优点:

View File

@ -1,6 +1,7 @@
---
sidebar_label: 统计数据
title: 存储统计数据的 Performance_Schema 数据库
description: Performance_Schema 数据库中存储了系统中的各种统计信息
---
TDengine 3.0 版本开始提供一个内置数据库 `performance_schema`,其中存储了与性能有关的统计数据。本节详细介绍其中的表和表结构。

View File

@ -1,9 +1,10 @@
---
sidebar_label: SHOW 命令
title: 使用 SHOW 命令查看系统元数据
description: SHOW 命令的完整列表
---
除了使用 `select` 语句查询 `INFORMATION_SCHEMA` 数据库中的表获得系统中的各种元数据、系统信息和状态之外,也可以用 `SHOW` 命令来实现同样的目的
SHOW 命令可以用来获取简要的系统信息。若想获取系统中详细的各种元数据、系统信息和状态,请使用 select 语句查询 INFORMATION_SCHEMA 数据库中的表。
## SHOW ACCOUNTS

View File

@ -1,6 +1,7 @@
---
sidebar_label: 权限管理
title: 权限管理
description: 企业版中才具有的权限管理功能
---
本节讲述如何在 TDengine 中进行权限管理的相关操作。

View File

@ -1,6 +1,7 @@
---
sidebar_label: 自定义函数
title: 用户自定义函数
description: 使用 UDF 的详细指南
---
除了 TDengine 的内置函数以外,用户还可以编写自己的函数逻辑,并加入 TDengine 系统中。

View File

@ -1,6 +1,7 @@
---
sidebar_label: 索引
title: 使用索引
description: 索引功能的使用细节
---
TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 FULLTEXT 索引。

View File

@ -1,6 +1,7 @@
---
sidebar_label: 异常恢复
title: 异常恢复
description: 如何终止出现问题的连接、查询和事务以使系统恢复正常
---
在一个复杂的应用场景中,连接和查询任务等有可能进入一种错误状态,或者耗时过长、迟迟无法结束,此时需要有能够终止这些连接或任务的方法。

View File

@ -1,6 +1,7 @@
---
title: TDinsight - 基于Grafana的TDengine零依赖监控解决方案
title: TDinsight
sidebar_label: TDinsight
description: 基于Grafana的TDengine零依赖监控解决方案
---
TDinsight 是使用监控数据库和 [Grafana] 对 TDengine 进行监控的解决方案。

View File

@@ -698,122 +698,123 @@ The valid value of charset is UTF-8.
| 45 | numOfVnodeFetchThreads | No | Yes |
| 46 | numOfVnodeWriteThreads | No | Yes |
| 47 | numOfVnodeSyncThreads | No | Yes |
| 48 | numOfVnodeRsmaThreads | No | Yes |
| 49 | numOfQnodeQueryThreads | No | Yes |
| 50 | numOfQnodeFetchThreads | No | Yes |
| 51 | numOfSnodeSharedThreads | No | Yes |
| 52 | numOfSnodeUniqueThreads | No | Yes |
| 53 | rpcQueueMemoryAllowed | No | Yes |
| 54 | logDir | Yes | Yes |
| 55 | minimalLogDirGB | Yes | Yes |
| 56 | numOfLogLines | Yes | Yes |
| 57 | asyncLog | Yes | Yes |
| 58 | logKeepDays | Yes | Yes |
| 59 | debugFlag | Yes | Yes |
| 60 | tmrDebugFlag | Yes | Yes |
| 61 | uDebugFlag | Yes | Yes |
| 62 | rpcDebugFlag | Yes | Yes |
| 63 | jniDebugFlag | Yes | Yes |
| 64 | qDebugFlag | Yes | Yes |
| 65 | cDebugFlag | Yes | Yes |
| 66 | dDebugFlag | Yes | Yes |
| 67 | vDebugFlag | Yes | Yes |
| 68 | mDebugFlag | Yes | Yes |
| 69 | wDebugFlag | Yes | Yes |
| 70 | sDebugFlag | Yes | Yes |
| 71 | tsdbDebugFlag | Yes | Yes |
| 72 | tqDebugFlag | No | Yes |
| 73 | fsDebugFlag | Yes | Yes |
| 74 | udfDebugFlag | No | Yes |
| 75 | smaDebugFlag | No | Yes |
| 76 | idxDebugFlag | No | Yes |
| 77 | tdbDebugFlag | No | Yes |
| 78 | metaDebugFlag | No | Yes |
| 79 | timezone | Yes | Yes |
| 80 | locale | Yes | Yes |
| 81 | charset | Yes | Yes |
| 82 | udf | Yes | Yes |
| 83 | enableCoreFile | Yes | Yes |
| 84 | arbitrator | Yes | No |
| 85 | numOfThreadsPerCore | Yes | No |
| 86 | numOfMnodes | Yes | No |
| 87 | vnodeBak | Yes | No |
| 88 | balance | Yes | No |
| 89 | balanceInterval | Yes | No |
| 90 | offlineThreshold | Yes | No |
| 91 | role | Yes | No |
| 92 | dnodeNopLoop | Yes | No |
| 93 | keepTimeOffset | Yes | No |
| 94 | rpcTimer | Yes | No |
| 95 | rpcMaxTime | Yes | No |
| 96 | rpcForceTcp | Yes | No |
| 97 | tcpConnTimeout | Yes | No |
| 98 | syncCheckInterval | Yes | No |
| 99 | maxTmrCtrl | Yes | No |
| 100 | monitorReplica | Yes | No |
| 101 | smlTagNullName | Yes | No |
| 102 | keepColumnName | Yes | No |
| 103 | ratioOfQueryCores | Yes | No |
| 104 | maxStreamCompDelay | Yes | No |
| 105 | maxFirstStreamCompDelay | Yes | No |
| 106 | retryStreamCompDelay | Yes | No |
| 107 | streamCompDelayRatio | Yes | No |
| 108 | maxVgroupsPerDb | Yes | No |
| 109 | maxTablesPerVnode | Yes | No |
| 110 | minTablesPerVnode | Yes | No |
| 111 | tableIncStepPerVnode | Yes | No |
| 112 | cache | Yes | No |
| 113 | blocks | Yes | No |
| 114 | days | Yes | No |
| 115 | keep | Yes | No |
| 116 | minRows | Yes | No |
| 117 | maxRows | Yes | No |
| 118 | quorum | Yes | No |
| 119 | comp | Yes | No |
| 120 | walLevel | Yes | No |
| 121 | fsync | Yes | No |
| 122 | replica | Yes | No |
| 123 | partitions | Yes | No |
| 124 | quorum | Yes | No |
| 125 | update | Yes | No |
| 126 | cachelast | Yes | No |
| 127 | maxSQLLength | Yes | No |
| 128 | maxWildCardsLength | Yes | No |
| 129 | maxRegexStringLen | Yes | No |
| 130 | maxNumOfOrderedRes | Yes | No |
| 131 | maxConnections | Yes | No |
| 132 | mnodeEqualVnodeNum | Yes | No |
| 133 | http | Yes | No |
| 134 | httpEnableRecordSql | Yes | No |
| 135 | httpMaxThreads | Yes | No |
| 136 | restfulRowLimit | Yes | No |
| 137 | httpDbNameMandatory | Yes | No |
| 138 | httpKeepAlive | Yes | No |
| 139 | enableRecordSql | Yes | No |
| 140 | maxBinaryDisplayWidth | Yes | No |
| 141 | stream | Yes | No |
| 142 | retrieveBlockingModel | Yes | No |
| 143 | tsdbMetaCompactRatio | Yes | No |
| 144 | defaultJSONStrType | Yes | No |
| 145 | walFlushSize | Yes | No |
| 146 | keepTimeOffset | Yes | No |
| 147 | flowctrl | Yes | No |
| 148 | slaveQuery | Yes | No |
| 149 | adjustMaster | Yes | No |
| 150 | topicBinaryLen | Yes | No |
| 151 | telegrafUseFieldNum | Yes | No |
| 152 | deadLockKillQuery | Yes | No |
| 153 | clientMerge | Yes | No |
| 154 | sdbDebugFlag | Yes | No |
| 155 | odbcDebugFlag | Yes | No |
| 156 | httpDebugFlag | Yes | No |
| 157 | monDebugFlag | Yes | No |
| 158 | cqDebugFlag | Yes | No |
| 159 | shortcutFlag | Yes | No |
| 160 | probeSeconds | Yes | No |
| 161 | probeKillSeconds | Yes | No |
| 162 | probeInterval | Yes | No |
| 163 | lossyColumns | Yes | No |
| 164 | fPrecision | Yes | No |
| 165 | dPrecision | Yes | No |
| 166 | maxRange | Yes | No |
| 167 | range | Yes | No |

View File

@@ -1,5 +1,6 @@
---
title: Reference Manual
description: Detailed descriptions of each component of TDengine
---
The reference manual is the most detailed introduction to TDengine itself, its language connectors, and the tools that ship with it.

View File

@@ -47,7 +47,23 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
<Tabs>
<TabItem label="apt-get uninstall" value="aptremove">
The uninstall command is as follows:
```
$ sudo apt-get remove tdengine
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
tdengine
0 upgraded, 0 newly installed, 1 to remove and 18 not upgraded.
After this operation, 68.3 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 135625 files and directories currently installed.)
Removing tdengine (3.0.0.0) ...
TDengine is removed successfully!
```
</TabItem>
<TabItem label="Deb uninstall" value="debuninst">
@@ -57,7 +73,7 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
```
$ sudo dpkg -r tdengine
(Reading database ... 120119 files and directories currently installed.)
Removing tdengine (3.0.0.0) ...
TDengine is removed successfully!
```

View File

@@ -1,6 +1,7 @@
---
sidebar_label: Capacity Planning
title: Capacity Planning
description: How to plan the physical resources required by a TDengine cluster
---
When building an IoT big data platform with TDengine, compute and storage resources must be planned according to the business scenario. The sections below discuss the memory, CPU, and disk space the system needs.
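As a hedged back-of-the-envelope illustration of disk sizing (the numbers below are assumptions, not figures from this manual), the raw data volume per day can be estimated as

$$
\text{RawSize/day} = N_{\text{tables}} \times R_{\text{rows/day}} \times W_{\text{bytes/row}}
$$

For example, 100,000 devices each writing one 100-byte row per second generate roughly $10^{5} \times 86400 \times 100 \approx 0.86$ TB of raw data per day; the space actually consumed is then reduced by compression and multiplied by the replica count.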

View File

@@ -1,5 +1,7 @@
---
title: Fault Tolerance and Disaster Recovery
sidebar_label: Fault Tolerance and Disaster Recovery
description: TDengine's fault tolerance and disaster recovery features
---
## Fault Tolerance

View File

@@ -1,5 +1,6 @@
---
title: Data Import
description: How to import external data into TDengine
---
TDengine provides several convenient ways to import data: from a script file, from a data file, and with the taosdump tool, which imports files that taosdump itself exported.
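For instance (a hedged sketch; the file path is an assumption), a script file can be replayed from the TDengine CLI:

```sql
-- Inside the taos CLI: execute the SQL statements in a script file line by line.
source /tmp/import.sql;
```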

View File

@@ -1,5 +1,6 @@
---
title: Data Export
description: How to export data from TDengine
---
To make data export convenient, TDengine offers two methods: exporting table by table, or exporting with taosdump.
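A hedged sketch of the per-table export (the table and file names are assumptions):

```sql
-- Inside the taos CLI: write a query result to a local file.
SELECT * FROM d1001 >> /tmp/d1001.csv;
```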

View File

@@ -1,5 +1,6 @@
---
title: System Monitoring
description: Monitoring the operational status of TDengine
---
Through [taosKeeper](/reference/taosKeeper/), TDengine periodically writes server metrics such as CPU, memory, disk space, bandwidth, request counts, and disk read/write speed into a designated database. TDengine also logs important system operations (such as logins and the creation or deletion of databases) as well as various error and alert messages. A system administrator can inspect this database directly from the CLI or view the collected metrics through a graphical web interface.
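For example (hedged: the monitoring database name `log` and the table `cluster_info` are assumptions about taosKeeper's defaults):

```sql
-- Inspect the most recently collected cluster-level metrics.
SELECT last_row(*) FROM log.cluster_info;
```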

View File

@@ -1,5 +1,6 @@
---
title: Diagnostics and Other Topics
description: Troubleshooting techniques for common problems
---
## Network Connection Diagnostics

View File

@@ -1,6 +1,7 @@
---
sidebar_label: Grafana
title: Grafana
description: A detailed guide to using Grafana with TDengine
---
import Tabs from "@theme/Tabs";

View File

@@ -1,6 +1,7 @@
---
sidebar_label: Prometheus
title: Prometheus
description: Accessing TDengine from Prometheus
---
import Prometheus from "../14-reference/_prometheus.mdx"

View File

@@ -1,6 +1,7 @@
---
sidebar_label: Telegraf
title: Writing with Telegraf
description: Writing data into TDengine with Telegraf
---
import Telegraf from "../14-reference/_telegraf.mdx"

View File

@@ -1,6 +1,7 @@
---
sidebar_label: collectd
title: Writing with collectd
description: Writing data into TDengine with collectd
---
import CollectD from "../14-reference/_collectd.mdx"

View File

@@ -1,6 +1,7 @@
---
sidebar_label: StatsD
title: Writing Directly with StatsD
description: Writing data into TDengine with StatsD
---
import StatsD from "../14-reference/_statsd.mdx"

View File

@@ -1,6 +1,7 @@
---
sidebar_label: icinga2
title: Writing with icinga2
description: Writing data into TDengine with icinga2
---
import Icinga2 from "../14-reference/_icinga2.mdx"

View File

@@ -1,6 +1,7 @@
---
sidebar_label: TCollector
title: Writing with TCollector
description: Writing data into TDengine with TCollector
---
import TCollector from "../14-reference/_tcollector.mdx"

View File

@@ -1,6 +1,7 @@
---
sidebar_label: EMQX Broker
title: Writing with EMQX Broker
description: Writing data into TDengine with EMQX Broker
---
MQTT is a popular IoT data transfer protocol, and [EMQX](https://github.com/emqx/emqx) is an open-source MQTT broker. Without writing any code, a simple "rule" configured in the EMQX Dashboard is enough to write MQTT data directly into TDengine. EMQX supports saving data to TDengine by sending it to a web service, and the enterprise edition also provides a native TDengine driver for direct writes.

View File

@@ -1,6 +1,7 @@
---
sidebar_label: HiveMQ Broker
title: Writing with HiveMQ Broker
description: Writing data into TDengine with HiveMQ Broker
---
[HiveMQ](https://www.hivemq.com/) is an MQTT broker with free personal and enterprise editions, used mainly for enterprise and emerging machine-to-machine (M2M) communication and internal transport, with scalability, manageability, and security features. HiveMQ offers an open-source plugin development kit, and data can be saved to TDengine through the HiveMQ extension - TDengine. For details, see the [HiveMQ extension - TDengine documentation](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README.md).

View File

@@ -1,6 +1,7 @@
---
sidebar_label: Kafka
title: TDengine Kafka Connector
description: A detailed guide to using TDengine Kafka Connector
---
TDengine Kafka Connector consists of two plugins: TDengine Source Connector and TDengine Sink Connector. With only a simple configuration file, users can synchronize the data of a specified Kafka topic into TDengine (in batches or in real time), or synchronize the data of a specified TDengine database into Kafka (in batches or in real time).

View File

@@ -1,6 +1,7 @@
---
sidebar_label: Architecture
title: Architecture
description: TDengine's architecture design, covering clusters, storage, caching and persistence, data backup, tiered storage, and more
---
## Clusters and Basic Logical Units

View File

@@ -1,5 +1,6 @@
---
title: High Availability
description: TDengine's high-availability design
---
## High Availability of Vnodes

Some files were not shown because too many files have changed in this diff.