From 309d9860284fcee8c4f4119d08771505defdc878 Mon Sep 17 00:00:00 2001
From: danielclow <106956386+danielclow@users.noreply.github.com>
Date: Mon, 22 Aug 2022 12:44:24 +0800
Subject: [PATCH 01/29] doc: english version of config

---
 docs/en/14-reference/12-config/index.md | 1321 +++++++++--------
 1 file changed, 510 insertions(+), 811 deletions(-)

diff --git a/docs/en/14-reference/12-config/index.md b/docs/en/14-reference/12-config/index.md
index b6b535429b..532e16de5c 100644
--- a/docs/en/14-reference/12-config/index.md
+++ b/docs/en/14-reference/12-config/index.md
@@ -1,16 +1,13 @@
---
-sidebar_label: Configuration
title: Configuration Parameters
description: "Configuration parameters for client and server in TDengine"
---

-In this chapter, all the configuration parameters on both server and client side are described thoroughly.
-
## Configuration File on Server Side

On the server side, the actual service of TDengine is provided by the executable `taosd`, whose parameters can be configured in the file `taos.cfg` to meet the requirements of different use cases. The default location of `taos.cfg` is `/etc/taos`, but it can be changed by passing the `-c` parameter on the CLI of `taosd`. For example, the configuration file can be placed under `/home/user` and used as shown below:

-```bash
+```
taosd -c /home/user
```

@@ -24,8 +21,6 @@ taosd -C

TDengine CLI `taos` is the tool for users to interact with TDengine. It can share the same configuration file as `taosd` or use a separate configuration file. When launching `taos`, the `-c` parameter can be used to specify the location of its configuration file. For example, `taos -c /home/cfg` means `/home/cfg/taos.cfg` will be used. If `-c` is not used, the default location of the configuration file is `/etc/taos`. For more details, run `taos --help`.

-From version 2.0.10.0 below commands can be used to show the configuration parameters of the client side.
-
```bash
taos -C
```

```bash
taos --dump-config
```

# Configuration Parameters

+:::note
+The parameters described in this document are grouped by the effect that they have on the system.
+
+:::
+
:::note
`taosd` needs to be restarted for the parameters changed in the configuration file to take effect.
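As a sketch of the typical workflow on a systemd-managed Linux host (the specific parameter you edit is up to you), a configuration change followed by a restart might look like this:

```bash
# edit the configuration file in its default location
vi /etc/taos/taos.cfg

# restart the service so the modified parameters take effect
systemctl restart taosd
```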
@@ -45,19 +45,19 @@ taos --dump-config

### firstEp

| Attribute | Description |
| -------- | -------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | The end point of the first dnode in the cluster to be connected to when `taosd` or `taos` is started |
| Default | localhost:6030 |

### secondEp

| Attribute | Description |
| -------- | ------------------------------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | The end point of the second dnode to be connected to if the firstEp is not available when `taosd` or `taos` is started |
| Default | None |

### fqdn

| Attribute | Description |
| ------------- | ------------------------------------------------------------------------ |
| Applicable | Server Only |
| Meaning | The FQDN of the host where `taosd` will be started. It can be an IP address |
| Default Value | The first hostname configured for the host |
| Note | It should be within 96 bytes |

### serverPort

| Attribute | Description |
| -------- | ----------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The port for external access after `taosd` is started |
| Default Value | 6030 |
-| Note | REST service is provided by `taosd` before 2.4.0.0 but by `taosAdapter` after 2.4.0.0, the default port of REST service is 6041 |

:::note
-TDengine uses 13 continuous ports, both TCP and UDP, starting with the port specified by `serverPort`. You should ensure, in your firewall rules, that these ports are kept open. Below table describes the ports used by TDengine in details.
-
+- Ensure that your firewall rules do not block TCP port 6042 on any host in the cluster. The following table describes the ports used by TDengine in detail.
:::

| Protocol | Default Port | Description | How to configure |
| :------- | :----------- | :----------------------------------------------- | :--------------------------------------------------------------------------------------------- |
-| TCP | 6030 | Communication between client and server | serverPort |
-| TCP | 6035 | Communication among server nodes in cluster | serverPort+5 |
-| TCP | 6040 | Data syncup among server nodes in cluster | serverPort+10 |
+| TCP | 6030 | Communication between client and server. In a multi-node cluster, communication between nodes. | serverPort |
| TCP | 6041 | REST connection between client and server | Prior to 2.4.0.0: serverPort+11; After 2.4.0.0 refer to [taosAdapter](/reference/taosadapter/) |
-| TCP | 6042 | Service Port of Arbitrator | The parameter of Arbitrator |
-| TCP | 6043 | Service Port of TaosKeeper | The parameter of TaosKeeper |
-| TCP | 6044 | Data access port for StatsD | refer to [taosAdapter](/reference/taosadapter/) |
-| UDP | 6045 | Data access for statsd | refer to [taosAdapter](/reference/taosadapter/) |
-| TCP | 6060 | Port of Monitoring Service in Enterprise version | |
-| UDP | 6030-6034 | Communication between client and server | serverPort |
-| UDP | 6035-6039 | Communication among server nodes in cluster | serverPort |
+| TCP | 6043 | Service Port of TaosKeeper | The parameter of TaosKeeper |
+| TCP | 6044 | Data access port for StatsD | Configurable through taosAdapter parameters. |
+| UDP | 6045 | Data access for statsd | Configurable through taosAdapter parameters. |
+| TCP | 6060 | Port of Monitoring Service in Enterprise version | |

### maxShellConns

| Attribute | Description |
| ------------- | ----------------------------------------------- |
| Applicable | Server Only |
| Meaning | The maximum number of shell connections |
| Value Range | 10-50000000 |
| Default Value | 5000 |

### maxConnections

| Attribute | Description |
| ------------- | ----------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The maximum number of connections allowed by a database |
| Value Range | 1-100000 |
| Default Value | 5000 |
| Note | The maximum number of worker threads on the client side is maxConnections/100 |

### rpcForceTcp

| Attribute | Description |
| ------------- | ------------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | TCP is used by force |
| Value Range | 0: disabled 1: enabled |
| Default Value | 0 |
| Note | It's suggested to enable this option if the network quality is poor |

## Monitoring Parameters

### monitor

| Attribute | Description |
| -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server only |
| Meaning | The switch for monitoring inside server. The main purpose of monitoring is to collect information about load on physical nodes, including CPU usage, memory usage, disk usage, and network bandwidth. Monitoring information is sent over HTTP to the taosKeeper service specified by `monitorFqdn` and `monitorPort`. |
| Value Range | 0: monitoring disabled, 1: monitoring enabled |
| Default | 1 |

### monitorFqdn

| Attribute | Description |
| -------- | -------------------------- |
| Applicable | Server Only |
| Meaning | FQDN of taosKeeper monitoring service |
| Default | None |

### monitorPort

| Attribute | Description |
| -------- | --------------------------- |
| Applicable | Server Only |
| Meaning | Port of taosKeeper monitoring service |
| Default Value | 6043 |

### monitorInterval

| Attribute | Description |
| -------- | -------------------------------------------- |
| Applicable | Server Only |
| Meaning | The interval of collecting system workload |
| Unit | second |
| Value Range | 1-200000 |
| Default Value | 30 |

### telemetryReporting

| Attribute | Description |
| -------- | ---------------------------------------- |
| Applicable | Server Only |
| Meaning | Switch for allowing TDengine to collect and report service usage information |
| Value Range | 0: Not allowed; 1: Allowed |
| Default Value | 1 |

## Query Parameters

### queryPolicy

| Attribute | Description |
| -------- | ----------------------------- |
| Applicable | Client only |
| Meaning | Execution policy for query statements |
| Unit | None |
| Default | 1 |
| Notes | 1: Run queries on vnodes and not on qnodes; 2: Run subtasks without scan operators on qnodes and subtasks with scan operators on vnodes; 3: Only run scan operators on vnodes and run all other operators on qnodes |

### querySmaOptimize

| Attribute | Description |
| -------- | -------------------- |
| Applicable | Client only |
| Meaning | SMA index optimization policy |
| Unit | None |
| Default Value | 0 |
| Notes | 0: Disable SMA indexing and perform all queries on non-indexed data; 1: Enable SMA indexing and answer suitable queries from precomputed results |

### maxNumOfDistinctRes

| Attribute | Description |
| -------- | -------------------------------- |
| Applicable | Server Only |
| Meaning | The maximum number of distinct rows returned |
| Value Range | [100,000 - 100,000,000] |
| Default Value | 100,000 |

## Locale Parameters

### timezone

| Attribute | Description |
| -------- | ------------------------------ |
| Applicable | Server and Client |
| Meaning | TimeZone |
| Default Value | TimeZone configured in the host |

:::info
To handle data insertion and data queries from multiple time zones, Unix timestamps are used and stored in TDengine. Timestamps generated at the same moment in different time zones have the same Unix timestamp value. Note that Unix timestamps are converted and recorded on the client side. To make sure the time on the client side can be converted to a Unix timestamp correctly, the timezone must be set properly.

On Linux systems, TDengine clients automatically obtain the timezone from the host. Alternatively, the timezone can be configured explicitly in the configuration file `taos.cfg`. For example:

```
-timezone UTC-7
+timezone UTC-8
timezone GMT-8
timezone Asia/Shanghai
```

@@ -240,11 +237,11 @@ To avoid the problems of using time strings, Unix timestamp can be used directly
| Default Value | Locale configured in host |

:::info
A specific type "nchar" is provided in TDengine to store non-ASCII characters such as Chinese, Japanese, and Korean. The characters to be stored in nchar type are firstly encoded in UCS4-LE before sending to server side. Note that the correct encoding is determined by the user. To store non-ASCII characters correctly, the encoding format of the client side needs to be set properly.

The characters input on the client side are encoded using the default system encoding, which is UTF-8 on Linux, or GB18030 or GBK on some systems in Chinese, POSIX in docker, CP936 on Windows in Chinese.
The encoding of the operating system in use must be set correctly so that the characters in nchar type can be converted to UCS4-LE.

The locale definition standard on Linux is `<language>_<region>.<charset>`; for example, in "zh_CN.UTF-8", "zh" means Chinese, "CN" means China mainland, and "UTF-8" means charset. The charset indicates how to display the characters. On Linux and Mac OSX, the charset can be set by locale in the system. On Windows systems, another configuration parameter `charset` must be used to configure the charset because the locale used on Windows is not POSIX standard. Of course, `charset` can also be used on Linux to specify the charset.

:::

@@ -257,864 +254,566 @@ The locale definition standard on Linux is `<language>_<region>.
| Default Value | charset set in the system |

:::info
On Linux, if `charset` is not set in `taos.cfg`, when `taos` is started, the charset is obtained from the system locale. If obtaining the charset from the system locale fails, `taos` would fail to start.

So on Linux systems, if the system locale is set properly, it's not necessary to set `charset` in `taos.cfg`. For example:

```
locale zh_CN.UTF-8
```

-On a Linux system, if the charset contained in `locale` is not consistent with that set by `charset`, the later setting in the configuration file takes precedence.
-
-```title="Effective charset is GBK"
-locale zh_CN.UTF-8
-charset GBK
-```
-
-```title="Effective charset is UTF-8"
-charset GBK
-locale zh_CN.UTF-8
-```
-
On Windows systems, it's not possible to obtain the charset from the system locale. If it's not set in the configuration file `taos.cfg`, it defaults to CP936, which is equivalent to the following setting in `taos.cfg`:

```
charset CP936
```

Refer to the documentation for your operating system before changing the charset.

On a Linux system, if the charset contained in `locale` is not consistent with that set by `charset`, the setting that appears later in the configuration file takes precedence.

```
locale zh_CN.UTF-8
charset GBK
```

The charset that takes effect is GBK.

```
charset GBK
locale zh_CN.UTF-8
```

The charset that takes effect is UTF-8. 
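Before relying on these parameters, it can be useful to confirm what the operating system itself reports. On Linux, a quick check is the standard `locale` command; the output below is only illustrative and will differ from host to host:

```bash
$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
```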
+ ::: ## Storage Parameters ### dataDir -| Attribute | Description | -| ------------- | ------------------------------------------- | +| Attribute | Description | +| -------- | ------------------------------------------ | | Applicable | Server Only | | Meaning | All data files are stored in this directory | | Default Value | /var/lib/taos | -### cache - -| Attribute | Description | -| ------------- | ----------------------------- | -| Applicable | Server Only | -| Meaning | The size of each memory block | -| Unit | MB | -| Default Value | 16 | - -### blocks - -| Attribute | Description | -| ------------- | -------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | The number of memory blocks of size `cache` used by each vnode | -| Default Value | 6 | - -### days - -| Attribute | Description | -| ------------- | ----------------------------------------------------- | -| Applicable | Server Only | -| Meaning | The time range of the data stored in single data file | -| Unit | day | -| Default Value | 10 | - -### keep - -| Attribute | Description | -| ------------- | -------------------------------------- | -| Applicable | Server Only | -| Meaning | The number of days for data to be kept | -| Unit | day | -| Default Value | 3650 | - -### minRows - -| Attribute | Description | -| ------------- | ------------------------------------------ | -| Applicable | Server Only | -| Meaning | minimum number of rows in single data file | -| Default Value | 100 | - -### maxRows - -| Attribute | Description | -| ------------- | ------------------------------------------ | -| Applicable | Server Only | -| Meaning | maximum number of rows in single data file | -| Default Value | 4096 | - -### walLevel - -| Attribute | Description | -| ------------- | ---------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | WAL level | -| Value Range | 0: wal disabled
1: wal enabled without fsync
2: wal enabled with fsync | -| Default Value | 1 | - -### fsync - -| Attribute | Description | -| ------------- | --------------------------------------------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | The waiting time for invoking fsync when walLevel is 2 | -| Unit | millisecond | -| Value Range | 0: no waiting time, fsync is performed immediately once WAL is written;
maximum value is 180000, i.e. 3 minutes | -| Default Value | 3000 | - -### update - -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------------------------------------ | -| Applicable | Server Only | -| Meaning | If it's allowed to update existing data | -| Value Range | 0: not allowed
1: a row can only be updated as a whole
2: a part of columns can be updated | -| Default Value | 0 | -| Note | Not available from version 2.0.8.0 | - -### cacheLast - -| Attribute | Description | -| ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Whether to cache the latest rows of each sub table in memory | -| Value Range | 0: not cached
1: the last row of each sub table is cached
2: the last non-null value of each column is cached
3: identical to both 1 and 2 are set | -| Default Value | 0 | - ### minimalTmpDirGB -| Attribute | Description | -| ------------- | ----------------------------------------------------------------------------------------------- | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | ------------------------------------------------ | +| Applicable | Server and Client | | Meaning | When the available disk space in tmpDir is below this threshold, writing to tmpDir is suspended | -| Unit | GB | -| Default Value | 1.0 | +| Unit | GB | +| Default Value | 1.0 | ### minimalDataDirGB -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------------------------------ | -| Applicable | Server Only | -| Meaning | hen the available disk space in dataDir is below this threshold, writing to dataDir is suspended | -| Unit | GB | -| Default Value | 2.0 | - -### vnodeBak - -| Attribute | Description | -| ------------- | --------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Whether to backup the corresponding vnode directory when a vnode is deleted | -| Value Range | 0: not backed up, 1: backup | -| Default Value | 1 | +| Attribute | Description | +| -------- | ------------------------------------------------ | +| Applicable | Server Only | +| Meaning | When the available disk space in dataDir is below this threshold, writing to dataDir is suspended | +| Unit | GB | +| Default Value | 2.0 | ## Cluster Parameters -### numOfMnodes +### supportVnodes -| Attribute | Description | -| ------------- | ------------------------------ | -| Applicable | Server Only | -| Meaning | The number of management nodes | -| Default Value | 3 | - -### replica - -| Attribute | Description | -| ------------- | -------------------------- | -| Applicable | Server Only | -| Meaning | The number of replications | -| Value Range | 1-3 | -| Default Value | 1 | - -### quorum - -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------------------------ | -| Applicable | Server Only | -| Meaning | The number of required confirmations for data replication in case of multiple replications | -| Value Range | 1,2 | -| Default Value | 1 | - -### role - -| Attribute | Description | -| ------------- | --------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | The role of the dnode | -| Value Range | 0: both mnode and vnode
1: mnode only
2: dnode only | -| Default Value | 0 | - -### balance - -| Attribute | Description | -| ------------- | ------------------------ | -| Applicable | Server Only | -| Meaning | Automatic load balancing | -| Value Range | 0: disabled, 1: enabled | -| Default Value | 1 | - -### balanceInterval - -| Attribute | Description | -| ------------- | ----------------------------------------------- | -| Applicable | Server Only | -| Meaning | The interval for checking load balance by mnode | -| Unit | second | -| Value Range | 1-30000 | -| Default Value | 300 | - -### arbitrator - -| Attribute | Description | -| ------------- | -------------------------------------------------- | -| Applicable | Server Only | -| Meaning | End point of arbitrator, format is same as firstEp | -| Default Value | None | +| Attribute | Description | +| -------- | --------------------------- | +| Applicable | Server Only | +| Meaning | Maximum number of vnodes per dnode | +| Value Range | 0-4096 | +| Default Value | 256 | ## Time Parameters -### precision - -| Attribute | Description | -| ------------- | ------------------------------------------------- | -| Applicable | Server only | -| Meaning | Time precision used for each database | -| Value Range | ms: millisecond; us: microsecond ; ns: nanosecond | -| Default Value | ms | - -### rpcTimer - -| Attribute | Description | -| ------------- | ------------------ | -| Applicable | Server and Client | -| Meaning | rpc retry interval | -| Unit | milliseconds | -| Value Range | 100-3000 | -| Default Value | 300 | - -### rpcMaxTime - -| Attribute | Description | -| ------------- | ---------------------------------- | -| Applicable | Server and Client | -| Meaning | maximum wait time for rpc response | -| Unit | second | -| Value Range | 100-7200 | -| Default Value | 600 | - ### statusInterval -| Attribute | Description | -| ------------- | ----------------------------------------------- | -| Applicable | Server Only | +| Attribute | Description | +| -------- | --------------------------- | +| Applicable | Server Only | | Meaning | the interval of dnode reporting status to mnode | | Unit | second | -| Value Range | 1-10 | -| Default Value | 1 | +| Value Range | 1-10 | +| Default Value | 1 | ### shellActivityTimer -| Attribute | Description | -| ------------- | ------------------------------------------------------ | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | --------------------------------- | +| Applicable | Server and Client | | Meaning | The interval for taos shell to send heartbeat to mnode | -| Unit | second | -| Value Range | 1-120 | -| Default Value | 3 | - -### tableMetaKeepTimer - -| Attribute | Description | -| ------------- | -------------------------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | The expiration time for metadata in cache, once it's reached the client would refresh the metadata | -| Unit | second | -| Value Range | 1-8640000 | -| Default Value | 7200 | - -### maxTmrCtrl - -| Attribute | Description | -| ------------- | ------------------------ | -| Applicable | Server and Client | -| Meaning | Maximum number of timers | -| Unit | None | -| Value Range | 8-2048 | -| Default Value | 512 | - -### offlineThreshold - -| Attribute | Description | -| ------------- | ----------------------------------------------------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | The expiration time for 
dnode online status, once it's reached before receiving status from a node, the dnode becomes offline | -| Unit | second | -| Value Range | 5-7200000 | -| Default Value | 86400\*10 (i.e. 10 days) | +| Unit | second | +| Value Range | 1-120 | +| Default Value | 3 | ## Performance Optimization Parameters -### numOfThreadsPerCore - -| Attribute | Description | -| ------------- | ------------------------------------------- | -| Applicable | Server and Client | -| Meaning | The number of consumer threads per CPU core | -| Default Value | 1.0 | - -### ratioOfQueryThreads - -| Attribute | Description | -| ------------- | --------------------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Maximum number of query threads | -| Value Range | 0: Only one query thread
1: Same as number of CPU cores
2: two times of CPU cores | -| Default Value | 1 | -| Note | This value can be a float number, 0.5 means half of the CPU cores | - -### maxVgroupsPerDb - -| Attribute | Description | -| ------------- | ------------------------------------ | -| Applicable | Server Only | -| Meaning | Maximum number of vnodes for each DB | -| Value Range | 0-8192 | -| Default Value | | - -### maxTablesPerVnode - -| Attribute | Description | -| ------------- | -------------------------------------- | -| Applicable | Server Only | -| Meaning | Maximum number of tables in each vnode | -| Default Value | 1000000 | - -### minTablesPerVnode - -| Attribute | Description | -| ------------- | -------------------------------------- | -| Applicable | Server Only | -| Meaning | Minimum number of tables in each vnode | -| Default Value | 1000 | - -### tableIncStepPerVnode - -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | When minTablesPerVnode is reached, the number of tables are allocated for a vnode each time | -| Default Value | 1000 | - -### maxNumOfOrderedRes - -| Attribute | Description | -| ------------- | ------------------------------------------- | -| Applicable | Server and Client | -| Meaning | Maximum number of rows ordered for a STable | -| Default Value | 100,000 | - -### mnodeEqualVnodeNum - -| Attribute | Description | -| ------------- | ----------------------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | The number of vnodes whose system resources consumption are considered as equal to single mnode | -| Default Value | 4 | - ### numOfCommitThreads -| Attribute | Description | -| ------------- | ----------------------------------------- | -| Applicable | Server Only | +| Attribute | Description | +| -------- | ---------------------- | +| Applicable | Server Only | | Meaning | Maximum of threads for committing to disk | -| Default Value | | +| Default Value | | ## Compression Parameters -### comp - -| Attribute | Description | -| ------------- | ------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Whether data is compressed | -| Value Range | 0: uncompressed, 1: One phase compression, 2: Two phase compression | -| Default Value | 2 | - -### tsdbMetaCompactRatio - -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------------------------- | -| Meaning | The threshold for percentage of redundant in meta file to trigger compression for meta file | -| Value Range | 0: no compression forever, [1-100]: The threshold percentage | -| Default Value | 0 | - ### compressMsgSize -| Attribute | Description | -| ------------- | -------------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | The threshold for message size to compress the message.. | +| Attribute | Description | +| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Applicable | Server Only | +| Meaning | The threshold for message size to compress the message. | Set the value to 64330 bytes for good message compression. 
| Unit | bytes |
| Value Range | 0: all messages are compressed; >0: messages larger than this threshold are compressed; -1: no messages are compressed |
| Default Value | -1 |

### compressColData

| Attribute | Description |
| -------- | --------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The threshold for size of column data to trigger compression for the query result |
| Unit | bytes |
| Value Range | 0: always compress; >0: only compress when the size of any column data exceeds the threshold; -1: never compress |
| Default Value | -1 |
| Note | available from version 2.3.0.0 |

### lossyColumns

| Attribute | Description |
| ------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The floating number types for lossy compression |
| Value Range | "": lossy compression is disabled
float: only for float
double: only for double
float \| double: for both float and double | -| Default Value | "" , i.e. disabled | - -### fPrecision - -| Attribute | Description | -| ------------- | ----------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Compression precision for float type | -| Value Range | 0.1 ~ 0.00000001 | -| Default Value | 0.00000001 | -| Note | The fractional part lower than this value will be discarded | - -### dPrecision - -| Attribute | Description | -| ------------- | ----------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Compression precision for double type | -| Value Range | 0.1 ~ 0.0000000000000001 | -| Default Value | 0.0000000000000001 | -| Note | The fractional part lower than this value will be discarded | - -## Continuous Query Parameters - -### stream - -| Attribute | Description | -| ------------- | ---------------------------------- | -| Applicable | Server Only | -| Meaning | Whether to enable continuous query | -| Value Range | 0: disabled
1: enabled | -| Default Value | 1 | - -### minSlidingTime - -| Attribute | Description | -| ------------- | -------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Minimum sliding time of time window | -| Unit | millisecond or microsecond , depending on time precision | -| Value Range | 10-1000000 | -| Default Value | 10 | - -### minIntervalTime - -| Attribute | Description | -| ------------- | --------------------------- | -| Applicable | Server Only | -| Meaning | Minimum size of time window | -| Unit | millisecond | -| Value Range | 1-1000000 | -| Default Value | 10 | - -### maxStreamCompDelay - -| Attribute | Description | -| ------------- | ------------------------------------------------ | -| Applicable | Server Only | -| Meaning | Maximum delay before starting a continuous query | -| Unit | millisecond | -| Value Range | 10-1000000000 | -| Default Value | 20000 | - -### maxFirstStreamCompDelay - -| Attribute | Description | -| ------------- | -------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Maximum delay time before starting a continuous query the first time | -| Unit | millisecond | -| Value Range | 10-1000000000 | -| Default Value | 10000 | - -### retryStreamCompDelay - -| Attribute | Description | -| ------------- | --------------------------------------------- | -| Applicable | Server Only | -| Meaning | Delay time before retrying a continuous query | -| Unit | millisecond | -| Value Range | 10-1000000000 | -| Default Value | 10 | - -### streamCompDelayRatio - -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------ | -| Applicable | Server Only | -| Meaning | The delay ratio, with time window size as the base, for continuous query | -| Value Range | 0.1-0.9 | -| Default Value | 0.1 | - -:::info -To prevent system resource from being exhausted by multiple concurrent streams, a random delay is applied on each stream automatically. `maxFirstStreamCompDelay` is the maximum delay time before a continuous query is started the first time. `streamCompDelayRatio` is the ratio for calculating delay time, with the size of the time window as base. `maxStreamCompDelay` is the maximum delay time. The actual delay time is a random time not bigger than `maxStreamCompDelay`. If a continuous query fails, `retryStreamComDelay` is the delay time before retrying it, also not bigger than `maxStreamCompDelay`. - -::: - -## HTTP Parameters - -:::note -HTTP service was provided by `taosd` prior to version 2.4.0.0 and is provided by `taosAdapter` after version 2.4.0.0. -The parameters described in this section are only application in versions prior to 2.4.0.0. If you are using any version from 2.4.0.0, please refer to [taosAdapter](/reference/taosadapter/). - -::: - -### http - -| Attribute | Description | -| ------------- | ------------------------------ | -| Applicable | Server Only | -| Meaning | Whether to enable http service | -| Value Range | 0: disabled, 1: enabled | -| Default Value | 1 | - -### httpEnableRecordSql - -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------- | -| Applicable | Server Only | -| Meaning | Whether to record the SQL invocation through REST interface | -| Default Value | 0: false; 1: true | -| Note | The resulting files, i.e. 
httpnote.0/httpnote.1, are located under logDir | - -### httpMaxThreads - -| Attribute | Description | -| ------------- | -------------------------------------------- | -| Applicable | Server Only | -| Meaning | The number of threads for RESTFul interface. | -| Default Value | 2 | - -### restfulRowLimit - -| Attribute | Description | -| ------------- | ------------------------------------------------------------ | -| Applicable | Server Only | -| Meaning | Maximum number of rows returned each time by REST interface. | -| Default Value | 10240 | -| Note | Maximum value is 10,000,000 | - -### httpDBNameMandatory - -| Attribute | Description | -| ------------- | ---------------------------------------- | -| Applicable | Server Only | -| Meaning | Whether database name is required in URL | -| Value Range | 0:not required, 1: required | -| Default Value | 0 | -| Note | From version 2.3.0.0 | +| Default Value | -1 | ## Log Parameters ### logDir -| Attribute | Description | -| ------------- | ----------------------------------- | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | -------------------------------------------------- | +| Applicable | Server and Client | | Meaning | The directory for writing log files | | Default Value | /var/log/taos | ### minimalLogDirGB -| Attribute | Description | -| ------------- | -------------------------------------------------------------------------------------------------- | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | -------------------------------------------- | +| Applicable | Server and Client | | Meaning | When the available disk space in logDir is below this threshold, writing to log files is suspended | -| Unit | GB | -| Default Value | 1.0 | +| Unit | GB | +| Default Value | 1.0 | ### numOfLogLines -| Attribute | Description | -| ------------- | ------------------------------------------ | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | ---------------------------- | +| Applicable | Server and Client | | Meaning | Maximum number of lines in single log file | -| Default Value | 10,000,000 | +| Default Value | 10000000 | ### asyncLog -| Attribute | Description | -| ------------- | ---------------------------- | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | -------------------- | +| Applicable | Server and Client | | Meaning | The mode of writing log file | | Value Range | 0: sync way; 1: async way | -| Default Value | 1 | +| Default Value | 1 | ### logKeepDays -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | ----------------------------------------------------------------------------------- | +| Applicable | Server and Client | | Meaning | The number of days for log files to be kept | -| Unit | day | -| Default Value | 0 | +| Unit | day | +| Default Value | 0 | | Note | When it's bigger than 0, the log file would be renamed to "taosdlog.xxx" in which "xxx" is the timestamp when the file is changed last time | ### debugFlag -| Attribute | Description | -| ------------- | --------------------------------------------------------- | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | ------------------------------------------------------------------------------------------------- | +| 
Applicable | Server and Client | | Meaning | Log level | | Value Range | 131: INFO/WARNING/ERROR; 135: plus DEBUG; 143: plus TRACE | | Default Value | 131 or 135, depending on the module | -### mDebugFlag - -| Attribute | Description | -| ------------- | ------------------ | -| Applicable | Server Only | -| Meaning | Log level of mnode | -| Value Range | same as debugFlag | -| Default Value | 135 | - -### dDebugFlag - -| Attribute | Description | -| ------------- | ------------------ | -| Applicable | Server and Client | -| Meaning | Log level of dnode | -| Value Range | same as debugFlag | -| Default Value | 135 | - -### sDebugFlag - -| Attribute | Description | -| ------------- | ------------------------ | -| Applicable | Server and Client | -| Meaning | Log level of sync module | -| Value Range | same as debugFlag | -| Default Value | 135 | - -### wDebugFlag - -| Attribute | Description | -| ------------- | ----------------------- | -| Applicable | Server and Client | -| Meaning | Log level of WAL module | -| Value Range | same as debugFlag | -| Default Value | 135 | - -### sdbDebugFlag - -| Attribute | Description | -| ------------- | ---------------------- | -| Applicable | Server and Client | -| Meaning | logLevel of sdb module | -| Value Range | same as debugFlag | -| Default Value | 135 | - -### rpcDebugFlag - -| Attribute | Description | -| ------------- | ----------------------- | -| Applicable | Server and Client | -| Meaning | Log level of rpc module | -| Value Range | Same as debugFlag | -| Default Value | | - ### tmrDebugFlag -| Attribute | Description | -| ------------- | ------------------------- | +| Attribute | Description | +| -------- | -------------------- | | Applicable | Server and Client | | Meaning | Log level of timer module | -| Value Range | Same as debugFlag | -| Default Value | | - -### cDebugFlag - -| Attribute | Description | -| ------------- | ------------------- | -| Applicable | Client Only | -| Meaning | Log level of Client | -| Value Range | Same as debugFlag | -| Default Value | | - -### jniDebugFlag - -| Attribute | Description | -| ------------- | ----------------------- | -| Applicable | Client Only | -| Meaning | Log level of jni module | -| Value Range | Same as debugFlag | -| Default Value | | - -### odbcDebugFlag - -| Attribute | Description | -| ------------- | ------------------------ | -| Applicable | Client Only | -| Meaning | Log level of odbc module | -| Value Range | Same as debugFlag | -| Default Value | | +| Value Range | same as debugFlag | +| Default Value | | ### uDebugFlag -| Attribute | Description | -| ------------- | -------------------------- | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | ---------------------- | +| Applicable | Server and Client | | Meaning | Log level of common module | -| Value Range | Same as debugFlag | -| Default Value | | +| Value Range | same as debugFlag | +| Default Value | | -### httpDebugFlag +### rpcDebugFlag -| Attribute | Description | -| ------------- | ------------------------------------------- | -| Applicable | Server Only | -| Meaning | Log level of http module (prior to 2.4.0.0) | -| Value Range | Same as debugFlag | -| Default Value | | +| Attribute | Description | +| -------- | -------------------- | +| Applicable | Server and Client | +| Meaning | Log level of rpc module | +| Value Range | same as debugFlag | +| Default Value | | -### mqttDebugFlag +### jniDebugFlag -| Attribute | Description | -| ------------- | ------------------------ | -| Applicable | 
Server Only | -| Meaning | Log level of mqtt module | -| Value Range | Same as debugFlag | -| Default Value | | - -### monitorDebugFlag - -| Attribute | Description | -| ------------- | ------------------------------ | -| Applicable | Server Only | -| Meaning | Log level of monitoring module | -| Value Range | Same as debugFlag | -| Default Value | | +| Attribute | Description | +| -------- | ------------------ | +| Applicable | Client Only | +| Meaning | Log level of jni module | +| Value Range | same as debugFlag | +| Default Value | | ### qDebugFlag -| Attribute | Description | -| ------------- | ------------------------- | +| Attribute | Description | +| -------- | -------------------- | | Applicable | Server and Client | -| Meaning | Log level of query module | -| Value Range | Same as debugFlag | -| Default Value | | +| Meaning | Log level of query module | +| Value Range | same as debugFlag | +| Default Value | | + +### cDebugFlag + +| Attribute | Description | +| -------- | --------------------- | +| Applicable | Client Only | +| Meaning | Log level of Client | +| Value Range | same as debugFlag | +| Default Value | | + +### dDebugFlag + +| Attribute | Description | +| -------- | -------------------- | +| Applicable | Server Only | +| Meaning | Log level of dnode | +| Value Range | same as debugFlag | +| Default Value | 135 | ### vDebugFlag -| Attribute | Description | -| ------------- | ------------------ | -| Applicable | Server and Client | +| Attribute | Description | +| -------- | -------------------- | +| Applicable | Server Only | | Meaning | Log level of vnode | -| Value Range | Same as debugFlag | -| Default Value | | +| Value Range | same as debugFlag | +| Default Value | | + +### mDebugFlag + +| Attribute | Description | +| -------- | -------------------- | +| Applicable | Server Only | +| Meaning | Log level of mnode module | +| Value Range | same as debugFlag | +| Default Value | 135 | + +### wDebugFlag + +| Attribute | Description | +| -------- | ------------------ | +| Applicable | Server Only | +| Meaning | Log level of WAL module | +| Value Range | same as debugFlag | +| Default Value | 135 | + +### sDebugFlag + +| Attribute | Description | +| -------- | -------------------- | +| Applicable | Server and Client | +| Meaning | Log level of sync module | +| Value Range | same as debugFlag | +| Default Value | 135 | ### tsdbDebugFlag -| Attribute | Description | -| ------------- | ------------------------ | -| Applicable | Server Only | -| Meaning | Log level of TSDB module | -| Value Range | Same as debugFlag | -| Default Value | | +| Attribute | Description | +| -------- | ------------------- | +| Applicable | Server Only | +| Meaning | Log level of TSDB module | +| Value Range | same as debugFlag | +| Default Value | | -### cqDebugFlag +### tqDebugFlag | Attribute | Description | -| ------------- | ------------------------------------ | -| Applicable | Server and Client | -| Meaning | Log level of continuous query module | -| Value Range | Same as debugFlag | -| Default Value | | +| -------- | ----------------- | +| Applicable | Server only | +| Meaning | Log level of TQ module | +| Value Range | same as debugFlag | +| Default Value | | -## Client Only +### fsDebugFlag -### maxSQLLength +| Attribute | Description | +| -------- | ----------------- | +| Applicable | Server only | +| Meaning | Log level of FS module | +| Value Range | same as debugFlag | +| Default Value | | + +### udfDebugFlag | Attribute | Description | -| ------------- | 
-------------------------------------- | -| Applicable | Client Only | -| Meaning | Maximum length of single SQL statement | -| Unit | bytes | -| Value Range | 65480-1048576 | -| Default Value | 1048576 | +| -------- | ------------------ | +| Applicable | Server Only | +| Meaning | Log level of UDF module | +| Value Range | same as debugFlag | +| Default Value | | -### tscEnableRecordSql +### smaDebugFlag -| Attribute | Description | -| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | -| Meaning | Whether to record SQL statements in file | -| Value Range | 0: false, 1: true | -| Default Value | 0 | -| Note | The generated files are named as "tscnote-xxxx.0/tscnote-xxx.1" in which "xxxx" is the pid of the client, and located at same place as client log | +| Attribute | Description | +| -------- | ------------------ | +| Applicable | Server Only | +| Meaning | Log level of SMA module | +| Value Range | same as debugFlag | +| Default Value | | -### maxBinaryDisplayWidth +### idxDebugFlag -| Attribute | Description | -| ------------- | --------------------------------------------------------------------------------------------------- | -| Meaning | Maximum display width of binary and nchar in taos shell. Anything beyond this limit would be hidden | -| Value Range | 5 - | -| Default Value | 30 | +| Attribute | Description | +| -------- | -------------------- | +| Applicable | Server Only | +| Meaning | Log level of index module | +| Value Range | same as debugFlag | +| Default Value | | -:::info -If the length of value exceeds `maxBinaryDisplayWidth`, then the actual display width is max(column name, maxBinaryDisplayLength); otherwise the actual display width is max(length of column name, length of column value). This parameter can also be changed dynamically using `set max_binary_display_width ` in TDengine CLI `taos`. 
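As an illustrative sketch only (the width 120 is an arbitrary value), the session-level form of this command in the TDengine CLI looks like:

```
taos> SET MAX_BINARY_DISPLAY_WIDTH 120;
```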
+### tdbDebugFlag -::: +| Attribute | Description | +| -------- | ------------------ | +| Applicable | Server Only | +| Meaning | Log level of TDB module | +| Value Range | same as debugFlag | +| Default Value | | -### maxWildCardsLength +## Schemaless Parameters -| Attribute | Description | -| ------------- | ----------------------------------------------------- | -| Meaning | The maximum length for wildcard string used with LIKE | -| Unit | bytes | -| Value Range | 0-16384 | -| Default Value | 100 | -| Note | From version 2.1.6.1 | +### smlChildTableName -### clientMerge +| Attribute | Description | +| -------- | ------------------------- | +| Applicable | Client only | +| Meaning | Custom subtable name for schemaless writes | +| Type | String | +| Default Value | None | -| Attribute | Description | -| ------------- | --------------------------------------------------- | -| Meaning | Whether to filter out duplicate data on client side | -| Value Range | 0: false; 1: true | -| Default Value | 0 | -| Note | From version 2.3.0.0 | +### smlTagName -### maxRegexStringLen +| Attribute | Description | +| -------- | ------------------------------------ | +| Applicable | Client only | +| Meaning | Default tag for schemaless writes without tag value specified | +| Type | String | +| Default Value | _tag_null | + +### smlDataFormat | Attribute | Description | -| ------------- | ------------------------------------ | -| Meaning | Maximum length of regular expression | -| Value Range | [128, 16384] | -| Default Value | 128 | -| Note | From version 2.3.0.0 | +| -------- | ----------------------------- | +| Applicable | Client only | +| Meaning | Whether schemaless columns are consistently ordered | +| Value Range | 0: not consistent; 1: consistent. | +| Default | 1 | ## Other Parameters ### enableCoreFile -| Attribute | Description | -| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Attribute | Description | +| -------- | ------------------------------------------------------------------------------------------------------------------------------------------ | | Applicable | Server and Client | | Meaning | Whether to generate core file when server crashes | | Value Range | 0: false, 1: true | | Default Value | 1 | | Note | The core file is generated under root directory `systemctl start taosd` is used to start, or under the working directory if `taosd` is started directly on Linux Shell. 
| + +### udf + +| Attribute | Description | +| -------- | ------------------ | +| Applicable | Server Only | +| Meaning | Whether the UDF service is enabled | +| Value Range | 0: disable UDF; 1: enabled UDF | +| Default Value | 1 | + +## Parameter Comparison of TDengine 2.x and 3.0 +| # | **Parameter** | **In 2.x** | **In 3.0** | +| --- | :-----------------: | --------------- | --------------- | +| 1 | firstEp | Yes | Yes | +| 2 | secondEp | Yes | Yes | +| 3 | fqdn | Yes | Yes | +| 4 | serverPort | Yes | Yes | +| 5 | maxShellConns | Yes | Yes | +| 6 | monitor | Yes | Yes | +| 7 | monitorFqdn | No | Yes | +| 8 | monitorPort | No | Yes | +| 9 | monitorInterval | Yes | Yes | +| 10 | monitorMaxLogs | No | Yes | +| 11 | monitorComp | No | Yes | +| 12 | telemetryReporting | Yes | Yes | +| 13 | telemetryInterval | No | Yes | +| 14 | telemetryServer | No | Yes | +| 15 | telemetryPort | No | Yes | +| 16 | queryPolicy | No | Yes | +| 17 | querySmaOptimize | No | Yes | +| 18 | queryBufferSize | Yes | Yes | +| 19 | maxNumOfDistinctRes | Yes | Yes | +| 20 | minSlidingTime | Yes | Yes | +| 21 | minIntervalTime | Yes | Yes | +| 22 | countAlwaysReturnValue | Yes | Yes | +| 23 | dataDir | Yes | Yes | +| 24 | minimalDataDirGB | Yes | Yes | +| 25 | supportVnodes | No | Yes | +| 26 | tempDir | Yes | Yes | +| 27 | minimalTmpDirGB | Yes | Yes | +| 28 | compressMsgSize | Yes | Yes | +| 29 | compressColData | Yes | Yes | +| 30 | smlChildTableName | Yes | Yes | +| 31 | smlTagName | Yes | Yes | +| 32 | smlDataFormat | No | Yes | +| 33 | statusInterval | Yes | Yes | +| 34 | shellActivityTimer | Yes | Yes | +| 35 | transPullupInterval | No | Yes | +| 36 | mqRebalanceInterval | No | Yes | +| 37 | ttlUnit | No | Yes | +| 38 | ttlPushInterval | No | Yes | +| 39 | numOfTaskQueueThreads | No | Yes | +| 40 | numOfRpcThreads | No | Yes | +| 41 | numOfCommitThreads | Yes | Yes | +| 42 | numOfMnodeReadThreads | No | Yes | +| 43 | numOfVnodeQueryThreads | No | Yes | +| 44 | numOfVnodeStreamThreads | No | Yes | +| 45 | numOfVnodeFetchThreads | No | Yes | +| 46 | numOfVnodeWriteThreads | No | Yes | +| 47 | numOfVnodeSyncThreads | No | Yes | +| 48 | numOfQnodeQueryThreads | No | Yes | +| 49 | numOfQnodeFetchThreads | No | Yes | +| 50 | numOfSnodeSharedThreads | No | Yes | +| 51 | numOfSnodeUniqueThreads | No | Yes | +| 52 | rpcQueueMemoryAllowed | No | Yes | +| 53 | logDir | Yes | Yes | +| 54 | minimalLogDirGB | Yes | Yes | +| 55 | numOfLogLines | Yes | Yes | +| 56 | asyncLog | Yes | Yes | +| 57 | logKeepDays | Yes | Yes | +| 58 | debugFlag | Yes | Yes | +| 59 | tmrDebugFlag | Yes | Yes | +| 60 | uDebugFlag | Yes | Yes | +| 61 | rpcDebugFlag | Yes | Yes | +| 62 | jniDebugFlag | Yes | Yes | +| 63 | qDebugFlag | Yes | Yes | +| 64 | cDebugFlag | Yes | Yes | +| 65 | dDebugFlag | Yes | Yes | +| 66 | vDebugFlag | Yes | Yes | +| 67 | mDebugFlag | Yes | Yes | +| 68 | wDebugFlag | Yes | Yes | +| 69 | sDebugFlag | Yes | Yes | +| 70 | tsdbDebugFlag | Yes | Yes | +| 71 | tqDebugFlag | No | Yes | +| 72 | fsDebugFlag | Yes | Yes | +| 73 | udfDebugFlag | No | Yes | +| 74 | smaDebugFlag | No | Yes | +| 75 | idxDebugFlag | No | Yes | +| 76 | tdbDebugFlag | No | Yes | +| 77 | metaDebugFlag | No | Yes | +| 78 | timezone | Yes | Yes | +| 79 | locale | Yes | Yes | +| 80 | charset | Yes | Yes | +| 81 | udf | Yes | Yes | +| 82 | enableCoreFile | Yes | Yes | +| 83 | arbitrator | Yes | No | +| 84 | numOfThreadsPerCore | Yes | No | +| 85 | numOfMnodes | Yes | No | +| 86 | vnodeBak | Yes | No | +| 87 | balance | Yes | No | +| 88 | balanceInterval | Yes | 
No | +| 89 | offlineThreshold | Yes | No | +| 90 | role | Yes | No | +| 91 | dnodeNopLoop | Yes | No | +| 92 | keepTimeOffset | Yes | No | +| 93 | rpcTimer | Yes | No | +| 94 | rpcMaxTime | Yes | No | +| 95 | rpcForceTcp | Yes | No | +| 96 | tcpConnTimeout | Yes | No | +| 97 | syncCheckInterval | Yes | No | +| 98 | maxTmrCtrl | Yes | No | +| 99 | monitorReplica | Yes | No | +| 100 | smlTagNullName | Yes | No | +| 101 | keepColumnName | Yes | No | +| 102 | ratioOfQueryCores | Yes | No | +| 103 | maxStreamCompDelay | Yes | No | +| 104 | maxFirstStreamCompDelay | Yes | No | +| 105 | retryStreamCompDelay | Yes | No | +| 106 | streamCompDelayRatio | Yes | No | +| 107 | maxVgroupsPerDb | Yes | No | +| 108 | maxTablesPerVnode | Yes | No | +| 109 | minTablesPerVnode | Yes | No | +| 110 | tableIncStepPerVnode | Yes | No | +| 111 | cache | Yes | No | +| 112 | blocks | Yes | No | +| 113 | days | Yes | No | +| 114 | keep | Yes | No | +| 115 | minRows | Yes | No | +| 116 | maxRows | Yes | No | +| 117 | quorum | Yes | No | +| 118 | comp | Yes | No | +| 119 | walLevel | Yes | No | +| 120 | fsync | Yes | No | +| 121 | replica | Yes | No | +| 122 | partitions | Yes | No | +| 123 | quorum | Yes | No | +| 124 | update | Yes | No | +| 125 | cachelast | Yes | No | +| 126 | maxSQLLength | Yes | No | +| 127 | maxWildCardsLength | Yes | No | +| 128 | maxRegexStringLen | Yes | No | +| 129 | maxNumOfOrderedRes | Yes | No | +| 130 | maxConnections | Yes | No | +| 131 | mnodeEqualVnodeNum | Yes | No | +| 132 | http | Yes | No | +| 133 | httpEnableRecordSql | Yes | No | +| 134 | httpMaxThreads | Yes | No | +| 135 | restfulRowLimit | Yes | No | +| 136 | httpDbNameMandatory | Yes | No | +| 137 | httpKeepAlive | Yes | No | +| 138 | enableRecordSql | Yes | No | +| 139 | maxBinaryDisplayWidth | Yes | No | +| 140 | stream | Yes | No | +| 141 | retrieveBlockingModel | Yes | No | +| 142 | tsdbMetaCompactRatio | Yes | No | +| 143 | defaultJSONStrType | Yes | No | +| 144 | walFlushSize | Yes | No | +| 145 | keepTimeOffset | Yes | No | +| 146 | flowctrl | Yes | No | +| 147 | slaveQuery | Yes | No | +| 148 | adjustMaster | Yes | No | +| 149 | topicBinaryLen | Yes | No | +| 150 | telegrafUseFieldNum | Yes | No | +| 151 | deadLockKillQuery | Yes | No | +| 152 | clientMerge | Yes | No | +| 153 | sdbDebugFlag | Yes | No | +| 154 | odbcDebugFlag | Yes | No | +| 155 | httpDebugFlag | Yes | No | +| 156 | monDebugFlag | Yes | No | +| 157 | cqDebugFlag | Yes | No | +| 158 | shortcutFlag | Yes | No | +| 159 | probeSeconds | Yes | No | +| 160 | probeKillSeconds | Yes | No | +| 161 | probeInterval | Yes | No | +| 162 | lossyColumns | Yes | No | +| 163 | fPrecision | Yes | No | +| 164 | dPrecision | Yes | No | +| 165 | maxRange | Yes | No | +| 166 | range | Yes | No | From 3fae4ba6f0e9ed7b8934520530785e2cd5322951 Mon Sep 17 00:00:00 2001 From: danielclow <106956386+danielclow@users.noreply.github.com> Date: Mon, 22 Aug 2022 14:10:16 +0800 Subject: [PATCH 02/29] doc: english version of schemaless --- .../13-schemaless/13-schemaless.md | 114 +++++++++--------- 1 file changed, 60 insertions(+), 54 deletions(-) diff --git a/docs/en/14-reference/13-schemaless/13-schemaless.md b/docs/en/14-reference/13-schemaless/13-schemaless.md index 8b6a26ae52..816ebe0047 100644 --- a/docs/en/14-reference/13-schemaless/13-schemaless.md +++ b/docs/en/14-reference/13-schemaless/13-schemaless.md @@ -3,7 +3,8 @@ title: Schemaless Writing description: "The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically 
creates the storage structure corresponding to the data, as it is written to the interface."
---

-In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. To provide the flexibility needed in such cases and in a rapidly changing IoT landscape, TDengine provides a series of interfaces for the schemaless writing method. These interfaces eliminate the need to create super tables and subtables in advance by automatically creating the storage structure corresponding to the data as the data is written to the interface. When necessary, schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
+In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. Schemaless writing automatically creates storage structures for your data as it is being written to TDengine, so that you do not need to create supertables in advance. When necessary, schemaless writing
+will automatically add the required columns to ensure that the data written by the user is stored correctly.

 The schemaless writing method creates super tables and their corresponding subtables. These are completely indistinguishable from the super tables and subtables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and they lack readability.

@@ -19,12 +20,12 @@ With the following formatting conventions, schemaless writing uses a single stri

 measurement,tag_set field_set timestamp
 ```

-where :
+where:

 - measurement will be used as the data table name. It will be separated from tag_set by a comma.
-- tag_set will be used as tag data in the format `<tag_key1>=<tag_value1>,<tag_key2>=<tag_value2>`, i.e. multiple tags' data can be separated by a comma. It is separated from field_set by space.
-- field_set will be used as normal column data in the format of `<field_key1>=<field_value1>,<field_key2>=<field_value2>`, again using a comma to separate multiple normal columns of data. It is separated from the timestamp by a space.
-- The timestamp is the primary key corresponding to the data in this row.
+- `tag_set` will be used as tags, with format like `<tag_key1>=<tag_value1>,<tag_key2>=<tag_value2>`. Enter a space between `tag_set` and `field_set`.
+- `field_set` will be used as data columns, with format like `<field_key1>=<field_value1>,<field_key2>=<field_value2>`. Enter a space between `field_set` and `timestamp`.
+- `timestamp` is the primary key timestamp corresponding to this row of data.

All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes ("). 
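As a concrete illustration of the format above, the following line (a made-up example; the measurement `meters` and all tag, field, and timestamp values are hypothetical) writes one row with two tags and three fields:

```
meters,location=California.SanFrancisco,groupid=2 current=10.3,voltage=219i32,phase=0.31 1648432611249000000
```

Here `location` and `groupid` become NCHAR tags, `voltage` is typed as INT by its `i32` suffix, and `current` and `phase` default to DOUBLE because they carry no suffix.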
@@ -37,16 +38,18 @@ In the schemaless writing data line protocol, each data item in the field_set ne

 | **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
 | -------- | -------- | ------------ | -------------- |
-| 1 | none or f64 | double | 8 |
-| 2 | f32 | float | 4 |
-| 3 | i8/u8 | TinyInt/UTinyInt | 1 |
-| 4 | i16/u16 | SmallInt/USmallInt | 2 |
-| 5 | i32/u32 | Int/UInt | 4 |
-| 6 | i64/i/u64/u | Bigint/Bigint/UBigint/UBigint | 8 |
+| 1 | None or f64 | double | 8 |
+| 2 | f32 | float | 4 |
+| 3 | i8/u8 | TinyInt/UTinyInt | 1 |
+| 4 | i16/u16 | SmallInt/USmallInt | 2 |
+| 5 | i32/u32 | Int/UInt | 4 |
+| 6 | i64/i/u64/u | BigInt/BigInt/UBigInt/UBigInt | 8 |

 - `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types.

-For example, the following data rows indicate that the t1 label is "3" (NCHAR), the t2 label is "4" (NCHAR), and the t3 label is "t3" to the super table named `st` labeled "t3" (NCHAR), write c1 column as 3 (BIGINT), c2 column as false (BOOL), c3 column is "passit" (BINARY), c4 column is 4 (DOUBLE), and the primary key timestamp is 1626006833639000000 in one row.
+For example, the following string indicates that one row is written to the supertable named `st`: the tag t1 is "3" (NCHAR), the tag t2 is "4" (NCHAR), and the tag t3
+is "t3" (NCHAR); the column c1 is 3 (BIGINT), the column c2 is false (BOOL), the column c3
+is "passit" (BINARY), the column c4 is 4 (DOUBLE), and the primary key timestamp is 1626006833639000000.

```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```

@@ -65,18 +68,22 @@ Schemaless writes process row data according to the following principles.

 ```

 Note that tag_key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol.
-The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has.
+The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t_" is a fixed prefix that every table generated by this mapping relationship has.
+You can configure smlChildTableName to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.

2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
-If the subtable obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the subtable name determined in steps 1 or 2.
+3. If the subtable obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the subtable name determined in step 1 or 2.
4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
5. 
If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL. +5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to + NULL. 6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data. 7. Errors encountered throughout the processing will interrupt the writing process and return an error code. -8. In order to improve the efficiency of writing, it is assumed by default that the order of the fields in the same Super is the same (the first data contains all fields, and the following data is in this order). If the order is different, the parameter smlDataFormat needs to be configured to be false. Otherwise, the data is written in the same order, and the data in the library will be abnormal. +8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat to false. Otherwise, data will be written out of order and a database error will occur. :::tip -All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed 16k bytes. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area. +All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed +16KB. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area. + ::: ## Time resolution recognition @@ -85,75 +92,74 @@ Three specified modes are supported in the schemaless writing process, as follow | **Serial** | **Value** | **Description** | | -------- | ------------------- | ------------------------------- | -| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol | -| 2 | SML_TELNET_PROTOCOL | OpenTSDB Text Line Protocol | -| 3 | SML_JSON_PROTOCOL | JSON protocol format | +| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol | +| 2 | SML_TELNET_PROTOCOL | OpenTSDB file protocol | +| 3 | SML_JSON_PROTOCOL | OpenTSDB JSON protocol | -In the SML_LINE_PROTOCOL parsing mode, the user is required to specify the time resolution of the input timestamp. The available time resolutions are shown in the following table. +In InfluxDB line protocol mode, you must specify the precision of the input timestamp. Valid precisions are described in the following table. 
-| **Serial Number** | **Time Resolution Definition** | **Meaning** |
+| **No.** | **Precision** | **Description** |
 | -------- | --------------------------------- | -------------- |
-| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | Not defined (invalid) |
-| 2 | TSDB_SML_TIMESTAMP_HOURS | hour |
-| 3 | TSDB_SML_TIMESTAMP_MINUTES | MINUTES
-| 4 | TSDB_SML_TIMESTAMP_SECONDS | SECONDS
-| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | milliseconds
-| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | microseconds
-| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | nanoseconds |
+| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | Not defined (invalid) |
+| 2 | TSDB_SML_TIMESTAMP_HOURS | Hours |
+| 3 | TSDB_SML_TIMESTAMP_MINUTES | Minutes |
+| 4 | TSDB_SML_TIMESTAMP_SECONDS | Seconds |
+| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | Milliseconds |
+| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | Microseconds |
+| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | Nanoseconds |

-In SML_TELNET_PROTOCOL and SML_JSON_PROTOCOL modes, the time precision is determined based on the length of the timestamp (in the same way as the OpenTSDB standard operation), and the user-specified time resolution is ignored at this point.
+In OpenTSDB file and JSON protocol modes, the precision of the timestamp is determined from its length in the standard OpenTSDB manner. User input is ignored.

-## Data schema mapping rules
+## Data Model Mapping

-This section describes how data for line protocols are mapped to data with a schema. The data measurement in each line protocol is mapped as follows:
-- The tag name in tag_set is the name of the tag in the data schema
-- The name in field_set is the column's name.
-
-The following data is used as an example to illustrate the mapping rules.
+This section describes how data in line protocol is mapped to a schema. The data measurement in each line is mapped to a
+supertable name. The tag name in tag_set is the tag name in the schema, and the name in field_set is the column name in the schema. The following example shows how data is mapped:

```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```

-The row data mapping generates a super table: `st`, which contains three labels of type NCHAR: t1, t2, t3. Five data columns are ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), c4 (bigint). The mapping becomes the following SQL statement.
+This row is mapped to a supertable: `st` contains three NCHAR tags: t1, t2, and t3. Five columns are created: ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), and c4 (bigint). The following SQL statement is generated:

```json
create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
```

-## Data schema change handling
+## Processing Schema Changes

-This section describes the impact on the data schema for different line protocol data writing cases.
+This section describes the impact on the schema caused by different data being written.

-When writing to an explicitly identified field type using the line protocol, subsequent changes to the field's type definition will result in an explicit data schema error, i.e., will trigger a write API report error. As shown below, the
+If you use line protocol to write to a specific field and then later write data of a different type to that field, a schema error will occur. This triggers an error on the write API. 
This is shown as follows:

```json
-st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
-st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
+st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
+st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
```

-The data type mapping in the first row defines column c4 as DOUBLE, but the data in the second row is declared as BIGINT by the numeric suffix, which triggers a parsing error with schemaless writing.
+The first row defines c4 as a double. However, in the second row, the suffix indicates that the value of c4 is a bigint. This causes schemaless writing to throw an error.

-If the line protocol before the column declares the data column as BINARY, the subsequent one requires a longer binary length, which triggers a super table schema change.
+The schema is also modified if the data input into a binary column exceeds the defined length of the column.

```json
-st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
-st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
+st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
+st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
```

-The first line of the line protocol parsing will declare column c5 is a BINARY(4) field. The second line data write will parse column c5 as a BINARY column. But in the second line, c5's width is 6 so you need to increase the width of the BINARY field to be able to accommodate the new string.
+The first row defines c5 as a binary(4), but the second row writes 6 bytes to it. This means that the length of the binary column must be expanded to contain the data.

```json
-st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
-st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
+st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
+st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
```

-The second line of data has an additional column c6 of type BINARY(6) compared to the first row. Then a column c6 of type BINARY(6) is automatically added at this point.
+The preceding data includes a new entry, c6, with type binary(6). When this occurs, a new column c6 with type binary(6) is added automatically.

-## Write integrity
+## Write Integrity

-TDengine provides idempotency guarantees for data writing, i.e., you can repeatedly call the API to write data with errors. However, it does not give atomicity guarantees for writing multiple rows of data. During the process of writing numerous rows of data in one batch, some data will be written successfully, and some data will fail.
+TDengine guarantees the idempotency of data writes. This means that you can repeatedly call the API to perform write operations with bad data. However, TDengine does not guarantee the atomicity of multi-row writes. In a multi-row write, some data may be written successfully and other data unsuccessfully.

-## Error code
+## Error Codes

-If it is an error in the data itself during the schemaless writing process, the application will get `TSDB_CODE_TSC_LINE_SYNTAX_ERROR` error message, which indicates that the error occurred in writing. The other error codes are consistent with the TDengine and can be obtained via the `taos_errstr()` to get the specific cause of the error.
+The TSDB_CODE_TSC_LINE_SYNTAX_ERROR code indicates an error in the data being written, such as a malformed line.
+For other errors, schemaless writing returns the standard TDengine error codes,
+whose specific cause can be obtained through taos_errstr. 
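As a minimal sketch of how an application might invoke schemaless writing through the C client library and inspect these errors: the connection parameters and the database name `test` below are placeholders, and the database is assumed to already exist.

```c
#include <stdio.h>
#include <taos.h>

int main() {
  // Placeholder connection parameters; a database named "test" is assumed to exist.
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 6030);
  if (taos == NULL) {
    printf("failed to connect to server\n");
    return 1;
  }

  char *lines[] = {
      "st,t1=3,t2=4,t3=t3 c1=3i64,c3=\"passit\",c2=false,c4=4f64 1626006833639000000"};

  // Write one line in InfluxDB line protocol with nanosecond timestamp precision.
  TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_LINE_PROTOCOL,
                                         TSDB_SML_TIMESTAMP_NANO_SECONDS);
  if (taos_errno(res) != 0) {
    // A malformed line surfaces here as a line syntax error; other failures
    // return the standard TDengine error message.
    printf("schemaless insert failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```

Such a program is compiled by linking against the TDengine client library, for example `gcc demo.c -ltaos`.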
From c6288fad5a6c10f6e81c8322dc04b335ac7a0c01 Mon Sep 17 00:00:00 2001 From: danielclow <106956386+danielclow@users.noreply.github.com> Date: Mon, 22 Aug 2022 15:14:42 +0800 Subject: [PATCH 03/29] doc: english version of operation --- docs/en/13-operation/01-pkg-install.md | 251 ++++++------------------- docs/en/13-operation/02-planning.mdx | 50 +++-- docs/en/13-operation/03-tolerance.md | 20 +- docs/en/13-operation/17-diagnose.md | 109 +++-------- 4 files changed, 112 insertions(+), 318 deletions(-) diff --git a/docs/en/13-operation/01-pkg-install.md b/docs/en/13-operation/01-pkg-install.md index c098002962..30060fb085 100644 --- a/docs/en/13-operation/01-pkg-install.md +++ b/docs/en/13-operation/01-pkg-install.md @@ -1,152 +1,63 @@ --- -title: Install & Uninstall +title: Install and Uninstall description: Install, Uninstall, Start, Stop and Upgrade --- import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem"; -TDengine community version provides deb and rpm packages for users to choose from, based on their system environment. The deb package supports Debian, Ubuntu and derivative systems. The rpm package supports CentOS, RHEL, SUSE and derivative systems. Furthermore, a tar.gz package is provided for TDengine Enterprise customers. +This document gives more information about installing, uninstalling, and upgrading TDengine. ## Install - - - -1. Download deb package from official website, for example TDengine-server-2.4.0.7-Linux-x64.deb -2. In the directory where the package is located, execute the command below - -```bash -$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb -(Reading database ... 137504 files and directories currently installed.) -Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ... -TDengine is removed successfully! -Unpacking tdengine (2.4.0.7) over (2.4.0.7) ... -Setting up tdengine (2.4.0.7) ... -Start to install TDengine... - -System hostname is: ubuntu-1804 - -Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join -OR leave it blank to build one: - -Enter your email address for priority support or enter empty to skip: -Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service. - -To configure TDengine : edit /etc/taos/taos.cfg -To start TDengine : sudo systemctl start taosd -To access TDengine : taos -h ubuntu-1804 to login into TDengine server +See [Quick Install](../../get-started/package) for preliminary steps. -TDengine is installed successfully! -``` - +## Installation Directory - - -1. Download rpm package from official website, for example TDengine-server-2.4.0.7-Linux-x64.rpm; -2. In the directory where the package is located, execute the command below +TDengine is installed at /usr/local/taos if successful. ``` -$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm -Preparing... ################################# [100%] -Updating / installing... - 1:tdengine-2.4.0.7-3 ################################# [100%] -Start to install TDengine... - -System hostname is: centos7 - -Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join -OR leave it blank to build one: - -Enter your email address for priority support or enter empty to skip: - -Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service. 
-
-To configure TDengine : edit /etc/taos/taos.cfg
-To start TDengine : sudo systemctl start taosd
-To access TDengine : taos -h centos7 to login into TDengine server
-
-
-TDengine is installed successfully!
-```
-
-</TabItem>
-
-<TabItem label="Install tar.gz" value="tarinst">
-
-1. Download the tar.gz package, for example TDengine-server-2.4.0.7-Linux-x64.tar.gz;
-2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated in decompressing, i.e. "TDengine-enterprise-server-2.4.0.7/" in this example, and execute the `install.sh` script.
-
-```bash
-$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
-TDengine-enterprise-server-2.4.0.7/
-TDengine-enterprise-server-2.4.0.7/driver/
-TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
-TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
-TDengine-enterprise-server-2.4.0.7/install.sh
-TDengine-enterprise-server-2.4.0.7/examples/
-...
-
+$ cd /usr/local/taos
 $ ll
-total 43816
-drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
-drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
-drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
--rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
-
-$ cd TDengine-enterprise-server-2.4.0.7/
-
- $ ll
-total 40784
-drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
-drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
-drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
-drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
--rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
--rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
-
-$ sudo ./install.sh
-
-Start to update TDengine...
-Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
-Nginx for TDengine is updated successfully!
-
-To configure TDengine : edit /etc/taos/taos.cfg
-To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
-To start TDengine : sudo systemctl start taosd
-To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
-
-TDengine is updated successfully!
-Install taoskeeper as a standalone service
-taoskeeper is installed, enable it by `systemctl enable taoskeeper`
+total 28
+drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
+drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
+lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
+drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
+lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/ 
- -::: +- Configuration directory, data directory, and log directory are created automatically if they don't exist +- The default configuration file is located at /etc/taos/taos.cfg, which is a copy of /usr/local/taos/cfg/taos.cfg +- The default data directory is /var/lib/taos, which is a soft link to /usr/local/taos/data +- The default log directory is /var/log/taos, which is a soft link to /usr/local/taos/log +- The executables at /usr/local/taos/bin are linked to /usr/bin +- The DLL files at /usr/local/taos/driver are linked to /usr/lib +- The header files at /usr/local/taos/include are linked to /usr/include ## Uninstall + + +TBD + + Deb package of TDengine can be uninstalled as below: -```bash +``` $ sudo dpkg -r tdengine -(Reading database ... 137504 files and directories currently installed.) -Removing tdengine (2.4.0.7) ... +(Reading database ... 120119 files and directories currently installed.) +Removing tdengine (3.0.0.10002) ... TDengine is removed successfully! ``` @@ -170,115 +81,59 @@ tar.gz package of TDengine can be uninstalled as below: ``` $ rmtaos -Nginx for TDengine is running, stopping it... TDengine is removed successfully! - -taosKeeper is removed successfully! ``` + + +Run C:\TDengine\unins000.exe to uninstall TDengine on a Windows system. -:::note +:::info -- We strongly recommend not to use multiple kinds of installation packages on a single host TDengine. -- After deb package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. You can then reinstall if needed. +- We strongly recommend not to use multiple kinds of installation packages on a single host TDengine. The packages may affect each other and cause errors. -```bash - $ sudo rm -f /var/lib/dpkg/info/tdengine* -``` +- After deb package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. -- After rpm package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. You can then reinstall if needed. + ``` + $ sudo rm -f /var/lib/dpkg/info/tdengine* + ``` -```bash - $ sudo rpm -e --noscripts tdengine -``` +You can then reinstall if needed. + +- After rpm package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. + + ``` + $ sudo rpm -e --noscripts tdengine + ``` + +You can then reinstall if needed. ::: -## Installation Directory - -TDengine is installed at /usr/local/taos if successful. 
- -```bash -$ cd /usr/local/taos -$ ll -$ ll -total 28 -drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./ -drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../ -drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/ -drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/ -lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/ -drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/ -drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/ -drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/ -lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/ -``` - -During the installation process: - -- Configuration directory, data directory, and log directory are created automatically if they don't exist -- The default configuration file is located at /etc/taos/taos.cfg, which is a copy of /usr/local/taos/cfg/taos.cfg -- The default data directory is /var/lib/taos, which is a soft link to /usr/local/taos/data -- The default log directory is /var/log/taos, which is a soft link to /usr/local/taos/log -- The executables at /usr/local/taos/bin are linked to /usr/bin -- The DLL files at /usr/local/taos/driver are linked to /usr/lib -- The header files at /usr/local/taos/include are linked to /usr/include - -:::note +Uninstalling and Modifying Files - When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, data directory /var/lib/taos, log directory /var/log/taos are kept. They can be deleted manually with caution, because data can't be recovered. Please follow data integrity, security, backup or relevant SOPs before deleting any data. + - When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/local/taos/cfg to be used as configuration sample. Otherwise the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used. -## Start and Stop - -Linux system services `systemd`, `systemctl` or `service` are used to start, stop and restart TDengine. The server process of TDengine is `taosd`, which is started automatically after the Linux system is started. System operators can use `systemd`, `systemctl` or `service` to start, stop or restart TDengine server. - -For example, if using `systemctl` , the commands to start, stop, restart and check TDengine server are below: - -- Start server:`systemctl start taosd` - -- Stop server:`systemctl stop taosd` - -- Restart server:`systemctl restart taosd` - -- Check server status:`systemctl status taosd` - -From version 2.4.0.0, a new independent component named as `taosAdapter` has been included in TDengine. `taosAdapter` should be started and stopped using `systemctl`. - -If the server process is OK, the output of `systemctl status` is like below: - -``` -Active: active (running) -``` - -Otherwise, the output is as below: - -``` -Active: inactive (dead) -``` - ## Upgrade - There are two aspects in upgrade operation: upgrade installation package and upgrade a running server. To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version. Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections, only if the first 3 sections match can the old version be upgraded to the new version. 
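For example, the version number can be checked on each node before and after the upgrade with the following commands (the exact output format may vary between releases):

```bash
$ taosd -V
$ taos -V
```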
The steps of upgrading a running server are as below: - - Stop inserting data - Make sure all data is persisted to disk -- Make some simple queries (Such as total rows in stables, tables and so on. Note down the values. Follow best practices and relevant SOPs.) - Stop the cluster of TDengine - Uninstall old version and install new version - Start the cluster of TDengine -- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss +- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss - Run some simple data insertion statements to make sure the cluster works well - Restore business services :::warning - TDengine doesn't guarantee any lower version is compatible with the data generated by a higher version, so it's never recommended to downgrade the version. ::: diff --git a/docs/en/13-operation/02-planning.mdx b/docs/en/13-operation/02-planning.mdx index c1baf92dbf..2dffa7bb87 100644 --- a/docs/en/13-operation/02-planning.mdx +++ b/docs/en/13-operation/02-planning.mdx @@ -1,40 +1,32 @@ --- +sidebar_label: Resource Planning title: Resource Planning --- It is important to plan computing and storage resources if using TDengine to build an IoT, time-series or Big Data platform. How to plan the CPU, memory and disk resources required, will be described in this chapter. -## Memory Requirement of Server Side +## Server Memory Requirements -By default, the number of vgroups created for each database is the same as the number of CPU cores. This can be configured by the parameter `maxVgroupsPerDb`. Each vnode in a vgroup stores one replica. Each vnode consumes a fixed amount of memory, i.e. `blocks` \* `cache`. In addition, some memory is required for tag values associated with each table. A fixed amount of memory is required for each cluster. So, the memory required for each DB can be calculated using the formula below: +Each database creates a fixed number of vgroups. This number is 2 by default and can be configured with the `vgroups` parameter. The number of replicas can be controlled with the `replica` parameter. Each replica requires one vnode per vgroup. Altogether, the memory required by each database depends on the following configuration options: + +- vgroups +- replica +- buffer +- pages +- pagesize +- cachesize + +For more information, see [Database](../../taos-sql/database). + +The memory required by a database is therefore greater than or equal to: ``` -Database Memory Size = maxVgroupsPerDb * replica * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB) +vgroups * replica * (buffer + pages * pagesize + cachesize) ``` -For example, assuming the default value of `maxVgroupPerDB` is 64, the default value of `cache` is 16M, the default value of `blocks` is 6, there are 100,000 tables in a DB, the replica number is 1, total length of tag values is 256 bytes, the total memory required for this DB is: 64 \* 1 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 6792M. +However, note that this requirement is spread over all dnodes in the cluster, not on a single physical machine. The physical servers that run dnodes meet the requirement together. If a cluster has multiple databases, the memory required increases accordingly. In complex environments where dnodes were added after initial deployment in response to increasing resource requirements, load may not be balanced among the original dnodes and newer dnodes. 
In this situation, the actual status of your dnodes is more important than theoretical calculations. -In the real operation of TDengine, we are more concerned about the memory used by each TDengine server process `taosd`. - -``` - taosd_memory = vnode_memory + mnode_memory + query_memory -``` - -In the above formula: - -1. "vnode_memory" of a `taosd` process is the memory used by all vnodes hosted by this `taosd` process. It can be roughly calculated by firstly adding up the total memory of all DBs whose memory usage can be derived according to the formula for Database Memory Size, mentioned above, then dividing by number of dnodes and multiplying the number of replicas. - -``` - vnode_memory = (sum(Database Memory Size) / number_of_dnodes) * replica -``` - -2. "mnode_memory" of a `taosd` process is the memory consumed by a mnode. If there is one (and only one) mnode hosted in a `taosd` process, the memory consumed by "mnode" is "0.2KB \* the total number of tables in the cluster". - -3. "query_memory" is the memory used when processing query requests. Each ongoing query consumes at least "0.2 KB \* total number of involved tables". - -Please note that the above formulas can only be used to estimate the minimum memory requirement, instead of maximum memory usage. In a real production environment, it's better to reserve some redundance beyond the estimated minimum memory requirement. If memory is abundant, it's suggested to increase the value of parameter `blocks` to speed up data insertion and data query. - -## Memory Requirement of Client Side +## Client Memory Requirements For the client programs using TDengine client driver `taosc` to connect to the server side there is a memory requirement as well. @@ -56,10 +48,10 @@ So, at least 3GB needs to be reserved for such a client. The CPU resources required depend on two aspects: -- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests in one second, while each insertion request can have multiple rows. The difference in computing resource consumed, between inserting 1 row at a time, and inserting 10 rows at a time is very small. So, the more the number of rows that can be inserted one time, the higher the efficiency. Inserting in batch also imposes requirements on the client side which needs to cache rows to insert in batch once the number of cached rows reaches a threshold. +- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests in one second, while each insertion request can have multiple rows. The difference in computing resource consumed, between inserting 1 row at a time, and inserting 10 rows at a time is very small. So, the more the number of rows that can be inserted one time, the higher the efficiency. If each insert request contains more than 200 records, a single core can process more than 1 million records per second. Inserting in batch also imposes requirements on the client side which needs to cache rows to insert in batch once the number of cached rows reaches a threshold. - **Data Query** High efficiency query is provided in TDengine, but it's hard to estimate the CPU resource required because the queries used in different use cases and the frequency of queries vary significantly. It can only be verified with the query statements, query frequency, data size to be queried, and other requirements provided by users. -In short, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. 
In real operation, it's suggested to control CPU usage below 50%. If this threshold is exceeded, it's a reminder for system operator to add more nodes in the cluster to expand resources. +In short, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. If possible, ensure that CPU usage remains below 50%. If this threshold is exceeded, it's a reminder for system operator to add more nodes in the cluster to expand resources. ## Disk Requirement @@ -77,6 +69,6 @@ To increase performance, multiple disks can be setup for parallel data reading o ## Number of Hosts -A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulae mentioned previously. Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily. +A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulae mentioned previously. If the number of data replicas is not 1, the required resources are multiplied by the number of replicas. -**Quick Estimation for CPU, Memory and Disk** Please refer to [Resource Estimate](https://www.taosdata.com/config/config.html). +Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily. diff --git a/docs/en/13-operation/03-tolerance.md b/docs/en/13-operation/03-tolerance.md index d4d48d7fcd..ba9d5d75e3 100644 --- a/docs/en/13-operation/03-tolerance.md +++ b/docs/en/13-operation/03-tolerance.md @@ -1,6 +1,5 @@ --- -sidebar_label: Fault Tolerance -title: Fault Tolerance & Disaster Recovery +title: Fault Tolerance and Disaster Recovery --- ## Fault Tolerance @@ -11,22 +10,21 @@ When a data block is received by TDengine, the original data block is first writ There are 2 configuration parameters related to WAL: -- walLevel: - - 0:wal is disabled - - 1:wal is enabled without fsync - - 2:wal is enabled with fsync -- fsync:This parameter is only valid when walLevel is set to 2. It specifies the interval, in milliseconds, of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written. +- wal_level: Specifies the WAL level. 1 indicates that WAL is enabled but fsync is disabled. 2 indicates that WAL and fsync are both enabled. The default value is 1. +- wal_fsync_period: This parameter is only valid when wal_level is set to 2. It specifies the interval, in milliseconds, of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written. -To achieve absolutely no data loss, walLevel should be set to 2 and fsync should be set to 1. There is a performance penalty to the data ingestion rate. However, if the concurrent data insertion threads on the client side can reach a big enough number, for example 50, the data ingestion performance will be still good enough. Our verification shows that the drop is only 30% when fsync is set to 3,000 milliseconds. +To achieve absolutely no data loss, set wal_level to 2 and wal_fsync_period to 0. There is a performance penalty to the data ingestion rate. However, if the concurrent data insertion threads on the client side can reach a big enough number, for example 50, the data ingestion performance will be still good enough. Our verification shows that the drop is only 30% when wal_fsync_period is set to 3000 milliseconds. 
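For example, assuming the database-level WAL options available in TDengine 3.0 (see the SQL reference for the authoritative syntax), durability can be configured when a database is created; the database name `power` here is a placeholder:

```sql
-- Maximum durability: enable WAL and invoke fsync on every write
CREATE DATABASE power WAL_LEVEL 2 WAL_FSYNC_PERIOD 0;
```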
## Disaster Recovery

-TDengine uses replication to provide high availability and disaster recovery capability.
+TDengine uses replication to provide high availability.

-A TDengine cluster is managed by mnode. To ensure the high availability of mnode, multiple replicas can be configured by the system parameter `numOfMnodes`. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
+A TDengine cluster is managed by mnodes. You can configure up to three mnodes to ensure high availability. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.

-The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, parameter `replica` is used to configure the number of replications. To achieve high availability, `replica` needs to be higher than 1.
+The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` is used to specify the number of replicas. To achieve high availability, set `replica` to 3.

 The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create a table.

 As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
+
+Alternatively, you can use taosX to synchronize the data from one TDengine cluster to another cluster in a remote location. For more information, see [taosX](../../reference/taosX).

diff --git a/docs/en/13-operation/17-diagnose.md b/docs/en/13-operation/17-diagnose.md
index 2b474fddba..d01d12e831 100644
--- a/docs/en/13-operation/17-diagnose.md
+++ b/docs/en/13-operation/17-diagnose.md
@@ -13,110 +13,59 @@ Diagnostic steps:

1. If the port range to be diagnosed is being occupied by a `taosd` server process, please first stop `taosd`.
2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by `-P` parameter with the role of "server".
3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send a testing package to the specified server and port.
-
--l <pktlen>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000. Please note that the package length must be same in the above 2 commands executed on server side and client side respectively.
+
+-l <pktlen>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000.
+Please note that the package length must be the same in the above 2 commands executed on server side and client side respectively. 
Output of the server side for the example is below: ```bash -# taos -n server -P 6000 -12/21 14:50:13.522509 0x7f536f455200 UTL work as server, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000 - -12/21 14:50:13.522659 0x7f5352242700 UTL TCP server at port:6000 is listening -12/21 14:50:13.522727 0x7f5351240700 UTL TCP server at port:6001 is listening +# taos -n server -P 6030 -l 1000 +network test server is initialized, port:6030 +request is received, size:1000 +request is received, size:1000 ... ... ... -12/21 14:50:13.523954 0x7f5342fed700 UTL TCP server at port:6011 is listening -12/21 14:50:13.523989 0x7f53437ee700 UTL UDP server at port:6010 is listening -12/21 14:50:13.524019 0x7f53427ec700 UTL UDP server at port:6011 is listening -12/21 14:50:22.192849 0x7f5352242700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6000 -12/21 14:50:22.192993 0x7f5352242700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6000 -12/21 14:50:22.237082 0x7f5351a41700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6000 -12/21 14:50:22.237203 0x7f5351a41700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6000 -12/21 14:50:22.237450 0x7f5351240700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6001 -12/21 14:50:22.237576 0x7f5351240700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6001 -12/21 14:50:22.281038 0x7f5350a3f700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6001 -12/21 14:50:22.281141 0x7f5350a3f700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6001 -... -... -... -12/21 14:50:22.677443 0x7f5342fed700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6011 -12/21 14:50:22.677576 0x7f5342fed700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6011 -12/21 14:50:22.721144 0x7f53427ec700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6011 -12/21 14:50:22.721261 0x7f53427ec700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6011 +request is received, size:1000 +request is received, size:1000 ``` Output of the client side for the example is below: ```bash # taos -n client -h 172.27.0.7 -P 6000 -12/21 14:50:22.192434 0x7fc95d859200 UTL work as client, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000 +taos -n client -h v3s2 -P 6030 -l 1000 +network test client is initialized, the server is v3s2:6030 +request is sent, size:1000 +response is received, size:1000 +request is sent, size:1000 +response is received, size:1000 +... +... +... +request is sent, size:1000 +response is received, size:1000 +request is sent, size:1000 +response is received, size:1000 -12/21 14:50:22.192472 0x7fc95d859200 UTL server ip:172.27.0.7 is resolved from host:172.27.0.7 -12/21 14:50:22.236869 0x7fc95d859200 UTL successed to test TCP port:6000 -12/21 14:50:22.237215 0x7fc95d859200 UTL successed to test UDP port:6000 -... -... -... -12/21 14:50:22.676891 0x7fc95d859200 UTL successed to test TCP port:6010 -12/21 14:50:22.677240 0x7fc95d859200 UTL successed to test UDP port:6010 -12/21 14:50:22.720893 0x7fc95d859200 UTL successed to test TCP port:6011 -12/21 14:50:22.721274 0x7fc95d859200 UTL successed to test UDP port:6011 +total succ: 100/100 cost: 16.23 ms speed: 5.87 MB/s ``` The output needs to be checked carefully for the system operator to find the root cause and resolve the problem. -## Startup Status and RPC Diagnostic - -`taos -n startup -h ` can be used to check the startup status of a `taosd` process. This is a common task which should be performed by a system operator, especially in the case of a cluster, to determine whether `taosd` has been started successfully. 
- -`taos -n rpc -h ` can be used to check whether the port of a started `taosd` can be accessed or not. If `taosd` process doesn't respond or is working abnormally, this command can be used to initiate a rpc communication with the specified fqdn to determine whether it's a network problem or whether `taosd` is abnormal. - -## Sync and Arbitrator Diagnostic - -```bash -taos -n sync -P 6040 -h -taos -n sync -P 6042 -h -``` - -The above commands can be executed in a Linux shell to check whether the port for sync is working well and whether the sync module on the server side is working well. Additionally, `-P 6042` is used to check whether the arbitrator is configured properly and is working well. - -## Network Speed Diagnostic - -`taos -n speed -h -P 6030 -N 10 -l 10000000 -S TCP` - -From version 2.2.0.0 onwards, the above command can be executed in a Linux shell to test network speed. The command sends uncompressed packages to a running `taosd` server process or a simulated server process started by `taos -n server` to test the network speed. Parameters can be used when testing network speed are as below: - --n:When set to "speed", it means testing network speed. --h:The FQDN or IP of the server process to be connected to; if not set, the FQDN configured in `taos.cfg` is used. --P:The port of the server process to connect to, the default value is 6030. --N:The number of packages that will be sent in the test, range is [1,10000], default value is 100. --l:The size of each package in bytes, range is [1024, 1024 \* 1024 \* 1024], default value is 1024. --S:The type of network packages to send, can be either TCP or UDP, default value is TCP. - -## FQDN Resolution Diagnostic - -`taos -n fqdn -h ` - -From version 2.2.0.0 onward, the above command can be executed in a Linux shell to test the resolution speed of FQDN. It can be used to try to resolve a FQDN to an IP address and record the time spent in this process. The parameters that can be used for this purpose are as below: - --n:When set to "fqdn", it means testing the speed of resolving FQDN. --h:The FQDN to be resolved. If not set, the `FQDN` parameter in `taos.cfg` is used by default. - ## Server Log -The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively. +The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively. -Once this parameter is set to 135 or 143, the log file grows very quickly especially when there is a huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily and so on the server side, important information is stored in a different place from other logs. - -- The log at level of INFO, WARNING and ERROR is stored in `taosinfo` so that it is easy to find important information -- The log at level of DEBUG (135) and TRACE (143) and other information not handled by `taosinfo` are stored in `taosdlog` +Once this parameter is set to 135 or 143, the log file grows very quickly especially when there is a huge volume of data insertion and data query requests. Ensure that the disk drive on which logs are stored has sufficient space. ## Client Log -An independent log file, named as "taoslog+" is generated for each client program, i.e. a client process. 
The default value of `debugFlag` is also 131 and only logs at level of INFO/ERROR/WARNING are recorded. As stated above, for debugging and tracing, it needs to be changed to 135 or 143 respectively, so that logs at DEBUG or TRACE level can be recorded. +An independent log file, named as "taoslog+" is generated for each client program, i.e. a client process. The parameter `debugFlag` is used to control the log level. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively. + +The default value of `debugFlag` is also 131 and only logs at level of INFO/ERROR/WARNING are recorded. As stated above, for debugging and tracing, it needs to be changed to 135 or 143 respectively, so that logs at DEBUG or TRACE level can be recorded. The maximum length of a single log file is controlled by parameter `numOfLogLines` and only 2 log files are kept for each `taosd` server process. -Log files are written in an async way to minimize the workload on disk, but the trade off for performance is that a few log lines may be lost in some extreme conditions. +Log files are written in an async way to minimize the workload on disk, but the trade off for performance is that a few log lines may be lost in some extreme conditions. You can configure asynclog to 0 when needed for troubleshooting purposes to ensure that no log information is lost. From 77de88a768de345124318dd5d7b7c1573d7f43f3 Mon Sep 17 00:00:00 2001 From: danielclow <106956386+danielclow@users.noreply.github.com> Date: Mon, 22 Aug 2022 16:08:46 +0800 Subject: [PATCH 04/29] doc: english version of third-party --- docs/en/20-third-party/01-grafana.mdx | 17 +++++++++-------- docs/en/20-third-party/06-statsd.md | 9 +++------ docs/en/20-third-party/10-hive-mq-broker.md | 4 ++-- 3 files changed, 14 insertions(+), 16 deletions(-) diff --git a/docs/en/20-third-party/01-grafana.mdx b/docs/en/20-third-party/01-grafana.mdx index 5dbeb31a23..e0fbefd5a8 100644 --- a/docs/en/20-third-party/01-grafana.mdx +++ b/docs/en/20-third-party/01-grafana.mdx @@ -6,9 +6,7 @@ title: Grafana import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem"; -TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process does not require any code development. And you can visualize the contents of the data tables in TDengine on a dashboard. - -You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md). +TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process does not require any code development. And you can visualize the contents of the data tables in TDengine on a dashboard. You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md). ## Prerequisites @@ -65,7 +63,6 @@ Restart Grafana service and open Grafana in web-browser, usually - Follow the installation steps in [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) with the [``grafana-cli`` command-line tool](https://grafana.com/docs/grafana/latest/administration/cli/) for plugin installation. 
@@ -76,7 +73,7 @@ grafana-cli plugins install tdengine-datasource sudo -u grafana grafana-cli plugins install tdengine-datasource ``` -Alternatively, you can manually download the .zip file from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and unpack it into your grafana plugins directory. +You can also download zip files from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and install manually. The commands are as follows: ```bash GF_VERSION=3.2.2 @@ -131,7 +128,7 @@ docker run -d \ grafana/grafana ``` -You can setup a zero-configuration stack for TDengine + Grafana by [docker-compose](https://docs.docker.com/compose/) and [Grafana provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) file: +You can setup a zero-configuration stack for TDengine + Grafana by [docker-compose](https://docs.docker.com/compose/) and [Grafana provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) file: 1. Save the provisioning configuration file to `tdengine.yml`. @@ -196,7 +193,7 @@ Go back to the main interface to create a dashboard and click Add Query to enter As shown above, select the `TDengine` data source in the `Query` and enter the corresponding SQL in the query box below for query. -- INPUT SQL: enter the statement to be queried (the result set of the SQL statement should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where, from, to and interval are built-in variables of the TDengine plugin, indicating the range and time interval of queries fetched from the Grafana plugin panel. In addition to the built-in variables, custom template variables are also supported. +- INPUT SQL: Enter the desired query (the results being two columns and multiple rows), such as `select _wstart, avg(mem_system) from log.dnodes_info where ts >= $from and ts < $to interval($interval)`. In this statement, $from, $to, and $interval are variables that Grafana replaces with the query time range and interval. In addition to the built-in variables, custom template variables are also supported. - ALIAS BY: This allows you to set the current query alias. - GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement. @@ -208,7 +205,11 @@ Follow the default prompt to query the average system memory usage for the speci ### Importing the Dashboard -You can install TDinsight dashboard in data source configuration page (like `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for TDengine cluster. The dashboard is published in Grafana as [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). Check the [TDinsight User Manual](/reference/tdinsight/) for the details. +You can install TDinsight dashboard in data source configuration page (like `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for TDengine cluster. Ensure that you use TDinsight for 3.x. 
+
+![TDengine Database Grafana plugin import dashboard](./import_dashboard.webp)
+
+A dashboard for TDengine 2.x has been published on Grafana: [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). Check the [TDinsight User Manual](/reference/tdinsight/) for the details.

For more dashboards using TDengine data source, [search here in Grafana](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource). Here is a sub list:

diff --git a/docs/en/20-third-party/06-statsd.md b/docs/en/20-third-party/06-statsd.md
index 40e927b9fd..32b1bbb97a 100644
--- a/docs/en/20-third-party/06-statsd.md
+++ b/docs/en/20-third-party/06-statsd.md
@@ -1,6 +1,6 @@
 ---
 sidebar_label: StatsD
-title: StatsD writing
+title: StatsD Writing
 ---

 import StatsD from "../14-reference/_statsd.mdx"

@@ -12,8 +12,8 @@ You can write StatsD data to TDengine by simply modifying the configuration file
 ## Prerequisites

 To write StatsD data to TDengine requires the following preparations.
-- The TDengine cluster has been deployed and is working properly
-- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
-- StatsD has been installed. To install StatsD, please refer to [official documentation](https://github.com/statsd/statsd)
+1. The TDengine cluster is deployed and functioning properly
+2. taosAdapter is installed and running properly. Please refer to the taosAdapter manual for details.
+3. StatsD has been installed. To install StatsD, please refer to [official documentation](https://github.com/statsd/statsd)

 ## Configuration steps
@@ -39,9 +39,6 @@ $ echo "foo:1|c" | nc -u -w0 127.0.0.1 8125

Use the TDengine CLI to verify that StatsD data is written to TDengine and can be read out correctly.

```
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.0
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
taos> show databases;
              name              |      created_time       | ntables | vgroups | replica | quorum |  days  |    keep     | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
diff --git a/docs/en/20-third-party/10-hive-mq-broker.md b/docs/en/20-third-party/10-hive-mq-broker.md
index 333e00fa0e..828a62ac5b 100644
--- a/docs/en/20-third-party/10-hive-mq-broker.md
+++ b/docs/en/20-third-party/10-hive-mq-broker.md
@@ -1,6 +1,6 @@
 ---
 sidebar_label: HiveMQ Broker
-title: HiveMQ Broker writing
+title: HiveMQ Broker Writing
 ---

-[HiveMQ](https://www.hivemq.com/) is an MQTT broker that provides community and enterprise editions. HiveMQ is mainly for enterprise emerging machine-to-machine M2M communication and internal transport, meeting scalability, ease of management, and security features. HiveMQ provides an open-source plug-in development kit. MQTT data can be saved to TDengine via TDengine extension for HiveMQ. Please refer to the [HiveMQ extension - TDengine documentation](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README_EN.md) for details on how to use it. \ No newline at end of file
+[HiveMQ](https://www.hivemq.com/) is an MQTT broker that provides community and enterprise editions. HiveMQ is mainly for enterprise emerging machine-to-machine M2M communication and internal transport, meeting scalability, ease of management, and security features. 
HiveMQ provides an open-source plug-in development kit. MQTT data can be saved to TDengine via the TDengine extension for HiveMQ. For more information, see [HiveMQ TDengine Extension](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README_EN.md).

From e51e7d9410198d4a117b472a4a41de9ae0dac08d Mon Sep 17 00:00:00 2001
From: danielclow <106956386+danielclow@users.noreply.github.com>
Date: Thu, 25 Aug 2022 18:12:26 +0800
Subject: [PATCH 05/29] doc: english faq for tdengine 3.0

---
 docs/en/27-train-faq/01-faq.md | 185 +++++++++++++++++++++------------
 1 file changed, 117 insertions(+), 68 deletions(-)

diff --git a/docs/en/27-train-faq/01-faq.md b/docs/en/27-train-faq/01-faq.md
index c10bca1d05..733b418474 100644
--- a/docs/en/27-train-faq/01-faq.md
+++ b/docs/en/27-train-faq/01-faq.md
@@ -1,114 +1,163 @@
 ---
-sidebar_label: FAQ
 title: Frequently Asked Questions
 ---
 
 ## Submit an Issue
 
-If the tips in FAQ don't help much, please submit an issue on [GitHub](https://github.com/taosdata/TDengine) to describe your problem. In your description please include the TDengine version, hardware and OS information, the steps to reproduce the problem and any other relevant information. It would be very helpful if you can package the contents in `/var/log/taos` and `/etc/taos` and upload. These two are the default directories used by TDengine. If you have changed the default directories in your configuration, please package the files in your configured directories. We recommended setting `debugFlag` to 135 in `taos.cfg`, restarting `taosd`, then reproducing the problem and collecting the logs. If you don't want to restart, an alternative way of setting `debugFlag` is executing `alter dnode debugFlag 135` command in TDengine CLI `taos`. During normal running, however, please make sure `debugFlag` is set to 131.
+If your issue could not be resolved by reviewing this documentation, you can submit your issue on GitHub and receive support from the TDengine Team. When you submit an issue, attach the following directories from your TDengine deployment:
+
+1. The directory containing TDengine logs (`/var/log/taos` by default)
+2. The directory containing TDengine configuration files (`/etc/taos` by default)
+
+In your GitHub issue, provide the version of TDengine and the operating system and environment for your deployment, the operations that you performed when the issue occurred, and the time of occurrence and affected tables.
+
+To obtain more debugging information, open `taos.cfg` and set the `debugFlag` parameter to `135`. Then restart TDengine Server and reproduce the issue. The debug-level logs generated will help the TDengine Team to resolve your issue. If it is not possible to restart TDengine Server, you can run the following command in the TDengine CLI to set the debug flag for a specific dnode:
+
+```
+alter dnode <dnode_id> 'debugFlag' '135';
+```
+
+You can run the `SHOW DNODES` command to determine the dnode ID.
+
+When debugging information is no longer needed, set `debugFlag` back to 131.
 
 ## Frequently Asked Questions
 
-### 1. How to upgrade to TDengine 2.0 from older version?
+### 1. What are the best practices for upgrading a previous version of TDengine to version 3.0?
 
-version 2.x is not compatible with version 1.x. With regard to the configuration and data files, please perform the following steps before upgrading. Please follow data integrity, security, backup and other relevant SOPs, best practices before removing/deleting any data. 
+TDengine 3.0 is not compatible with the configuration and data files from previous versions. Before upgrading, perform the following steps: -1. Delete configuration files: `sudo rm -rf /etc/taos/taos.cfg` -2. Delete log files: `sudo rm -rf /var/log/taos/` -3. Delete data files if the data doesn't need to be kept: `sudo rm -rf /var/lib/taos/` -4. Install latest 2.x version -5. If the data needs to be kept and migrated to newer version, please contact professional service at TDengine for assistance. +1. Run `sudo rm -rf /etc/taos/taos.cfg` to delete your configuration file. +2. Run `sudo rm -rf /var/log/taos/` to delete your log files. +3. Run `sudo rm -rf /var/lib/taos/` to delete your data files. +4. Install TDengine 3.0. +5. For assistance in migrating data to TDengine 3.0, contact [TDengine Support](https://tdengine.com/support). -### 2. How to handle "Unable to establish connection"? +### 4. How can I resolve the "Unable to establish connection" error? -When the client is unable to connect to the server, you can try the following ways to troubleshoot and resolve the problem. +This error indicates that the client could not connect to the server. Perform the following troubleshooting steps: -1. Check the network +1. Check the network. - - Check if the hosts where the client and server are running are accessible to each other, for example by `ping` command. - - Check if the TCP/UDP on port 6030-6042 are open for access if firewall is enabled. If possible, disable the firewall for diagnostics, but please ensure that you are following security and other relevant protocols. - - Check if the FQDN and serverPort are configured correctly in `taos.cfg` used by the server side. - - Check if the `firstEp` is set properly in the `taos.cfg` used by the client side. + - For machines deployed in the cloud, verify that your security group can access ports 6030 and 6031 (TCP and UDP). + - For virtual machines deployed locally, verify that the hosts where the client and server are running are accessible to each other. Do not use localhost as the hostname. + - For machines deployed on a corporate network, verify that your NAT configuration allows the server to respond to the client. -2. Make sure the client version and server version are same. +2. Verify that the client and server are running the same version of TDengine. -3. On server side, check the running status of `taosd` by executing `systemctl status taosd` . If your server is started using another way instead of `systemctl`, use the proper method to check whether the server process is running normally. +3. On the server, run `systemctl status taosd` to verify that taosd is running normally. If taosd is stopped, run `systemctl start taosd`. -4. If using connector of Python, Java, Go, Rust, C#, node.JS on Linux to connect to the server, please make sure `libtaos.so` is in directory `/usr/local/taos/driver` and `/usr/local/taos/driver` is in system lib search environment variable `LD_LIBRARY_PATH`. +4. Verify that the client is configured with the correct FQDN for the server. -5. If using connector on Windows, please make sure `C:\TDengine\driver\taos.dll` is in your system lib search path. We recommend putting `taos.dll` under `C:\Windows\System32`. +5. If the server cannot be reached with the `ping` command, verify that network and DNS or hosts file settings are correct. For a TDengine cluster, the client must be able to ping the FQDN of every node in the cluster. -6. Some advanced network diagnostics tools +6. 
Verify that your firewall settings allow all hosts in the cluster to communicate on ports 6030 and 6041 (TCP and UDP). You can run `ufw status` (Ubuntu) or `firewall-cmd --list-port` (CentOS) to check the configuration. - - On Linux system tool `nc` can be used to check whether the TCP/UDP can be accessible on a specified port - Check whether a UDP port is open: `nc -vuz {hostIP} {port} ` - Check whether a TCP port on server side is open: `nc -l {port}` - Check whether a TCP port on client side is open: `nc {hostIP} {port}` +7. If you are using the Python, Java, Go, Rust, C#, or Node.js connector on Linux to connect to the server, verify that `libtaos.so` is in the `/usr/local/taos/driver` directory and `/usr/local/taos/driver` is in the `LD_LIBRARY_PATH` environment variable. - - On Windows system `Test-NetConnection -ComputerName {fqdn} -Port {port}` on PowerShell can be used to check whether the port on server side is open for access. +8. If you are using Windows, verify that `C:\TDengine\driver\taos.dll` is in the `PATH` environment variable. If possible, move `taos.dll` to the `C:\Windows\System32` directory. -7. TDengine CLI `taos` can also be used to check network, please refer to [TDengine CLI](/reference/taos-shell). +9. On Linux systems, you can use the `nc` tool to check whether a port is accessible: + - To check whether a UDP port is open, run `nc -vuz {hostIP} {port}`. + - To check whether a TCP port on the server side is open, run `nc -l {port}`. + - To check whether a TCP port on client side is open, run `nc {hostIP} {port}`. -### 3. How to handle "Unexpected generic error in RPC" or "Unable to resolve FQDN" ? +10. On Windows systems, you can run `Test-NetConnection -ComputerName {fqdn} -Port {port}` in PowerShell to check whether a port on the server side is accessible. -This error is caused because the FQDN can't be resolved. Please try following ways: +11. You can also use the TDengine CLI to diagnose network issues. For more information, see [Problem Diagnostics](https://docs.tdengine.com/operation/diagnose/). -1. Check whether the FQDN is configured properly on the server side -2. If DSN server is configured in the network, please check whether it works; otherwise, check `/etc/hosts` to see whether the FQDN is configured with correct IP -3. If the network configuration on the server side is OK, try to ping the server from the client side. -4. If TDengine has been used before with an old hostname then the hostname has been changed, please check `/var/lib/taos/taos/dnode/dnodeEps.json`. Before setting up a new TDengine cluster, it's better to cleanup the directories configured. +### 5. How can I resolve the "Unable to resolve FQDN" error? -### 4. "Invalid SQL" is returned even though the Syntax is correct +Clients and dnodes must be able to resolve the FQDN of each required node. You can confirm your configuration as follows: -"Invalid SQL" is returned when the length of SQL statement exceeds maximum allowed length or the syntax is not correct. +1. Verify that the FQDN is configured properly on the server. +2. If your network has a DNS server, verify that it is operational. +3. If your network does not have a DNS server, verify that the FQDNs in the `hosts` file are correct. +4. On the client, use the `ping` command to test your connection to the server. If you cannot ping an FQDN, TDengine cannot reach it. +5. 
If TDengine has been previously installed and the `hostname` was modified, open `dnode.json` in the `data` folder and verify that the endpoint configuration is correct. The default location of the dnode file is `/var/lib/taos/dnode`. Ensure that you clean up previous installations before reinstalling TDengine.
+6. Confirm whether FQDNs are preconfigured in `/etc/hosts` and `/etc/hostname`.
 
-### 5. Whether validation queries are supported?
+### 6. What is the most effective way to write data to TDengine?
 
-It's suggested to use a builtin database named as `log` to monitor.
+Writing data in batches provides higher efficiency in most situations. You can insert one or more data records into one or more tables in a single SQL statement.
 
-
+### 9. Why are table names not fully displayed?
 
-### 6. Can I delete a record?
+The number of columns in the TDengine CLI terminal display is limited. This can cause table names to be cut off, and if you use an incomplete name in a statement, the "Table does not exist" error will occur. You can increase the display size with the `maxBinaryDisplayWidth` parameter or the SQL statement `set max_binary_display_width`. You can also append `\G` to your SQL statement to bypass this limitation.
 
-From version 2.6.0.0 Enterprise version, deleting data can be supported.
+### 10. How can I migrate data?
 
-### 7. How to create a table of over 1024 columns?
+In TDengine, the `hostname` uniquely identifies a machine. When you move data files to a new machine, you must configure the new machine to have the same `hostname` as the original machine.
 
-From version 2.1.7.0, at most 4096 columns can be defined for a table.
+:::note
 
-### 8. How to improve the efficiency of inserting data?
+The data structure of previous versions of TDengine is not compatible with version 3.0. To migrate from TDengine 1.x or 2.x to 3.0, you must export data from your older deployment and import it back into TDengine 3.0.
 
-Inserting data in batch is a good practice. Single SQL statement can insert data for one or multiple tables in batch.
+:::
 
-### 9. JDBC Error: the executed SQL is not a DML or a DDL?
+### 11. How can I temporarily change the log level from the TDengine Client?
 
-Please upgrade to latest JDBC driver, for details please refer to [Java Connector](/reference/connector/java)
-
-### 10. Failed to connect with error "invalid timestamp"
-
-The most common reason is that the time setting is not aligned on the client side and the server side. On Linux system, please use `ntpdate` command. On Windows system, please enable automatic sync in system time setting.
-
-### 11. Table name is not shown in full
-
-There is a display width setting in TDengine CLI `taos`. It can be controlled by configuration parameter `maxBinaryDisplayWidth`, or can be set using SQL command `set max_binary_display_width`. A more convenient way is to append `\G` in a SQL command to bypass this limitation.
-
-### 12. How to change log level temporarily? 
-
-Below SQL command can be used to adjust log level temporarily
+To change the log level for debugging purposes, you can use the following command:
 
 ```sql
-ALTER LOCAL flag_name flag_value;
+ALTER LOCAL local_option
+
+local_option: {
+    'resetLog'
+  | 'rpcDebugFlag' value
+  | 'tmrDebugFlag' value
+  | 'cDebugFlag' value
+  | 'uDebugFlag' value
+  | 'debugFlag' value
+}
 ```
-   - flag_name can be: debugFlag,cDebugFlag,tmrDebugFlag,uDebugFlag,rpcDebugFlag
-   - flag_value can be: 131 (INFO/WARNING/ERROR), 135 (plus DEBUG), 143 (plus TRACE)
-
+Use `resetLog` to remove all logs generated on the local client. Use the other parameters to specify a log level for a specific component.
 
-### 13. What to do if go compilation fails?
+For each parameter, you can set the value to `131` (error and warning), `135` (error, warning, and debug), or `143` (error, warning, debug, and trace).
 
-From version 2.3.0.0, a new component named `taosAdapter` is introduced. Its' developed in Go. If you want to compile from source code and meet go compilation problems, try to do below steps to resolve Go environment problems.
+### 12. Why do TDengine components written in Go fail to compile?
 
-```sh
-go env -w GO111MODULE=on
-go env -w GOPROXY=https://goproxy.cn,direct
-```
+TDengine includes taosAdapter, an independent component written in Go. This component provides the REST API as well as data access for other products such as Prometheus and Telegraf.
+When using the develop branch, you must run `git submodule update --init --recursive` to download the taosAdapter repository and then compile it.
+
+TDengine Go components require Go version 1.14 or later.
+
+### 13. How can I query the storage space being used by my data?
+
+The TDengine data files are stored in `/var/lib/taos` by default. Log files are stored in `/var/log/taos`.
+
+To see how much space your data files occupy, run `du -sh /var/lib/taos/vnode --exclude='wal'`. This excludes the write-ahead log (WAL) because its size is relatively fixed while writes are occurring, and it is written to disk and cleared when you shut down TDengine.
+
+If you want to see how much space is occupied by a single database, first determine which vgroup is storing the database by running `show vgroups`. Then check `/var/lib/taos/vnode` for the files associated with the vgroup ID.
+
+### 15. How is timezone information processed for timestamps?
+
+TDengine uses the timezone of the client for timestamps. The server timezone does not affect timestamps. The client converts Unix timestamps in SQL statements to UTC before sending them to the server. When you query data on the server, it provides timestamps in UTC to the client, which converts them to its local time.
+
+Timestamps are processed as follows:
+
+1. The client uses its system timezone unless it has been configured otherwise.
+2. A timezone configured in `taos.cfg` takes precedence over the system timezone.
+3. A timezone explicitly specified when establishing a connection to TDengine through a connector takes precedence over `taos.cfg` and the system timezone. For example, the Java connector allows you to specify a timezone in the JDBC URL.
+4. If you use an RFC 3339 timestamp (2013-04-12T15:52:01.123+08:00), or an ISO 8601 timestamp (2013-04-12T15:52:01.123+0800), the timezone specified in the timestamp is used instead of the timezone configured using any other method.
+
+### 16. Which network ports are required by TDengine?
+
+See [serverPort](https://docs.tdengine.com/reference/config/#serverport) in Configuration Parameters. 
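+
+As a quick check (an illustrative sketch only; `tdengine.example.com` is a placeholder for the FQDN of your own dnode, and the `nc` utility is assumed to be installed), you can probe the two client-facing ports mentioned elsewhere in this FAQ:
+
+```bash
+# 6030: native client connections handled by taosd
+# 6041: REST API served by taosAdapter
+nc -zv tdengine.example.com 6030
+nc -zv tdengine.example.com 6041
+```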
+ +Note that ports are specified using 6030 as the default first port. If you change this port, all other ports change as well. + +### 17. Why do applications such as Grafana fail to connect to TDengine over the REST API? + +In TDengine, the REST API is provided by taosAdapter. Ensure that taosAdapter is running before you connect an application to TDengine over the REST API. You can run `systemctl start taosadapter` to start the service. + +Note that the log path for taosAdapter must be configured separately. The default path is `/var/log/taos`. You can choose one of eight log levels. The default is `info`. You can set the log level to `panic` to disable log output. You can modify the taosAdapter configuration file to change these settings. The default location is `/etc/taos/taosadapter.toml`. + +For more information, see [taosAdapter](https://docs.tdengine.com/reference/taosadapter/). + +### 18. How can I resolve out-of-memory (OOM) errors? + +OOM errors are thrown by the operating system when its memory, including swap, becomes insufficient and it needs to terminate processes to remain operational. Most OOM errors in TDengine occur for one of the following reasons: free memory is less than the value of `vm.min_free_kbytes` or free memory is less than the size of the request. If TDengine occupies reserved memory, an OOM error can occur even when free memory is sufficient. + +TDengine preallocates memory to each vnode. The number of vnodes per database is determined by the `vgroups` parameter, and the amount of memory per vnode is determined by the `buffer` parameter. To prevent OOM errors from occurring, ensure that you prepare sufficient memory on your hosts to support the number of vnodes that your deployment requires. Configure an appropriately sized swap space. If you continue to receive OOM errors, your SQL statements may be querying too much data for your system. TDengine Enterprise Edition includes optimized memory management that increases stability for enterprise customers. 
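+
+To make the sizing advice above concrete, here is an illustrative sketch only: `vgroups` and `buffer` are the documented database parameters, while the database name and the values are invented for this example. A database created this way asks TDengine to reserve roughly 4 x 256 MB = 1 GB of write buffer in total:
+
+```bash
+# illustrative: 4 vnodes, each preallocating a 256 MB write buffer
+taos -s "CREATE DATABASE power VGROUPS 4 BUFFER 256;"
+```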
From 4895f1987619000ae6069acc69da187e183639e1 Mon Sep 17 00:00:00 2001 From: Xiaoyu Wang Date: Thu, 25 Aug 2022 18:39:23 +0800 Subject: [PATCH 06/29] fix: plan problem with sorting the first column of the system table --- source/libs/parser/src/parTranslater.c | 2 +- source/libs/planner/src/planOptimizer.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index 5a32c87fc3..d938325ef2 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -5001,7 +5001,7 @@ static int32_t checkCreateStream(STranslateContext* pCxt, SCreateStreamStmt* pSt return TSDB_CODE_SUCCESS; } - if (QUERY_NODE_SELECT_STMT != nodeType(pStmt->pQuery) || + if (QUERY_NODE_SELECT_STMT != nodeType(pStmt->pQuery) || NULL == ((SSelectStmt*)pStmt->pQuery)->pFromTable || QUERY_NODE_REAL_TABLE != nodeType(((SSelectStmt*)pStmt->pQuery)->pFromTable)) { return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STREAM_QUERY, "Unsupported stream query"); } diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index 45ab3903a9..653291d3a4 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -1084,7 +1084,7 @@ static int32_t sortPriKeyOptGetSequencingNodesImpl(SLogicNode* pNode, bool* pNot switch (nodeType(pNode)) { case QUERY_NODE_LOGIC_PLAN_SCAN: { SScanLogicNode* pScan = (SScanLogicNode*)pNode; - if (NULL != pScan->pGroupTags) { + if (NULL != pScan->pGroupTags || TSDB_SYSTEM_TABLE == pScan->tableType) { *pNotOptimize = true; return TSDB_CODE_SUCCESS; } From 8df8f90e19d04a891320f60c29b07a03a86b4c72 Mon Sep 17 00:00:00 2001 From: afwerar <1296468573@qq.com> Date: Fri, 26 Aug 2022 16:03:28 +0800 Subject: [PATCH 07/29] os: fix mac run error --- cmake/cmake.define | 5 +- cmake/cmake.install | 16 ++++ cmake/cmake.options | 6 ++ include/os/os.h | 1 + include/os/osSemaphore.h | 5 +- packaging/deb/DEBIAN/prerm | 2 +- packaging/release.bat | 18 +--- source/client/src/clientEnv.c | 2 + source/libs/wal/src/walMeta.c | 1 - source/os/src/osDir.c | 2 + source/os/src/osFile.c | 7 +- source/os/src/osSemaphore.c | 163 ++-------------------------------- source/os/src/osSysinfo.c | 8 +- source/util/src/tlog.c | 3 +- 14 files changed, 55 insertions(+), 184 deletions(-) diff --git a/cmake/cmake.define b/cmake/cmake.define index 989b69a89b..5d64815a9a 100644 --- a/cmake/cmake.define +++ b/cmake/cmake.define @@ -2,8 +2,6 @@ cmake_minimum_required(VERSION 3.0) set(CMAKE_VERBOSE_MAKEFILE OFF) -SET(BUILD_SHARED_LIBS "OFF") - #set output directory SET(LIBRARY_OUTPUT_PATH ${PROJECT_BINARY_DIR}/build/lib) SET(EXECUTABLE_OUTPUT_PATH ${PROJECT_BINARY_DIR}/build/bin) @@ -103,6 +101,9 @@ IF (TD_WINDOWS) SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COMMON_FLAGS}") ELSE () + IF (${TD_DARWIN}) + set(CMAKE_MACOSX_RPATH 0) + ENDIF () IF (${COVER} MATCHES "true") MESSAGE(STATUS "Test coverage mode, add extra flags") SET(GCC_COVERAGE_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage") diff --git a/cmake/cmake.install b/cmake/cmake.install index 6dc6864975..fd1e080dda 100644 --- a/cmake/cmake.install +++ b/cmake/cmake.install @@ -1,3 +1,19 @@ +SET(PREPARE_ENV_CMD "prepare_env_cmd") +SET(PREPARE_ENV_TARGET "prepare_env_target") +ADD_CUSTOM_COMMAND(OUTPUT ${PREPARE_ENV_CMD} + POST_BUILD + COMMAND echo "make test directory" + DEPENDS taosd + COMMAND ${CMAKE_COMMAND} -E make_directory ${TD_TESTS_OUTPUT_DIR}/cfg/ + COMMAND ${CMAKE_COMMAND} -E 
make_directory ${TD_TESTS_OUTPUT_DIR}/log/
+    COMMAND ${CMAKE_COMMAND} -E make_directory ${TD_TESTS_OUTPUT_DIR}/data/
+    COMMAND ${CMAKE_COMMAND} -E echo dataDir ${TD_TESTS_OUTPUT_DIR}/data > ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
+    COMMAND ${CMAKE_COMMAND} -E echo logDir ${TD_TESTS_OUTPUT_DIR}/log >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
+    COMMAND ${CMAKE_COMMAND} -E echo charset UTF-8 >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
+    COMMAND ${CMAKE_COMMAND} -E echo monitor 0 >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
+    COMMENT "prepare taosd environment")
+ADD_CUSTOM_TARGET(${PREPARE_ENV_TARGET} ALL WORKING_DIRECTORY ${TD_EXECUTABLE_OUTPUT_PATH} DEPENDS ${PREPARE_ENV_CMD})
+
 IF (TD_LINUX)
     SET(TD_MAKE_INSTALL_SH "${TD_SOURCE_DIR}/packaging/tools/make_install.sh")
     INSTALL(CODE "MESSAGE(\"make install script: ${TD_MAKE_INSTALL_SH}\")")
diff --git a/cmake/cmake.options b/cmake/cmake.options
index bec64f7bf0..3baccde4d7 100644
--- a/cmake/cmake.options
+++ b/cmake/cmake.options
@@ -90,6 +90,12 @@ ELSE ()
     ENDIF ()
 ENDIF ()
 
+option(
+    BUILD_SHARED_LIBS
+    ""
+    OFF
+    )
+
 option(
     RUST_BINDINGS
     "If build with rust-bindings"
diff --git a/include/os/os.h b/include/os/os.h
index b036002f8a..71966061a1 100644
--- a/include/os/os.h
+++ b/include/os/os.h
@@ -79,6 +79,7 @@ extern "C" {
 #include 
 #include 
 
+#include "taoserror.h"
 #include "osAtomic.h"
 #include "osDef.h"
 #include "osDir.h"
diff --git a/include/os/osSemaphore.h b/include/os/osSemaphore.h
index 7fca20d75e..e52da96f01 100644
--- a/include/os/osSemaphore.h
+++ b/include/os/osSemaphore.h
@@ -23,10 +23,9 @@ extern "C" {
 #include 
 
 #if defined(_TD_DARWIN_64)
-
+#include <dispatch/dispatch.h>
 // typedef struct tsem_s *tsem_t;
-typedef struct bosal_sem_t *tsem_t;
-
+typedef dispatch_semaphore_t tsem_t;
 
 int tsem_init(tsem_t *sem, int pshared, unsigned int value);
 int tsem_wait(tsem_t *sem);
diff --git a/packaging/deb/DEBIAN/prerm b/packaging/deb/DEBIAN/prerm
index 4953102842..65f261db2c 100644
--- a/packaging/deb/DEBIAN/prerm
+++ b/packaging/deb/DEBIAN/prerm
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-if [ $1 -eq "abort-upgrade" ]; then
+if [ "$1"x = "abort-upgrade"x ]; then
   exit 0
 fi
 
diff --git a/packaging/release.bat b/packaging/release.bat
index ffd3a68048..14534c8d7e 100644
--- a/packaging/release.bat
+++ b/packaging/release.bat
@@ -40,10 +40,12 @@ if not exist %work_dir%\debug\ver-%2-x86 (
 )
 cd %work_dir%\debug\ver-%2-x64
 call vcvarsall.bat x64
-cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DVERNUMBER=%2 -DCPUTYPE=x64
+cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DBUILD_TEST=false -DVERNUMBER=%2 -DCPUTYPE=x64
 cmake --build .
 rd /s /Q C:\TDengine
 cmake --install . 
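+@REM Code-signing step added below: sign every DLL and EXE that was just
+@REM installed under C:\TDengine so that Windows does not flag the release
+@REM binaries as unsigned. The .pfx path and password are the values this
+@REM script hardcodes; a real release pipeline would take them from a
+@REM secure store.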
+for /r c:\TDengine %%i in (*.dll) do signtool sign /f D:\\123.pfx /p taosdata %%i +for /r c:\TDengine %%i in (*.exe) do signtool sign /f D:\\123.pfx /p taosdata %%i if not %errorlevel% == 0 ( call :RUNFAILED build x64 failed & exit /b 1) cd %package_dir% iscc /DMyAppInstallName="%packagServerName_x64%" /DMyAppVersion="%2" /DMyAppExcludeSource="" tools\tdengine.iss /O..\release @@ -51,19 +53,7 @@ if not %errorlevel% == 0 ( call :RUNFAILED package %packagServerName_x64% faile iscc /DMyAppInstallName="%packagClientName_x64%" /DMyAppVersion="%2" /DMyAppExcludeSource="taosd.exe" tools\tdengine.iss /O..\release if not %errorlevel% == 0 ( call :RUNFAILED package %packagClientName_x64% failed & exit /b 1) -cd %work_dir%\debug\ver-%2-x86 -call vcvarsall.bat x86 -cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DVERNUMBER=%2 -DCPUTYPE=x86 -cmake --build . -rd /s /Q C:\TDengine -cmake --install . -if not %errorlevel% == 0 ( call :RUNFAILED build x86 failed & exit /b 1) -cd %package_dir% -@REM iscc /DMyAppInstallName="%packagServerName_x86%" /DMyAppVersion="%2" /DMyAppExcludeSource="" tools\tdengine.iss /O..\release -@REM if not %errorlevel% == 0 ( call :RUNFAILED package %packagServerName_x86% failed & exit /b 1) -iscc /DMyAppInstallName="%packagClientName_x86%" /DMyAppVersion="%2" /DMyAppExcludeSource="taosd.exe" tools\tdengine.iss /O..\release -if not %errorlevel% == 0 ( call :RUNFAILED package %packagClientName_x86% failed & exit /b 1) - +for /r ..\release %%i in (*.exe) do signtool sign /f d:\\123.pfx /p taosdata %%i goto EXIT0 :USAGE diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index ff1b9322c9..99ecab9642 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -393,7 +393,9 @@ void taos_init_imp(void) { schedulerInit(); tscDebug("starting to initialize TAOS driver"); +#ifndef WINDOWS taosSetCoreDump(true); +#endif initTaskQueue(); fmFuncMgtInit(); diff --git a/source/libs/wal/src/walMeta.c b/source/libs/wal/src/walMeta.c index a8da680910..0983d344c1 100644 --- a/source/libs/wal/src/walMeta.c +++ b/source/libs/wal/src/walMeta.c @@ -221,7 +221,6 @@ int walCheckAndRepairMeta(SWal* pWal) { int code = walSaveMeta(pWal); if (code < 0) { - taosArrayDestroy(actualLog); return -1; } } diff --git a/source/os/src/osDir.c b/source/os/src/osDir.c index b755a35815..30aaa01dae 100644 --- a/source/os/src/osDir.c +++ b/source/os/src/osDir.c @@ -133,6 +133,7 @@ int32_t taosMulMkDir(const char *dirname) { code = mkdir(temp, 0755); #endif if (code < 0 && errno != EEXIST) { + terrno = TAOS_SYSTEM_ERROR(errno); return code; } *pos = TD_DIRSEP[0]; @@ -146,6 +147,7 @@ int32_t taosMulMkDir(const char *dirname) { code = mkdir(temp, 0755); #endif if (code < 0 && errno != EEXIST) { + terrno = TAOS_SYSTEM_ERROR(errno); return code; } } diff --git a/source/os/src/osFile.c b/source/os/src/osFile.c index 2d9cfe3246..fab933755a 100644 --- a/source/os/src/osFile.c +++ b/source/os/src/osFile.c @@ -203,10 +203,11 @@ int32_t taosRenameFile(const char *oldName, const char *newName) { } int32_t taosStatFile(const char *path, int64_t *size, int32_t *mtime) { - struct stat fileStat; #ifdef WINDOWS - int32_t code = _stat(path, &fileStat); + struct _stati64 fileStat; + int32_t code = _stati64(path, &fileStat); #else + struct stat fileStat; int32_t code = stat(path, &fileStat); #endif if (code < 0) { @@ -312,6 +313,7 @@ TdFilePtr taosOpenFile(const char *path, int32_t tdFileOptions) { assert(!(tdFileOptions & TD_FILE_EXCL)); fp = 
fopen(path, mode); if (fp == NULL) { + terrno = TAOS_SYSTEM_ERROR(errno); return NULL; } } else { @@ -334,6 +336,7 @@ TdFilePtr taosOpenFile(const char *path, int32_t tdFileOptions) { fd = open(path, access, S_IRWXU | S_IRWXG | S_IRWXO); #endif if (fd == -1) { + terrno = TAOS_SYSTEM_ERROR(errno); return NULL; } } diff --git a/source/os/src/osSemaphore.c b/source/os/src/osSemaphore.c index a7d2ba8531..8cc6f0ef2e 100644 --- a/source/os/src/osSemaphore.c +++ b/source/os/src/osSemaphore.c @@ -392,179 +392,32 @@ int32_t tsem_timewait(tsem_t* sem, int64_t nanosecs) { // *sem = NULL; // return 0; // } -typedef struct { - pthread_mutex_t count_lock; - pthread_cond_t count_bump; - unsigned int count; -} bosal_sem_t; int tsem_init(tsem_t *psem, int flags, unsigned int count) { - bosal_sem_t *pnewsem; - int result; - - pnewsem = (bosal_sem_t *)malloc(sizeof(bosal_sem_t)); - if (!pnewsem) { - return -1; - } - result = pthread_mutex_init(&pnewsem->count_lock, NULL); - if (result) { - free(pnewsem); - return result; - } - result = pthread_cond_init(&pnewsem->count_bump, NULL); - if (result) { - pthread_mutex_destroy(&pnewsem->count_lock); - free(pnewsem); - return result; - } - pnewsem->count = count; - *psem = (tsem_t)pnewsem; + *psem = dispatch_semaphore_create(count); + if (*psem == NULL) return -1; return 0; } int tsem_destroy(tsem_t *psem) { - bosal_sem_t *poldsem; - - if (!psem) { - return EINVAL; - } - poldsem = (bosal_sem_t *)*psem; - - pthread_mutex_destroy(&poldsem->count_lock); - pthread_cond_destroy(&poldsem->count_bump); - free(poldsem); return 0; } int tsem_post(tsem_t *psem) { - bosal_sem_t *pxsem; - int result, xresult; - - if (!psem) { - return EINVAL; - } - pxsem = (bosal_sem_t *)*psem; - - result = pthread_mutex_lock(&pxsem->count_lock); - if (result) { - return result; - } - pxsem->count = pxsem->count + 1; - - xresult = pthread_cond_signal(&pxsem->count_bump); - - result = pthread_mutex_unlock(&pxsem->count_lock); - if (result) { - return result; - } - if (xresult) { - errno = xresult; - return -1; - } - return 0; -} - -int tsem_trywait(tsem_t *psem) { - bosal_sem_t *pxsem; - int result, xresult; - - if (!psem) { - return EINVAL; - } - pxsem = (bosal_sem_t *)*psem; - - result = pthread_mutex_lock(&pxsem->count_lock); - if (result) { - return result; - } - xresult = 0; - - if (pxsem->count > 0) { - pxsem->count--; - } else { - xresult = EAGAIN; - } - result = pthread_mutex_unlock(&pxsem->count_lock); - if (result) { - return result; - } - if (xresult) { - errno = xresult; - return -1; - } + if (psem == NULL || *psem == NULL) return -1; + dispatch_semaphore_signal(*psem); return 0; } int tsem_wait(tsem_t *psem) { - bosal_sem_t *pxsem; - int result, xresult; - - if (!psem) { - return EINVAL; - } - pxsem = (bosal_sem_t *)*psem; - - result = pthread_mutex_lock(&pxsem->count_lock); - if (result) { - return result; - } - xresult = 0; - - if (pxsem->count == 0) { - xresult = pthread_cond_wait(&pxsem->count_bump, &pxsem->count_lock); - } - if (!xresult) { - if (pxsem->count > 0) { - pxsem->count--; - } - } - result = pthread_mutex_unlock(&pxsem->count_lock); - if (result) { - return result; - } - if (xresult) { - errno = xresult; - return -1; - } + if (psem == NULL || *psem == NULL) return -1; + dispatch_semaphore_wait(*psem, DISPATCH_TIME_FOREVER); return 0; } int tsem_timewait(tsem_t *psem, int64_t nanosecs) { - struct timespec abstim = { - .tv_sec = 0, - .tv_nsec = nanosecs, - }; - - bosal_sem_t *pxsem; - int result, xresult; - - if (!psem) { - return EINVAL; - } - pxsem = (bosal_sem_t 
*)*psem;
-
-  result = pthread_mutex_lock(&pxsem->count_lock);
-  if (result) {
-    return result;
-  }
-  xresult = 0;
-
-  if (pxsem->count == 0) {
-    xresult = pthread_cond_timedwait(&pxsem->count_bump, &pxsem->count_lock, &abstim);
-  }
-  if (!xresult) {
-    if (pxsem->count > 0) {
-      pxsem->count--;
-    }
-  }
-  result = pthread_mutex_unlock(&pxsem->count_lock);
-  if (result) {
-    return result;
-  }
-  if (xresult) {
-    errno = xresult;
-    return -1;
-  }
+  if (psem == NULL || *psem == NULL) return -1;
+  // dispatch_semaphore_wait() expects a dispatch_time_t deadline rather than a
+  // raw nanosecond count, so convert the relative timeout and report expiry.
+  if (dispatch_semaphore_wait(*psem, dispatch_time(DISPATCH_TIME_NOW, nanosecs)) != 0) {
+    return -1;
+  }
+  return 0;
 }
diff --git a/source/os/src/osSysinfo.c b/source/os/src/osSysinfo.c
index 3a75e18a7f..19e9568bbe 100644
--- a/source/os/src/osSysinfo.c
+++ b/source/os/src/osSysinfo.c
@@ -595,6 +595,7 @@ int32_t taosGetDiskSize(char *dataDir, SDiskSize *diskSize) {
 #else
   struct statvfs info;
   if (statvfs(dataDir, &info)) {
+    terrno = TAOS_SYSTEM_ERROR(errno);
     return -1;
   } else {
     diskSize->total = info.f_blocks * info.f_frsize;
@@ -851,13 +852,12 @@ char *taosGetCmdlineByPID(int pid) {
 }
 
 void taosSetCoreDump(bool enable) {
+  if (!enable) return;
 #ifdef WINDOWS
-  // SetUnhandledExceptionFilter(exceptionHandler);
-  // SetUnhandledExceptionFilter(&FlCrashDump);
+  SetUnhandledExceptionFilter(exceptionHandler);
+  SetUnhandledExceptionFilter(&FlCrashDump);
 #elif defined(_TD_DARWIN_64)
 #else
-  if (!enable) return;
-
   // 1. set ulimit -c unlimited
   struct rlimit rlim;
   struct rlimit rlim_new;
diff --git a/source/util/src/tlog.c b/source/util/src/tlog.c
index 2e8239c68f..a2d65d6a54 100644
--- a/source/util/src/tlog.c
+++ b/source/util/src/tlog.c
@@ -429,7 +429,7 @@ static inline int32_t taosBuildLogHead(char *buffer, const char *flags) {
 }
 
 static inline void taosPrintLogImp(ELogLevel level, int32_t dflag, const char *buffer, int32_t len) {
-  if ((dflag & DEBUG_FILE) && tsLogObj.logHandle && tsLogObj.logHandle->pFile != NULL) {
+  if ((dflag & DEBUG_FILE) && tsLogObj.logHandle && tsLogObj.logHandle->pFile != NULL && osLogSpaceAvailable()) {
     taosUpdateLogNums(level);
     if (tsAsyncLog) {
       taosPushLogBuffer(tsLogObj.logHandle, buffer, len);
@@ -451,7 +451,6 @@ static inline void taosPrintLogImp(ELogLevel level, int32_t dflag, const char *b
 }
 
 void taosPrintLog(const char *flags, ELogLevel level, int32_t dflag, const char *format, ...) 
{ - if (!osLogSpaceAvailable()) return; if (!(dflag & DEBUG_FILE) && !(dflag & DEBUG_SCREEN)) return; char buffer[LOG_MAX_LINE_BUFFER_SIZE]; From d18e7cd739d82221d41610441c8c779fcb3c2630 Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Fri, 26 Aug 2022 16:20:07 +0800 Subject: [PATCH 08/29] enh: remove compare type convertion --- include/util/tcompare.h | 91 ++++ source/libs/scalar/inc/filterInt.h | 1 + source/libs/scalar/inc/sclInt.h | 1 + source/libs/scalar/src/filter.c | 158 +++++++ source/libs/scalar/src/sclvector.c | 38 +- source/util/src/tcompare.c | 722 +++++++++++++++++++++++++++++ 6 files changed, 996 insertions(+), 15 deletions(-) diff --git a/include/util/tcompare.h b/include/util/tcompare.h index cc9e8ae464..c7a3ca20f2 100644 --- a/include/util/tcompare.h +++ b/include/util/tcompare.h @@ -105,6 +105,97 @@ int32_t compareStrPatternNotMatch(const void *pLeft, const void *pRight); int32_t compareWStrPatternMatch(const void *pLeft, const void *pRight); int32_t compareWStrPatternNotMatch(const void *pLeft, const void *pRight); +int32_t compareInt8Int16(const void *pLeft, const void *pRight); +int32_t compareInt8Int32(const void *pLeft, const void *pRight); +int32_t compareInt8Int64(const void *pLeft, const void *pRight); +int32_t compareInt8Float(const void *pLeft, const void *pRight); +int32_t compareInt8Double(const void *pLeft, const void *pRight); +int32_t compareInt8Uint8(const void *pLeft, const void *pRight); +int32_t compareInt8Uint16(const void *pLeft, const void *pRight); +int32_t compareInt8Uint32(const void *pLeft, const void *pRight); +int32_t compareInt8Uint64(const void *pLeft, const void *pRight); +int32_t compareInt16Int8(const void *pLeft, const void *pRight); +int32_t compareInt16Int32(const void *pLeft, const void *pRight); +int32_t compareInt16Int64(const void *pLeft, const void *pRight); +int32_t compareInt16Float(const void *pLeft, const void *pRight); +int32_t compareInt16Double(const void *pLeft, const void *pRight); +int32_t compareInt16Uint8(const void *pLeft, const void *pRight); +int32_t compareInt16Uint16(const void *pLeft, const void *pRight); +int32_t compareInt16Uint32(const void *pLeft, const void *pRight); +int32_t compareInt16Uint64(const void *pLeft, const void *pRight); +int32_t compareInt32Int8(const void *pLeft, const void *pRight); +int32_t compareInt32Int16(const void *pLeft, const void *pRight); +int32_t compareInt32Int64(const void *pLeft, const void *pRight); +int32_t compareInt32Float(const void *pLeft, const void *pRight); +int32_t compareInt32Double(const void *pLeft, const void *pRight); +int32_t compareInt32Uint8(const void *pLeft, const void *pRight); +int32_t compareInt32Uint16(const void *pLeft, const void *pRight); +int32_t compareInt32Uint32(const void *pLeft, const void *pRight); +int32_t compareInt32Uint64(const void *pLeft, const void *pRight); +int32_t compareInt64Int8(const void *pLeft, const void *pRight); +int32_t compareInt64Int16(const void *pLeft, const void *pRight); +int32_t compareInt64Int32(const void *pLeft, const void *pRight); +int32_t compareInt64Float(const void *pLeft, const void *pRight); +int32_t compareInt64Double(const void *pLeft, const void *pRight); +int32_t compareInt64Uint8(const void *pLeft, const void *pRight); +int32_t compareInt64Uint16(const void *pLeft, const void *pRight); +int32_t compareInt64Uint32(const void *pLeft, const void *pRight); +int32_t compareInt64Uint64(const void *pLeft, const void *pRight); +int32_t compareFloatInt8(const void *pLeft, const void *pRight); +int32_t 
compareFloatInt16(const void *pLeft, const void *pRight); +int32_t compareFloatInt32(const void *pLeft, const void *pRight); +int32_t compareFloatInt64(const void *pLeft, const void *pRight); +int32_t compareFloatDouble(const void *pLeft, const void *pRight); +int32_t compareFloatUint8(const void *pLeft, const void *pRight); +int32_t compareFloatUint16(const void *pLeft, const void *pRight); +int32_t compareFloatUint32(const void *pLeft, const void *pRight); +int32_t compareFloatUint64(const void *pLeft, const void *pRight); +int32_t compareDoubleInt8(const void *pLeft, const void *pRight); +int32_t compareDoubleInt16(const void *pLeft, const void *pRight); +int32_t compareDoubleInt32(const void *pLeft, const void *pRight); +int32_t compareDoubleInt64(const void *pLeft, const void *pRight); +int32_t compareDoubleFloat(const void *pLeft, const void *pRight); +int32_t compareDoubleUint8(const void *pLeft, const void *pRight); +int32_t compareDoubleUint16(const void *pLeft, const void *pRight); +int32_t compareDoubleUint32(const void *pLeft, const void *pRight); +int32_t compareDoubleUint64(const void *pLeft, const void *pRight); +int32_t compareUint8Int8(const void *pLeft, const void *pRight); +int32_t compareUint8Int16(const void *pLeft, const void *pRight); +int32_t compareUint8Int32(const void *pLeft, const void *pRight); +int32_t compareUint8Int64(const void *pLeft, const void *pRight); +int32_t compareUint8Float(const void *pLeft, const void *pRight); +int32_t compareUint8Double(const void *pLeft, const void *pRight); +int32_t compareUint8Uint16(const void *pLeft, const void *pRight); +int32_t compareUint8Uint32(const void *pLeft, const void *pRight); +int32_t compareUint8Uint64(const void *pLeft, const void *pRight); +int32_t compareUint16Int8(const void *pLeft, const void *pRight); +int32_t compareUint16Int16(const void *pLeft, const void *pRight); +int32_t compareUint16Int32(const void *pLeft, const void *pRight); +int32_t compareUint16Int64(const void *pLeft, const void *pRight); +int32_t compareUint16Float(const void *pLeft, const void *pRight); +int32_t compareUint16Double(const void *pLeft, const void *pRight); +int32_t compareUint16Uint8(const void *pLeft, const void *pRight); +int32_t compareUint16Uint32(const void *pLeft, const void *pRight); +int32_t compareUint16Uint64(const void *pLeft, const void *pRight); +int32_t compareUint32Int8(const void *pLeft, const void *pRight); +int32_t compareUint32Int16(const void *pLeft, const void *pRight); +int32_t compareUint32Int32(const void *pLeft, const void *pRight); +int32_t compareUint32Int64(const void *pLeft, const void *pRight); +int32_t compareUint32Float(const void *pLeft, const void *pRight); +int32_t compareUint32Double(const void *pLeft, const void *pRight); +int32_t compareUint32Uint8(const void *pLeft, const void *pRight); +int32_t compareUint32Uint16(const void *pLeft, const void *pRight); +int32_t compareUint32Uint64(const void *pLeft, const void *pRight); +int32_t compareUint64Int8(const void *pLeft, const void *pRight); +int32_t compareUint64Int16(const void *pLeft, const void *pRight); +int32_t compareUint64Int32(const void *pLeft, const void *pRight); +int32_t compareUint64Int64(const void *pLeft, const void *pRight); +int32_t compareUint64Float(const void *pLeft, const void *pRight); +int32_t compareUint64Double(const void *pLeft, const void *pRight); +int32_t compareUint64Uint8(const void *pLeft, const void *pRight); +int32_t compareUint64Uint16(const void *pLeft, const void *pRight); +int32_t 
compareUint64Uint32(const void *pLeft, const void *pRight); + __compar_fn_t getComparFunc(int32_t type, int32_t optr); __compar_fn_t getKeyComparFunc(int32_t keyType, int32_t order); int32_t doCompare(const char *a, const char *b, int32_t type, size_t size); diff --git a/source/libs/scalar/inc/filterInt.h b/source/libs/scalar/inc/filterInt.h index 23693c785a..87327a3657 100644 --- a/source/libs/scalar/inc/filterInt.h +++ b/source/libs/scalar/inc/filterInt.h @@ -350,6 +350,7 @@ struct SFilterInfo { extern bool filterDoCompare(__compar_fn_t func, uint8_t optr, void *left, void *right); extern __compar_fn_t filterGetCompFunc(int32_t type, int32_t optr); +extern __compar_fn_t filterGetCompFuncEx(int32_t lType, int32_t rType, int32_t optr); #ifdef __cplusplus } diff --git a/source/libs/scalar/inc/sclInt.h b/source/libs/scalar/inc/sclInt.h index 36d2c5a49c..15e9026ddb 100644 --- a/source/libs/scalar/inc/sclInt.h +++ b/source/libs/scalar/inc/sclInt.h @@ -47,6 +47,7 @@ typedef struct SScalarCtx { #define SCL_IS_NULL_VALUE_NODE(_node) ((QUERY_NODE_VALUE == nodeType(_node)) && (TSDB_DATA_TYPE_NULL == ((SValueNode *)_node)->node.resType.type)) #define SCL_IS_COMPARISON_OPERATOR(_opType) ((_opType) >= OP_TYPE_GREATER_THAN && (_opType) < OP_TYPE_IS_NOT_UNKNOWN) #define SCL_DOWNGRADE_DATETYPE(_type) ((_type) == TSDB_DATA_TYPE_BIGINT || TSDB_DATA_TYPE_DOUBLE == (_type) || (_type) == TSDB_DATA_TYPE_UBIGINT) +#define SCL_NO_NEED_CONVERT_COMPARISION(_ltype, _rtype, _optr) (IS_NUMERIC_TYPE(_ltype) && IS_NUMERIC_TYPE(_rtype) && ((_optr) >= OP_TYPE_GREATER_THAN && (_optr) <= OP_TYPE_NOT_EQUAL)) #define sclFatal(...) qFatal(__VA_ARGS__) #define sclError(...) qError(__VA_ARGS__) diff --git a/source/libs/scalar/src/filter.c b/source/libs/scalar/src/filter.c index 4377dbf14e..5555a52c8e 100644 --- a/source/libs/scalar/src/filter.c +++ b/source/libs/scalar/src/filter.c @@ -132,6 +132,77 @@ __compar_fn_t gDataCompare[] = {compareInt32Val, compareInt8Val, compareInt16Val compareChkNotInString, compareStrPatternNotMatch, compareWStrPatternNotMatch }; +__compar_fn_t gInt8SignCompare[] = { + compareInt8Val, compareInt8Int16, compareInt8Int32, compareInt8Int64, compareInt8Float, compareInt8Double +}; +__compar_fn_t gInt8UsignCompare[] = { + compareInt8Uint8, compareInt8Uint16, compareInt8Uint32, compareInt8Uint64 +}; + +__compar_fn_t gInt16SignCompare[] = { + compareInt16Int8, compareInt16Val, compareInt16Int32, compareInt16Int64, compareInt16Float, compareInt16Double +}; +__compar_fn_t gInt16UsignCompare[] = { + compareInt16Uint8, compareInt16Uint16, compareInt16Uint32, compareInt16Uint64 +}; + +__compar_fn_t gInt32SignCompare[] = { + compareInt32Int8, compareInt32Int16, compareInt32Val, compareInt32Int64, compareInt32Float, compareInt32Double +}; +__compar_fn_t gInt32UsignCompare[] = { + compareInt32Uint8, compareInt32Uint16, compareInt32Uint32, compareInt32Uint64 +}; + +__compar_fn_t gInt64SignCompare[] = { + compareInt64Int8, compareInt64Int16, compareInt64Int32, compareInt64Val, compareInt64Float, compareInt64Double +}; +__compar_fn_t gInt64UsignCompare[] = { + compareInt64Uint8, compareInt64Uint16, compareInt64Uint32, compareInt64Uint64 +}; + +__compar_fn_t gFloatSignCompare[] = { + compareFloatInt8, compareFloatInt16, compareFloatInt32, compareFloatInt64, compareFloatVal, compareFloatDouble +}; +__compar_fn_t gFloatUsignCompare[] = { + compareFloatUint8, compareFloatUint16, compareFloatUint32, compareFloatUint64 +}; + +__compar_fn_t gDoubleSignCompare[] = { + compareDoubleInt8, compareDoubleInt16, 
compareDoubleInt32, compareDoubleInt64, compareDoubleFloat, compareDoubleVal +}; +__compar_fn_t gDoubleUsignCompare[] = { + compareDoubleUint8, compareDoubleUint16, compareDoubleUint32, compareDoubleUint64 +}; + +__compar_fn_t gUint8SignCompare[] = { + compareUint8Int8, compareUint8Int16, compareUint8Int32, compareUint8Int64, compareUint8Float, compareUint8Double +}; +__compar_fn_t gUint8UsignCompare[] = { + compareUint8Val, compareUint8Uint16, compareUint8Uint32, compareUint8Uint64 +}; + +__compar_fn_t gUint16SignCompare[] = { + compareUint16Int8, compareUint16Int16, compareUint16Int32, compareUint16Int64, compareUint16Float, compareUint16Double +}; +__compar_fn_t gUint16UsignCompare[] = { + compareUint16Uint8, compareUint16Val, compareUint16Uint32, compareUint16Uint64 +}; + +__compar_fn_t gUint32SignCompare[] = { + compareUint32Int8, compareUint32Int16, compareUint32Int32, compareUint32Int64, compareUint32Float, compareUint32Double +}; +__compar_fn_t gUint32UsignCompare[] = { + compareUint32Uint8, compareUint32Uint16, compareUint32Val, compareUint32Uint64 +}; + +__compar_fn_t gUint64SignCompare[] = { + compareUint64Int8, compareUint64Int16, compareUint64Int32, compareUint64Int64, compareUint64Float, compareUint64Double +}; +__compar_fn_t gUint64UsignCompare[] = { + compareUint64Uint8, compareUint64Uint16, compareUint64Uint32, compareUint64Val +}; + + int8_t filterGetCompFuncIdx(int32_t type, int32_t optr) { int8_t comparFn = 0; @@ -257,6 +328,93 @@ __compar_fn_t filterGetCompFunc(int32_t type, int32_t optr) { return gDataCompare[filterGetCompFuncIdx(type, optr)]; } +__compar_fn_t filterGetCompFuncEx(int32_t lType, int32_t rType, int32_t optr) { + switch (lType) { + case TSDB_DATA_TYPE_TINYINT: { + if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) { + return gInt8SignCompare[rType - TSDB_DATA_TYPE_TINYINT]; + } else { + return gInt8UsignCompare[rType - TSDB_DATA_TYPE_UTINYINT]; + } + break; + } + case TSDB_DATA_TYPE_SMALLINT: { + if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) { + return gInt16SignCompare[rType - TSDB_DATA_TYPE_TINYINT]; + } else { + return gInt16UsignCompare[rType - TSDB_DATA_TYPE_UTINYINT]; + } + break; + } + case TSDB_DATA_TYPE_INT: { + if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) { + return gInt32SignCompare[rType - TSDB_DATA_TYPE_TINYINT]; + } else { + return gInt32UsignCompare[rType - TSDB_DATA_TYPE_UTINYINT]; + } + break; + } + case TSDB_DATA_TYPE_BIGINT: { + if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) { + return gInt64SignCompare[rType - TSDB_DATA_TYPE_TINYINT]; + } else { + return gInt64UsignCompare[rType - TSDB_DATA_TYPE_UTINYINT]; + } + break; + } + case TSDB_DATA_TYPE_FLOAT: { + if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) { + return gFloatSignCompare[rType - TSDB_DATA_TYPE_TINYINT]; + } else { + return gFloatUsignCompare[rType - TSDB_DATA_TYPE_UTINYINT]; + } + break; + } + case TSDB_DATA_TYPE_DOUBLE: { + if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) { + return gDoubleSignCompare[rType - TSDB_DATA_TYPE_TINYINT]; + } else { + return gDoubleUsignCompare[rType - TSDB_DATA_TYPE_UTINYINT]; + } + break; + } + case TSDB_DATA_TYPE_UTINYINT: { + if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) { + return gUint8SignCompare[rType - TSDB_DATA_TYPE_TINYINT]; + } else { + return gUint8UsignCompare[rType - TSDB_DATA_TYPE_UTINYINT]; + } + break; + } + case TSDB_DATA_TYPE_USMALLINT: { + if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) { + return gUint16SignCompare[rType - 
TSDB_DATA_TYPE_TINYINT];
+    } else {
+      return gUint16UsignCompare[rType - TSDB_DATA_TYPE_UTINYINT];
+    }
+    break;
+  }
+  case TSDB_DATA_TYPE_UINT: {
+    if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) {
+      return gUint32SignCompare[rType - TSDB_DATA_TYPE_TINYINT];
+    } else {
+      return gUint32UsignCompare[rType - TSDB_DATA_TYPE_UTINYINT];
+    }
+    break;
+  }
+  case TSDB_DATA_TYPE_UBIGINT: {
+    if (IS_SIGNED_NUMERIC_TYPE(rType) || IS_FLOAT_TYPE(rType)) {
+      return gUint64SignCompare[rType - TSDB_DATA_TYPE_TINYINT];
+    } else {
+      return gUint64UsignCompare[rType - TSDB_DATA_TYPE_UTINYINT];
+    }
+    break;
+  }
+  default:
+    break;
+  }
+  return NULL;
+}
 
 static FORCE_INLINE int32_t filterCompareGroupCtx(const void *pLeft, const void *pRight) {
   SFilterGroupCtx *left = *((SFilterGroupCtx**)pLeft), *right = *((SFilterGroupCtx**)pRight);
diff --git a/source/libs/scalar/src/sclvector.c b/source/libs/scalar/src/sclvector.c
index aaa70ef5ae..caccf4bfac 100644
--- a/source/libs/scalar/src/sclvector.c
+++ b/source/libs/scalar/src/sclvector.c
@@ -1681,10 +1681,14 @@ void vectorBitOr(SScalarParam* pLeft, SScalarParam* pRight, SScalarParam *pOut,
 void vectorCompareImpl(SScalarParam* pLeft, SScalarParam* pRight, SScalarParam *pOut, int32_t _ord, int32_t optr) {
   int32_t i = ((_ord) == TSDB_ORDER_ASC) ? 0 : TMAX(pLeft->numOfRows, pRight->numOfRows) - 1;
   int32_t step = ((_ord) == TSDB_ORDER_ASC) ? 1 : -1;
-
-  __compar_fn_t fp = filterGetCompFunc(GET_PARAM_TYPE(pLeft), optr);
-  if(terrno != TSDB_CODE_SUCCESS){
-    return;
+  int32_t lType = GET_PARAM_TYPE(pLeft);
+  int32_t rType = GET_PARAM_TYPE(pRight);
+  __compar_fn_t fp = NULL;
+
+  if (lType == rType) {
+    fp = filterGetCompFunc(lType, optr);
+  } else {
+    fp = filterGetCompFuncEx(lType, rType, optr);
   }
 
   pOut->numOfRows = TMAX(pLeft->numOfRows, pRight->numOfRows);
@@ -1716,22 +1720,26 @@ void vectorCompare(SScalarParam* pLeft, SScalarParam* pRight, SScalarParam *pOut, int32_t _ord, int32_t optr) {
   SScalarParam pLeftOut = {0};
   SScalarParam pRightOut = {0};
-
-  vectorConvert(pLeft, pRight, &pLeftOut, &pRightOut);
-
   SScalarParam *param1 = NULL;
   SScalarParam *param2 = NULL;
 
-  if (pLeftOut.columnData != NULL) {
-    param1 = &pLeftOut;
-  } else {
+  if (SCL_NO_NEED_CONVERT_COMPARISION(GET_PARAM_TYPE(pLeft), GET_PARAM_TYPE(pRight), optr)) {
     param1 = pLeft;
-  }
-
-  if (pRightOut.columnData != NULL) {
-    param2 = &pRightOut;
-  } else {
     param2 = pRight;
+  } else {
+    vectorConvert(pLeft, pRight, &pLeftOut, &pRightOut);
+
+    if (pLeftOut.columnData != NULL) {
+      param1 = &pLeftOut;
+    } else {
+      param1 = pLeft;
+    }
+
+    if (pRightOut.columnData != NULL) {
+      param2 = &pRightOut;
+    } else {
+      param2 = pRight;
+    }
   }
 
   vectorCompareImpl(param1, param2, pOut, _ord, optr);
diff --git a/source/util/src/tcompare.c b/source/util/src/tcompare.c
index fe3065b2b7..adebb09912 100644
--- a/source/util/src/tcompare.c
+++ b/source/util/src/tcompare.c
@@ -246,6 +246,728 @@ int32_t compareJsonVal(const void *pLeft, const void *pRight) {
   }
 }
 
+// Note: each comparator reads its left operand with the macro matching the
+// operand's declared width; reading through GET_INT32_VAL here would touch
+// bytes beyond a 1-byte value and truncate 8-byte values.
+int32_t compareInt8Int16(const void *pLeft, const void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  int16_t right = GET_INT16_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt8Int32(const void *pLeft, const void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  int32_t right = GET_INT32_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt8Int64(const void *pLeft, const 
void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  int64_t right = GET_INT64_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt8Float(const void *pLeft, const void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  float right = GET_FLOAT_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt8Double(const void *pLeft, const void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  double right = GET_DOUBLE_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt8Uint8(const void *pLeft, const void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  uint8_t right = GET_UINT8_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt8Uint16(const void *pLeft, const void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  uint16_t right = GET_UINT16_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt8Uint32(const void *pLeft, const void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  uint32_t right = GET_UINT32_VAL(pRight);
+  // a negative signed value would be wrapped by the usual arithmetic
+  // conversions before an unsigned comparison, so handle it explicitly
+  if (left < 0) return -1;
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt8Uint64(const void *pLeft, const void *pRight) {
+  int8_t left = GET_INT8_VAL(pLeft);
+  uint64_t right = GET_UINT64_VAL(pRight);
+  if (left < 0) return -1;
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Int8(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  int8_t right = GET_INT8_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Int32(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  int32_t right = GET_INT32_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Int64(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  int64_t right = GET_INT64_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Float(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  float right = GET_FLOAT_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Double(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  double right = GET_DOUBLE_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Uint8(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  uint8_t right = GET_UINT8_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Uint16(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  uint16_t right = GET_UINT16_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Uint32(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  uint32_t right = GET_UINT32_VAL(pRight);
+  if (left < 0) return -1;
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt16Uint64(const void *pLeft, const void *pRight) {
+  int16_t left = GET_INT16_VAL(pLeft);
+  uint64_t right = GET_UINT64_VAL(pRight);
+  if (left < 0) return -1;
+  if (left > right) return 1;
+  if (left < right) return -1;
+  
return 0;
+}
+
+int32_t compareInt32Int8(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  int8_t right = GET_INT8_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt32Int16(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  int16_t right = GET_INT16_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt32Int64(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  int64_t right = GET_INT64_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt32Float(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  float right = GET_FLOAT_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt32Double(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  double right = GET_DOUBLE_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt32Uint8(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  uint8_t right = GET_UINT8_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt32Uint16(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  uint16_t right = GET_UINT16_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt32Uint32(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  uint32_t right = GET_UINT32_VAL(pRight);
+  if (left < 0) return -1;
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt32Uint64(const void *pLeft, const void *pRight) {
+  int32_t left = GET_INT32_VAL(pLeft);
+  uint64_t right = GET_UINT64_VAL(pRight);
+  if (left < 0) return -1;
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt64Int8(const void *pLeft, const void *pRight) {
+  int64_t left = GET_INT64_VAL(pLeft);
+  int8_t right = GET_INT8_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt64Int16(const void *pLeft, const void *pRight) {
+  int64_t left = GET_INT64_VAL(pLeft);
+  int16_t right = GET_INT16_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt64Int32(const void *pLeft, const void *pRight) {
+  int64_t left = GET_INT64_VAL(pLeft);
+  int32_t right = GET_INT32_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt64Float(const void *pLeft, const void *pRight) {
+  int64_t left = GET_INT64_VAL(pLeft);
+  float right = GET_FLOAT_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt64Double(const void *pLeft, const void *pRight) {
+  int64_t left = GET_INT64_VAL(pLeft);
+  double right = GET_DOUBLE_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt64Uint8(const void *pLeft, const void *pRight) {
+  int64_t left = GET_INT64_VAL(pLeft);
+  uint8_t right = GET_UINT8_VAL(pRight);
+  if (left > right) return 1;
+  if (left < right) return -1;
+  return 0;
+}
+
+int32_t compareInt64Uint16(const void *pLeft, const void *pRight) {
+  int64_t left = GET_INT64_VAL(pLeft);
+  uint16_t right = 
GET_UINT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareInt64Uint32(const void *pLeft, const void *pRight) { + int64_t left = GET_INT32_VAL(pLeft); + uint32_t right = GET_UINT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareInt64Uint64(const void *pLeft, const void *pRight) { + int64_t left = GET_INT32_VAL(pLeft); + uint64_t right = GET_UINT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatInt8(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + int8_t right = GET_INT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatInt16(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + int16_t right = GET_INT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatInt32(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + int32_t right = GET_INT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatInt64(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + int64_t right = GET_INT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatDouble(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + double right = GET_DOUBLE_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatUint8(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + uint8_t right = GET_UINT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatUint16(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + uint16_t right = GET_UINT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatUint32(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + uint32_t right = GET_UINT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareFloatUint64(const void *pLeft, const void *pRight) { + float left = GET_FLOAT_VAL(pLeft); + uint64_t right = GET_UINT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleInt8(const void *pLeft, const void *pRight) { + double left = GET_DOUBLE_VAL(pLeft); + int8_t right = GET_INT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleInt16(const void *pLeft, const void *pRight) { + double left = GET_DOUBLE_VAL(pLeft); + int16_t right = GET_INT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleInt32(const void *pLeft, const void *pRight) { + double left = GET_DOUBLE_VAL(pLeft); + int32_t right = GET_INT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleInt64(const void *pLeft, const void *pRight) { + double left = GET_DOUBLE_VAL(pLeft); + int64_t right = GET_INT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleFloat(const void *pLeft, const void *pRight) 
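+/*
+ * Note (illustrative): when a float meets a double, the float operand is
+ * converted to double exactly, so ordinary magnitudes compare correctly.
+ * The remaining hazard is NaN: both relational tests in the naive body
+ * below are then false and the function reports equality, e.g.
+ *
+ *   double d = NAN;  float f = 1.0f;
+ *   // (d > f) is false and (d < f) is false -> returns 0
+ *
+ * A NaN-aware rewrite of these two functions appears in a later patch in
+ * this series.
+ */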
{ + double left = GET_DOUBLE_VAL(pLeft); + float right = GET_FLOAT_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleUint8(const void *pLeft, const void *pRight) { + double left = GET_DOUBLE_VAL(pLeft); + uint8_t right = GET_UINT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleUint16(const void *pLeft, const void *pRight) { + double left = GET_DOUBLE_VAL(pLeft); + uint16_t right = GET_UINT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleUint32(const void *pLeft, const void *pRight) { + double left = GET_DOUBLE_VAL(pLeft); + uint32_t right = GET_UINT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareDoubleUint64(const void *pLeft, const void *pRight) { + double left = GET_DOUBLE_VAL(pLeft); + uint64_t right = GET_UINT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Int8(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + int8_t right = GET_INT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Int16(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + int16_t right = GET_INT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Int32(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + int32_t right = GET_INT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Int64(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + int64_t right = GET_INT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Float(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + float right = GET_FLOAT_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Double(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + double right = GET_DOUBLE_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Uint16(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + uint16_t right = GET_UINT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Uint32(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + uint32_t right = GET_UINT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint8Uint64(const void *pLeft, const void *pRight) { + uint8_t left = GET_UINT8_VAL(pLeft); + uint64_t right = GET_UINT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint16Int8(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + int8_t right = GET_INT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint16Int16(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + int16_t right = GET_INT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + 
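+  /*
+   * Note (illustrative, assuming int is at least 32 bits wide): narrow
+   * mixed-sign pairs such as this uint16/int16 case are safe as written,
+   * because both operands go through the integer promotions to signed int
+   * before the comparison. The same pattern stops being sign-safe at
+   * 32/64-bit widths:
+   *
+   *   uint32_t u = 1;
+   *   int32_t  s = -1;
+   *   // (s < u) is false: s is converted to 0xFFFFFFFFu by the usual
+   *   // arithmetic conversions, so the test sees 4294967295 < 1
+   */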
return 0; +} + +int32_t compareUint16Int32(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + int32_t right = GET_INT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint16Int64(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + int64_t right = GET_INT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint16Float(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + float right = GET_FLOAT_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint16Double(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + double right = GET_DOUBLE_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint16Uint8(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + uint8_t right = GET_UINT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint16Uint32(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + uint32_t right = GET_UINT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint16Uint64(const void *pLeft, const void *pRight) { + uint16_t left = GET_UINT16_VAL(pLeft); + uint64_t right = GET_UINT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Int8(const void *pLeft, const void *pRight) { + uint32_t left = GET_UINT32_VAL(pLeft); + int8_t right = GET_INT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Int16(const void *pLeft, const void *pRight) { + uint32_t left = GET_UINT32_VAL(pLeft); + int16_t right = GET_INT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Int32(const void *pLeft, const void *pRight) { + uint32_t left = GET_UINT32_VAL(pLeft); + int32_t right = GET_INT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Int64(const void *pLeft, const void *pRight) { + uint32_t left = GET_UINT32_VAL(pLeft); + int64_t right = GET_INT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Float(const void *pLeft, const void *pRight) { + uint32_t left = GET_UINT32_VAL(pLeft); + float right = GET_FLOAT_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Double(const void *pLeft, const void *pRight) { + uint32_t left = GET_UINT32_VAL(pLeft); + double right = GET_DOUBLE_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Uint8(const void *pLeft, const void *pRight) { + uint32_t left = GET_UINT32_VAL(pLeft); + uint8_t right = GET_UINT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Uint16(const void *pLeft, const void *pRight) { + uint32_t left = GET_UINT32_VAL(pLeft); + uint16_t right = GET_UINT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint32Uint64(const void *pLeft, const void *pRight) { + uint32_t left = 
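+  /*
+   * Usage sketch (the table name is hypothetical; only the
+   * TSDB_DATA_TYPE_* ids are assumed from the codebase): callers normally
+   * dispatch these pairwise comparators through a function-pointer matrix
+   * indexed by the two column type ids, resolved once per expression:
+   *
+   *   typedef int32_t (*__compar_fn_2_t)(const void *, const void *);
+   *   static __compar_fn_2_t gCmpByType[TSDB_DATA_TYPE_MAX][TSDB_DATA_TYPE_MAX] = {
+   *     [TSDB_DATA_TYPE_USMALLINT][TSDB_DATA_TYPE_INT] = compareUint16Int32,
+   *     [TSDB_DATA_TYPE_UINT][TSDB_DATA_TYPE_UBIGINT]  = compareUint32Uint64,
+   *   };
+   *   // __compar_fn_2_t fn = gCmpByType[leftType][rightType];
+   */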
GET_UINT32_VAL(pLeft); + uint64_t right = GET_UINT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Int8(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + int8_t right = GET_INT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Int16(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + int16_t right = GET_INT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Int32(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + int32_t right = GET_INT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Int64(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + int64_t right = GET_INT64_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Float(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + float right = GET_FLOAT_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Double(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + double right = GET_DOUBLE_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Uint8(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + uint8_t right = GET_UINT8_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Uint16(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + uint16_t right = GET_UINT16_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + +int32_t compareUint64Uint32(const void *pLeft, const void *pRight) { + uint64_t left = GET_UINT64_VAL(pLeft); + uint32_t right = GET_UINT32_VAL(pRight); + if (left > right) return 1; + if (left < right) return -1; + return 0; +} + + int32_t compareJsonValDesc(const void *pLeft, const void *pRight) { return compareJsonVal(pRight, pLeft); } From e2fd1cb2837876f5df12f7dcbab55eb9376bcdbb Mon Sep 17 00:00:00 2001 From: afwerar <1296468573@qq.com> Date: Fri, 26 Aug 2022 16:21:33 +0800 Subject: [PATCH 09/29] os: fix mac run error --- packaging/tools/make_install.sh | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/packaging/tools/make_install.sh b/packaging/tools/make_install.sh index 6a95ace99e..f554942ce3 100755 --- a/packaging/tools/make_install.sh +++ b/packaging/tools/make_install.sh @@ -381,8 +381,7 @@ function install_header() { ${install_main_dir}/include || ${csudo}cp -f ${source_dir}/include/client/taos.h ${source_dir}/include/common/taosdef.h ${source_dir}/include/util/taoserror.h ${source_dir}/include/libs/function/taosudf.h \ ${install_main_2_dir}/include && - ${csudo}chmod 644 ${install_main_dir}/include/* ||: - ${csudo}chmod 644 ${install_main_2_dir}/include/* + ${csudo}chmod 644 ${install_main_dir}/include/* || ${csudo}chmod 644 ${install_main_2_dir}/include/* fi } From 44dcaf2517328d6e3965cd523bb3fa02c3d2d02a Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Mon, 29 Aug 2022 09:28:38 +0800 Subject: [PATCH 10/29] fix: fix type convertion issue --- source/libs/scalar/src/sclvector.c | 4 +-- source/util/src/tcompare.c | 40 
+++++++++++++++++++++++++----- 2 files changed, 36 insertions(+), 8 deletions(-) diff --git a/source/libs/scalar/src/sclvector.c b/source/libs/scalar/src/sclvector.c index caccf4bfac..a003315fca 100644 --- a/source/libs/scalar/src/sclvector.c +++ b/source/libs/scalar/src/sclvector.c @@ -909,11 +909,11 @@ int32_t vectorConvertImpl(const SScalarParam* pIn, SScalarParam* pOut, int32_t* int8_t gConvertTypes[TSDB_DATA_TYPE_BLOB+1][TSDB_DATA_TYPE_BLOB+1] = { /* NULL BOOL TINY SMAL INT BIG FLOA DOUB VARC TIME NCHA UTIN USMA UINT UBIG JSON VARB DECI BLOB */ /*NULL*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -/*BOOL*/ 0, 0, 0, 3, 4, 5, 6, 7, 7, 9, 7, 0, 12, 13, 14, 0, 7, 0, 0, +/*BOOL*/ 0, 0, 2, 3, 4, 5, 6, 7, 7, 9, 7, 11, 12, 13, 14, 0, 7, 0, 0, /*TINY*/ 0, 0, 0, 3, 4, 5, 6, 7, 7, 9, 7, 3, 4, 5, 7, 0, 7, 0, 0, /*SMAL*/ 0, 0, 0, 0, 4, 5, 6, 7, 7, 9, 7, 3, 4, 5, 7, 0, 7, 0, 0, /*INT */ 0, 0, 0, 0, 0, 5, 6, 7, 7, 9, 7, 4, 4, 5, 7, 0, 7, 0, 0, -/*BIGI*/ 0, 0, 0, 0, 0, 0, 6, 7, 7, 0, 7, 5, 5, 5, 7, 0, 7, 0, 0, +/*BIGI*/ 0, 0, 0, 0, 0, 0, 6, 7, 7, 9, 7, 5, 5, 5, 7, 0, 7, 0, 0, /*FLOA*/ 0, 0, 0, 0, 0, 0, 0, 7, 7, 6, 7, 6, 6, 6, 6, 0, 7, 0, 0, /*DOUB*/ 0, 0, 0, 0, 0, 0, 0, 0, 7, 7, 7, 7, 7, 7, 7, 0, 7, 0, 0, /*VARC*/ 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 8, 7, 7, 7, 7, 0, 0, 0, 0, diff --git a/source/util/src/tcompare.c b/source/util/src/tcompare.c index adebb09912..a8bab19fb5 100644 --- a/source/util/src/tcompare.c +++ b/source/util/src/tcompare.c @@ -570,9 +570,23 @@ int32_t compareFloatInt64(const void *pLeft, const void *pRight) { int32_t compareFloatDouble(const void *pLeft, const void *pRight) { float left = GET_FLOAT_VAL(pLeft); double right = GET_DOUBLE_VAL(pRight); - if (left > right) return 1; - if (left < right) return -1; - return 0; + + if (isnan(left) && isnan(right)) { + return 0; + } + + if (isnan(left)) { + return -1; + } + + if (isnan(right)) { + return 1; + } + + if (FLT_EQUAL(left, right)) { + return 0; + } + return FLT_GREATER(left, right) ? 1 : -1; } int32_t compareFloatUint8(const void *pLeft, const void *pRight) { @@ -642,9 +656,23 @@ int32_t compareDoubleInt64(const void *pLeft, const void *pRight) { int32_t compareDoubleFloat(const void *pLeft, const void *pRight) { double left = GET_DOUBLE_VAL(pLeft); float right = GET_FLOAT_VAL(pRight); - if (left > right) return 1; - if (left < right) return -1; - return 0; + + if (isnan(left) && isnan(right)) { + return 0; + } + + if (isnan(left)) { + return -1; + } + + if (isnan(right)) { + return 1; + } + + if (FLT_EQUAL(left, right)) { + return 0; + } + return FLT_GREATER(left, right) ? 
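+  /*
+   * FLT_EQUAL/FLT_GREATER are taken to be the project's epsilon-based
+   * float helpers; combined with the isnan() branches above they impose a
+   * total order in which NaN sorts below every number, keeping sort and
+   * filter results deterministic. A self-contained equivalent for two
+   * doubles might look like this (sketch only, needs <math.h>/<float.h>):
+   *
+   *   int32_t cmpDoubleTotal(double a, double b) {
+   *     if (isnan(a)) return isnan(b) ? 0 : -1;
+   *     if (isnan(b)) return 1;
+   *     double tol = DBL_EPSILON * fmax(1.0, fmax(fabs(a), fabs(b)));
+   *     if (fabs(a - b) <= tol) return 0;
+   *     return (a > b) ? 1 : -1;
+   *   }
+   */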
1 : -1; } int32_t compareDoubleUint8(const void *pLeft, const void *pRight) { From 2bf49b0c8ec12b49dac804cc63cf547f96610a6c Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Mon, 29 Aug 2022 10:34:36 +0800 Subject: [PATCH 11/29] fix: fix uint64_t value issue --- source/util/src/tcompare.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/source/util/src/tcompare.c b/source/util/src/tcompare.c index 386018ffec..7032f39744 100644 --- a/source/util/src/tcompare.c +++ b/source/util/src/tcompare.c @@ -465,7 +465,7 @@ int32_t compareInt32Uint64(const void *pLeft, const void *pRight) { } int32_t compareInt64Int8(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); int8_t right = GET_INT8_VAL(pRight); if (left > right) return 1; if (left < right) return -1; @@ -473,7 +473,7 @@ int32_t compareInt64Int8(const void *pLeft, const void *pRight) { } int32_t compareInt64Int16(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); int16_t right = GET_INT16_VAL(pRight); if (left > right) return 1; if (left < right) return -1; @@ -481,7 +481,7 @@ int32_t compareInt64Int16(const void *pLeft, const void *pRight) { } int32_t compareInt64Int32(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); int32_t right = GET_INT32_VAL(pRight); if (left > right) return 1; if (left < right) return -1; @@ -489,7 +489,7 @@ int32_t compareInt64Int32(const void *pLeft, const void *pRight) { } int32_t compareInt64Float(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); float right = GET_FLOAT_VAL(pRight); if (left > right) return 1; if (left < right) return -1; @@ -497,7 +497,7 @@ int32_t compareInt64Float(const void *pLeft, const void *pRight) { } int32_t compareInt64Double(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); double right = GET_DOUBLE_VAL(pRight); if (left > right) return 1; if (left < right) return -1; @@ -505,7 +505,7 @@ int32_t compareInt64Double(const void *pLeft, const void *pRight) { } int32_t compareInt64Uint8(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); uint8_t right = GET_UINT8_VAL(pRight); if (left > right) return 1; if (left < right) return -1; @@ -513,7 +513,7 @@ int32_t compareInt64Uint8(const void *pLeft, const void *pRight) { } int32_t compareInt64Uint16(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); uint16_t right = GET_UINT16_VAL(pRight); if (left > right) return 1; if (left < right) return -1; @@ -521,7 +521,7 @@ int32_t compareInt64Uint16(const void *pLeft, const void *pRight) { } int32_t compareInt64Uint32(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); uint32_t right = GET_UINT32_VAL(pRight); if (left > right) return 1; if (left < right) return -1; @@ -529,7 +529,7 @@ int32_t compareInt64Uint32(const void *pLeft, const void *pRight) { } int32_t compareInt64Uint64(const void *pLeft, const void *pRight) { - int64_t left = GET_INT32_VAL(pLeft); + int64_t left = GET_INT64_VAL(pLeft); uint64_t right = GET_UINT64_VAL(pRight); if (left > right) return 1; if (left < right) return -1; From 081e844645c76a9e47b3b6c95ba649c432aa515f 
Mon Sep 17 00:00:00 2001 From: afwerar <1296468573@qq.com> Date: Mon, 29 Aug 2022 10:36:16 +0800 Subject: [PATCH 12/29] build: link-libtaos.so.1 --- source/client/CMakeLists.txt | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/source/client/CMakeLists.txt b/source/client/CMakeLists.txt index f52edbe71f..e8e3c87849 100644 --- a/source/client/CMakeLists.txt +++ b/source/client/CMakeLists.txt @@ -27,11 +27,18 @@ else() INCLUDE_DIRECTORIES(jni/linux) endif() +set_target_properties( + taos + PROPERTIES + CLEAN_DIRECT_OUTPUT + 1 +) + set_target_properties( taos PROPERTIES VERSION ${TD_VER_NUMBER} - SOVERSION ${TD_VER_NUMBER} + SOVERSION 1 ) add_library(taos_static STATIC ${CLIENT_SRC}) From 08cc6cb23a7fe2bf757dbf95d6ea1a6bf3e2127d Mon Sep 17 00:00:00 2001 From: wade zhang <95411902+gccgdb1234@users.noreply.github.com> Date: Mon, 29 Aug 2022 13:18:47 +0800 Subject: [PATCH 13/29] Update index.md --- docs/en/02-intro/index.md | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/docs/en/02-intro/index.md b/docs/en/02-intro/index.md index 5d21fbaf90..51df831948 100644 --- a/docs/en/02-intro/index.md +++ b/docs/en/02-intro/index.md @@ -3,7 +3,7 @@ title: Introduction toc_max_heading_level: 2 --- -TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/stream), [data subscription](/develop/tmq) and other functionalities to reduce the system complexity and cost of development and operation. +TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](../develop/cache), [stream processing](../develop/stream), [data subscription](../develop/tmq) and other functionalities to reduce the system complexity and cost of development and operation. This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine. @@ -12,32 +12,32 @@ This section introduces the major features, competitive advantages, typical use- The major features are listed below: 1. Insert data - * supports [using SQL to insert](/develop/insert-data/sql-writing). - * supports [schemaless writing](/reference/schemaless/) just like NoSQL databases. It also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line),[OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](/develop/insert-data/opentsdb-json) among others. - * supports seamless integration with third-party tools like [Telegraf](/third-party/telegraf/), [Prometheus](/third-party/prometheus/), [collectd](/third-party/collectd/), [StatsD](/third-party/statsd/), [TCollector](/third-party/tcollector/) and [icinga2/](/third-party/icinga2/), they can write data into TDengine with simple configuration and without a single line of code. + * supports [using SQL to insert](../develop/insert-data/sql-writing). + * supports [schemaless writing](../reference/schemaless/) just like NoSQL databases. 
It also supports standard protocols like [InfluxDB LINE](../develop/insert-data/influxdb-line), [OpenTSDB Telnet](../develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](../develop/insert-data/opentsdb-json) among others.
+ * supports seamless integration with third-party tools like [Telegraf](../third-party/telegraf/), [Prometheus](../third-party/prometheus/), [collectd](../third-party/collectd/), [StatsD](../third-party/statsd/), [TCollector](../third-party/tcollector/) and [icinga2](../third-party/icinga2/), which can write data into TDengine with simple configuration and without a single line of code.
 2. Query data
- * supports standard [SQL](/taos-sql/), including nested query.
- * supports [time series specific functions](/taos-sql/function/#time-series-extensions) and [time series specific queries](/taos-sql/distinguished), like downsampling, interpolation, cumulated sum, time weighted average, state window, session window and many others.
- * supports [user defined functions](/taos-sql/udf).
-3. [Caching](/develop/cache/): TDengine always saves the last data point in cache, so Redis is not needed for time-series data processing.
-4. [Stream Processing](/develop/stream/): not only is the continuous query is supported, but TDengine also supports even driven stream processing, so Flink or spark is not needed for time-series daata processing.
-5. [Data Dubscription](/develop/tmq/): application can subscribe a table or a set of tables. API is the same as Kafka, but you can specify filter conditions.
+ * supports standard [SQL](../taos-sql/), including nested queries.
+ * supports [time series specific functions](../taos-sql/function/#time-series-extensions) and [time series specific queries](../taos-sql/distinguished), like downsampling, interpolation, cumulated sum, time weighted average, state window, session window and many others.
+ * supports [user-defined functions](../taos-sql/udf).
+3. [Caching](../develop/cache/): TDengine always saves the last data point in cache, so Redis is not needed for time-series data processing.
+4. [Stream Processing](../develop/stream/): not only are continuous queries supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
+5. [Data Subscription](../develop/tmq/): applications can subscribe to a table or a set of tables. API is the same as Kafka, but you can specify filter conditions.
 6. Visualization
- * supports seamless integration with [Grafana](/third-party/grafana/) for visualization.
+ * supports seamless integration with [Grafana](../third-party/grafana/) for visualization.
  * supports seamless integration with Google Data Studio.
 7. Cluster
- * supports [cluster](/deployment/) with the capability of increasing processing power by adding more nodes.
- * supports [deployment on Kubernetes](/deployment/k8s/)
+ * supports [cluster](../deployment/) with the capability of increasing processing power by adding more nodes.
+ * supports [deployment on Kubernetes](../deployment/k8s/)
  * supports high availability via data replication.
 8. Administration
- * provides [monitoring](/operation/monitor) on running instances of TDengine.
- * provides many ways to [import](/operation/import) and [export](/operation/export) data.
+ * provides [monitoring](../operation/monitor) on running instances of TDengine.
+ * provides many ways to [import](../operation/import) and [export](../operation/export) data.
 9. 
Tools - * provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries. - * provides a tool [taosBenchmark](/reference/taosbenchmark/) for testing the performance of TDengine. + * provides an interactive [command-line interface](../reference/taos-shell) for management, maintenance and ad-hoc queries. + * provides a tool [taosBenchmark](../reference/taosbenchmark/) for testing the performance of TDengine. 10. Programming - * provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages. - * provides a [REST API](/reference/rest-api/). + * provides [connectors](../reference/connector/) for [C/C++](../reference/connector/cpp), [Java](../reference/connector/java), [Python](../reference/connector/python), [Go](../reference/connector/go), [Rust](../reference/connector/rust), [Node.js](../reference/connector/node) and other programming languages. + * provides a [REST API](../reference/rest-api/). For more details on features, please read through the entire documentation. From 7caf6bd66d2d97a82a7bcccd819d270625e582ae Mon Sep 17 00:00:00 2001 From: gccgdb1234 Date: Mon, 29 Aug 2022 13:23:47 +0800 Subject: [PATCH 14/29] doc: rewrite introduction of chinese version according to english version change --- docs/zh/02-intro.md | 77 +++++++++++++++++++++++++++++---------------- 1 file changed, 50 insertions(+), 27 deletions(-) diff --git a/docs/zh/02-intro.md b/docs/zh/02-intro.md index f6779b8776..3c788478f0 100644 --- a/docs/zh/02-intro.md +++ b/docs/zh/02-intro.md @@ -4,7 +4,7 @@ description: 简要介绍 TDengine 的主要功能 toc_max_heading_level: 2 --- -TDengine 是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/tdengine/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的时序数据库Time Series Database, TSDB)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供[缓存](../develop/cache/)、[数据订阅](../develop/tmq)、[流式计算](../develop/stream)等功能,是一极简的时序数据处理平台,最大程度的减小系统设计的复杂度,降低研发和运营成本。 +TDengine 是一款开源、高性能、云原生的[时序数据库](https://tdengine.com/tsdb/),且针对物联网、车联网以及工业互联网进行了优化。TDengine 的代码,包括其集群功能,都在 GNU AGPL v3.0 下开源。除核心的时序数据库功能外,TDengine 还提供[缓存](../develop/cache/)、[数据订阅](../develop/tmq)、[流式计算](../develop/stream)等其它功能以降低系统复杂度及研发和运维成本。 本章节介绍TDengine的主要功能、竞争优势、适用场景、与其他数据库的对比测试等等,让大家对TDengine有个整体的了解。 @@ -12,47 +12,70 @@ TDengine 是一款[开源](https://www.taosdata.com/tdengine/open_source_time-se TDengine的主要功能如下: -1. 高速数据写入,除 [SQL 写入](../develop/insert-data/sql-writing)外,还支持 [Schemaless 写入](../reference/schemaless/),支持 [InfluxDB LINE 协议](../develop/insert-data/influxdb-line),[OpenTSDB Telnet](../develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](../develop/insert-data/opentsdb-json)等协议写入; -2. 第三方数据采集工具 [Telegraf](../third-party/telegraf),[Prometheus](../third-party/prometheus),[StatsD](../third-party/statsd),[collectd](../third-party/collectd),[icinga2](../third-party/icinga2), [TCollector](../third-party/tcollector), [EMQ](../third-party/emq-broker), [HiveMQ](../third-party/hive-mq-broker) 等都可以进行配置后,不用任何代码,即可将数据写入; -3. 支持[各种查询](../develop/query-data),包括聚合查询、嵌套查询、降采样查询、插值等 -4. 支持[用户自定义函数](../develop/udf) -5. 支持[缓存](../develop/cache),将每张表的最后一条记录缓存起来,这样无需 Redis -6. 支持[流式计算](../develop/stream)(Stream Processing) -7. 支持[数据订阅](../develop/tmq),而且可以指定过滤条件 -8. 
支持[集群](../deployment/),可以通过多节点进行水平扩展,并通过多副本实现高可靠 -9. 提供[命令行程序](../reference/taos-shell),便于管理集群,检查系统状态,做即席查询 -10. 提供多种数据的[导入](../operation/import)、[导出](../operation/export) -11. 支持对[TDengine 集群本身的监控](../operation/monitor) -12. 提供各种语言的[连接器](../connector): 如 C/C++, Java, Go, Node.JS, Rust, Python, C# 等 -13. 支持 [REST 接口](../connector/rest-api/) -14. 支持与[ Grafana 无缝集成](../third-party/grafana) -15. 支持与 Google Data Studio 无缝集成 -16. 支持 [Kubernetes 部署](../deployment/k8s) +1. 写入数据,支持 + - [SQL 写入](../develop/insert-data/sql-writing) + - [Schemaless 写入](../reference/schemaless/),支持多种标准写入协议 + - [InfluxDB LINE 协议](../develop/insert-data/influxdb-line) + - [OpenTSDB Telnet 协议](../develop/insert-data/opentsdb-telnet) + - [OpenTSDB JSON 协议](../develop/insert-data/opentsdb-json) + - 与多种第三方工具的无缝集成,它们都可以仅通过配置而无需任何代码即可将数据写入 TDengine + - [Telegraf](../third-party/telegraf) + - [Prometheus](../third-party/prometheus) + - [StatsD](../third-party/statsd) + - [collectd](../third-party/collectd) + - [icinga2](../third-party/icinga2) + - [TCollector](../third-party/tcollector) + - [EMQ](../third-party/emq-broker) + - [HiveMQ](../third-party/hive-mq-broker) ; +2. 查询数据,支持 + - [标准SQL](../taos-sql),含嵌套查询 + - [时序数据特色函数](../taos-sql/function/#time-series-extensions) + - [时序顺序特色查询](../taos-sql/distinguished),例如降采样、插值、累加和、时间加权平均、状态窗口、会话窗口等 + - [用户自定义函数](../taos-sql/udf) +3. [缓存](../develop/cache),将每张表的最后一条记录缓存起来,这样无需 Redis 就能对时序数据进行高效处理 +4. [流式计算](../develop/stream)(Stream Processing),TDengine 不仅支持连续查询,还支持事件驱动的流式计算,这样在处理时序数据时就无需 Flink 或 Spark 这样流计算组件 +5. [数据订阅](../develop/tmq),应用程序可以订阅一张表或一组表的数据,API 与 Kafka 相同,而且可以指定过滤条件 +6. 可视化 + - 支持与 [Grafana](../third-party/grafana/) 的无缝集成 + - 支持与 Google Data Studio 的无缝集成 +7. 集群 + - 集群部署(../deployment/),可以通过增加节点进行水平扩展以提升处理能力 + - 可以通过 [Kubernets 部署 TDengine](../deployment/k8s/) + - 通过多副本提供高可用能力 +8. 管理 + - [监控](..//operation/monitor)运行中的 TDengine 实例 + - 多种[数据导入](../operation/import)方式 + - 多种[数据导出](../operation/export)方式 +9. 工具 + - 提供交互式[命令行程序](../reference/taos-shell),便于管理集群,检查系统状态,做即席查询 + - 提供压力测试工具[taosBenchmark](../reference/taosbenchmark),用于测试 TDengine 的性能 +10. 
编程 + - 提供各种语言的[连接器](../connector): 如 [C/C++](../connector/cpp), [Java](../connector/java), [Go](../connector/go), [Node.JS](../connector/node), [Rust](../connector/rust), [Python](../connector/python), [C#](../connector/csharp) 等 + - 支持 [REST 接口](../connector/rest-api/) -更多细小的功能,请阅读整个文档。 +更多细节功能,请阅读整个文档。 ## 竞争优势 -由于 TDengine 充分利用了[时序数据特点](https://www.taosdata.com/blog/2019/07/09/105.html),比如结构化、无需事务、很少删除或更新、写多读少等等,设计了全新的针对时序数据的存储引擎和计算引擎,因此与其他时序数据库相比,TDengine 有以下特点: +由于 TDengine 充分利用了[时序数据特点](https://www.taosdata.com/blog/2019/07/09/105.html),比如结构化、无需事务、很少删除或更新、写多读少等等,因此与其他时序数据库相比,TDengine 有以下特点: -- **[高性能](https://www.taosdata.com/tdengine/fast)**:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,存储空间不及通用数据库的1/10。 +- **[高性能](https://www.taosdata.com/tdengine/fast)**:TDengine 是唯一一个解决了时序数据存储的高基数难题的时序数据库,支持上亿数据采集点,并在数据插入、查询和数据压缩上远胜其它时序数据库。 -- **[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)**:通过原生分布式的设计,充分利用云平台的优势,TDengine 提供了水平扩展能力,具备弹性、韧性和可观测性,支持k8s部署,可运行在公有云、私有云和混合云上。 +- **[极简时序数据平台](https://www.taosdata.com/tdengine/simplified_solution_for_time-series_data_processing)**:TDengine 内建缓存、流式计算和数据订阅等功能,为时序数据的处理提供了极简的解决方案,从而大幅降低了业务系统的设计复杂度和运维成本。 -- **[极简时序数据平台](https://www.taosdata.com/tdengine/simplified_solution_for_time-series_data_processing)**:TDengine 内建消息队列、缓存、流式计算等功能,应用无需再集成 Kafka/Redis/HBase/Spark 等软件,大幅降低系统的复杂度,降低应用开发和运营成本。 +- **[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)**:通过原生的分布式设计、数据分片和分区、存算分离、RAFT 协议、Kubernets 部署和完整的可观测性,TDengine 是一款云原生时序数据库并且能够部署在公有云、私有云和混合云上。 -- **[分析能力](https://www.taosdata.com/tdengine/easy_data_analytics)**:支持 SQL,同时为时序数据特有的分析提供SQL扩展。通过超级表、存储计算分离、分区分片、预计算、自定义函数等技术,TDengine 具备强大的分析能力。 +- **[简单易用](https://www.taosdata.com/tdengine/ease_of_use)**:对系统管理员来说,TDengine 大幅降低了管理和维护的代价。对开发者来说, TDengine 提供了简单的接口、极简的解决方案和与第三方工具的无缝集成。对数据分析专家来说,TDengine 提供了便捷的数据访问。 -- **[简单易用](https://www.taosdata.com/tdengine/ease_of_use)**:无任何依赖,安装、集群几秒搞定;提供REST以及各种语言连接器,与众多第三方工具无缝集成;提供命令行程序,便于管理和即席查询;提供各种运维工具。 +- **[分析能力](https://www.taosdata.com/tdengine/easy_data_analytics)**:通过超级表、存储计算分离、分区分片、预计算和其它技术,TDengine 能够高效地浏览、格式化和访问数据。 -- **[核心开源](https://www.taosdata.com/tdengine/open_source_time-series_database)**:TDengine 的核心代码包括集群功能全部开源,截止到2022年8月1日,全球超过 135.9k 个运行实例,GitHub Star 18.7k,Fork 4.4k,社区活跃。 +- **[核心开源](https://www.taosdata.com/tdengine/open_source_time-series_database)**:TDengine 的核心代码包括集群功能全部在开源协议下公开。全球超过 140k 个运行实例,GitHub Star 19k,且拥有一个活跃的开发者社区。 采用 TDengine,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。表现在几个方面: 1. 由于其超强性能,它能将系统需要的计算资源和存储资源大幅降低 2. 因为支持 SQL,能与众多第三方软件无缝集成,学习迁移成本大幅下降 3. 因为是一极简的时序数据平台,系统复杂度、研发和运营成本大幅降低 -4. 
因为维护简单,运营维护成本能大幅降低 ## 技术生态 @@ -67,7 +90,7 @@ TDengine的主要功能如下: 上图中,左侧是各种数据采集或消息队列,包括 OPC-UA、MQTT、Telegraf、也包括 Kafka, 他们的数据将被源源不断的写入到 TDengine。右侧则是可视化、BI 工具、组态软件、应用程序。下侧则是 TDengine 自身提供的命令行程序 (CLI) 以及可视化管理管理。 -## 总体适用场景 +## 典型适用场景 作为一个高性能、分布式、支持 SQL 的时序数据库 (Database),TDengine 的典型适用场景包括但不限于 IoT、工业互联网、车联网、IT 运维、能源、金融证券等领域。需要指出的是,TDengine 是针对时序数据场景设计的专用数据库和专用大数据处理工具,因充分利用了时序大数据的特点,它无法用来处理网络爬虫、微博、微信、电商、ERP、CRM 等通用型数据。本文对适用场景做更多详细的分析。 From cf0dd63b888e632c07a82e2da9b1dbb07057e93f Mon Sep 17 00:00:00 2001 From: gccgdb1234 Date: Mon, 29 Aug 2022 13:46:23 +0800 Subject: [PATCH 15/29] doc: fix broken link in introduction --- docs/zh/02-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/02-intro.md b/docs/zh/02-intro.md index 3c788478f0..152d03d9e3 100644 --- a/docs/zh/02-intro.md +++ b/docs/zh/02-intro.md @@ -43,7 +43,7 @@ TDengine的主要功能如下: - 可以通过 [Kubernets 部署 TDengine](../deployment/k8s/) - 通过多副本提供高可用能力 8. 管理 - - [监控](..//operation/monitor)运行中的 TDengine 实例 + - [监控](../operation/monitor)运行中的 TDengine 实例 - 多种[数据导入](../operation/import)方式 - 多种[数据导出](../operation/export)方式 9. 工具 From cde4621e77630a875e7de2a9658eda30ccbd28de Mon Sep 17 00:00:00 2001 From: Xuefeng Tan <1172915550@qq.com> Date: Mon, 29 Aug 2022 13:53:43 +0800 Subject: [PATCH 16/29] fix(taosAdapter): fix taosAdapter version shows unknown (#16472) --- tools/CMakeLists.txt | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/tools/CMakeLists.txt b/tools/CMakeLists.txt index d27fb99ef7..03097e31b9 100644 --- a/tools/CMakeLists.txt +++ b/tools/CMakeLists.txt @@ -82,6 +82,7 @@ ELSE () COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR}/taosadapter ) EXECUTE_PROCESS( + WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/taosadapter COMMAND git rev-parse --short HEAD RESULT_VARIABLE commit_sha1 OUTPUT_VARIABLE taosadapter_commit_sha1 @@ -118,8 +119,8 @@ ELSE () PATCH_COMMAND COMMAND git clean -f -d BUILD_COMMAND - COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-s -w -X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}" - COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -o taosadapter-debug -ldflags "-X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}" + COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-s -w -X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}" + COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -o taosadapter-debug -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}" INSTALL_COMMAND COMMAND ${_upx_prefix}/src/upx/upx taosadapter COMMAND cmake -E copy taosadapter ${CMAKE_BINARY_DIR}/build/bin @@ -141,8 +142,8 @@ ELSE () PATCH_COMMAND COMMAND git clean -f -d BUILD_COMMAND - COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-s -w -X github.com/taosdata/taosadapter/version.Version=${taos_version} -X 
github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}" - COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -o taosadapter-debug -ldflags "-X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}" + COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-s -w -X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}" + COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -o taosadapter-debug -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}" INSTALL_COMMAND COMMAND cmake -E copy taosadapter ${CMAKE_BINARY_DIR}/build/bin COMMAND cmake -E make_directory ${CMAKE_BINARY_DIR}/test/cfg/ @@ -174,8 +175,8 @@ ELSE () BUILD_COMMAND COMMAND set CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client COMMAND set CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib - COMMAND go build -a -o taosadapter.exe -ldflags "-s -w -X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}" - COMMAND go build -a -o taosadapter-debug.exe -ldflags "-X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}" + COMMAND go build -a -o taosadapter.exe -ldflags "-s -w -X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}" + COMMAND go build -a -o taosadapter-debug.exe -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}" INSTALL_COMMAND COMMAND ${_upx_prefix}/src/upx/upx taosadapter.exe COMMAND cmake -E copy taosadapter.exe ${CMAKE_BINARY_DIR}/build/bin From 3574abcc6ed983610417ec09662adc752ce39862 Mon Sep 17 00:00:00 2001 From: Xiaoyu Wang <59301069+xiao-yu-wang@users.noreply.github.com> Date: Mon, 29 Aug 2022 14:39:12 +0800 Subject: [PATCH 17/29] Update 12-distinguished.md --- docs/zh/12-taos-sql/12-distinguished.md | 35 +++++++++++-------------- 1 file changed, 16 insertions(+), 19 deletions(-) diff --git a/docs/zh/12-taos-sql/12-distinguished.md b/docs/zh/12-taos-sql/12-distinguished.md index b9e06033d6..016c1929fe 100644 --- a/docs/zh/12-taos-sql/12-distinguished.md +++ b/docs/zh/12-taos-sql/12-distinguished.md @@ -6,11 +6,11 @@ description: TDengine 提供的时序数据特有的查询功能 TDengine 是专为时序数据而研发的大数据平台,存储和计算都针对时序数据的特定进行了量身定制,在支持标准 SQL 的基础之上,还提供了一系列贴合时序业务场景的特色查询语法,极大的方便时序场景的应用开发。 -TDengine 提供的特色查询包括标签切分查询和窗口切分查询。 +TDengine 提供的特色查询包括数据切分查询和窗口切分查询。 -## 标签切分查询 +## 数据切分查询 -超级表查询中,当需要针对标签进行数据切分然后在切分出的数据空间内再进行一系列的计算时使用标签切分子句,标签切分的语句如下: +当需要按一定的维度对数据进行切分然后在切分出的数据空间内再进行一系列的计算时使用数据切分子句,数据切分语句的语法如下: ```sql PARTITION BY part_list @@ -18,22 +18,23 @@ PARTITION BY part_list part_list 可以是任意的标量表达式,包括列、常量、标量函数和它们的组合。 -当 PARTITION BY 和标签一起使用时,TDengine 按如下方式处理标签切分子句: +TDengine 按如下方式处理数据切分子句: -- 标签切分子句位于 WHERE 子句之后,且不能和 JOIN 子句一起使用。 -- 标签切分子句将超级表数据按指定的标签组合进行切分,每个切分的分片进行指定的计算。计算由之后的子句定义(窗口子句、GROUP BY 子句或 SELECT 子句)。 -- 标签切分子句可以和窗口切分子句(或 GROUP BY 
子句)一起使用,此时后面的子句作用在每个切分的分片上。例如,将数据按标签 location 进行分组,并对每个组按 10 分钟进行降采样,取其最大值。 +- 数据切分子句位于 WHERE 子句之后。 +- 数据切分子句将表数据按指定的维度进行切分,每个切分的分片进行指定的计算。计算由之后的子句定义(窗口子句、GROUP BY 子句或 SELECT 子句)。 +- 数据切分子句可以和窗口切分子句(或 GROUP BY 子句)一起使用,此时后面的子句作用在每个切分的分片上。例如,将数据按标签 location 进行分组,并对每个组按 10 分钟进行降采样,取其最大值。 ```sql select max(current) from meters partition by location interval(10m) ``` +数据切分子句最常见的用法就是在超级表查询中,按标签将子表数据进行切分,然后分别进行计算。特别是 PARTITION BY TBNAME 用法,它将每个子表的数据独立出来,形成一条条独立的时间序列,极大的方便了各种时序场景的统计分析。 ## 窗口切分查询 TDengine 支持按时间段窗口切分方式进行聚合结果查询,比如温度传感器每秒采集一次数据,但需查询每隔 10 分钟的温度平均值。这种场景下可以使用窗口子句来获得需要的查询结果。窗口子句用于针对查询的数据集合按照窗口切分成为查询子集并进行聚合,窗口包含时间窗口(time window)、状态窗口(status window)、会话窗口(session window)三种窗口。其中时间窗口又可划分为滑动时间窗口和翻转时间窗口。窗口切分查询语法如下: ```sql -SELECT function_list FROM tb_name +SELECT select_list FROM tb_name [WHERE where_condition] [SESSION(ts_col, tol_val)] [STATE_WINDOW(col)] @@ -43,19 +44,15 @@ SELECT function_list FROM tb_name 在上述语法中的具体限制如下 -### 窗口切分查询中使用函数的限制 - -- 在聚合查询中,function_list 位置允许使用聚合和选择函数,并要求每个函数仅输出单个结果(例如:COUNT、AVG、SUM、STDDEV、LEASTSQUARES、PERCENTILE、MIN、MAX、FIRST、LAST),而不能使用具有多行输出结果的函数(例如:DIFF 以及四则运算)。 -- 此外 LAST_ROW 查询也不能与窗口聚合同时出现。 -- 标量函数(如:CEIL/FLOOR 等)也不能使用在窗口聚合查询中。 - ### 窗口子句的规则 -- 窗口子句位于标签切分子句之后,GROUP BY 子句之前,且不可以和 GROUP BY 子句一起使用。 +- 窗口子句位于数据切分子句之后,GROUP BY 子句之前,且不可以和 GROUP BY 子句一起使用。 - 窗口子句将数据按窗口进行切分,对每个窗口进行 SELECT 列表中的表达式的计算,SELECT 列表中的表达式只能包含: - 常量。 - - 聚集函数。 + - _wstart伪列、_wend伪列和_wduration伪列。 + - 聚集函数(包括选择函数和可以由参数确定输出行数的时序特有函数)。 - 包含上面表达式的表达式。 + - 且至少包含一个聚集函数。 - 窗口子句不可以和 GROUP BY 子句一起使用。 - WHERE 语句可以指定查询的起止时间和其他过滤条件。 @@ -74,7 +71,7 @@ FILL 语句指定某一窗口区间数据缺失的情况下的填充模式。填 1. 使用 FILL 语句的时候可能生成大量的填充输出,务必指定查询的时间区间。针对每次查询,系统可返回不超过 1 千万条具有插值的结果。 2. 在时间维度聚合中,返回的结果中时间序列严格单调递增。 -3. 如果查询对象是超级表,则聚合函数会作用于该超级表下满足值过滤条件的所有表的数据。如果查询中没有使用 GROUP BY 语句,则返回的结果按照时间序列严格单调递增;如果查询中使用了 GROUP BY 语句分组,则返回结果中每个 GROUP 内不按照时间序列严格单调递增。 +3. 
如果查询对象是超级表,则聚合函数会作用于该超级表下满足值过滤条件的所有表的数据。如果查询中没有使用 PARTITION BY 语句,则返回的结果按照时间序列严格单调递增;如果查询中使用了 PARTITION BY 语句分组,则返回结果中每个 PARTITION 内不按照时间序列严格单调递增。 ::: @@ -106,7 +103,7 @@ SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m); ### 状态窗口 -使用整数(布尔值)或字符串来标识产生记录时候设备的状态量。产生的记录如果具有相同的状态量数值则归属于同一个状态窗口,数值改变后该窗口关闭。如下图所示,根据状态量确定的状态窗口分别是[2019-04-28 14:22:07,2019-04-28 14:22:10]和[2019-04-28 14:22:11,2019-04-28 14:22:12]两个。(状态窗口暂不支持对超级表使用) +使用整数(布尔值)或字符串来标识产生记录时候设备的状态量。产生的记录如果具有相同的状态量数值则归属于同一个状态窗口,数值改变后该窗口关闭。如下图所示,根据状态量确定的状态窗口分别是[2019-04-28 14:22:07,2019-04-28 14:22:10]和[2019-04-28 14:22:11,2019-04-28 14:22:12]两个。 ![TDengine Database 时间窗口示意图](./timewindow-3.webp) @@ -122,7 +119,7 @@ SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status); ![TDengine Database 时间窗口示意图](./timewindow-2.webp) -在 tol_value 时间间隔范围内的结果都认为归属于同一个窗口,如果连续的两条记录的时间超过 tol_val,则自动开启下一个窗口。(会话窗口暂不支持对超级表使用) +在 tol_value 时间间隔范围内的结果都认为归属于同一个窗口,如果连续的两条记录的时间超过 tol_val,则自动开启下一个窗口。 ``` From a51cfe837b12032594c42d2dbf86e0a2432b3a6f Mon Sep 17 00:00:00 2001 From: Shuduo Sang Date: Mon, 29 Aug 2022 14:39:55 +0800 Subject: [PATCH 18/29] feat: update taostools aa45ad4 for3.0 (#16469) * feat: update taos-tools for 3.0 [TD-14141] * feat: update taos-tools for 3.0 * feat: update taos-tools for 3.0 * feat: update taos-tools for 3.0 * feat: update taos-tools for 3.0 * feat: update taos-tools for 3.0 * feat: update taos-tools for 3.0 * feat: update taos-tools for 3.0 * feat: update taos-tools for 3.0 * feat: update taos-tools for 3.0 * feat: update taos-tools 8e3b3ee * fix: remove submodules * feat: update taos-tools c529299 * feat: update taos-tools 9dc2fec for 3.0 * fix: optim upx * feat: update taos-tools f4e456a for 3.0 * feat: update taos-tools 2a2def1 for 3.0 * feat: update taos-tools c9cc20f for 3.0 * feat: update taostoosl 8a5e336 for 3.0 * feat: update taostools 3c7dafe for 3.0 * feat: update taos-tools 2d68404 for 3.0 * feat: update taos-tools 57bdfbf for 3.0 * fix: jenkinsfile2 to upgrade pip * feat: update taostoosl 11d23e5 for 3.0 * feat: update taostools 43924b8 for 3.0 * feat: update taostools 53a0103 for 3.0 * feat: update taostoosl d237772 for 3.0 * feat: update taos-tools 6bde102 for 3.0 * feat: upate taos-tools 2af2222 for 3.0 * feat: update taos-tools 833b721 for 3.0 * feat: update taostools e8bfca6 for 3.0 * feat: update taos-tools aa45ad4 for 3.0 --- cmake/taostools_CMakeLists.txt.in | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cmake/taostools_CMakeLists.txt.in b/cmake/taostools_CMakeLists.txt.in index 9547323acf..c0de75c6dd 100644 --- a/cmake/taostools_CMakeLists.txt.in +++ b/cmake/taostools_CMakeLists.txt.in @@ -2,7 +2,7 @@ # taos-tools ExternalProject_Add(taos-tools GIT_REPOSITORY https://github.com/taosdata/taos-tools.git - GIT_TAG e8bfca6 + GIT_TAG aa45ad4 SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools" BINARY_DIR "" #BUILD_IN_SOURCE TRUE From 430086e17bbbd1acf9c7265a7804be8ecbfe33e2 Mon Sep 17 00:00:00 2001 From: Xiaoyu Wang <59301069+xiao-yu-wang@users.noreply.github.com> Date: Mon, 29 Aug 2022 15:19:33 +0800 Subject: [PATCH 19/29] Update 25-grant.md --- docs/zh/12-taos-sql/25-grant.md | 57 +++++++++++++++++++++++++++++---- 1 file changed, 50 insertions(+), 7 deletions(-) diff --git a/docs/zh/12-taos-sql/25-grant.md b/docs/zh/12-taos-sql/25-grant.md index 6f7024d32e..7fb9447101 100644 --- a/docs/zh/12-taos-sql/25-grant.md +++ b/docs/zh/12-taos-sql/25-grant.md @@ -9,14 +9,51 @@ description: 企业版中才具有的权限管理功能 ## 创建用户 ```sql -CREATE USER use_name PASS 'password'; +CREATE USER use_name PASS 
'password' [SYSINFO {1|0}]; ``` 创建用户。 -use_name最长为23字节。 +use_name 最长为 23 字节。 -password最长为128字节,合法字符包括"a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/",不可以出现单双引号、撇号、反斜杠和空格,且不可以为空。 +password 最长为 128 字节,合法字符包括"a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/",不可以出现单双引号、撇号、反斜杠和空格,且不可以为空。 + +SYSINFO 表示用户是否可以查看系统信息。1 表示可以查看,0 表示不可以查看。系统信息包括服务端配置信息、服务端各种节点信息(如 DNODE、QNODE等)、存储相关的信息等。默认为可以查看系统信息。 + +例如,创建密码为123456且可以查看系统信息的用户test如下: + +```sql +taos> create user test pass '123456' sysinfo 1; +Query OK, 0 of 0 rows affected (0.001254s) +``` + +## 查看用户 + +```sql +SHOW USERS; +``` + +查看用户信息。 + +```sql +taos> show users; + name | super | enable | sysinfo | create_time | +================================================================================ + test | 0 | 1 | 1 | 2022-08-29 15:10:27.315 | + root | 1 | 1 | 1 | 2022-08-29 15:03:34.710 | +Query OK, 2 rows in database (0.001657s) +``` + +也可以通过查询INFORMATION_SCHEMA.INS_USERS系统表来查看用户信息,例如: + +```sql +taos> select * from information_schema.ins_users; + name | super | enable | sysinfo | create_time | +================================================================================ + test | 0 | 1 | 1 | 2022-08-29 15:10:27.315 | + root | 1 | 1 | 1 | 2022-08-29 15:03:34.710 | +Query OK, 2 rows in database (0.001953s) +``` ## 删除用户 @@ -37,9 +74,15 @@ alter_user_clause: { ``` - PASS:修改用户密码。 -- ENABLE:修改用户是否启用。1表示启用此用户,0表示禁用此用户。 -- SYSINFO:修改用户是否可查看系统信息。1表示可以查看系统信息,0表示不可以查看系统信息。 +- ENABLE:修改用户是否启用。1 表示启用此用户,0 表示禁用此用户。 +- SYSINFO:修改用户是否可查看系统信息。1 表示可以查看系统信息,0 表示不可以查看系统信息。 +例如,禁用 test 用户: + +```sql +taos> alter user test enable 0; +Query OK, 0 of 0 rows affected (0.001160s) +``` ## 授权 @@ -62,7 +105,7 @@ priv_level : { } ``` -对用户授权。 +对用户授权。授权功能只包含在企业版中。 授权级别支持到DATABASE,权限有READ和WRITE两种。 @@ -92,4 +135,4 @@ priv_level : { ``` -收回对用户的授权。 +收回对用户的授权。授权功能只包含在企业版中。 From 95e591efede2d6200254d38df471364e86ed341e Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 29 Aug 2022 14:41:56 +0800 Subject: [PATCH 20/29] fix: release pages with drop case --- source/libs/tdb/src/db/tdbBtree.c | 14 ++++++-------- source/libs/tdb/src/db/tdbPCache.c | 1 + source/libs/tdb/src/db/tdbPager.c | 1 + source/libs/tdb/src/inc/tdbInt.h | 4 ++-- 4 files changed, 10 insertions(+), 10 deletions(-) diff --git a/source/libs/tdb/src/db/tdbBtree.c b/source/libs/tdb/src/db/tdbBtree.c index 4701318779..1480920f90 100644 --- a/source/libs/tdb/src/db/tdbBtree.c +++ b/source/libs/tdb/src/db/tdbBtree.c @@ -509,7 +509,7 @@ static int tdbBtreeBalanceDeeper(SBTree *pBt, SPage *pRoot, SPage **ppChild, TXN static int tdbBtreeBalanceNonRoot(SBTree *pBt, SPage *pParent, int idx, TXN *pTxn) { int ret; - int nOlds; + int nOlds, pageIdx; SPage *pOlds[3] = {0}; SCell *pDivCell[3] = {0}; int szDivCell[3]; @@ -849,13 +849,11 @@ static int tdbBtreeBalanceNonRoot(SBTree *pBt, SPage *pParent, int idx, TXN *pTx } } - // TODO: here is not corrent for drop case - for (int i = 0; i < nNews; i++) { - if (i < nOlds) { - tdbPagerReturnPage(pBt->pPager, pOlds[i], pTxn); - } else { - tdbPagerReturnPage(pBt->pPager, pNews[i], pTxn); - } + for (pageIdx = 0; pageIdx < nOlds; ++pageIdx) { + tdbPagerReturnPage(pBt->pPager, pOlds[pageIdx], pTxn); + } + for (; pageIdx < nNews; ++pageIdx) { + tdbPagerReturnPage(pBt->pPager, pNews[pageIdx], pTxn); } return 0; diff --git a/source/libs/tdb/src/db/tdbPCache.c b/source/libs/tdb/src/db/tdbPCache.c index 76d95cbb91..6254158591 100644 --- a/source/libs/tdb/src/db/tdbPCache.c +++ b/source/libs/tdb/src/db/tdbPCache.c @@ -98,6 +98,7 @@ SPage *tdbPCacheFetch(SPCache *pCache, const SPgid *pPgid, TXN *pTxn) { // 
printf("thread %" PRId64 " fetch page %d pgno %d pPage %p nRef %d\n", taosGetSelfPthreadId(), pPage->id, // TDB_PAGE_PGNO(pPage), pPage, nRef); + tdbDebug("pcache/fetch page %p/%d/%d/%d", pPage, TDB_PAGE_PGNO(pPage), pPage->id, nRef); return pPage; } diff --git a/source/libs/tdb/src/db/tdbPager.c b/source/libs/tdb/src/db/tdbPager.c index 4de99e8b1b..f90c392788 100644 --- a/source/libs/tdb/src/db/tdbPager.c +++ b/source/libs/tdb/src/db/tdbPager.c @@ -166,6 +166,7 @@ int tdbPagerWrite(SPager *pPager, SPage *pPage) { // ref page one more time so the page will not be release tdbRefPage(pPage); + tdbDebug("pcache/mdirty page %p/%d/%d", pPage, TDB_PAGE_PGNO(pPage), pPage->id); // Set page as dirty pPage->isDirty = 1; diff --git a/source/libs/tdb/src/inc/tdbInt.h b/source/libs/tdb/src/inc/tdbInt.h index 49126b80b6..6a694cf8f1 100644 --- a/source/libs/tdb/src/inc/tdbInt.h +++ b/source/libs/tdb/src/inc/tdbInt.h @@ -280,13 +280,13 @@ struct SPage { static inline i32 tdbRefPage(SPage *pPage) { i32 nRef = atomic_add_fetch_32(&((pPage)->nRef), 1); - tdbTrace("ref page %d, nRef %d", pPage->id, nRef); + tdbTrace("ref page %p/%d, nRef %d", pPage, pPage->id, nRef); return nRef; } static inline i32 tdbUnrefPage(SPage *pPage) { i32 nRef = atomic_sub_fetch_32(&((pPage)->nRef), 1); - tdbTrace("unref page %d, nRef %d", pPage->id, nRef); + tdbTrace("unref page %p/%d, nRef %d", pPage, pPage->id, nRef); return nRef; } From c36119bd704f94a3c8a8fa1d30aa8ace6c44620d Mon Sep 17 00:00:00 2001 From: Xiaoyu Wang <59301069+xiao-yu-wang@users.noreply.github.com> Date: Mon, 29 Aug 2022 15:36:59 +0800 Subject: [PATCH 21/29] Update 06-select.md --- docs/zh/12-taos-sql/06-select.md | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/docs/zh/12-taos-sql/06-select.md b/docs/zh/12-taos-sql/06-select.md index 0c305231e0..d8ff3f04ed 100644 --- a/docs/zh/12-taos-sql/06-select.md +++ b/docs/zh/12-taos-sql/06-select.md @@ -355,19 +355,15 @@ SELECT ... FROM (SELECT ... FROM ...) 
...; :::info -- 目前仅支持一层嵌套,也即不能在子查询中再嵌入子查询。 -- 内层查询的返回结果将作为“虚拟表”供外层查询使用,此虚拟表可以使用 AS 语法做重命名,以便于外层查询中方便引用。 -- 目前不能在“连续查询”功能中使用子查询。 +- 内层查询的返回结果将作为“虚拟表”供外层查询使用,此虚拟表建议起别名,以便于外层查询中方便引用。 - 在内层和外层查询中,都支持普通的表间/超级表间 JOIN。内层查询的计算结果也可以再参与数据子表的 JOIN 操作。 -- 目前内层查询、外层查询均不支持 UNION 操作。 - 内层查询支持的功能特性与非嵌套的查询语句能力是一致的。 - 内层查询的 ORDER BY 子句一般没有意义,建议避免这样的写法以免无谓的资源消耗。 - 与非嵌套的查询语句相比,外层查询所能支持的功能特性存在如下限制: - 计算函数部分: - - 如果内层查询的结果数据未提供时间戳,那么计算过程依赖时间戳的函数在外层会无法正常工作。例如:TOP, BOTTOM, FIRST, LAST, DIFF。 - - 计算过程需要两遍扫描的函数,在外层查询中无法正常工作。例如:此类函数包括:STDDEV, PERCENTILE。 - - 外层查询中不支持 IN 算子,但在内层中可以使用。 - - 外层查询不支持 GROUP BY。 + - 如果内层查询的结果数据未提供时间戳,那么计算过程隐式依赖时间戳的函数在外层会无法正常工作。例如:INTERP, DERIVATIVE, IRATE, LAST_ROW, FIRST, LAST, TWA, STATEDURATION, TAIL, UNIQUE。 + - 如果内层查询的结果数据不是有效的时间序列,那么计算过程依赖数据为时间序列的函数在外层会无法正常工作。例如:LEASTSQUARES, ELAPSED, INTERP, DERIVATIVE, IRATE, TWA, DIFF, STATECOUNT, STATEDURATION, CSUM, MAVG, TAIL, UNIQUE。 + - 计算过程需要两遍扫描的函数,在外层查询中无法正常工作。例如:此类函数包括:PERCENTILE。 ::: From 8a59420ccc529047a25b88375b2a12e389e7bdf1 Mon Sep 17 00:00:00 2001 From: tomchon Date: Mon, 29 Aug 2022 15:44:57 +0800 Subject: [PATCH 22/29] test: add checkpackages scritps --- packaging/MPtestJenkinsfile | 38 +++++++++++++--------- packaging/checkPackageRuning.py | 4 ++- packaging/testpackage.sh | 57 ++++++++++++++++++++++++++++++--- 3 files changed, 79 insertions(+), 20 deletions(-) diff --git a/packaging/MPtestJenkinsfile b/packaging/MPtestJenkinsfile index 1e2e69a977..0782038c03 100644 --- a/packaging/MPtestJenkinsfile +++ b/packaging/MPtestJenkinsfile @@ -1,6 +1,7 @@ def sync_source(branch_name) { sh ''' hostname + IP env echo ''' + branch_name + ''' ''' @@ -15,6 +16,7 @@ def sync_source(branch_name) { cd ${TDENGINE_ROOT_DIR} git reset --hard git fetch || git fetch + rm -rf examples/rust/ git checkout ''' + branch_name + ''' -f git branch git pull || git pull @@ -53,6 +55,16 @@ pipeline { defaultValue:'3.0.0.1', description: 'This number of baseVerison is generally not modified.Now it is 3.0.0.1' ) + string ( + name:'toolsVersion', + defaultValue:'2.1.2', + description: 'This number of baseVerison is generally not modified.Now it is 3.0.0.1' + ) + string ( + name:'toolsBaseVersion', + defaultValue:'2.1.2', + description: 'This number of baseVerison is generally not modified.Now it is 3.0.0.1' + ) } environment{ WORK_DIR = '/var/lib/jenkins/workspace' @@ -86,26 +98,18 @@ pipeline { TD_CLIENT_EXE = "TDengine-client-${version}-Windows-x64.exe" + TD_TOOLS_TAR = "taosTools-${toolsVersion}-Linux-x64.tar.gz" + } stages { stage ('RUN') { - stage('get check package scritps'){ - agent{label 'ubuntu18'} - steps { - catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') { - script{ - sync_source("${BRANCH_NAME}") - } - - } - } - } parallel { stage('ubuntu16') { agent{label " ubuntu16 "} steps { - timeout(time: 3, unit: 'MINUTES'){ + timeout(time: 10, unit: 'MINUTES'){ + sync_source("${BRANCH_NAME}") sh ''' cd ${TDENGINE_ROOT_DIR}/packaging bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server @@ -128,7 +132,8 @@ pipeline { stage('ubuntu18') { agent{label " ubuntu18 "} steps { - timeout(time: 3, unit: 'MINUTES'){ + timeout(time: 10, unit: 'MINUTES'){ + sync_source("${BRANCH_NAME}") sh ''' cd ${TDENGINE_ROOT_DIR}/packaging bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server @@ -151,7 +156,8 @@ pipeline { stage('centos7') { agent{label " centos7_9 "} steps { - timeout(time: 240, unit: 'MINUTES'){ + timeout(time: 10, unit: 'MINUTES'){ + sync_source("${BRANCH_NAME}") sh ''' cd 
${TDENGINE_ROOT_DIR}/packaging
                            bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
@@ -161,6 +167,7 @@ pipeline {
                             sh '''
                             cd ${TDENGINE_ROOT_DIR}/packaging
                             bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
+                            bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
                             python3 checkPackageRuning.py
                             '''
                             sh '''
@@ -174,7 +181,8 @@ pipeline {
                 stage('centos8') {
                     agent{label " centos8_3 "}
                     steps {
-                        timeout(time: 240, unit: 'MINUTES'){
+                        timeout(time: 10, unit: 'MINUTES'){
+                            sync_source("${BRANCH_NAME}")
                             sh '''
                             cd ${TDENGINE_ROOT_DIR}/packaging
                             bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
diff --git a/packaging/checkPackageRuning.py b/packaging/checkPackageRuning.py
index e53cc3bdbc..c0d1e8b86c 100755
--- a/packaging/checkPackageRuning.py
+++ b/packaging/checkPackageRuning.py
@@ -22,10 +22,12 @@ import time

 # install taospy
 out = subprocess.getoutput("pip3 show taospy|grep Version| awk -F ':' '{print $2}' ")
-print(out)
+print("taospy version %s "%out)
 if (out == "" ):
     os.system("pip install git+https://github.com/taosdata/taos-connector-python.git")
     print("install taos python connector")
+else:
+    os.system("pip3 install --upgrade taospy ")



diff --git a/packaging/testpackage.sh b/packaging/testpackage.sh
index 173fa3a3c3..3ca31ea7a4 100755
--- a/packaging/testpackage.sh
+++ b/packaging/testpackage.sh
@@ -1,8 +1,32 @@
 #!/bin/sh
-# function installPkgAndCheckFile{
+# # ============================= get input parameters =================================================
-echo "Download package"
+# # install.sh -v [server | client] -e [yes | no] -i [systemd | service | ...]
+
+# # set parameters by default value
+# interactiveFqdn=yes # [yes | no]
+# verType=server # [server | client]
+# initType=systemd # [systemd | service | ...]
+
+# while getopts "hv:d:" arg
+# do
+# case $arg in
+# d)
+# #echo "interactiveFqdn=$OPTARG"
+# script_dir=$( echo $OPTARG )
+# ;;
+# h)
+# echo "Usage: `basename $0` -d script_path"
+# exit 0
+# ;;
+# ?)
#unknown option
+# echo "unknown argument"
+# exit 1
+# ;;
+# esac
+# done
+# echo "Download package"

 packgeName=$1
 version=$2
@@ -25,22 +49,42 @@ elif [ ${testFile} = "tools" ];then
     installCmd="install-taostools.sh"
 fi

+function cmdInstall {
+comd=$1
+if command -v ${comd} ;then
+    echo "${comd} is already installed"
+else
+    if command -v apt ;then
+        apt-get install ${comd}
+    elif command -v yum ;then
+        yum install ${comd}
+    else
+        echo "you should install ${comd} manually"
+    fi
+fi
+}
+
+
 echo "Uninstall all components of TDeingne"
 if command -v rmtaos ;then
     echo "uninstall all components of TDeingne:rmtaos"
-    echo " "
+    rmtaos
 else
     echo "os doesn't include TDengine "
 fi

 if command -v rmtaostools ;then
     echo "uninstall all components of TDeingne:rmtaostools"
-    echo " "
+    rmtaostools
 else
     echo "os doesn't include rmtaostools "
 fi

+
+cmdInstall tree
+cmdInstall wget
+
 echo "new workroom path"
 installPath="/usr/local/src/packageTest"
 oriInstallPath="/usr/local/src/packageTest/3.1"
@@ -104,6 +148,11 @@ elif [[ ${packgeName} =~ "tar" ]];then
     else
         bash ${installCmd}
     fi
+    if [[ ${packgeName} =~ "Lite" ]];then
+        cd ${installPath}
+        wget https://www.taosdata.com/assets-download/3.0/taosTools-2.1.2-Linux-x64.tar.gz
+        tar xvf taosTools-2.1.2-Linux-x64.tar.gz
+        cd taosTools-2.1.2 && bash install-taostools.sh
 fi

 # }

From 97052fccc586c42ccc95d14fc743029947988ad1 Mon Sep 17 00:00:00 2001
From: Pan YANG
Date: Mon, 29 Aug 2022 16:07:12 +0800
Subject: [PATCH 23/29] docs: change TAOS SQL to TDengine SQL

---
 docs/en/12-taos-sql/index.md                  |  4 +-
 .../13-schemaless/13-schemaless.md            | 56 +++++++++----------
 docs/en/21-tdinternal/01-arch.md              | 20 ++++---
 docs/zh/07-develop/02-model/index.mdx         |  4 +-
 .../03-insert-data/01-sql-writing.mdx         |  4 +-
 docs/zh/07-develop/04-query-data/index.mdx    |  6 +-
 docs/zh/12-taos-sql/index.md                  |  8 +--
 .../13-schemaless/13-schemaless.md            | 28 +++++-----
 docs/zh/21-tdinternal/01-arch.md              |  2 +-
 9 files changed, 69 insertions(+), 63 deletions(-)

diff --git a/docs/en/12-taos-sql/index.md b/docs/en/12-taos-sql/index.md
index e243cd2318..a5ffc9dc8d 100644
--- a/docs/en/12-taos-sql/index.md
+++ b/docs/en/12-taos-sql/index.md
@@ -1,6 +1,6 @@
 ---
 title: TDengine SQL
-description: "The syntax supported by TDengine SQL "
+description: 'The syntax supported by TDengine SQL '
 ---

 This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL. TDengine SQL has been enhanced in version 3.0, and the query engine has been rearchitected. For information about how TDengine SQL has changed, see [Changes in TDengine 3.0](../taos-sql/changes).

@@ -15,7 +15,7 @@ Syntax Specifications used in this chapter:
 - | means one of a few options, excluding | itself.
 - … means the item prior to it can be repeated multiple times.

-To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of data from electric meters. Each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
+To better demonstrate the syntax, usage and rules of TDengine SQL, hereinafter it's assumed that there is a data set of data from electric meters. Each meter collects 3 data measurements: current, voltage, phase.
The data model is shown below: ``` taos> DESCRIBE meters; diff --git a/docs/en/14-reference/13-schemaless/13-schemaless.md b/docs/en/14-reference/13-schemaless/13-schemaless.md index 816ebe0047..6de4810212 100644 --- a/docs/en/14-reference/13-schemaless/13-schemaless.md +++ b/docs/en/14-reference/13-schemaless/13-schemaless.md @@ -1,6 +1,6 @@ --- title: Schemaless Writing -description: "The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data, as it is written to the interface." +description: 'The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data, as it is written to the interface.' --- In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. Schemaless writing automatically creates storage structures for your data as it is being written to TDengine, so that you do not need to create supertables in advance. When necessary, schemaless writing @@ -25,7 +25,7 @@ where: - measurement will be used as the data table name. It will be separated from tag_set by a comma. - `tag_set` will be used as tags, with format like `=,=` Enter a space between `tag_set` and `field_set`. - `field_set`will be used as data columns, with format like `=,=` Enter a space between `field_set` and `timestamp`. -- `timestamp` is the primary key timestamp corresponding to this row of data +- `timestamp` is the primary key timestamp corresponding to this row of data All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes ("). @@ -36,14 +36,14 @@ In the schemaless writing data line protocol, each data item in the field_set ne - Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\\) in front. (All refer to the ASCII character) - Numeric types will be distinguished from data types by the suffix. -| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** | -| -------- | -------- | ------------ | -------------- | -| 1 | None or f64 | double | 8 | -| 2 | f32 | float | 4 | -| 3 | i8/u8 | TinyInt/UTinyInt | 1 | -| 4 | i16/u16 | SmallInt/USmallInt | 2 | -| 5 | i32/u32 | Int/UInt | 4 | -| 6 | i64/i/u64/u | BigInt/BigInt/UBigInt/UBigInt | 8 | +| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** | +| ----------------- | ----------- | ----------------------------- | ---------------- | +| 1 | None or f64 | double | 8 | +| 2 | f32 | float | 4 | +| 3 | i8/u8 | TinyInt/UTinyInt | 1 | +| 4 | i16/u16 | SmallInt/USmallInt | 2 | +| 5 | i32/u32 | Int/UInt | 4 | +| 6 | i64/i/u64/u | BigInt/BigInt/UBigInt/UBigInt | 8 | - `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types. @@ -61,14 +61,14 @@ Note that if the wrong case is used when describing the data type suffix, or if Schemaless writes process row data according to the following principles. -1. You can use the following rules to generate the subtable names: first, combine the measurement name and the key and value of the label into the next string: +1. 
You can use the following rules to generate the subtable names: first, combine the measurement name and the key and value of the label into the next string: ```json "measurement,tag_key1=tag_value1,tag_key2=tag_value2" ``` -Note that tag_key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol. -The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t_" is a fixed prefix that every table generated by this mapping relationship has. +Note that tag*key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol. +The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has. You can configure smlChildTableName to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpul,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored. 2. If the super table obtained by parsing the line protocol does not exist, this super table is created. @@ -82,7 +82,7 @@ You can configure smlChildTableName to specify table names, for example, `smlChi :::tip All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed -16KB. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area. +16KB. See [TDengine SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area. ::: @@ -90,23 +90,23 @@ All processing logic of schemaless will still follow TDengine's underlying restr Three specified modes are supported in the schemaless writing process, as follows: -| **Serial** | **Value** | **Description** | -| -------- | ------------------- | ------------------------------- | -| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol | -| 2 | SML_TELNET_PROTOCOL | OpenTSDB file protocol | -| 3 | SML_JSON_PROTOCOL | OpenTSDB JSON protocol | +| **Serial** | **Value** | **Description** | +| ---------- | ------------------- | ---------------------- | +| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol | +| 2 | SML_TELNET_PROTOCOL | OpenTSDB file protocol | +| 3 | SML_JSON_PROTOCOL | OpenTSDB JSON protocol | In InfluxDB line protocol mode, you must specify the precision of the input timestamp. Valid precisions are described in the following table. 
-| **No.** | **Precision** | **Description** | -| -------- | --------------------------------- | -------------- | -| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | Not defined (invalid) | -| 2 | TSDB_SML_TIMESTAMP_HOURS | Hours | -| 3 | TSDB_SML_TIMESTAMP_MINUTES | Minutes | -| 4 | TSDB_SML_TIMESTAMP_SECONDS | Seconds | -| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | Milliseconds | -| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | Microseconds | -| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | Nanoseconds | +| **No.** | **Precision** | **Description** | +| ------- | --------------------------------- | --------------------- | +| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | Not defined (invalid) | +| 2 | TSDB_SML_TIMESTAMP_HOURS | Hours | +| 3 | TSDB_SML_TIMESTAMP_MINUTES | Minutes | +| 4 | TSDB_SML_TIMESTAMP_SECONDS | Seconds | +| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | Milliseconds | +| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | Microseconds | +| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | Nanoseconds | In OpenTSDB file and JSON protocol modes, the precision of the timestamp is determined from its length in the standard OpenTSDB manner. User input is ignored. diff --git a/docs/en/21-tdinternal/01-arch.md b/docs/en/21-tdinternal/01-arch.md index 44651c0496..2f876adffc 100644 --- a/docs/en/21-tdinternal/01-arch.md +++ b/docs/en/21-tdinternal/01-arch.md @@ -12,6 +12,7 @@ The design of TDengine is based on the assumption that any hardware or software Logical structure diagram of TDengine's distributed architecture is as follows: ![TDengine Database architecture diagram](structure.webp) +
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit. @@ -38,15 +39,16 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc **Cluster external connection**: TDengine cluster can accommodate a single, multiple or even thousands of data nodes. The application only needs to initiate a connection to any data node in the cluster. The network parameter required for connection is the End Point (FQDN plus configured port number) of a data node. When starting the application taos through CLI, the FQDN of the data node can be specified through the option `-h`, and the configured port number can be specified through `-p`. If the port is not configured, the system configuration parameter “serverPort” of TDengine will be adopted. -**Inter-cluster communication**: Data nodes connect with each other through TCP/UDP. When a data node starts, it will obtain the EP information of the dnode where the mnode is located, and then establish a connection with the mnode in the system to exchange information. There are three steps to obtain EP information of the mnode: +**Inter-cluster communication**: Data nodes connect with each other through TCP/UDP. When a data node starts, it will obtain the EP information of the dnode where the mnode is located, and then establish a connection with the mnode in the system to exchange information. There are three steps to obtain EP information of the mnode: -1. Check whether the mnodeEpList file exists, if it does not exist or cannot be opened normally to obtain EP information of the mnode, skip to the second step; +1. Check whether the mnodeEpList file exists, if it does not exist or cannot be opened normally to obtain EP information of the mnode, skip to the second step; 2. Check the system configuration file taos.cfg to obtain node configuration parameters “firstEp” and “secondEp” (the node specified by these two parameters can be a normal node without mnode, in this case, the node will try to redirect to the mnode node when connected). If these two configuration parameters do not exist or do not exist in taos.cfg, or are invalid, skip to the third step; 3. Set your own EP as a mnode EP and run it independently. After obtaining the mnode EP list, the data node initiates the connection. It will successfully join the working cluster after connection. If not successful, it will try the next item in the mnode EP list. If all attempts are made, but the connection still fails, sleep for a few seconds before trying again. **The choice of MNODE**: TDengine logically has a management node, but there is no separate execution code. The server-side only has one set of execution code, taosd. So which data node will be the management node? This is determined automatically by the system without any manual intervention. The principle is as follows: when a data node starts, it will check its End Point and compare it with the obtained mnode EP List. If its EP exists in it, the data node shall start the mnode module and become a mnode. If your own EP is not in the mnode EP List, the mnode module will not start. 
During the system operation, due to load balancing, downtime and other reasons, mnode may migrate to the new dnode, totally transparently and without manual intervention. The modification of configuration parameters is the decision made by mnode itself according to resources usage. -**Add new data nodes:** After the system has a data node, it has become a working system. There are two steps to add a new node into the cluster. +**Add new data nodes:** After the system has a data node, it has become a working system. There are two steps to add a new node into the cluster. + - Step1: Connect to the existing working data node using TDengine CLI, and then add the End Point of the new data node with the command "create dnode" - Step 2: In the system configuration parameter file taos.cfg of the new data node, set the “firstEp” and “secondEp” parameters to the EP of any two data nodes in the existing cluster. Please refer to the user tutorial for detailed steps. In this way, the cluster will be established step by step. @@ -57,6 +59,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process. ![typical process of TDengine Database](message.webp) +
Figure 2: Typical process of TDengine
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs. @@ -121,16 +124,17 @@ The load balancing process does not require any manual intervention, and it is t If a database has N replicas, a virtual node group has N virtual nodes. But only one is the Leader and all others are slaves. When the application writes a new record to system, only the Leader vnode can accept the writing request. If a follower vnode receives a writing request, the system will notifies TAOSC to redirect. -### Leader vnode Writing Process +### Leader vnode Writing Process Leader Vnode uses a writing process as follows: ![TDengine Database Leader Writing Process](write_master.webp) +
Figure 3: TDengine Leader writing process
1. Leader vnode receives the application data insertion request, verifies, and moves to next step; 2. If the system configuration parameter `“walLevel”` is greater than 0, vnode will write the original request packet into database log file WAL. If walLevel is set to 2 and fsync is set to 0, TDengine will make WAL data written immediately to ensure that even system goes down, all data can be recovered from database log file; -3. If there are multiple replicas, vnode will forward data packet to follower vnodes in the same virtual node group, and the forwarded packet has a version number with data; +3. If there are multiple replicas, vnode will forward data packet to follower vnodes in the same virtual node group, and the forwarded packet has a version number with data; 4. Write into memory and add the record to “skip list”; 5. Leader vnode returns a confirmation message to the application, indicating a successful write. 6. If any of Step 2, 3 or 4 fails, the error will directly return to the application. @@ -140,6 +144,7 @@ Leader Vnode uses a writing process as follows: For a follower vnode, the write process as follows: ![TDengine Database Follower Writing Process](write_slave.webp) +
Figure 4: TDengine Follower Writing Process
1. Follower vnode receives a data insertion request forwarded by Leader vnode; @@ -212,6 +217,7 @@ When data is written to disk, the system decideswhether to compress the data bas By default, TDengine saves all data in /var/lib/taos directory, and the data files of each vnode are saved in a different directory under this directory. In order to expand the storage space, minimize the bottleneck of file reading and improve the data throughput rate, TDengine can configure the system parameter “dataDir” to allow multiple mounted hard disks to be used by system at the same time. In addition, TDengine also provides the function of tiered data storage, i.e. storage on different storage media according to the time stamps of data files. For example, the latest data is stored on SSD, the data older than a week is stored on local hard disk, and data older than four weeks is stored on network storage device. This reduces storage costs and ensures efficient data access. The movement of data on different storage media is automatically done by the system and is completely transparent to applications. Tiered storage of data is also configured through the system parameter “dataDir”. dataDir format is as follows: + ``` dataDir data_path [tier_level] ``` @@ -270,6 +276,7 @@ For the data collected by device D1001, the number of records per hour is counte TDengine creates a separate table for each data collection point, but in practical applications, it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable (super table). STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tag. There can be multiple tags which can be added, deleted and modified at any time. Applications can aggregate or statistically operate on all or a subset of tables under a STABLE by specifying tag filters. This greatly simplifies the development of applications. The process is shown in the following figure: ![TDengine Database Diagram of multi-table aggregation query](multi_tables.webp) +
Figure 5: Diagram of multi-table aggregation query
1. Application sends a query condition to system; @@ -279,9 +286,8 @@ TDengine creates a separate table for each data collection point, but in practic 5. Each vnode first finds the set of tables within its own node that meet the tag filters from memory, then scans the stored time-series data, completes corresponding aggregation calculations, and returns result to TAOSC; 6. TAOSC finally aggregates the results returned by multiple data nodes and send them back to application. -Since TDengine stores tag data and time-series data separately in vnode, by filtering tag data in memory, the set of tables that need to participate in aggregation operation is first found, which reduces the volume of data to be scanned and improves aggregation speed. At the same time, because the data is distributed in multiple vnodes/dnodes, the aggregation operation is carried out concurrently in multiple vnodes, which further improves the aggregation speed. Aggregation functions for ordinary tables and most operations are applicable to STables. The syntax is exactly the same. Please see TAOS SQL for details. +Since TDengine stores tag data and time-series data separately in vnode, by filtering tag data in memory, the set of tables that need to participate in aggregation operation is first found, which reduces the volume of data to be scanned and improves aggregation speed. At the same time, because the data is distributed in multiple vnodes/dnodes, the aggregation operation is carried out concurrently in multiple vnodes, which further improves the aggregation speed. Aggregation functions for ordinary tables and most operations are applicable to STables. The syntax is exactly the same. Please see TDengine SQL for details. ### Precomputation In order to effectively improve the performance of query processing, based-on the unchangeable feature of IoT data, statistical information of data stored in data block is recorded in the head of data block, including max value, min value, and sum. We call it a precomputing unit. If the query processing involves all the data of a whole data block, the pre-calculated results are directly used, and no need to read the data block contents at all. Since the amount of pre-calculated data is much smaller than the actual size of data block stored on disk, for query processing with disk IO as bottleneck, the use of pre-calculated results can greatly reduce the pressure of reading IO and accelerate the query process. The precomputation mechanism is similar to the BRIN (Block Range Index) of PostgreSQL. 
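To ground the precomputation note above, here is a minimal illustrative sketch; it is not part of the patch, and the database `power` with subtable `d1001` is simply the meters example used elsewhere in these docs:

```bash
# Whole-block aggregates such as max/min/sum can be served from the
# precomputed per-block statistics, so fully covered blocks need not be
# read from disk. Names and the time range are example values only.
taos -s "SELECT MAX(current), MIN(current), SUM(current) FROM power.d1001 WHERE ts >= '2022-08-01 00:00:00' AND ts < '2022-08-02 00:00:00';"
```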
- diff --git a/docs/zh/07-develop/02-model/index.mdx b/docs/zh/07-develop/02-model/index.mdx index 634c8a98d4..d66059c2cd 100644 --- a/docs/zh/07-develop/02-model/index.mdx +++ b/docs/zh/07-develop/02-model/index.mdx @@ -41,7 +41,7 @@ USE power; CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int); ``` -与创建普通表一样,创建超级表时,需要提供表名(示例中为 meters),表结构 Schema,即数据列的定义。第一列必须为时间戳(示例中为 ts),其他列为采集的物理量(示例中为 current, voltage, phase),数据类型可以为整型、浮点型、字符串等。除此之外,还需要提供标签的 schema (示例中为 location, groupId),标签的数据类型可以为整型、浮点型、字符串等。采集点的静态属性往往可以作为标签,比如采集点的地理位置、设备型号、设备组 ID、管理员 ID 等等。标签的 schema 可以事后增加、删除、修改。具体定义以及细节请见 [TAOS SQL 的超级表管理](/taos-sql/stable) 章节。 +与创建普通表一样,创建超级表时,需要提供表名(示例中为 meters),表结构 Schema,即数据列的定义。第一列必须为时间戳(示例中为 ts),其他列为采集的物理量(示例中为 current, voltage, phase),数据类型可以为整型、浮点型、字符串等。除此之外,还需要提供标签的 schema (示例中为 location, groupId),标签的数据类型可以为整型、浮点型、字符串等。采集点的静态属性往往可以作为标签,比如采集点的地理位置、设备型号、设备组 ID、管理员 ID 等等。标签的 schema 可以事后增加、删除、修改。具体定义以及细节请见 [TDengine SQL 的超级表管理](/taos-sql/stable) 章节。 每一种类型的数据采集点需要建立一个超级表,因此一个物联网系统,往往会有多个超级表。对于电网,我们就需要对智能电表、变压器、母线、开关等都建立一个超级表。在物联网中,一个设备就可能有多个数据采集点(比如一台风力发电的风机,有的采集点采集电流、电压等电参数,有的采集点采集温度、湿度、风向等环境参数),这个时候,对这一类型的设备,需要建立多张超级表。 @@ -55,7 +55,7 @@ TDengine 对每个数据采集点需要独立建表。与标准的关系型数 CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2); ``` -其中 d1001 是表名,meters 是超级表的表名,后面紧跟标签 Location 的具体标签值 "California.SanFrancisco",标签 groupId 的具体标签值 2。虽然在创建表时,需要指定标签值,但可以事后修改。详细细则请见 [TAOS SQL 的表管理](/taos-sql/table) 章节。 +其中 d1001 是表名,meters 是超级表的表名,后面紧跟标签 Location 的具体标签值 "California.SanFrancisco",标签 groupId 的具体标签值 2。虽然在创建表时,需要指定标签值,但可以事后修改。详细细则请见 [TDengine SQL 的表管理](/taos-sql/table) 章节。 TDengine 建议将数据采集点的全局唯一 ID 作为表名(比如设备序列号)。但对于有的场景,并没有唯一的 ID,可以将多个 ID 组合成一个唯一的 ID。不建议将具有唯一性的 ID 作为标签值。 diff --git a/docs/zh/07-develop/03-insert-data/01-sql-writing.mdx b/docs/zh/07-develop/03-insert-data/01-sql-writing.mdx index 2920fa35a4..8818eaae3d 100644 --- a/docs/zh/07-develop/03-insert-data/01-sql-writing.mdx +++ b/docs/zh/07-develop/03-insert-data/01-sql-writing.mdx @@ -26,6 +26,7 @@ import PhpStmt from "./_php_stmt.mdx"; 应用通过连接器执行 INSERT 语句来插入数据,用户还可以通过 TDengine CLI,手动输入 INSERT 语句插入数据。 ### 一次写入一条 + 下面这条 INSERT 就将一条记录写入到表 d1001 中: ```sql @@ -48,7 +49,7 @@ TDengine 也支持一次向多个表写入数据,比如下面这条命令就 INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31); ``` -详细的 SQL INSERT 语法规则参考 [TAOS SQL 的数据写入](/taos-sql/insert)。 +详细的 SQL INSERT 语法规则参考 [TDengine SQL 的数据写入](/taos-sql/insert)。 :::info @@ -134,4 +135,3 @@ TDengine 也提供了支持参数绑定的 Prepare API,与 MySQL 类似,这
- diff --git a/docs/zh/07-develop/04-query-data/index.mdx b/docs/zh/07-develop/04-query-data/index.mdx index 92cb1906d9..d6156c8a59 100644 --- a/docs/zh/07-develop/04-query-data/index.mdx +++ b/docs/zh/07-develop/04-query-data/index.mdx @@ -44,7 +44,7 @@ Query OK, 2 row(s) in set (0.001100s) 为满足物联网场景的需求,TDengine 支持几个特殊的函数,比如 twa(时间加权平均),spread (最大值与最小值的差),last_row(最后一条记录)等,更多与物联网场景相关的函数将添加进来。 -具体的查询语法请看 [TAOS SQL 的数据查询](../../taos-sql/select) 章节。 +具体的查询语法请看 [TDengine SQL 的数据查询](../../taos-sql/select) 章节。 ## 多表聚合查询 @@ -75,7 +75,7 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2; Query OK, 1 row(s) in set (0.002136s) ``` -在 [TAOS SQL 的数据查询](../../taos-sql/select) 一章,查询类操作都会注明是否支持超级表。 +在 [TDengine SQL 的数据查询](../../taos-sql/select) 一章,查询类操作都会注明是否支持超级表。 ## 降采样查询、插值 @@ -123,7 +123,7 @@ Query OK, 6 rows in database (0.005515s) 如果一个时间间隔里,没有采集的数据,TDengine 还提供插值计算的功能。 -语法规则细节请见 [TAOS SQL 的按时间窗口切分聚合](../../taos-sql/distinguished) 章节。 +语法规则细节请见 [TDengine SQL 的按时间窗口切分聚合](../../taos-sql/distinguished) 章节。 ## 示例代码 diff --git a/docs/zh/12-taos-sql/index.md b/docs/zh/12-taos-sql/index.md index 821679551c..739d26b224 100644 --- a/docs/zh/12-taos-sql/index.md +++ b/docs/zh/12-taos-sql/index.md @@ -1,11 +1,11 @@ --- -title: TAOS SQL -description: "TAOS SQL 支持的语法规则、主要查询功能、支持的 SQL 查询函数,以及常用技巧等内容" +title: TDengine SQL +description: 'TDengine SQL 支持的语法规则、主要查询功能、支持的 SQL 查询函数,以及常用技巧等内容' --- -本文档说明 TAOS SQL 支持的语法规则、主要查询功能、支持的 SQL 查询函数,以及常用技巧等内容。阅读本文档需要读者具有基本的 SQL 语言的基础。TDengine 3.0 版本相比 2.x 版本做了大量改进和优化,特别是查询引擎进行了彻底的重构,因此 SQL 语法相比 2.x 版本有很多变更。详细的变更内容请见 [3.0 版本语法变更](/taos-sql/changes) 章节 +本文档说明 TDengine SQL 支持的语法规则、主要查询功能、支持的 SQL 查询函数,以及常用技巧等内容。阅读本文档需要读者具有基本的 SQL 语言的基础。TDengine 3.0 版本相比 2.x 版本做了大量改进和优化,特别是查询引擎进行了彻底的重构,因此 SQL 语法相比 2.x 版本有很多变更。详细的变更内容请见 [3.0 版本语法变更](/taos-sql/changes) 章节 -TAOS SQL 是用户对 TDengine 进行数据写入和查询的主要工具。TAOS SQL 提供标准的 SQL 语法,并针对时序数据和业务的特点优化和新增了许多语法和功能。TAOS SQL 语句的最大长度为 1M。TAOS SQL 不支持关键字的缩写,例如 DELETE 不能缩写为 DEL。 +TDengine SQL 是用户对 TDengine 进行数据写入和查询的主要工具。TDengine SQL 提供标准的 SQL 语法,并针对时序数据和业务的特点优化和新增了许多语法和功能。TDengine SQL 语句的最大长度为 1M。TDengine SQL 不支持关键字的缩写,例如 DELETE 不能缩写为 DEL。 本章节 SQL 语法遵循如下约定: diff --git a/docs/zh/14-reference/13-schemaless/13-schemaless.md b/docs/zh/14-reference/13-schemaless/13-schemaless.md index ae4280e26a..6c40a6ecb1 100644 --- a/docs/zh/14-reference/13-schemaless/13-schemaless.md +++ b/docs/zh/14-reference/13-schemaless/13-schemaless.md @@ -3,7 +3,7 @@ title: Schemaless 写入 description: 'Schemaless 写入方式,可以免于预先创建超级表/子表的步骤,随着数据写入接口能够自动创建与数据对应的存储结构' --- -在物联网应用中,常会采集比较多的数据项,用于实现智能控制、业务分析、设备监控等。由于应用逻辑的版本升级,或者设备自身的硬件调整等原因,数据采集项就有可能比较频繁地出现变动。为了在这种情况下方便地完成数据记录工作,TDengine提供调用 Schemaless 写入方式,可以免于预先创建超级表/子表的步骤,随着数据写入接口能够自动创建与数据对应的存储结构。并且在必要时,Schemaless +在物联网应用中,常会采集比较多的数据项,用于实现智能控制、业务分析、设备监控等。由于应用逻辑的版本升级,或者设备自身的硬件调整等原因,数据采集项就有可能比较频繁地出现变动。为了在这种情况下方便地完成数据记录工作,TDengine 提供调用 Schemaless 写入方式,可以免于预先创建超级表/子表的步骤,随着数据写入接口能够自动创建与数据对应的存储结构。并且在必要时,Schemaless 将自动增加必要的数据列,保证用户写入的数据可以被正确存储。 无模式写入方式建立的超级表及其对应的子表与通过 SQL 直接建立的超级表和子表完全没有区别,你也可以通过,SQL 语句直接向其中写入数据。需要注意的是,通过无模式写入方式建立的表,其表名是基于标签值按照固定的映射规则生成,所以无法明确地进行表意,缺乏可读性。 @@ -36,14 +36,14 @@ tag_set 中的所有的数据自动转化为 nchar 数据类型,并不需要 - 对空格、等号(=)、逗号(,)、双引号("),前面需要使用反斜杠(\)进行转义。(都指的是英文半角符号) - 数值类型将通过后缀来区分数据类型: -| **序号** | **后缀** | **映射类型** | **大小(字节)** | -| -------- | -------- | ------------ | -------------- | -| 1 | 无或 f64 | double | 8 | -| 2 | f32 | float | 4 | -| 3 | i8/u8 | TinyInt/UTinyInt | 1 | -| 4 | i16/u16 | SmallInt/USmallInt | 2 | -| 5 | i32/u32 | Int/UInt | 4 | -| 6 | i64/i/u64/u | BigInt/BigInt/UBigInt/UBigInt | 8 | +| **序号** | 
**后缀** | **映射类型** | **大小(字节)** | +| -------- | ----------- | ----------------------------- | -------------- | +| 1 | 无或 f64 | double | 8 | +| 2 | f32 | float | 4 | +| 3 | i8/u8 | TinyInt/UTinyInt | 1 | +| 4 | i16/u16 | SmallInt/USmallInt | 2 | +| 5 | i32/u32 | Int/UInt | 4 | +| 6 | i64/i/u64/u | BigInt/BigInt/UBigInt/UBigInt | 8 | - t, T, true, True, TRUE, f, F, false, False 将直接作为 BOOL 型来处理。 @@ -67,9 +67,9 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000 "measurement,tag_key1=tag_value1,tag_key2=tag_value2" ``` -需要注意的是,这里的 tag_key1, tag_key2 并不是用户输入的标签的原始顺序,而是使用了标签名称按照字符串升序排列后的结果。所以,tag_key1 并不是在行协议中输入的第一个标签。 -排列完成以后计算该字符串的 MD5 散列值 "md5_val"。然后将计算的结果与字符串组合生成表名:“t_md5_val”。其中的 “t_” 是固定的前缀,每个通过该映射关系自动生成的表都具有该前缀。 -为了让用户可以指定生成的表名,可以通过配置smlChildTableName来指定(比如 配置smlChildTableName=tname 插入数据为st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为cpu1,注意如果多行数据tname相同,但是后面的tag_set不同,则使用第一次自动建表时指定的tag_set,其他的会忽略)。 +需要注意的是,这里的 tag*key1, tag_key2 并不是用户输入的标签的原始顺序,而是使用了标签名称按照字符串升序排列后的结果。所以,tag_key1 并不是在行协议中输入的第一个标签。 +排列完成以后计算该字符串的 MD5 散列值 "md5_val"。然后将计算的结果与字符串组合生成表名:“t_md5_val”。其中的 “t*” 是固定的前缀,每个通过该映射关系自动生成的表都具有该前缀。 +为了让用户可以指定生成的表名,可以通过配置 smlChildTableName 来指定(比如 配置 smlChildTableName=tname 插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为 cpu1,注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一次自动建表时指定的 tag_set,其他的会忽略)。 2. 如果解析行协议获得的超级表不存在,则会创建这个超级表(不建议手动创建超级表,不然插入数据可能异常)。 3. 如果解析行协议获得子表不存在,则 Schemaless 会按照步骤 1 或 2 确定的子表名来创建子表。 @@ -78,11 +78,11 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000 NULL。 6. 对 BINARY 或 NCHAR 列,如果数据行中所提供值的长度超出了列类型的限制,自动增加该列允许存储的字符长度上限(只增不减),以保证数据的完整保存。 7. 整个处理过程中遇到的错误会中断写入过程,并返回错误代码。 -8. 为了提高写入的效率,默认假设同一个超级表中field_set的顺序是一样的(第一条数据包含所有的field,后面的数据按照这个顺序),如果顺序不一样,需要配置参数smlDataFormat为false,否则,数据写入按照相同顺序写入,库中数据会异常。 +8. 为了提高写入的效率,默认假设同一个超级表中 field_set 的顺序是一样的(第一条数据包含所有的 field,后面的数据按照这个顺序),如果顺序不一样,需要配置参数 smlDataFormat 为 false,否则,数据写入按照相同顺序写入,库中数据会异常。 :::tip 无模式所有的处理逻辑,仍会遵循 TDengine 对数据结构的底层限制,例如每行数据的总长度不能超过 -16KB。这方面的具体限制约束请参见 [TAOS SQL 边界限制](/taos-sql/limit) +16KB。这方面的具体限制约束请参见 [TDengine SQL 边界限制](/taos-sql/limit) ::: diff --git a/docs/zh/21-tdinternal/01-arch.md b/docs/zh/21-tdinternal/01-arch.md index d74366d129..704524fd21 100644 --- a/docs/zh/21-tdinternal/01-arch.md +++ b/docs/zh/21-tdinternal/01-arch.md @@ -288,7 +288,7 @@ TDengine 对每个数据采集点单独建表,但在实际应用中经常需 7. vnode 返回本节点的查询计算结果; 8. 
qnode 完成多节点数据聚合后将最终查询结果返回给客户端;

-由于 TDengine 在 vnode 内将标签数据与时序数据分离存储,通过在内存里过滤标签数据,先找到需要参与聚合操作的表的集合,将需要扫描的数据集大幅减少,大幅提升聚合计算速度。同时,由于数据分布在多个 vnode/dnode,聚合计算操作在多个 vnode 里并发进行,又进一步提升了聚合的速度。 对普通表的聚合函数以及绝大部分操作都适用于超级表,语法完全一样,细节请看 TAOS SQL。
+由于 TDengine 在 vnode 内将标签数据与时序数据分离存储,通过在内存里过滤标签数据,先找到需要参与聚合操作的表的集合,将需要扫描的数据集大幅减少,大幅提升聚合计算速度。同时,由于数据分布在多个 vnode/dnode,聚合计算操作在多个 vnode 里并发进行,又进一步提升了聚合的速度。 对普通表的聚合函数以及绝大部分操作都适用于超级表,语法完全一样,细节请看 TDengine SQL。

 ### 预计算

From 6b46ac5f54f3ddad2f039386a5237492cc13ad1e Mon Sep 17 00:00:00 2001
From: tomchon
Date: Mon, 29 Aug 2022 16:11:33 +0800
Subject: [PATCH 24/29] test: add checkpackages scripts

---
 packaging/MPtestJenkinsfile | 11 +++--------
 packaging/testpackage.sh    |  5 +++--
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/packaging/MPtestJenkinsfile b/packaging/MPtestJenkinsfile
index 0782038c03..5e259d56be 100644
--- a/packaging/MPtestJenkinsfile
+++ b/packaging/MPtestJenkinsfile
@@ -1,8 +1,7 @@
 def sync_source(branch_name) {
     sh '''
     hostname
-    IP
-    env
+    ip addr|grep 192|awk '{print $2}'|sed "s/\\/.*//"
     echo ''' + branch_name + '''
     '''
     sh '''
@@ -73,7 +72,7 @@ pipeline {
        BRANCH_NAME = '3.0'

        TD_SERVER_TAR = "TDengine-server-${version}-Linux-x64.tar.gz"
-       BASE_TD_SERVER_TAR = "TDengine-server-${baseVersion}-arm64-x64.tar.gz"
+       BASE_TD_SERVER_TAR = "TDengine-server-${baseVersion}-Linux-x64.tar.gz"

        TD_SERVER_ARM_TAR = "TDengine-server-${version}-Linux-arm64.tar.gz"
        BASE_TD_SERVER_ARM_TAR = "TDengine-server-${baseVersion}-Linux-arm64.tar.gz"
@@ -82,7 +81,7 @@ pipeline {
        BASE_TD_SERVER_LITE_TAR = "TDengine-server-${baseVersion}-Linux-x64-Lite.tar.gz"

        TD_CLIENT_TAR = "TDengine-client-${version}-Linux-x64.tar.gz"
-       BASE_TD_CLIENT_TAR = "TDengine-client-${baseVersion}-arm64-x64.tar.gz"
+       BASE_TD_CLIENT_TAR = "TDengine-client-${baseVersion}-Linux-x64.tar.gz"

        TD_CLIENT_ARM_TAR = "TDengine-client-${version}-Linux-arm64.tar.gz"
        BASE_TD_CLIENT_ARM_TAR = "TDengine-client-${baseVersion}-Linux-arm64.tar.gz"
@@ -114,7 +113,6 @@ pipeline {
                        cd ${TDENGINE_ROOT_DIR}/packaging
                        bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
                        python3 checkPackageRuning.py
-                       rmtaos
                        '''
                        sh '''
                        cd ${TDENGINE_ROOT_DIR}/packaging
@@ -138,7 +136,6 @@ pipeline {
                        cd ${TDENGINE_ROOT_DIR}/packaging
                        bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
                        python3 checkPackageRuning.py
-                       rmtaos
                        '''
                        sh '''
                        cd ${TDENGINE_ROOT_DIR}/packaging
@@ -162,12 +159,10 @@ pipeline {
                        cd ${TDENGINE_ROOT_DIR}/packaging
                        bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
                        python3 checkPackageRuning.py
-                       rmtaos
                        '''
                        sh '''
                        cd ${TDENGINE_ROOT_DIR}/packaging
                        bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
-                       bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
                        python3 checkPackageRuning.py
                        '''
                        sh '''
diff --git a/packaging/testpackage.sh b/packaging/testpackage.sh
index 3ca31ea7a4..148aabc091 100755
--- a/packaging/testpackage.sh
+++ b/packaging/testpackage.sh
@@ -55,9 +55,9 @@ if command -v ${comd} ;then
   echo "${comd} is already installed"
 else
   if command -v apt ;then
-    apt-get install ${comd}
+    apt-get install ${comd} -y
   elif command -v yum ;then
-    yum install ${comd}
+    yum install ${comd} - y
   else
     echo "you should install ${comd} manually"
   fi
@@ -148,6 +148,7 @@ elif [[ ${packgeName} =~ "tar" ]];then
     if [[ ${packgeName} =~ "Lite" ]];then
        cd ${installPath}
        wget https://www.taosdata.com/assets-download/3.0/taosTools-2.1.2-Linux-x64.tar.gz
        tar xvf
taosTools-2.1.2-Linux-x64.tar.gz
        cd taosTools-2.1.2 && bash install-taostools.sh
+    fi
 fi

 # }

From df7daebf4264318936f160473283e432290565a3 Mon Sep 17 00:00:00 2001
From: tomchon
Date: Mon, 29 Aug 2022 16:25:50 +0800
Subject: [PATCH 25/29] test: add checkpackages scripts

---
 packaging/testpackage.sh | 35 ++---------------------------------
 1 file changed, 2 insertions(+), 33 deletions(-)

diff --git a/packaging/testpackage.sh b/packaging/testpackage.sh
index 148aabc091..669b0c9e1e 100755
--- a/packaging/testpackage.sh
+++ b/packaging/testpackage.sh
@@ -1,32 +1,5 @@
 #!/bin/sh
-# # ============================= get input parameters =================================================

-# # install.sh -v [server | client] -e [yes | no] -i [systemd | service | ...]

-# # set parameters by default value
-# interactiveFqdn=yes # [yes | no]
-# verType=server # [server | client]
-# initType=systemd # [systemd | service | ...]

-# while getopts "hv:d:" arg
-# do
-# case $arg in
-# d)
-# #echo "interactiveFqdn=$OPTARG"
-# script_dir=$( echo $OPTARG )
-# ;;
-# h)
-# echo "Usage: `basename $0` -d script_path"
-# exit 0
-# ;;
-# ?) #unknown option
-# echo "unknown argument"
-# exit 1
-# ;;
-# esac
-# done
-# echo "Download package"
 packgeName=$1
 version=$2
@@ -57,8 +30,8 @@ else
   if command -v apt ;then
     apt-get install ${comd} -y
   elif command -v yum ;then
-    yum install ${comd} - y
+    yum -y install ${comd}
   else
     echo "you should install ${comd} manually"
   fi
 fi
 }
@@ -153,9 +126,5 @@ elif [[ ${packgeName} =~ "tar" ]];then
     cd taosTools-2.1.2 && bash install-taostools.sh
   fi
-fi
-# }
-
-# installPkgAndCheckFile
+fi

From 3b83cf9d9771774d4ac2e873e7cc906a16ac0cda Mon Sep 17 00:00:00 2001
From: Pan YANG
Date: Mon, 29 Aug 2022 16:36:17 +0800
Subject: [PATCH 26/29] docs: fix errors by auto formatter of markdown

---
 docs/en/14-reference/13-schemaless/13-schemaless.md | 4 ++--
 docs/zh/14-reference/13-schemaless/13-schemaless.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/en/14-reference/13-schemaless/13-schemaless.md b/docs/en/14-reference/13-schemaless/13-schemaless.md
index 6de4810212..4f50c38cbb 100644
--- a/docs/en/14-reference/13-schemaless/13-schemaless.md
+++ b/docs/en/14-reference/13-schemaless/13-schemaless.md
@@ -67,8 +67,8 @@ Schemaless writes process row data according to the following principles.
 "measurement,tag_key1=tag_value1,tag_key2=tag_value2"
 ```

-Note that tag*key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol.
-The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has.
+Note that tag_key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol.
+The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t_" is a fixed prefix that every table generated by this mapping relationship has.
 You can configure smlChildTableName to specify table names, for example, `smlChildTableName=tname`.
You can insert `st,tname=cpul,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored. 2. If the super table obtained by parsing the line protocol does not exist, this super table is created. diff --git a/docs/zh/14-reference/13-schemaless/13-schemaless.md b/docs/zh/14-reference/13-schemaless/13-schemaless.md index 6c40a6ecb1..a33abafaf8 100644 --- a/docs/zh/14-reference/13-schemaless/13-schemaless.md +++ b/docs/zh/14-reference/13-schemaless/13-schemaless.md @@ -67,8 +67,8 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000 "measurement,tag_key1=tag_value1,tag_key2=tag_value2" ``` -需要注意的是,这里的 tag*key1, tag_key2 并不是用户输入的标签的原始顺序,而是使用了标签名称按照字符串升序排列后的结果。所以,tag_key1 并不是在行协议中输入的第一个标签。 -排列完成以后计算该字符串的 MD5 散列值 "md5_val"。然后将计算的结果与字符串组合生成表名:“t_md5_val”。其中的 “t*” 是固定的前缀,每个通过该映射关系自动生成的表都具有该前缀。 +需要注意的是,这里的 tag_key1, tag_key2 并不是用户输入的标签的原始顺序,而是使用了标签名称按照字符串升序排列后的结果。所以,tag_key1 并不是在行协议中输入的第一个标签。 +排列完成以后计算该字符串的 MD5 散列值 "md5_val"。然后将计算的结果与字符串组合生成表名:“t_md5_val”。其中的 “t_” 是固定的前缀,每个通过该映射关系自动生成的表都具有该前缀。 为了让用户可以指定生成的表名,可以通过配置 smlChildTableName 来指定(比如 配置 smlChildTableName=tname 插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为 cpu1,注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一次自动建表时指定的 tag_set,其他的会忽略)。 2. 如果解析行协议获得的超级表不存在,则会创建这个超级表(不建议手动创建超级表,不然插入数据可能异常)。 From c563948c8eabc2db69ef835af9cd9449afab9fe0 Mon Sep 17 00:00:00 2001 From: gccgdb1234 Date: Mon, 29 Aug 2022 17:34:44 +0800 Subject: [PATCH 27/29] doc: correct taosdata.com to tdengine.com --- docs/en/04-concept/index.md | 2 +- docs/en/07-develop/01-connect/index.md | 2 +- docs/en/10-deployment/01-deploy.md | 16 ++++++++-------- docs/en/14-reference/06-taosdump.md | 1 - docs/en/14-reference/07-tdinsight/index.md | 2 +- docs/en/20-third-party/09-emq-broker.md | 6 ++---- docs/en/25-application/03-immigrate.md | 6 +++--- 7 files changed, 16 insertions(+), 19 deletions(-) diff --git a/docs/en/04-concept/index.md b/docs/en/04-concept/index.md index 5a9c55fdd6..b0a0c25d85 100644 --- a/docs/en/04-concept/index.md +++ b/docs/en/04-concept/index.md @@ -162,7 +162,7 @@ To better understand the data model using metri, tags, super table and subtable, ## Database -A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For e.g. days to keep, number of replicas, data block size, whether data updates are allowed and other configurable parameters would be determined by the characteristics of your data and your business requirements. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases. +A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. The [characteristics of time-series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/) from different data collection points may be different. 
Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For e.g. days to keep, number of replicas, data block size, whether data updates are allowed and other configurable parameters would be determined by the characteristics of your data and your business requirements. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases. In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database. diff --git a/docs/en/07-develop/01-connect/index.md b/docs/en/07-develop/01-connect/index.md index 2053706421..901fe69d24 100644 --- a/docs/en/07-develop/01-connect/index.md +++ b/docs/en/07-develop/01-connect/index.md @@ -279,6 +279,6 @@ Prior to establishing connection, please make sure TDengine is already running a :::tip -If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](https://docs.taosdata.com/train-faq/faq). +If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](https://docs.tdengine.com/train-faq/faq). ::: diff --git a/docs/en/10-deployment/01-deploy.md b/docs/en/10-deployment/01-deploy.md index a445b684dc..5dfcd3108d 100644 --- a/docs/en/10-deployment/01-deploy.md +++ b/docs/en/10-deployment/01-deploy.md @@ -39,18 +39,18 @@ To get the hostname on any host, the command `hostname -f` can be executed. On the physical machine running the application, ping the dnode that is running taosd. If the dnode is not accessible, the application cannot connect to taosd. In this case, verify the DNS and hosts settings on the physical node running the application. -The end point of each dnode is the output hostname and port, such as h1.taosdata.com:6030. +The end point of each dnode is the output hostname and port, such as h1.tdengine.com:6030. ### Step 5 -Modify the TDengine configuration file `/etc/taos/taos.cfg` on each node. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following. +Modify the TDengine configuration file `/etc/taos/taos.cfg` on each node. Assuming the first dnode of TDengine cluster is "h1.tdengine.com:6030", its `taos.cfg` is configured as following. ```c // firstEp is the end point to connect to when any dnode starts -firstEp h1.taosdata.com:6030 +firstEp h1.tdengine.com:6030 // must be configured to the FQDN of the host where the dnode is launched -fqdn h1.taosdata.com +fqdn h1.tdengine.com // the port used by the dnode, default is 6030 serverPort 6030 @@ -76,13 +76,13 @@ The first dnode can be started following the instructions in [Get Started](/get- taos> show dnodes; id | endpoint | vnodes | support_vnodes | status | create_time | note | ============================================================================================================================================ -1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | | +1 | h1.tdengine.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | | Query OK, 1 rows affected (0.007984s) ``` -From the above output, it is shown that the end point of the started dnode is "h1.taosdata.com:6030", which is the `firstEp` of the cluster. 
+From the above output, it is shown that the end point of the started dnode is "h1.tdengine.com:6030", which is the `firstEp` of the cluster. ## Add DNODE @@ -90,7 +90,7 @@ There are a few steps necessary to add other dnodes in the cluster. Second, we can start `taosd` as instructed in [Get Started](/get-started/). -Then, on the first dnode i.e. h1.taosdata.com in our example, use TDengine CLI `taos` to execute the following command: +Then, on the first dnode i.e. h1.tdengine.com in our example, use TDengine CLI `taos` to execute the following command: ```sql CREATE DNODE "h2.taos.com:6030"; @@ -98,7 +98,7 @@ CREATE DNODE "h2.taos.com:6030"; This adds the end point of the new dnode (from Step 4) into the end point list of the cluster. In the command "fqdn:port" should be quoted using double quotes. Change `"h2.taos.com:6030"` to the end point of your new dnode. -Then on the first dnode h1.taosdata.com, execute `show dnodes` in `taos` +Then on the first dnode h1.tdengine.com, execute `show dnodes` in `taos` ```sql SHOW DNODES; diff --git a/docs/en/14-reference/06-taosdump.md b/docs/en/14-reference/06-taosdump.md index 2105ba83fa..e73441a96b 100644 --- a/docs/en/14-reference/06-taosdump.md +++ b/docs/en/14-reference/06-taosdump.md @@ -116,5 +116,4 @@ Usage: taosdump [OPTION...] dbname [tbname ...] Mandatory or optional arguments to long options are also mandatory or optional for any corresponding short options. -Report bugs to . ``` diff --git a/docs/en/14-reference/07-tdinsight/index.md b/docs/en/14-reference/07-tdinsight/index.md index e74c9de7b2..2e56203525 100644 --- a/docs/en/14-reference/07-tdinsight/index.md +++ b/docs/en/14-reference/07-tdinsight/index.md @@ -263,7 +263,7 @@ Once the import is complete, the full page view of TDinsight is shown below. ## TDinsight dashboard details -The TDinsight dashboard is designed to provide the usage and status of TDengine-related resources [dnodes, mnodes, vnodes](https://www.taosdata.com/cn/documentation/architecture#cluster) or databases. +The TDinsight dashboard is designed to provide the usage and status of TDengine-related resources [dnodes, mnodes, vnodes](../../taos-sql/node/) or databases. Details of the metrics are as follows. diff --git a/docs/en/20-third-party/09-emq-broker.md b/docs/en/20-third-party/09-emq-broker.md index 0900dd3d75..2ead1bbaf4 100644 --- a/docs/en/20-third-party/09-emq-broker.md +++ b/docs/en/20-third-party/09-emq-broker.md @@ -9,7 +9,7 @@ MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emq The following preparations are required for EMQX to add TDengine data sources correctly. - The TDengine cluster is deployed and working properly -- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details. +- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](../../reference/taosadapter) for details. - If you use the emulated writers described later, you need to install the appropriate version of Node.js. V12 is recommended. 
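Before wiring EMQX to TDengine, it can help to check the second prerequisite directly. A quick sanity probe might look like the sketch below (localhost and the default root/taosdata credentials are assumptions; 6041 is taosAdapter's default REST port):

```bash
# Verify that taosAdapter answers REST requests before configuring EMQX.
# Replace host, port, and credentials with your actual deployment values.
curl -s -u root:taosdata -d "show databases;" http://localhost:6041/rest/sql
```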
## Install and start EMQX @@ -28,8 +28,6 @@ USE test; CREATE TABLE sensor_data (ts TIMESTAMP, temperature FLOAT, humidity FLOAT, volume FLOAT, pm10 FLOAT, pm25 FLOAT, so2 FLOAT, no2 FLOAT, co FLOAT, sensor_id NCHAR(255), area TINYINT, coll_time TIMESTAMP); ``` -Note: The table schema is based on the blog [(In Chinese) Data Transfer, Storage, Presentation, EMQX + TDengine Build MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html) as an example. Subsequent operations are carried out with this blog scenario too. Please modify it according to your actual application scenario. - ## Configuring EMQX Rules Since the configuration interface of EMQX differs from version to version, here is v4.4.5 as an example. For other versions, please refer to the corresponding official documentation. @@ -137,5 +135,5 @@ Use the TDengine CLI program to log in and query the appropriate databases and t ![TDengine Database EMQX result in taos](./emqx/check-result-in-taos.webp) -Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine. +Please refer to the [TDengine official documentation](https://docs.tdengine.com/) for more details on how to use TDengine. EMQX Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX. diff --git a/docs/en/25-application/03-immigrate.md b/docs/en/25-application/03-immigrate.md index 9614574c71..5c1c92fcb3 100644 --- a/docs/en/25-application/03-immigrate.md +++ b/docs/en/25-application/03-immigrate.md @@ -41,7 +41,7 @@ The agents deployed in the application nodes are responsible for providing opera - **TDengine installation and deployment** -First of all, please install TDengine. Download the latest stable version of TDengine from the official website and install it. For help with using various installation packages, please refer to the blog ["Installation and Uninstallation of TDengine Multiple Installation Packages"](https://www.taosdata.com/blog/2019/08/09/566.html). +First of all, please install TDengine. Download the latest stable version of TDengine from the official website and install it. For help with using various installation packages, please refer to [Install TDengine](../../get-started/package) Note that once the installation is complete, do not start the `taosd` service before properly configuring the parameters. @@ -51,7 +51,7 @@ TDengine version 2.4 and later version includes `taosAdapter`. taosAdapter is a Users can flexibly deploy taosAdapter instances, based on their requirements, to improve data writing throughput and provide guarantees for data writes in different application scenarios. -Through taosAdapter, users can directly write the data collected by `collectd` or `StatsD` to TDengine to achieve easy, convenient and seamless migration in application scenarios. taosAdapter also supports Telegraf, Icinga, TCollector, and node_exporter data. For more details, please refer to [taosAdapter](/reference/taosadapter/). +Through taosAdapter, users can directly write the data collected by `collectd` or `StatsD` to TDengine to achieve easy, convenient and seamless migration in application scenarios. taosAdapter also supports Telegraf, Icinga, TCollector, and node_exporter data. For more details, please refer to [taosAdapter](../../reference/taosadapter/). 
If using collectd, modify the configuration file in its default location `/etc/collectd/collectd.conf` to point to the IP address and port of the node where to deploy taosAdapter. For example, assuming the taosAdapter IP address is 192.168.1.130 and port 6046, configure it as follows. @@ -411,7 +411,7 @@ TDengine provides a wealth of help documents to explain many aspects of cluster ### Cluster Deployment -The first is TDengine installation. Download the latest stable version of TDengine from the official website, and install it. Please refer to the blog ["Installation and Uninstallation of Various Installation Packages of TDengine"](https://www.taosdata.com/blog/2019/08/09/566.html) for the various installation package formats. +The first is TDengine installation. Download the latest stable version of TDengine from the official website, and install it. Please refer to [Install TDengine](../../get-started/pacakge) for more details. Note that once the installation is complete, do not immediately start the `taosd` service, but start it after correctly configuring the parameters. From 8b5283ceff97089630827fa3967cfa4e49001ad7 Mon Sep 17 00:00:00 2001 From: gccgdb1234 Date: Mon, 29 Aug 2022 17:45:51 +0800 Subject: [PATCH 28/29] doc: remove improper taosdata.com --- docs/en/10-deployment/05-helm.md | 2 +- docs/en/14-reference/02-rest-api/02-rest-api.mdx | 4 ++-- docs/en/14-reference/03-connector/04-java.mdx | 2 -- docs/en/14-reference/03-connector/09-csharp.mdx | 1 - docs/en/14-reference/03-connector/_preparation.mdx | 2 +- docs/en/14-reference/11-docker/index.md | 6 +++--- 6 files changed, 7 insertions(+), 10 deletions(-) diff --git a/docs/en/10-deployment/05-helm.md b/docs/en/10-deployment/05-helm.md index 302730f1b5..a4fa681000 100644 --- a/docs/en/10-deployment/05-helm.md +++ b/docs/en/10-deployment/05-helm.md @@ -152,7 +152,7 @@ clusterDomainSuffix: "" # converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`, # to a camelCase taos config variable `debugFlag`. # -# See the variable list at https://www.taosdata.com/cn/documentation/administrator . +# See the [Configuration Variables](../../reference/config) # # Note: # 1. firstEp/secondEp: should not be setted here, it's auto generated at scale-up. diff --git a/docs/en/14-reference/02-rest-api/02-rest-api.mdx b/docs/en/14-reference/02-rest-api/02-rest-api.mdx index 74ba78b7fc..ce28ee87d9 100644 --- a/docs/en/14-reference/02-rest-api/02-rest-api.mdx +++ b/docs/en/14-reference/02-rest-api/02-rest-api.mdx @@ -18,12 +18,12 @@ If the TDengine server is already installed, it can be verified as follows: The following example is in an Ubuntu environment and uses the `curl` tool to verify that the REST interface is working. Note that the `curl` tool may need to be installed in your environment. -The following example lists all databases on the host h1.taosdata.com. To use it in your environment, replace `h1.taosdata.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number. +The following example lists all databases on the host h1.tdengine.com. To use it in your environment, replace `h1.tdengine.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number. ```bash curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" \ -d "select name, ntables, status from information_schema.ins_databases;" \ - h1.taosdata.com:6041/rest/sql + h1.tdengine.com:6041/rest/sql ``` The following return value results indicate that the verification passed. 
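For reference, a passing reply to the curl call above is a JSON envelope whose `code` field is 0. The sketch below pipes the reply through a pretty-printer; the commented response shape is illustrative only, and your column metadata and rows will differ:

```bash
# Pretty-print the documented verification call; a passing response
# resembles the commented example (values are hypothetical).
curl -s -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" \
  -d "select name, ntables, status from information_schema.ins_databases;" \
  h1.tdengine.com:6041/rest/sql | python3 -m json.tool
# {"code":0,"column_meta":[["name","VARCHAR",64],["ntables","BIGINT",8],
#  ["status","VARCHAR",10]],"data":[["information_schema",16,"ready"]],"rows":1}
```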
diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx
index 0f977393f1..129d90ea85 100644
--- a/docs/en/14-reference/03-connector/04-java.mdx
+++ b/docs/en/14-reference/03-connector/04-java.mdx
@@ -133,8 +133,6 @@ The configuration parameters in the URL are as follows:
 - batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is true. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
 - batchErrorIgnore: true: if a statement fails in the middle of an executeBatch call, the remaining statements are still executed; false: no statements after the failed statement are executed. The default value is false.

-For more information about JDBC native connections, see [Video Tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).
-
 **Connect using the TDengine client driver configuration file**

 When you use a JDBC native connection to connect to a TDengine cluster, you can use the TDengine client driver configuration file to specify parameters such as `firstEp` and `secondEp` of the cluster in the configuration file as below:
diff --git a/docs/en/14-reference/03-connector/09-csharp.mdx b/docs/en/14-reference/03-connector/09-csharp.mdx
index c745b8dd1a..823e907599 100644
--- a/docs/en/14-reference/03-connector/09-csharp.mdx
+++ b/docs/en/14-reference/03-connector/09-csharp.mdx
@@ -172,7 +172,6 @@ namespace TDengineExample

 `Taos` is an ADO.NET connector for TDengine, supporting Linux and Windows platforms. Community contributor Maikebing (@maikebing) contributes the connector. Please refer to:

 * Interface download:
-* Usage notes:

 ## Frequently Asked Questions

diff --git a/docs/en/14-reference/03-connector/_preparation.mdx b/docs/en/14-reference/03-connector/_preparation.mdx
index 07ebdbca3d..c6e42ce023 100644
--- a/docs/en/14-reference/03-connector/_preparation.mdx
+++ b/docs/en/14-reference/03-connector/_preparation.mdx
@@ -2,7 +2,7 @@

 :::info

-Since the TDengine client driver is written in C, using the native connection requires loading the client driver shared library file, which is usually included in the TDengine installer. You can install either the standard TDengine server installation package or the [TDengine client installation package](/get-started/). For Windows development, you need to install the corresponding [Windows client](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client) for TDengine.
+Since the TDengine client driver is written in C, using the native connection requires loading the client driver shared library file, which is usually included in the TDengine installer. You can install either the standard TDengine server installation package or the [TDengine client installation package](/get-started/). For Windows development, you need to install the corresponding Windows client; please refer to [Install TDengine](../../get-started/package).

 - libtaos.so: After successful installation of TDengine on a Linux system, the dependent Linux version of the client driver `libtaos.so` file will be automatically linked to `/usr/lib/libtaos.so`, which is on the default library search path and does not need to be specified separately.
- taos.dll: After installing the client on Windows, the dependent Windows version of the client driver `taos.dll` file will be automatically copied to the system default search path C:/Windows/System32, again without the need to specify it separately.
diff --git a/docs/en/14-reference/11-docker/index.md b/docs/en/14-reference/11-docker/index.md
index be1d72ff9c..7cd1e810dc 100644
--- a/docs/en/14-reference/11-docker/index.md
+++ b/docs/en/14-reference/11-docker/index.md
@@ -116,7 +116,7 @@ If you want to start your application in a container, you need to add the corres
 FROM ubuntu:20.04
 RUN apt-get update && apt-get install -y wget
 ENV TDENGINE_VERSION=3.0.0.0
-RUN wget -c https://www.taosdata.com/assets-download/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+RUN wget -c https://www.taosdata.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
    && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
    && cd TDengine-client-${TDENGINE_VERSION} \
    && ./install_client.sh \
@@ -217,7 +217,7 @@ Here is the full Dockerfile:

 ```docker
 FROM golang:1.17.6-buster as builder
 ENV TDENGINE_VERSION=3.0.0.0
-RUN wget -c https://www.taosdata.com/assets-download/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+RUN wget -c https://www.taosdata.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
    && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
    && cd TDengine-client-${TDENGINE_VERSION} \
    && ./install_client.sh \
@@ -233,7 +233,7 @@ RUN go build
 FROM ubuntu:20.04
 RUN apt-get update && apt-get install -y wget
 ENV TDENGINE_VERSION=3.0.0.0
-RUN wget -c https://www.taosdata.com/assets-download/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+RUN wget -c https://www.taosdata.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
    && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
    && cd TDengine-client-${TDENGINE_VERSION} \
    && ./install_client.sh \

From 798e10d048c299497c996a117f72dfe7a0de0ea6 Mon Sep 17 00:00:00 2001
From: gccgdb1234
Date: Mon, 29 Aug 2022 17:48:43 +0800
Subject: [PATCH 29/29] doc: fix broken links

---
 docs/en/25-application/03-immigrate.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/25-application/03-immigrate.md b/docs/en/25-application/03-immigrate.md
index 5c1c92fcb3..1aabaa43e7 100644
--- a/docs/en/25-application/03-immigrate.md
+++ b/docs/en/25-application/03-immigrate.md
@@ -411,7 +411,7 @@ TDengine provides a wealth of help documents to explain many aspects of cluster

 ### Cluster Deployment

-The first is TDengine installation. Download the latest stable version of TDengine from the official website, and install it. Please refer to [Install TDengine](../../get-started/pacakge) for more details.
+The first is TDengine installation. Download the latest stable version of TDengine from the official website, and install it. Please refer to [Install TDengine](../../get-started/package) for more details.

 Note that once the installation is complete, do not immediately start the `taosd` service, but start it after correctly configuring the parameters.
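To make the final point concrete, here is a minimal pre-start sketch. The endpoint values are placeholders, and `firstEp` and `fqdn` are assumed to be the minimum settings a new cluster node needs before `taosd` is started for the first time.

```bash
# Append the minimum cluster settings before the first start of taosd
sudo tee -a /etc/taos/taos.cfg <<'EOF'
firstEp  h1.tdengine.com:6030
fqdn     h1.tdengine.com
EOF

sudo systemctl start taosd
```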