diff --git a/docs/en/01-index.md b/docs/en/01-index.md
index b5b01745da..a0b7eba510 100644
--- a/docs/en/01-index.md
+++ b/docs/en/01-index.md
@@ -13,15 +13,15 @@ TDengine greatly improves the efficiency of data ingestion, querying, and storag

If you are a developer, please read the [Developer Guide](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, and make a few changes to accommodate your application, and it will work.

-We live in the era of big data, and scale-up is unable to meet the growing needs of the business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster please refer to [Cluster Deployment](./deployment).
+We live in the era of big data, and scale-up is unable to meet the growing needs of the business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but it also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster, please refer to [Cluster Deployment](./operation/deployment).

-TDengine uses ubiquitous SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll-up, interpolation, and time-weighted average, among many others. The [SQL Reference](./taos-sql) chapter describes the SQL syntax in detail and lists the various supported commands and functions.
+TDengine uses ubiquitous SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll-up, interpolation, and time-weighted average, among many others. The [SQL Reference](./reference/taos-sql) chapter describes the SQL syntax in detail and lists the various supported commands and functions.

If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please refer to, and thoroughly read the [Administration](./operation) section.

If you want to know more about TDengine tools and the REST API, please see the [Reference](./reference) chapter.

-For information about connecting to TDengine with different programming languages, see [Client Libraries](./client-libraries/).
+For information about connecting to TDengine with different programming languages, see [Client Libraries](./reference/connectors).

If you are very interested in the internal design of TDengine, please read the chapter [Inside TDengine](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully.
diff --git a/docs/en/04-concept/index.md b/docs/en/02-concept.md similarity index 100% rename from docs/en/04-concept/index.md rename to docs/en/02-concept.md diff --git a/docs/en/02-intro/_category_.yml b/docs/en/02-intro/_category_.yml deleted file mode 100644 index a3d691e87b..0000000000 --- a/docs/en/02-intro/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Introduction diff --git a/docs/en/02-intro/index.md b/docs/en/03-intro.md similarity index 92% rename from docs/en/02-intro/index.md rename to docs/en/03-intro.md index 28b94a5236..4e0089950a 100644 --- a/docs/en/02-intro/index.md +++ b/docs/en/03-intro.md @@ -17,9 +17,9 @@ The major features are listed below: - Supports [schemaless writing](../reference/schemaless/) just like NoSQL databases. It also supports standard protocols like [InfluxDB Line](../develop/insert-data/influxdb-line), [OpenTSDB Telnet](../develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](../develop/insert-data/opentsdb-json) among others. - Supports seamless integration with third-party tools like [Telegraf](../third-party/telegraf/), [Prometheus](../third-party/prometheus/), [collectd](../third-party/collectd/), [StatsD](../third-party/statsd/), [TCollector](../third-party/tcollector/), [EMQX](../third-party/emq-broker), [HiveMQ](../third-party/hive-mq-broker), and [Icinga2](../third-party/icinga2/), they can write data into TDengine with simple configuration and without a single line of code. 2. Query data - - Supports standard [SQL](../taos-sql/), including nested query. - - Supports [time series specific functions](../taos-sql/function/#time-series-extensions) and [time series specific queries](../taos-sql/distinguished), like downsampling, interpolation, cumulated sum, time weighted average, state window, session window and many others. - - Supports [User Defined Functions (UDF)](../taos-sql/udf). + - Supports standard [SQL](../reference/taos-sql/), including nested query. 
+ - Supports [time series specific functions](../reference/taos-sql/function/#time-series-extensions) and [time series specific queries](../reference/taos-sql/distinguished), like downsampling, interpolation, cumulated sum, time weighted average, state window, session window and many others. + - Supports [User Defined Functions (UDF)](../reference/taos-sql/udf). 3. [Caching](../develop/cache/): TDengine always saves the last data point in cache, so Redis is not needed for time-series data processing. 4. [Stream Processing](../develop/stream/): Not only is the continuous query is supported, but TDengine also supports event driven stream processing, so Flink or Spark is not needed for time-series data processing. 5. [Data Subscription](../develop/tmq/): Application can subscribe a table or a set of tables. API is the same as Kafka, but you can specify filter conditions. @@ -27,18 +27,18 @@ The major features are listed below: - Supports seamless integration with [Grafana](../third-party/grafana/). - Supports seamless integration with [Google Data Studio](../third-party/google-data-studio/). 7. Cluster - - Supports [cluster](../deployment/) with the capability of increasing processing power by adding more nodes. - - Supports [deployment on Kubernetes](../deployment/k8s/). + - Supports [cluster](../operation/deployment/) with the capability of increasing processing power by adding more nodes. + - Supports [deployment on Kubernetes](../operation/deployment). - Supports high availability via data replication. 8. Administration - Provides [monitoring](../operation/monitor) on running instances of TDengine. - Provides many ways to [import](../operation/import) and [export](../operation/export) data. 9. Tools - - Provides an interactive [Command Line Interface (CLI)](../reference/taos-shell) for management, maintenance and ad-hoc queries. - - Provides a tool [taosBenchmark](../reference/taosbenchmark/) for testing the performance of TDengine. 
+ - Provides an interactive [Command Line Interface (CLI)](../reference/components/taos-shell) for management, maintenance and ad-hoc queries. + - Provides a tool [taosBenchmark](../reference/components/taosbenchmark/) for testing the performance of TDengine. 10. Programming - - Provides [client libraries](../client-libraries/) for [C/C++](../client-libraries/cpp), [Java](../client-libraries/java), [Python](../client-libraries/python), [Go](../client-libraries/go), [Rust](../client-libraries/rust), [Node.js](../client-libraries/node) and other programming languages. - - Provides a [REST API](../reference/rest-api/). + - Provides [client libraries](../reference/connectors/) for [C/C++](../reference/connectors/cpp), [Java](../reference/connectors/java), [Python](../reference/connectors/python), [Go](../reference/connectors/go), [Rust](../reference/connectors/rust), [Node.js](../reference/connectors/node) and other programming languages. + - Provides a [REST API](../reference/connectors/rest-api). For more details on features, please read through the entire documentation. diff --git a/docs/en/04-concept/_category_.yml b/docs/en/04-concept/_category_.yml deleted file mode 100644 index 12c659a926..0000000000 --- a/docs/en/04-concept/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Concepts \ No newline at end of file diff --git a/docs/en/05-get-started/01-docker.md b/docs/en/04-get-started/01-docker.md similarity index 96% rename from docs/en/05-get-started/01-docker.md rename to docs/en/04-get-started/01-docker.md index aed9e4b9f5..882e2ef194 100644 --- a/docs/en/05-get-started/01-docker.md +++ b/docs/en/04-get-started/01-docker.md @@ -93,7 +93,7 @@ This command creates the `meters` supertable in the `test` database. In the `met The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. 
On most modern servers, the deployment is created in ten to twenty seconds.

-You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).
+You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/components/taosbenchmark).

## Test data query performance

@@ -129,9 +129,9 @@ Query the average, maximum, and minimum values for table `d10` in 10 second inte
SELECT FIRST(ts), AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);
```

-In the query above you are selecting the first timestamp (ts) in the interval, another way of selecting this would be `\_wstart` which will give the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
+In the query above, you are selecting the first timestamp (ts) in the interval. Another way of selecting this value is `\_wstart`, which gives the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../reference/taos-sql/distinguished/).

## Additional Information

-For more information about deploying TDengine in a Docker environment, see [Deploying TDengine with Docker](../../deployment/docker).
+For more information about deploying TDengine in a Docker environment, see [Deploying TDengine with Docker](../../operation/deployment/#docker).
diff --git a/docs/en/05-get-started/03-package.md b/docs/en/04-get-started/03-package.md similarity index 98% rename from docs/en/05-get-started/03-package.md rename to docs/en/04-get-started/03-package.md index cbb88fa472..83559a20d1 100644 --- a/docs/en/05-get-started/03-package.md +++ b/docs/en/04-get-started/03-package.md @@ -14,7 +14,7 @@ This document describes how to install TDengine on Linux/Windows/macOS and perfo - To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker). - If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine). -The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface (CLI, taos), and some tools. Note that taosAdapter supports Linux only. In addition to client libraries for multiple languages, TDengine also provides a [REST API](../../reference/rest-api) through [taosAdapter](../../reference/taosadapter). +The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface (CLI, taos), and some tools. Note that taosAdapter supports Linux only. In addition to client libraries for multiple languages, TDengine also provides a [REST API](../../reference/connectors/rest-api) through [taosAdapter](../../reference/components/taosadapter). The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ client library. 
@@ -265,7 +265,7 @@ SELECT * FROM t;
Query OK, 2 row(s) in set (0.003128s)
```

-You can also can monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on either machines. For more information, see [TDengine CLI](../../reference/taos-shell/).
+You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on either machine. For more information, see [TDengine CLI](../../reference/components/taos-shell/).

## TDengine Graphic User Interface

@@ -287,7 +287,7 @@ This command creates the `meters` supertable in the `test` database. In the `met
The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in ten to twenty seconds.

-You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).
+You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/components/taosbenchmark).

## Test data query performance

@@ -323,4 +323,4 @@ Query the average, maximum, and minimum values for table `d10` in 10 second inte
SELECT FIRST(ts), AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);
```

-In the query above you are selecting the first timestamp (ts) in the interval, another way of selecting this would be `\_wstart` which will give the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
+In the query above, you are selecting the first timestamp (ts) in the interval. Another way of selecting this value is `\_wstart`, which gives the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../reference/taos-sql/distinguished/).
diff --git a/docs/en/05-get-started/_apt_get_install.mdx b/docs/en/04-get-started/_apt_get_install.mdx
similarity index 100%
rename from docs/en/05-get-started/_apt_get_install.mdx
rename to docs/en/04-get-started/_apt_get_install.mdx
diff --git a/docs/en/05-get-started/_pkg_install.mdx b/docs/en/04-get-started/_pkg_install.mdx
similarity index 100%
rename from docs/en/05-get-started/_pkg_install.mdx
rename to docs/en/04-get-started/_pkg_install.mdx
diff --git a/docs/en/05-get-started/discord.svg b/docs/en/04-get-started/discord.svg
similarity index 100%
rename from docs/en/05-get-started/discord.svg
rename to docs/en/04-get-started/discord.svg
diff --git a/docs/en/05-get-started/github.svg b/docs/en/04-get-started/github.svg
similarity index 100%
rename from docs/en/05-get-started/github.svg
rename to docs/en/04-get-started/github.svg
diff --git a/docs/en/05-get-started/index.md b/docs/en/04-get-started/index.md
similarity index 96%
rename from docs/en/05-get-started/index.md
rename to docs/en/04-get-started/index.md
index 697f98af15..b125d8c903 100644
--- a/docs/en/05-get-started/index.md
+++ b/docs/en/04-get-started/index.md
@@ -12,7 +12,7 @@ import StackOverflowSVG from './stackoverflow.svg'

You can install and run TDengine on Linux/Windows/macOS machines as well as Docker containers. You can also deploy TDengine as a managed service with TDengine Cloud.
-The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. In addition to client libraries for multiple languages, TDengine also provides a [RESTful interface](../reference/rest-api) through [taosAdapter](../reference/taosadapter). +The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. In addition to client libraries for multiple languages, TDengine also provides a [RESTful interface](../reference/connectors/rest-api) through [taosAdapter](../reference/components/taosadapter). ```mdx-code-block import DocCardList from '@theme/DocCardList'; diff --git a/docs/en/05-get-started/linkedin.svg b/docs/en/04-get-started/linkedin.svg similarity index 100% rename from docs/en/05-get-started/linkedin.svg rename to docs/en/04-get-started/linkedin.svg diff --git a/docs/en/05-get-started/stackoverflow.svg b/docs/en/04-get-started/stackoverflow.svg similarity index 100% rename from docs/en/05-get-started/stackoverflow.svg rename to docs/en/04-get-started/stackoverflow.svg diff --git a/docs/en/05-get-started/twitter.svg b/docs/en/04-get-started/twitter.svg similarity index 100% rename from docs/en/05-get-started/twitter.svg rename to docs/en/04-get-started/twitter.svg diff --git a/docs/en/05-get-started/youtube.svg b/docs/en/04-get-started/youtube.svg similarity index 100% rename from docs/en/05-get-started/youtube.svg rename to docs/en/04-get-started/youtube.svg diff --git a/docs/en/05-get-started/_category_.yml b/docs/en/05-get-started/_category_.yml deleted file mode 100644 index 043ae21554..0000000000 --- a/docs/en/05-get-started/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Get Started diff --git 
a/docs/en/07-develop/01-connect/_category_.yml b/docs/en/07-develop/01-connect/_category_.yml deleted file mode 100644 index 83f9754f58..0000000000 --- a/docs/en/07-develop/01-connect/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Connect diff --git a/docs/en/07-develop/02-model/_category_.yml b/docs/en/07-develop/02-model/_category_.yml deleted file mode 100644 index a2b49eb879..0000000000 --- a/docs/en/07-develop/02-model/_category_.yml +++ /dev/null @@ -1,2 +0,0 @@ -label: Data Model - diff --git a/docs/en/07-develop/03-insert-data/_category_.yml b/docs/en/07-develop/03-insert-data/_category_.yml deleted file mode 100644 index e515d60e09..0000000000 --- a/docs/en/07-develop/03-insert-data/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Insert Data diff --git a/docs/en/07-develop/04-query-data/_category_.yml b/docs/en/07-develop/04-query-data/_category_.yml deleted file mode 100644 index 809db34621..0000000000 --- a/docs/en/07-develop/04-query-data/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Query Data diff --git a/docs/en/07-develop/_category_.yml b/docs/en/07-develop/_category_.yml deleted file mode 100644 index 6f0d66351a..0000000000 --- a/docs/en/07-develop/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Developer Guide \ No newline at end of file diff --git a/docs/en/13-operation/02-planning.mdx b/docs/en/07-operation/02-planning.mdx similarity index 98% rename from docs/en/13-operation/02-planning.mdx rename to docs/en/07-operation/02-planning.mdx index 37ef6aae26..0330f9a66c 100644 --- a/docs/en/13-operation/02-planning.mdx +++ b/docs/en/07-operation/02-planning.mdx @@ -17,7 +17,7 @@ Each database creates a fixed number of vgroups. This number is 2 by default and - pagesize - cachesize -For more information, see [Database](../../taos-sql/database). +For more information, see [Database](../../reference/taos-sql/database). 
The memory required by a database is therefore greater than or equal to:

diff --git a/docs/en/07-operation/03-deployment.md b/docs/en/07-operation/03-deployment.md
new file mode 100644
index 0000000000..005dee3f66
--- /dev/null
+++ b/docs/en/07-operation/03-deployment.md
@@ -0,0 +1,1226 @@
+---
+title: Deployment of TDengine Cluster
+sidebar_label: Deployment
+toc_heading_level: 4
+description: This document describes how to deploy a TDengine cluster.
+---
+
+TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
+
+This document describes how to manually deploy a cluster on a host directly and how to deploy a cluster with Docker, Kubernetes or Helm.
+
+## Manual Deployment
+### Prerequisites
+
+1. Step 1
+
+The FQDN of all hosts must be set up properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host. For example, you can do this by using the `ping` command. If you have a DNS server on your network, contact your network administrator for assistance.
+
+2. Step 2
+
+If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. To clean up the data, please use `rm -rf /var/lib/taos/\*` assuming the `dataDir` is configured as `/var/lib/taos`.
+
+:::note
+FQDN information is written to file. If you have started TDengine without configuring or changing the FQDN, ensure that data is backed up or no longer needed before running the `rm -rf /var/lib/taos/\*` command.
+:::
+
+:::note
+- The host where the client program runs also needs to be configured properly for FQDN, to make sure all client and server hosts can be accessed from any other host. In other words, the hosts where the client is running are also considered as a part of the cluster.
+:::
+
+3. Step 3
+
+- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.
+
+4. Step 4
+
+Now it's time to install TDengine on all hosts but without starting `taosd`. Note that the versions on all hosts should be the same. If you are prompted to input the existing TDengine cluster, simply press Enter to ignore the prompt.
+
+5. Step 5
+
+Now each physical node (referred to, hereinafter, as `dnode`, which is an abbreviation for "data node") of TDengine needs to be configured properly.
+
+To get the hostname on any host, the command `hostname -f` can be executed.
+
+The `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised, to make any two hosts accessible to each other. Hosts that are not accessible to each other cannot form a cluster.
+
+On the physical machine running the application, ping the dnode that is running taosd. If the dnode is not accessible, the application cannot connect to taosd. In this case, verify the DNS and hosts settings on the physical node running the application.
+
+The end point of each dnode is the output hostname and port, such as h1.tdengine.com:6030.
+
+6. Step 6
+
+Modify the TDengine configuration file `/etc/taos/taos.cfg` on each node. Assuming the first dnode of the TDengine cluster is "h1.tdengine.com:6030", its `taos.cfg` is configured as follows.
+
+```c
+// firstEp is the end point to connect to when any dnode starts
+firstEp h1.tdengine.com:6030
+
+// must be configured to the FQDN of the host where the dnode is launched
+fqdn h1.tdengine.com
+
+// the port used by the dnode, default is 6030
+serverPort 6030
+
+```
+
+`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in the TDengine cluster, `firstEp` must be configured to point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. Retain the default values for other parameters.
+
+For all the dnodes in a TDengine cluster, the parameters below must be configured exactly the same; any node whose configuration differs from dnodes already in the cluster can't join the cluster.
+
+| **#** | **Parameter**    | **Definition**                                                                |
+| ----- | ---------------- | ----------------------------------------------------------------------------- |
+| 1     | statusInterval   | The interval by which dnode reports its status to mnode                       |
+| 2     | timezone         | Timezone                                                                      |
+| 3     | locale           | System region and encoding                                                    |
+| 4     | charset          | Character set                                                                 |
+| 5     | ttlChangeOnWrite | Whether the ttl expiration time changes with the table modification operation |
+
+### Start Cluster
+
+The first dnode can be started following the instructions in [Get Started](../../get-started/). Then TDengine CLI `taos` can be launched to execute the command `show dnodes`. The output is as follows, for example:
+
+```
+taos> show dnodes;
+id | endpoint | vnodes | support_vnodes | status | create_time | note |
+============================================================================================================================================
+1 | h1.tdengine.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
+Query OK, 1 rows affected (0.007984s)
+```
+
+From the above output, it is shown that the end point of the started dnode is "h1.tdengine.com:6030", which is the `firstEp` of the cluster.
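The end point format (`fqdn:port`) maps directly onto the `fqdn` and `serverPort` items in `taos.cfg`. As a small illustrative sketch (using the example first dnode from this guide), the two parts can be separated in shell like this:

```shell
# Split a dnode end point of the form "fqdn:port" into the two taos.cfg
# items it corresponds to. The end point below is the example first dnode.
EP="h1.tdengine.com:6030"
FQDN="${EP%:*}"          # everything before the last ':' -> fqdn
SERVER_PORT="${EP##*:}"  # everything after the last ':' -> serverPort
echo "fqdn=$FQDN"
echo "serverPort=$SERVER_PORT"
```

This prints `fqdn=h1.tdengine.com` and `serverPort=6030`, matching the `taos.cfg` example above.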
+
+### Add DNODE
+
+There are a few steps necessary to add other dnodes in the cluster.
+
+First, configure `taos.cfg` on the new dnode as described in Step 6, making sure its `firstEp` points to the first dnode of the existing cluster. Second, we can start `taosd` as instructed in [Get Started](../../get-started/).
+
+Then, on the first dnode, i.e. h1.tdengine.com in our example, use TDengine CLI `taos` to execute the following command:
+
+```sql
+CREATE DNODE "h2.taos.com:6030";
+```
+
+This adds the end point of the new dnode (from Step 4) into the end point list of the cluster. In the command, "fqdn:port" should be quoted using double quotes. Change `"h2.taos.com:6030"` to the end point of your new dnode.
+
+Then on the first dnode h1.tdengine.com, execute `show dnodes` in `taos`
+
+```sql
+SHOW DNODES;
+```
+
+to show whether the second dnode has been added in the cluster successfully or not. If the status of the newly added dnode is offline, please check:
+
+- Whether the `taosd` process is running properly or not
+- The log file `taosdlog.0`, to see whether the fqdn and port are correct; add the correct end point if not.
+
+The above process can be repeated to add more dnodes in the cluster.
+
+:::tip
+
+Any node that is in the cluster and online can be the firstEp of new nodes.
+Nodes use the firstEp parameter only when joining a cluster for the first time. After a node has joined the cluster, it stores the latest mnode in its end point list and no longer makes use of firstEp.
+
+However, firstEp is used by clients that connect to the cluster. For example, if you run TDengine CLI `taos` without arguments, it connects to the firstEp by default.
+
+Two dnodes that are launched without a firstEp value operate independently of each other. It is not possible to add one dnode to the other dnode and form a cluster. It is also not possible to form two independent clusters into a new cluster.
+ +::: + +### Show DNODEs + +The below command can be executed in TDengine CLI `taos` + +```sql +SHOW DNODES; +``` + +to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode. + +Below is the example output of this command. + +``` +taos> show dnodes; + id | endpoint | vnodes | support_vnodes | status | create_time | note | +============================================================================================================================================ + 1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | | +Query OK, 1 rows affected (0.006684s) +``` + +### Show VGROUPs + +To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode based on system resources of the dnodes. + +Launch TDengine CLI `taos` and execute below command: + +```sql +USE SOME_DATABASE; +SHOW VGROUPS; +``` + +The example output is below: + +``` +taos> use db; +Database changed. 
+
+taos> show vgroups;
+ vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | status | nfiles | file_size | tsma |
+================================================================================================================================================================================================
+ 2 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
+ 3 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
+ 4 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
+Query OK, 3 row(s) in set (0.001154s)
+```
+
+### Drop DNODE
+
+Before running the TDengine CLI, ensure that the taosd process has been stopped on the dnode that you want to delete. Then execute the following command in the TDengine CLI to drop or remove the dnode from the cluster:
+
+```sql
+DROP DNODE dnodeId;
+```
+
+In the command, you can get `dnodeId` from `show dnodes`.
+
+:::warning
+
+- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
+- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode out of the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
+- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept requests from the dropped dnode.
+- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
+
+:::
+
+## Docker
+
+This section describes how to start the TDengine service in a container and access it. Users can control the behavior of the service in the container by using environment variables on the docker run command-line or in the docker-compose file.
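As a sketch of the environment-variable approach, configuration items can be passed as `-e` flags on the command line. `TAOS_FQDN` and `TAOS_FIRST_EP` are the variables used in the examples in this section; the snippet below only assembles and prints the command rather than running it, so it can be adapted freely:

```shell
# Assemble a docker run command line from environment-variable settings.
# TAOS_FQDN and TAOS_FIRST_EP are the variables used elsewhere in this section.
ENV_ARGS="-e TAOS_FQDN=tdengine -e TAOS_FIRST_EP=tdengine:6030"
CMD="docker run -d --name tdengine $ENV_ARGS -p 6041:6041 tdengine/tdengine"
echo "$CMD"
```

Running the printed command is equivalent to passing the same `-e` flags directly, as shown in the examples that follow.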
+
+### Starting TDengine
+
+The TDengine image starts with the HTTP service activated by default. Use the following command:
+
+```shell
+docker run -d --name tdengine \
+-v ~/data/taos/dnode/data:/var/lib/taos \
+-v ~/data/taos/dnode/log:/var/log/taos \
+-p 6041:6041 tdengine/tdengine
+```
+
+:::note
+
+* /var/lib/taos: TDengine's default data file directory. The location can be changed via the configuration file. You can also change ~/data/taos/dnode/data to any other local empty data directory.
+* /var/log/taos: TDengine's default log file directory. The location can be changed via the configuration file. You can also change ~/data/taos/dnode/log to any other local empty log directory.
+
+:::
+
+The above command starts a container named "tdengine" and maps the HTTP service port 6041 to the host port 6041. You can verify that the HTTP service provided in this container is available using the following command.
+
+```shell
+curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
+```
+
+The TDengine client taos can be executed in this container to access TDengine using the following command.
+
+```shell
+$ docker exec -it tdengine taos
+
+taos> show databases;
+              name              |
+=================================
+ information_schema             |
+ performance_schema             |
+Query OK, 2 row(s) in set (0.002843s)
+```
+
+The TDengine server running in the container uses the container's hostname to establish a connection. Using TDengine CLI or various client libraries (such as JDBC-JNI) to access the TDengine inside the container from outside the container is more complicated. So the above is the simplest way to access the TDengine service in the container and is suitable for some simple scenarios. Please refer to the next section if you want to access the TDengine service in the container from outside the container using TDengine CLI or various client libraries for complex scenarios.
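When scripting against a freshly started container, the HTTP check above may fail until the service finishes starting up. A minimal retry helper might look like the following (the probe command and retry count are arbitrary choices, not TDengine requirements):

```shell
# Poll a probe command until it succeeds or the retry budget is exhausted.
# Pass the probe as a single string; it is executed with `sh -c`.
wait_ready() {
  probe="$1"
  tries="${2:-30}"
  i=0
  while ! sh -c "$probe" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1  # gave up: service never became ready
    fi
    sleep 1
  done
  return 0      # probe succeeded: service is ready
}

# Example: wait for the REST service started earlier in this section.
# wait_ready 'curl -s -u root:taosdata -d "show databases" localhost:6041/rest/sql'
```

Keeping the probe as a string argument makes the helper reusable for the host-network and custom-network setups described below.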
+ +### Start TDengine on the host network + +```shell +docker run -d --name tdengine --network host tdengine/tdengine +``` + +The above command starts TDengine on the host network and uses the host's FQDN to establish a connection instead of the container's hostname. It is the equivalent of using `systemctl` to start TDengine on the host. If the TDengine client is already installed on the host, you can access it directly with the following command. + +```shell +$ taos + +taos> show dnodes; + id | end_point | vnodes | cores | status | role | create_time | offline reason | +====================================================================================================================================== + 1 | myhost:6030 | 1 | 8 | ready | any | 2022-01-17 22:10:32.619 | | +Query OK, 1 row(s) in set (0.003233s) +``` + +### Start TDengine with the specified hostname and port + +The `TAOS_FQDN` environment variable or the `fqdn` configuration item in `taos.cfg` allows TDengine to establish a connection at the specified hostname. This approach provides greater flexibility for deployment. + +```shell +docker run -d \ + --name tdengine \ + -e TAOS_FQDN=tdengine \ + -p 6030-6049:6030-6049 \ + -p 6030-6049:6030-6049/udp \ + tdengine/tdengine +``` + +The above command starts a TDengine service in the container, which listens to the hostname tdengine, and maps the container's port segment 6030 to 6049 to the host's port segment 6030 to 6049 (both TCP and UDP ports need to be mapped). If the port segment is already occupied on the host, you can modify the above command to specify a free port segment on the host. If `rpcForceTcp` is set to `1`, you can map only the TCP protocol. + +Next, ensure the hostname "tdengine" is resolvable in `/etc/hosts`. + +```shell +echo 127.0.0.1 tdengine |sudo tee -a /etc/hosts +``` + +Finally, the TDengine service can be accessed from the TDengine CLI or any client library with "tdengine" as the server address. 
+
+```shell
+taos -h tdengine -P 6030
+```
+
+If you set `TAOS_FQDN` to the same hostname as the host, the effect is the same as "Start TDengine on host network".
+
+### Start TDengine on the specified network
+
+You can also start TDengine on a specific network. Perform the following steps:
+
+1. First, create a docker network named `td-net`
+
+   ```shell
+   docker network create td-net
+   ```
+
+2. Start TDengine
+
+   Start the TDengine service on the `td-net` network with the following command:
+
+   ```shell
+   docker run -d --name tdengine --network td-net \
+    -e TAOS_FQDN=tdengine \
+    tdengine/tdengine
+   ```
+
+3. Start the TDengine client in another container on the same network
+
+   ```shell
+   docker run --rm -it --network td-net -e TAOS_FIRST_EP=tdengine tdengine/tdengine taos
+   # or
+   #docker run --rm -it --network td-net tdengine/tdengine taos -h tdengine
+   ```
+
+### Launching a client application in a container
+
+If you want to start your application in a container, you need to add the corresponding dependencies on TDengine to the image as well, e.g.
+
+```docker
+FROM ubuntu:20.04
+RUN apt-get update && apt-get install -y wget
+ENV TDENGINE_VERSION=3.0.0.0
+RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+   && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+   && cd TDengine-client-${TDENGINE_VERSION} \
+   && ./install_client.sh \
+   && cd ../ \
+   && rm -rf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz TDengine-client-${TDENGINE_VERSION}
+## add your application next, eg. go, build it in builder stage, copy the binary to the runtime
+#COPY --from=builder /path/to/build/app /usr/bin/
+#CMD ["app"]
+```
+
+Here is an example GO program:
+
+```go
+/*
+ * In this test program, we'll create a database and insert 4 records then select out.
+ */ +package main + +import ( + "database/sql" + "flag" + "fmt" + "time" + + _ "github.com/taosdata/driver-go/v3/taosSql" +) + +type config struct { + hostName string + serverPort string + user string + password string +} + +var configPara config +var taosDriverName = "taosSql" +var url string + +func init() { + flag.StringVar(&configPara.hostName, "h", "", "The host to connect to TDengine server.") + flag.StringVar(&configPara.serverPort, "p", "", "The TCP/IP port number to use for the connection to TDengine server.") + flag.StringVar(&configPara.user, "u", "root", "The TDengine user name to use when connecting to the server.") + flag.StringVar(&configPara.password, "P", "taosdata", "The password to use when connecting to the server.") + flag.Parse() +} + +func printAllArgs() { + fmt.Printf("============= args parse result: =============\n") + fmt.Printf("hostName: %v\n", configPara.hostName) + fmt.Printf("serverPort: %v\n", configPara.serverPort) + fmt.Printf("usr: %v\n", configPara.user) + fmt.Printf("password: %v\n", configPara.password) + fmt.Printf("================================================\n") +} + +func main() { + printAllArgs() + + url = "root:taosdata@/tcp(" + configPara.hostName + ":" + configPara.serverPort + ")/" + + taos, err := sql.Open(taosDriverName, url) + checkErr(err, "open database error") + defer taos.Close() + + taos.Exec("create database if not exists test") + taos.Exec("use test") + taos.Exec("create table if not exists tb1 (ts timestamp, a int)") + _, err = taos.Exec("insert into tb1 values(now, 0)(now+1s,1)(now+2s,2)(now+3s,3)") + checkErr(err, "failed to insert") + rows, err := taos.Query("select * from tb1") + checkErr(err, "failed to select") + + defer rows.Close() + for rows.Next() { + var r struct { + ts time.Time + a int + } + err := rows.Scan(&r.ts, &r.a) + if err != nil { + fmt.Println("scan error:\n", err) + return + } + fmt.Println(r.ts, r.a) + } +} + +func checkErr(err error, prompt string) { + if err != nil { + 
fmt.Printf("ERROR: %s\n", prompt)
+		panic(err)
+	}
+}
+```
+
+Here is the full Dockerfile:
+
+```docker
+FROM golang:1.17.6-buster as builder
+ENV TDENGINE_VERSION=3.0.0.0
+RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+   && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+   && cd TDengine-client-${TDENGINE_VERSION} \
+   && ./install_client.sh \
+   && cd ../ \
+   && rm -rf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz TDengine-client-${TDENGINE_VERSION}
+WORKDIR /usr/src/app/
+ENV GOPROXY="https://goproxy.io,direct"
+COPY ./main.go ./go.mod ./go.sum /usr/src/app/
+RUN go env
+RUN go mod tidy
+RUN go build
+
+FROM ubuntu:20.04
+RUN apt-get update && apt-get install -y wget
+ENV TDENGINE_VERSION=3.0.0.0
+RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+   && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
+   && cd TDengine-client-${TDENGINE_VERSION} \
+   && ./install_client.sh \
+   && cd ../ \
+   && rm -rf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz TDengine-client-${TDENGINE_VERSION}
+
+## add your application next, eg. go, build it in builder stage, copy the binary to the runtime
+COPY --from=builder /usr/src/app/app /usr/bin/
+CMD ["app"]
+```
+
+Now that we have `main.go`, `go.mod`, `go.sum`, and `app.dockerfile`, we can build the application and start it on the `td-net` network.
+
+```shell
+$ docker build -t app -f app.dockerfile .
+$ docker run --rm --network td-net app -h tdengine -p 6030
+============= args parse result: =============
+hostName: tdengine
+serverPort: 6030
+usr: root
+password: taosdata
+================================================
+2022-01-17 15:56:55.48 +0000 UTC 0
+2022-01-17 15:56:56.48 +0000 UTC 1
+2022-01-17 15:56:57.48 +0000 UTC 2
+2022-01-17 15:56:58.48 +0000 UTC 3
+2022-01-17 15:58:01.842 +0000 UTC 0
+2022-01-17 15:58:02.842 +0000 UTC 1
+2022-01-17 15:58:03.842 +0000 UTC 2
+2022-01-17 15:58:04.842 +0000 UTC 3
+2022-01-18 01:43:48.029 +0000 UTC 0
+2022-01-18 01:43:49.029 +0000 UTC 1
+2022-01-18 01:43:50.029 +0000 UTC 2
+2022-01-18 01:43:51.029 +0000 UTC 3
+```
+
+### Start the TDengine cluster with docker-compose
+
+1. The following docker-compose file starts a TDengine cluster with three nodes.
+
+```yml
+version: "3"
+services:
+  td-1:
+    image: tdengine/tdengine:$VERSION
+    environment:
+      TAOS_FQDN: "td-1"
+      TAOS_FIRST_EP: "td-1"
+    ports:
+      - 6041:6041
+      - 6030:6030
+    volumes:
+      # /var/lib/taos: TDengine's default data file directory. The location can be changed via [configuration file]. you can modify ~/data/taos/dnode1/data to your own data directory
+      - ~/data/taos/dnode1/data:/var/lib/taos
+      # /var/log/taos: TDengine's default log file directory. The location can be changed via [configuration file]. 
you can modify ~/data/taos/dnode1/log to your own log directory + - ~/data/taos/dnode1/log:/var/log/taos + td-2: + image: tdengine/tdengine:$VERSION + environment: + TAOS_FQDN: "td-2" + TAOS_FIRST_EP: "td-1" + volumes: + - ~/data/taos/dnode2/data:/var/lib/taos + - ~/data/taos/dnode2/log:/var/log/taos + td-3: + image: tdengine/tdengine:$VERSION + environment: + TAOS_FQDN: "td-3" + TAOS_FIRST_EP: "td-1" + volumes: + - ~/data/taos/dnode3/data:/var/lib/taos + - ~/data/taos/dnode3/log:/var/log/taos +``` + +:::note + +- The `VERSION` environment variable is used to set the tdengine image tag +- `TAOS_FIRST_EP` must be set on the newly created instance so that it can join the TDengine cluster; if there is a high availability requirement, `TAOS_SECOND_EP` needs to be used at the same time + + +::: + +2. Start the cluster + + ```shell + $ VERSION=3.0.0.0 docker-compose up -d + Creating network "test_default" with the default driver + Creating volume "test_taosdata-td1" with default driver + Creating volume "test_taoslog-td1" with default driver + Creating volume "test_taosdata-td2" with default driver + Creating volume "test_taoslog-td2" with default driver + Creating test_td-1_1 ... done + Creating test_arbitrator_1 ... done + Creating test_td-2_1 ... done + ``` + +3. Check the status of each node + + ```shell + $ docker-compose ps + Name Command State Ports + --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + test_arbitrator_1 /usr/bin/entrypoint.sh tar ... 
Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp + test_td-1_1 /usr/bin/entrypoint.sh taosd Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp + test_td-2_1 /usr/bin/entrypoint.sh taosd Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp + ``` + +4. Show dnodes via TDengine CLI + +```shell +$ docker-compose exec td-1 taos -s "show dnodes" + +taos> show dnodes + id | endpoint | vnodes | support_vnodes | status | create_time | note | +====================================================================================================================================== + 1 | td-1:6030 | 0 | 32 | ready | 2022-08-19 07:57:29.971 | | + 2 | td-2:6030 | 0 | 32 | ready | 2022-08-19 07:57:31.415 | | + 3 | td-3:6030 | 0 | 32 | ready | 2022-08-19 07:57:31.417 | | +Query OK, 3 rows in database (0.021262s) + +``` + +### taosAdapter + +1. taosAdapter is enabled by default in the TDengine container. If you want to disable it, specify the environment variable `TAOS_DISABLE_ADAPTER=true` at startup + +2. At the same time, for flexible deployment, taosAdapter can be started in a separate container + + ```docker + services: + # ... + adapter: + image: tdengine/tdengine:$VERSION + command: taosadapter + ``` + + Suppose you want to deploy multiple taosAdapters to improve throughput and provide high availability. In that case, the recommended configuration method uses a reverse proxy such as Nginx to offer a unified access entry. For specific configuration methods, please refer to the official documentation of Nginx. 
Here is an example:
+
+```yml
+version: "3"
+
+networks:
+  inter:
+
+services:
+  td-1:
+    image: tdengine/tdengine:$VERSION
+    environment:
+      TAOS_FQDN: "td-1"
+      TAOS_FIRST_EP: "td-1"
+    volumes:
+      # /var/lib/taos: TDengine's default data file directory. The location can be changed via [configuration file]. you can modify ~/data/taos/dnode1/data to your own data directory
+      - ~/data/taos/dnode1/data:/var/lib/taos
+      # /var/log/taos: TDengine's default log file directory. The location can be changed via [configuration file]. you can modify ~/data/taos/dnode1/log to your own log directory
+      - ~/data/taos/dnode1/log:/var/log/taos
+  td-2:
+    image: tdengine/tdengine:$VERSION
+    environment:
+      TAOS_FQDN: "td-2"
+      TAOS_FIRST_EP: "td-1"
+    volumes:
+      - ~/data/taos/dnode2/data:/var/lib/taos
+      - ~/data/taos/dnode2/log:/var/log/taos
+  adapter:
+    image: tdengine/tdengine:$VERSION
+    entrypoint: "taosadapter"
+    networks:
+      - inter
+    environment:
+      TAOS_FIRST_EP: "td-1"
+      TAOS_SECOND_EP: "td-2"
+    deploy:
+      replicas: 4
+  nginx:
+    image: nginx
+    depends_on:
+      - adapter
+    networks:
+      - inter
+    ports:
+      - 6041:6041
+      - 6044:6044/udp
+    command: [
+        "sh",
+        "-c",
+        "while true;
+        do curl -s http://adapter:6041/-/ping >/dev/null && break;
+        done;
+        printf 'server{listen 6041;location /{proxy_pass http://adapter:6041;}}'
+        > /etc/nginx/conf.d/rest.conf;
+        printf 'stream{server{listen 6044 udp;proxy_pass adapter:6044;}}'
+        >> /etc/nginx/nginx.conf;cat /etc/nginx/nginx.conf;
+        nginx -g 'daemon off;'",
+      ]
+```
+
+### Deploy with docker swarm
+
+If you want to deploy a container-based TDengine cluster on multiple hosts, you can use Docker Swarm. First, establish a Docker Swarm cluster on these hosts; please refer to the official Docker documentation.
+
+You can reuse the docker-compose file from the previous section. 
Here is the command to start TDengine with docker swarm:
+
+```shell
+$ VERSION=3.0.0.0 docker stack deploy -c docker-compose.yml taos
+Creating network taos_inter
+Creating network taos_api
+Creating service taos_arbitrator
+Creating service taos_td-1
+Creating service taos_td-2
+Creating service taos_adapter
+Creating service taos_nginx
+```
+
+Checking status:
+
+```shell
+$ docker stack ps taos
+ID             NAME                IMAGE                       NODE     DESIRED STATE   CURRENT STATE                ERROR   PORTS
+79ni8temw59n   taos_nginx.1        nginx:latest                TM1701   Running         Running about a minute ago
+3e94u72msiyg   taos_adapter.1      tdengine/tdengine:3.0.0.0   TM1702   Running         Running 56 seconds ago
+100amjkwzsc6   taos_td-2.1         tdengine/tdengine:3.0.0.0   TM1703   Running         Running about a minute ago
+pkjehr2vvaaa   taos_td-1.1         tdengine/tdengine:3.0.0.0   TM1704   Running         Running 2 minutes ago
+tpzvgpsr1qkt   taos_arbitrator.1   tdengine/tdengine:3.0.0.0   TM1705   Running         Running 2 minutes ago
+rvss3g5yg6fa   taos_adapter.2      tdengine/tdengine:3.0.0.0   TM1706   Running         Running 56 seconds ago
+i2augxamfllf   taos_adapter.3      tdengine/tdengine:3.0.0.0   TM1707   Running         Running 56 seconds ago
+lmjyhzccpvpg   taos_adapter.4      tdengine/tdengine:3.0.0.0   TM1708   Running         Running 56 seconds ago
+$ docker service ls
+ID             NAME              MODE         REPLICAS   IMAGE                       PORTS
+561t4lu6nfw6   taos_adapter      replicated   4/4        tdengine/tdengine:3.0.0.0
+3hk5ct3q90sm   taos_arbitrator   replicated   1/1        tdengine/tdengine:3.0.0.0
+d8qr52envqzu   taos_nginx        replicated   1/1        nginx:latest                *:6041->6041/tcp, *:6044->6044/udp
+2isssfvjk747   taos_td-1         replicated   1/1        tdengine/tdengine:3.0.0.0
+9pzw7u02ichv   taos_td-2         replicated   1/1        tdengine/tdengine:3.0.0.0
+```
+
+From the above output, you can see two dnodes, four taosAdapters, and one Nginx reverse proxy service.
+
+Next, we can reduce the number of taosAdapter services. 
+
+```shell
+$ docker service scale taos_adapter=1
+taos_adapter scaled to 1
+overall progress: 1 out of 1 tasks
+1/1: running   [==================================================>]
+verify: Service converged
+
+$ docker service ls -f name=taos_adapter
+ID             NAME           MODE         REPLICAS   IMAGE                       PORTS
+561t4lu6nfw6   taos_adapter   replicated   1/1        tdengine/tdengine:3.0.0.0
+```
+
+## Kubernetes
+
+As a time series database designed for cloud native architectures, TDengine supports Kubernetes deployment. This section first introduces how to use YAML files to create a highly available TDengine cluster from scratch, step by step, for production usage, and then highlights common operations on TDengine in a Kubernetes environment.
+
+To meet [high availability](../../tdinternal/high-availability/) requirements, the cluster needs to meet the following requirements:
+
+- 3 or more dnodes: multiple vnodes in the same vgroup of TDengine are not allowed to be placed on the same dnode, so if you create a database with 3 replicas, the number of dnodes must be greater than or equal to 3
+- 3 mnodes: the mnode is responsible for the management of the entire TDengine cluster. By default a TDengine cluster has only one mnode; if the dnode where that mnode is located is dropped, the entire cluster becomes unavailable.
+- 3 database replicas: the TDengine replica configuration is at the database level, so a database with 3 replicas needs at least three dnodes in the cluster. If any one dnode goes offline, it does not affect the normal operation of the cluster. 
**If the number of offline dnodes reaches 2, the cluster becomes unavailable, because it can no longer complete a RAFT-based leader election.** (Enterprise edition: in disaster recovery scenarios, if the data files on any node are damaged, the node can be restored by relaunching the dnode.)
+
+### Prerequisites
+
+Before deploying TDengine on Kubernetes, perform the following:
+
+- This document applies to Kubernetes 1.19 and above
+- This document uses the **kubectl** tool for installation and deployment; please install the corresponding software in advance
+- Kubernetes has been installed and deployed and can access or update the necessary container repositories or other services
+
+You can download the configuration files in this document from [GitHub](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
+
+### Configure the service
+
+Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taosd`) for use in the next step. Then add the ports required by TDengine and record the value of the selector label `app` (in this example, `tdengine`) for use in the next step:
+
+```YAML
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: "taosd"
+  labels:
+    app: "tdengine"
+spec:
+  ports:
+    - name: tcp6030
+      protocol: "TCP"
+      port: 6030
+    - name: tcp6041
+      protocol: "TCP"
+      port: 6041
+  selector:
+    app: "tdengine"
+```
+
+### Configure the service as StatefulSet
+
+According to Kubernetes instructions for various deployments, we will use StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, where `replicas` defines the number of cluster nodes as 3. The node time zone is China (Asia/Shanghai), and each node is allocated 5Gi of standard storage (refer to the [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation for storage class configuration). You can also adjust this according to your actual situation.
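Inside the cluster, each StatefulSet pod receives a stable DNS name of the form `<statefulset>-<ordinal>.<service>.<namespace>.svc.cluster.local`; the `TAOS_FIRST_EP` and `TAOS_FQDN` values in the manifest below are built from this pattern. As a small sketch (the helper name is hypothetical; the `tdengine-test` namespace matches the one used later in this document):

```shell
# Hypothetical helper: print the stable DNS name a StatefulSet pod receives.
# usage: pod_fqdn <ordinal> <statefulset> <service> <namespace>
pod_fqdn() {
  echo "$2-$1.$3.$4.svc.cluster.local"
}

# The first dnode, i.e. the value TAOS_FIRST_EP points at (plus the port):
pod_fqdn 0 tdengine taosd tdengine-test
# prints tdengine-0.taosd.tdengine-test.svc.cluster.local
```

This is why `TAOS_FIRST_EP` can be expressed purely from the StatefulSet name, service name, and namespace, with no hard-coded addresses.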
+
+Please pay special attention to the startupProbe configuration. If a dnode's Pod goes down for a period of time and then restarts, the newly launched dnode will be temporarily unavailable while it starts up. If the startupProbe window is configured too small, Kubernetes will consider the Pod to be in an abnormal state and try to restart it, so the dnode's Pod may restart frequently and never return to a normal status. Refer to [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
+
+```YAML
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: "tdengine"
+  labels:
+    app: "tdengine"
+spec:
+  serviceName: "taosd"
+  replicas: 3
+  updateStrategy:
+    type: RollingUpdate
+  selector:
+    matchLabels:
+      app: "tdengine"
+  template:
+    metadata:
+      name: "tdengine"
+      labels:
+        app: "tdengine"
+    spec:
+      containers:
+        - name: "tdengine"
+          image: "tdengine/tdengine:3.0.7.1"
+          imagePullPolicy: "IfNotPresent"
+          ports:
+            - name: tcp6030
+              protocol: "TCP"
+              containerPort: 6030
+            - name: tcp6041
+              protocol: "TCP"
+              containerPort: 6041
+          env:
+            # POD_NAME for FQDN config
+            - name: POD_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.name
+            # SERVICE_NAME and NAMESPACE for fqdn resolve
+            - name: SERVICE_NAME
+              value: "taosd"
+            - name: STS_NAME
+              value: "tdengine"
+            - name: STS_NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+            # TZ for timezone settings, we recommend to always set it.
+            - name: TZ
+              value: "Asia/Shanghai"
+            # Variables with the TAOS_ prefix are mapped into taos.cfg: strip the prefix and camelCase the rest.
+            - name: TAOS_SERVER_PORT
+              value: "6030"
+            # Must be set if you want a cluster.
+            - name: TAOS_FIRST_EP
+              value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
+            # TAOS_FQDN should always be set in k8s env. 
+
+            - name: TAOS_FQDN
+              value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
+          volumeMounts:
+            - name: taosdata
+              mountPath: /var/lib/taos
+          startupProbe:
+            exec:
+              command:
+                - taos-check
+            failureThreshold: 360
+            periodSeconds: 10
+          readinessProbe:
+            exec:
+              command:
+                - taos-check
+            initialDelaySeconds: 5
+            timeoutSeconds: 5000
+          livenessProbe:
+            exec:
+              command:
+                - taos-check
+            initialDelaySeconds: 15
+            periodSeconds: 20
+  volumeClaimTemplates:
+    - metadata:
+        name: taosdata
+      spec:
+        accessModes:
+          - "ReadWriteOnce"
+        storageClassName: "standard"
+        resources:
+          requests:
+            storage: "5Gi"
+```
+
+### Use kubectl to deploy TDengine
+
+First create the corresponding namespace, and then execute the following commands in sequence:
+
+```Bash
+kubectl apply -f taosd-service.yaml -n tdengine-test
+kubectl apply -f tdengine.yaml -n tdengine-test
+```
+
+The above configuration generates a three-node TDengine cluster in which the dnodes are configured automatically. You can use the **show dnodes** command to view the nodes of the current cluster:
+
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
+kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
+kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
+```
+
+The output is as follows:
+
+```Bash
+taos> show dnodes
+ id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
+=============================================================================================================================================================================================================================================
+ 1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
+ 2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
+ 3 | tdengine-2.ta... 
| 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | | +Query OK, 3 row(s) in set (0.001853s) +``` + +View the current mnode + +```Bash +kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G" +taos> show mnodes\G +*************************** 1.row *************************** + id: 1 + endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030 + role: leader + status: ready +create_time: 2023-07-19 17:54:18.559 +reboot_time: 2023-07-19 17:54:19.520 +Query OK, 1 row(s) in set (0.001282s) +``` + +### Create mnode + +```Bash +kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2" +kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3" +``` + +View mnode + +```Bash +kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G" + +taos> show mnodes\G +*************************** 1.row *************************** + id: 1 + endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030 + role: leader + status: ready +create_time: 2023-07-19 17:54:18.559 +reboot_time: 2023-07-20 09:19:36.060 +*************************** 2.row *************************** + id: 2 + endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030 + role: follower + status: ready +create_time: 2023-07-20 09:22:05.600 +reboot_time: 2023-07-20 09:22:12.838 +*************************** 3.row *************************** + id: 3 + endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030 + role: follower + status: ready +create_time: 2023-07-20 09:22:20.042 +reboot_time: 2023-07-20 09:22:23.271 +Query OK, 3 row(s) in set (0.003108s) +``` + +### Enable port forwarding + +Kubectl port forwarding enables applications to access TDengine clusters running in Kubernetes environments. 
+ +```bash +kubectl port-forward -n tdengine-test tdengine-0 6041:6041 & +``` + +Use **curl** to verify that the TDengine REST API is working on port 6041: + +```bash +curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql +{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4} +``` + +## Helm + +Helm is a package manager for Kubernetes that can provide more capabilities in deploying on Kubernetes. + +### Install Helm + +```bash +curl -fsSL -o get_helm.sh \ + https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 +chmod +x get_helm.sh +./get_helm.sh + +``` + +Helm uses the kubectl and kubeconfig configurations to perform Kubernetes operations. For more information, see the Rancher configuration for Kubernetes installation. + +### Install TDengine Chart + +To use TDengine Chart, download it from GitHub: + +```bash +wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.2.tgz + +``` + +Query the storageclass of your Kubernetes deployment: + +```bash +kubectl get storageclass + +``` + +With minikube, the default value is standard. + +Use Helm commands to install TDengine: + +```bash +helm install tdengine tdengine-3.0.2.tgz \ + --set storage.className= + +``` + +You can configure a small storage size in minikube to ensure that your deployment does not exceed your available disk space. 
+ +```bash +helm install tdengine tdengine-3.0.2.tgz \ + --set storage.className=standard \ + --set storage.dataSize=2Gi \ + --set storage.logSize=10Mi + +``` + +After TDengine is deployed, TDengine Chart outputs information about how to use TDengine: + +```bash +export POD_NAME=$(kubectl get pods --namespace default \ + -l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \ + -o jsonpath="{.items[0].metadata.name}") +kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes" +kubectl --namespace default exec -it $POD_NAME -- taos + +``` + +You can test the deployment by creating a table: + +```bash +kubectl --namespace default exec $POD_NAME -- \ + taos -s "create database test; + use test; + create table t1 (ts timestamp, n int); + insert into t1 values(now, 1)(now + 1s, 2); + select * from t1;" + +``` + +### Configuring Values + +You can configure custom parameters in TDengine with the `values.yaml` file. + +Run the `helm show values` command to see all parameters supported by TDengine Chart. + +```bash +helm show values tdengine-3.0.2.tgz + +``` + +Save the output of this command as `values.yaml`. Then you can modify this file with your desired values and use it to deploy a TDengine cluster: + +```bash +helm install tdengine tdengine-3.0.2.tgz -f values.yaml + +``` + +The parameters are described as follows: + +```yaml +# Default values for tdengine. +# This is a YAML-formatted file. +# Declare variables to be passed into helm templates. + +replicaCount: 1 + +image: + prefix: tdengine/tdengine + #pullPolicy: Always + # Overrides the image tag whose default is the chart appVersion. +# tag: "3.0.2.0" + +service: + # ClusterIP is the default service type, use NodeIP only if you know what you are doing. 
+ type: ClusterIP + ports: + # TCP range required + tcp: [6030, 6041, 6042, 6043, 6044, 6046, 6047, 6048, 6049, 6060] + # UDP range + udp: [6044, 6045] + + +# Set timezone here, not in taoscfg +timezone: "Asia/Shanghai" + +resources: + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + +storage: + # Set storageClassName for pvc. K8s use default storage class if not set. + # + className: "" + dataSize: "100Gi" + logSize: "10Gi" + +nodeSelectors: + taosd: + # node selectors + +clusterDomainSuffix: "" +# Config settings in taos.cfg file. +# +# The helm/k8s support will use environment variables for taos.cfg, +# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`, +# to a camelCase taos config variable `debugFlag`. +# +# See the [Configuration Variables](../../reference/config) +# +# Note: +# 1. firstEp/secondEp: should not be set here, it's auto generated at scale-up. +# 2. serverPort: should not be set, we'll use the default 6030 in many places. +# 3. fqdn: will be auto generated in kubernetes, user should not care about it. +# 4. role: currently role is not supported - every node is able to be mnode and vnode. +# +# Btw, keep quotes "" around the value like below, even the value will be number or not. +taoscfg: + # Starts as cluster or not, must be 0 or 1. + # 0: all pods will start as a separate TDengine server + # 1: pods will start as TDengine server cluster. 
[default]
+  CLUSTER: "1"
+
+  # number of replications, for cluster only
+  TAOS_REPLICA: "1"
+
+  #
+  # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
+  #TAOS_NUM_OF_RPC_THREADS: "2"
+
+
+  #
+  # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
+  #TAOS_NUM_OF_COMMIT_THREADS: "4"
+
+  # enable/disable installation / usage report
+  #TAOS_TELEMETRY_REPORTING: "1"
+
+  # time interval of system monitor, seconds
+  #TAOS_MONITOR_INTERVAL: "30"
+
+  # time interval of dnode status reporting to mnode, seconds, for cluster only
+  #TAOS_STATUS_INTERVAL: "1"
+
+  # time interval of heart beat from shell to dnode, seconds
+  #TAOS_SHELL_ACTIVITY_TIMER: "3"
+
+  # minimum sliding window time, milli-second
+  #TAOS_MIN_SLIDING_TIME: "10"
+
+  # minimum time window, milli-second
+  #TAOS_MIN_INTERVAL_TIME: "1"
+
+  # the compressed rpc message, option:
+  #  -1 (no compression)
+  #  0 (all message compressed),
+  #  > 0 (rpc message body which larger than this value will be compressed)
+  #TAOS_COMPRESS_MSG_SIZE: "-1"
+
+  # max number of connections allowed in dnode
+  #TAOS_MAX_SHELL_CONNS: "50000"
+
+  # stop writing logs when the disk size of the log folder is less than this value
+  #TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
+
+  # stop writing temporary files when the disk size of the tmp folder is less than this value
+  #TAOS_MINIMAL_TMP_DIR_G_B: "0.1"
+
+  # if disk free space is less than this value, the taosd service exits directly during startup
+  #TAOS_MINIMAL_DATA_DIR_G_B: "0.1"
+
+  # one mnode is equal to the number of vnodes consumed
+  #TAOS_MNODE_EQUAL_VNODE_NUM: "4"
+
+  # enable/disable http service
+  #TAOS_HTTP: "1"
+
+  # enable/disable system monitor
+  #TAOS_MONITOR: "1"
+
+  # enable/disable async log
+  #TAOS_ASYNC_LOG: "1"
+
+  #
+  # time of keeping log files, days
+  #TAOS_LOG_KEEP_DAYS: "0"
+
+  # The following parameters are used for debug purpose only. 
+
+  # debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
+  # 131: output warning and error
+  # 135: output debug, warning and error
+  # 143: output trace, debug, warning and error to log
+  # 199: output debug, warning and error to both screen and file
+  # 207: output trace, debug, warning and error to both screen and file
+  #
+  # debug flag for all log types, takes effect when set to a non-zero value
+  #TAOS_DEBUG_FLAG: "143"
+
+  # generate core file when service crashes
+  #TAOS_ENABLE_CORE_FILE: "1"
+```
+
+### Scaling Out
+
+For information about scaling out your deployment, see the Kubernetes section above. Additional Helm-specific operations are described as follows.
+
+First, obtain the name of the StatefulSet service for your deployment.
+
+```bash
+export STS_NAME=$(kubectl get statefulset \
+  -l "app.kubernetes.io/name=tdengine" \
+  -o jsonpath="{.items[0].metadata.name}")
+
+```
+
+You can scale out your deployment by adding replicas. The following command scales a deployment to three nodes:
+
+```bash
+kubectl scale --replicas 3 statefulset/$STS_NAME
+
+```
+
+Run the `show dnodes` and `show mnodes` commands to check whether the scale-out was successful.
+
+### Scaling In
+
+:::warning
+Exercise caution when scaling in a cluster.
+
+:::
+
+Determine which dnodes you want to remove and drop them manually.
+
+```bash
+kubectl --namespace default exec $POD_NAME -- \
+  cat /var/lib/taos/dnode/dnodeEps.json \
+  | jq '.dnodeInfos[1:] | map(.dnodeFqdn + ":" + (.dnodePort|tostring)) | .[]' -r
+kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes"
+kubectl --namespace default exec $POD_NAME -- taos -s 'drop dnode ""'
+
+```
+
+### Remove a TDengine Cluster
+
+You can use Helm to remove your cluster:
+
+```bash
+helm uninstall tdengine
+
+```
+
+However, Helm does not remove PVCs automatically. After you remove your cluster, manually remove all PVCs.
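A final note on the `taoscfg` section shown earlier: the chart's comments state that upper-snake-cased `TAOS_*` environment variables are converted into camelCase `taos.cfg` entries (e.g. `TAOS_DEBUG_FLAG` → `debugFlag`). A minimal sketch of that documented conversion rule (the helper function name is hypothetical):

```shell
# Hypothetical helper mirroring the chart's documented TAOS_* -> taos.cfg mapping:
# strip the TAOS_ prefix, lower-case the rest, then camelCase at each underscore.
taos_env_to_cfg() {
  echo "$1" | awk '{
    sub(/^TAOS_/, "")
    n = split(tolower($0), parts, "_")
    out = parts[1]
    for (i = 2; i <= n; i++)
      out = out toupper(substr(parts[i], 1, 1)) substr(parts[i], 2)
    print out
  }'
}

taos_env_to_cfg TAOS_DEBUG_FLAG        # prints debugFlag
taos_env_to_cfg TAOS_MIN_SLIDING_TIME  # prints minSlidingTime
```

This can help you predict which `taos.cfg` entry a given `values.yaml` variable will set before deploying.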
diff --git a/docs/en/13-operation/03-tolerance.md b/docs/en/07-operation/04-tolerance.md similarity index 100% rename from docs/en/13-operation/03-tolerance.md rename to docs/en/07-operation/04-tolerance.md diff --git a/docs/en/13-operation/07-import.md b/docs/en/07-operation/07-import.md similarity index 100% rename from docs/en/13-operation/07-import.md rename to docs/en/07-operation/07-import.md diff --git a/docs/en/13-operation/08-export.md b/docs/en/07-operation/08-export.md similarity index 100% rename from docs/en/13-operation/08-export.md rename to docs/en/07-operation/08-export.md diff --git a/docs/en/13-operation/10-monitor.md b/docs/en/07-operation/10-monitor.md similarity index 99% rename from docs/en/13-operation/10-monitor.md rename to docs/en/07-operation/10-monitor.md index ce530be1e4..7ca09d4bff 100644 --- a/docs/en/13-operation/10-monitor.md +++ b/docs/en/07-operation/10-monitor.md @@ -41,7 +41,7 @@ Launch `TDinsight.sh` with the command above and restart Grafana, then open Dash ## log database -The data of tdinsight dashboard is stored in `log` database (default. You can change it in taoskeeper's config file. For more infrmation, please reference to [taoskeeper document](../../reference/taosKeeper)). The taoskeeper will create log database on taoskeeper startup. +The data of the TDinsight dashboard is stored in the `log` database by default. You can change this in taoskeeper's config file; for more information, please refer to the [taoskeeper document](../../reference/components/taosKeeper). taoskeeper creates the log database on startup.
### taosd\_cluster\_basic table diff --git a/docs/en/13-operation/17-diagnose.md b/docs/en/07-operation/17-diagnose.md similarity index 100% rename from docs/en/13-operation/17-diagnose.md rename to docs/en/07-operation/17-diagnose.md diff --git a/docs/en/13-operation/index.md b/docs/en/07-operation/index.md similarity index 100% rename from docs/en/13-operation/index.md rename to docs/en/07-operation/index.md diff --git a/docs/en/08-client-libraries/_category_.yml b/docs/en/08-client-libraries/_category_.yml deleted file mode 100644 index a70a33caa6..0000000000 --- a/docs/en/08-client-libraries/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: "Client Libraries" diff --git a/docs/en/07-develop/01-connect/_connect_c.mdx b/docs/en/08-develop/01-connect/_connect_c.mdx similarity index 100% rename from docs/en/07-develop/01-connect/_connect_c.mdx rename to docs/en/08-develop/01-connect/_connect_c.mdx diff --git a/docs/en/07-develop/01-connect/_connect_cs.mdx b/docs/en/08-develop/01-connect/_connect_cs.mdx similarity index 100% rename from docs/en/07-develop/01-connect/_connect_cs.mdx rename to docs/en/08-develop/01-connect/_connect_cs.mdx diff --git a/docs/en/07-develop/01-connect/_connect_go.mdx b/docs/en/08-develop/01-connect/_connect_go.mdx similarity index 100% rename from docs/en/07-develop/01-connect/_connect_go.mdx rename to docs/en/08-develop/01-connect/_connect_go.mdx diff --git a/docs/en/07-develop/01-connect/_connect_java.mdx b/docs/en/08-develop/01-connect/_connect_java.mdx similarity index 92% rename from docs/en/07-develop/01-connect/_connect_java.mdx rename to docs/en/08-develop/01-connect/_connect_java.mdx index 4d29e24911..f43fedba27 100644 --- a/docs/en/07-develop/01-connect/_connect_java.mdx +++ b/docs/en/08-develop/01-connect/_connect_java.mdx @@ -12,4 +12,4 @@ When using REST connection, the feature of bulk pulling can be enabled if the si {{#include docs/examples/java/src/main/java/com/taos/example/WSConnectExample.java:main}} ``` -More 
configuration about connection, please refer to [Java Client Library](../../client-libraries/java) +For more information about connection configuration, please refer to the [Java Client Library](../../reference/connectors/java) diff --git a/docs/en/07-develop/01-connect/_connect_node.mdx b/docs/en/08-develop/01-connect/_connect_node.mdx similarity index 100% rename from docs/en/07-develop/01-connect/_connect_node.mdx rename to docs/en/08-develop/01-connect/_connect_node.mdx diff --git a/docs/en/07-develop/01-connect/_connect_php.mdx b/docs/en/08-develop/01-connect/_connect_php.mdx similarity index 100% rename from docs/en/07-develop/01-connect/_connect_php.mdx rename to docs/en/08-develop/01-connect/_connect_php.mdx diff --git a/docs/en/07-develop/01-connect/_connect_python.mdx b/docs/en/08-develop/01-connect/_connect_python.mdx similarity index 100% rename from docs/en/07-develop/01-connect/_connect_python.mdx rename to docs/en/08-develop/01-connect/_connect_python.mdx diff --git a/docs/en/07-develop/01-connect/_connect_r.mdx b/docs/en/08-develop/01-connect/_connect_r.mdx similarity index 100% rename from docs/en/07-develop/01-connect/_connect_r.mdx rename to docs/en/08-develop/01-connect/_connect_r.mdx diff --git a/docs/en/07-develop/01-connect/_connect_rust.mdx b/docs/en/08-develop/01-connect/_connect_rust.mdx similarity index 100% rename from docs/en/07-develop/01-connect/_connect_rust.mdx rename to docs/en/08-develop/01-connect/_connect_rust.mdx diff --git a/docs/en/07-develop/01-connect/connection-type-en.webp b/docs/en/08-develop/01-connect/connection-type-en.webp similarity index 100% rename from docs/en/07-develop/01-connect/connection-type-en.webp rename to docs/en/08-develop/01-connect/connection-type-en.webp diff --git a/docs/en/07-develop/01-connect/index.md b/docs/en/08-develop/01-connect/index.md similarity index 89% rename from docs/en/07-develop/01-connect/index.md rename to docs/en/08-develop/01-connect/index.md index 90b63d96e3..916d5e1e09 100644 ---
a/docs/en/07-develop/01-connect/index.md +++ b/docs/en/08-develop/01-connect/index.md @@ -15,14 +15,14 @@ import ConnCSNative from "./_connect_cs.mdx"; import ConnC from "./_connect_c.mdx"; import ConnR from "./_connect_r.mdx"; import ConnPHP from "./_connect_php.mdx"; -import InstallOnLinux from "../../08-client-libraries/_linux_install.mdx"; -import InstallOnWindows from "../../08-client-libraries/_windows_install.mdx"; -import InstallOnMacOS from "../../08-client-libraries/_macos_install.mdx"; -import VerifyLinux from "../../08-client-libraries/_verify_linux.mdx"; -import VerifyWindows from "../../08-client-libraries/_verify_windows.mdx"; -import VerifyMacOS from "../../08-client-libraries/_verify_macos.mdx"; +import InstallOnLinux from "../../14-reference/05-connectors/_linux_install.mdx"; +import InstallOnWindows from "../../14-reference/05-connectors/_windows_install.mdx"; +import InstallOnMacOS from "../../14-reference/05-connectors/_macos_install.mdx"; +import VerifyLinux from "../../14-reference/05-connectors/_verify_linux.mdx"; +import VerifyWindows from "../../14-reference/05-connectors/_verify_windows.mdx"; +import VerifyMacOS from "../../14-reference/05-connectors/_verify_macos.mdx"; -Any application running on any platform can access TDengine through the REST API provided by TDengine. For information, see [REST API](../../reference/rest-api/). Applications can also use the client libraries for various programming languages, including C/C++, Java, Python, Go, Node.js, C#, and Rust, to access TDengine. These client libraries support connecting to TDengine clusters using both native interfaces (taosc). Some client libraries also support connecting over a REST interface. Community developers have also contributed several unofficial client libraries, such as the ADO.NET, Lua, and PHP libraries. +Any application running on any platform can access TDengine through the REST API provided by TDengine. 
For information, see [REST API](../../reference/connectors/rest-api/). Applications can also use the client libraries for various programming languages, including C/C++, Java, Python, Go, Node.js, C#, and Rust, to access TDengine. These client libraries support connecting to TDengine clusters using the native interface (taosc). Some client libraries also support connecting over a REST interface. Community developers have also contributed several unofficial client libraries, such as the ADO.NET, Lua, and PHP libraries. ## Establish Connection diff --git a/docs/en/07-develop/02-model/index.mdx b/docs/en/08-develop/02-model/index.mdx similarity index 90% rename from docs/en/07-develop/02-model/index.mdx rename to docs/en/08-develop/02-model/index.mdx index 6df4d6a0e7..c89e72f75e 100644 --- a/docs/en/07-develop/02-model/index.mdx +++ b/docs/en/08-develop/02-model/index.mdx @@ -22,7 +22,7 @@ In the above SQL statement: - a new data file will be created every 10 days - the size of the write cache pool on each VNode is 16 MB - the number of vgroups is 100 -- WAL is enabled but fsync is disabled For more details please refer to [Database](../../taos-sql/database). +- WAL is enabled but fsync is disabled. For more details, please refer to [Database](../../reference/taos-sql/database). After creating a database, the current database in use can be switched using SQL command `USE`. For example the SQL statement below switches the current database to `power`. @@ -47,7 +47,7 @@ In a time-series application, there may be multiple kinds of data collection poi CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int); ``` -Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected.
The remaining columns can [contain data of type](../../taos-sql/data-type/) integer, float, double, string etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID, manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](../../taos-sql/stable) for more details. +Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The remaining columns can contain data of [supported types](../../reference/taos-sql/data-type/) such as integer, float, double, and string. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID, manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](../../reference/taos-sql/stable) for more details. For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another data collection point for environmental data like temperature, humidity and wind direction. Multiple STables are required for these kinds of devices.
@@ -61,7 +61,7 @@ A specific table needs to be created for each data collection point. Similar to CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2); ``` -In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](../../taos-sql/table) for details. +In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](../../reference/taos-sql/table) for details. It's suggested to use the globally unique ID of a data collection point as the table name. For example the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID. It's not recommended to use a globally unique ID as tag value. @@ -75,7 +75,7 @@ INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, In the above SQL statement, a row with value `(now, 10.2, 219, 0.32)` will be inserted into table "d1001". If table "d1001" doesn't exist, it will be created automatically using STable "meters" as template with tag value `"California.SanFrancisco", 2`. -For more details please refer to [Create Table Automatically](../../taos-sql/insert#automatically-create-table-when-inserting). +For more details please refer to [Create Table Automatically](../../reference/taos-sql/insert#automatically-create-table-when-inserting). 
## Single Column vs Multiple Column diff --git a/docs/en/07-develop/03-insert-data/01-sql-writing.mdx b/docs/en/08-develop/03-insert-data/01-sql-writing.mdx similarity index 96% rename from docs/en/07-develop/03-insert-data/01-sql-writing.mdx rename to docs/en/08-develop/03-insert-data/01-sql-writing.mdx index 8f7e573995..7199363eec 100644 --- a/docs/en/07-develop/03-insert-data/01-sql-writing.mdx +++ b/docs/en/08-develop/03-insert-data/01-sql-writing.mdx @@ -33,7 +33,7 @@ The below SQL statement is used to insert one row into table "d1001". INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31); ``` -`ts1` is Unix timestamp, the timestamps which is larger than the difference between current time and KEEP in config is only allowed. For further detail, refer to [TDengine SQL insert timestamp section](../../../taos-sql/insert). +`ts1` is a Unix timestamp. Only timestamps later than the current time minus the configured KEEP duration are allowed. For further detail, refer to the [TDengine SQL insert timestamp section](../../../reference/taos-sql/insert). ### Insert Multiple Rows @@ -43,7 +43,7 @@ Multiple rows can be inserted in a single SQL statement. The example below inser INSERT INTO d1001 VALUES (ts2, 10.2, 220, 0.23) (ts2, 10.3, 218, 0.25); ``` -`ts1` and `ts2` is Unix timestamp, the timestamps which is larger than the difference between current time and KEEP in config is only allowed. For further detail, refer to [TDengine SQL insert timestamp section](../../../taos-sql/insert). +`ts1` and `ts2` are Unix timestamps. Only timestamps later than the current time minus the configured KEEP duration are allowed. For further detail, refer to the [TDengine SQL insert timestamp section](../../../reference/taos-sql/insert). ### Insert into Multiple Tables @@ -53,9 +53,9 @@ Data can be inserted into multiple tables in the same SQL statement.
The example INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31) (ts2, 12.6, 218, 0.33) d1002 VALUES (ts3, 12.3, 221, 0.31); ``` -`ts1`, `ts2` and `ts3` is Unix timestamp, the timestamps which is larger than the difference between current time and KEEP in config is only allowed. For further detail, refer to [TDengine SQL insert timestamp section](../../../taos-sql/insert). +`ts1`, `ts2` and `ts3` are Unix timestamps. Only timestamps later than the current time minus the configured KEEP duration are allowed. For further detail, refer to the [TDengine SQL insert timestamp section](../../../reference/taos-sql/insert). -For more details about `INSERT` please refer to [INSERT](../../../taos-sql/insert). +For more details about `INSERT`, please refer to [INSERT](../../../reference/taos-sql/insert). :::info diff --git a/docs/en/07-develop/03-insert-data/20-kafka-writting.mdx b/docs/en/08-develop/03-insert-data/20-kafka-writting.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/20-kafka-writting.mdx rename to docs/en/08-develop/03-insert-data/20-kafka-writting.mdx diff --git a/docs/en/07-develop/03-insert-data/30-influxdb-line.mdx b/docs/en/08-develop/03-insert-data/30-influxdb-line.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/30-influxdb-line.mdx rename to docs/en/08-develop/03-insert-data/30-influxdb-line.mdx diff --git a/docs/en/07-develop/03-insert-data/40-opentsdb-telnet.mdx b/docs/en/08-develop/03-insert-data/40-opentsdb-telnet.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/40-opentsdb-telnet.mdx rename to docs/en/08-develop/03-insert-data/40-opentsdb-telnet.mdx diff --git a/docs/en/07-develop/03-insert-data/50-opentsdb-json.mdx b/docs/en/08-develop/03-insert-data/50-opentsdb-json.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/50-opentsdb-json.mdx rename to docs/en/08-develop/03-insert-data/50-opentsdb-json.mdx diff --git
a/docs/en/07-develop/03-insert-data/60-high-volume.md b/docs/en/08-develop/03-insert-data/60-high-volume.md similarity index 99% rename from docs/en/07-develop/03-insert-data/60-high-volume.md rename to docs/en/08-develop/03-insert-data/60-high-volume.md index 8e9a788d22..6b95f94cd0 100644 --- a/docs/en/07-develop/03-insert-data/60-high-volume.md +++ b/docs/en/08-develop/03-insert-data/60-high-volume.md @@ -49,7 +49,7 @@ If the data source is Kafka, then the application program is a consumer of Kafka On the server side, database configuration parameter `vgroups` needs to be set carefully to maximize the system performance. If it's set too low, the system capability can't be utilized fully; if it's set too big, unnecessary resource competition may be produced. A normal recommendation for `vgroups` parameter is 2 times of the number of CPU cores. However, depending on the actual system resources, it may still need to tuned. -For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config). +For more configuration parameters, please refer to [Database Configuration](../../../reference/taos-sql/database) and [Server Configuration](../../../reference/config). 
## Sample Programs diff --git a/docs/en/07-develop/03-insert-data/_c_line.mdx b/docs/en/08-develop/03-insert-data/_c_line.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_c_line.mdx rename to docs/en/08-develop/03-insert-data/_c_line.mdx diff --git a/docs/en/07-develop/03-insert-data/_c_opts_json.mdx b/docs/en/08-develop/03-insert-data/_c_opts_json.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_c_opts_json.mdx rename to docs/en/08-develop/03-insert-data/_c_opts_json.mdx diff --git a/docs/en/07-develop/03-insert-data/_c_opts_telnet.mdx b/docs/en/08-develop/03-insert-data/_c_opts_telnet.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_c_opts_telnet.mdx rename to docs/en/08-develop/03-insert-data/_c_opts_telnet.mdx diff --git a/docs/en/07-develop/03-insert-data/_c_sql.mdx b/docs/en/08-develop/03-insert-data/_c_sql.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_c_sql.mdx rename to docs/en/08-develop/03-insert-data/_c_sql.mdx diff --git a/docs/en/07-develop/03-insert-data/_c_stmt.mdx b/docs/en/08-develop/03-insert-data/_c_stmt.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_c_stmt.mdx rename to docs/en/08-develop/03-insert-data/_c_stmt.mdx diff --git a/docs/en/07-develop/03-insert-data/_cs_line.mdx b/docs/en/08-develop/03-insert-data/_cs_line.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_cs_line.mdx rename to docs/en/08-develop/03-insert-data/_cs_line.mdx diff --git a/docs/en/07-develop/03-insert-data/_cs_opts_json.mdx b/docs/en/08-develop/03-insert-data/_cs_opts_json.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_cs_opts_json.mdx rename to docs/en/08-develop/03-insert-data/_cs_opts_json.mdx diff --git a/docs/en/07-develop/03-insert-data/_cs_opts_telnet.mdx b/docs/en/08-develop/03-insert-data/_cs_opts_telnet.mdx similarity index 100% rename from 
docs/en/07-develop/03-insert-data/_cs_opts_telnet.mdx rename to docs/en/08-develop/03-insert-data/_cs_opts_telnet.mdx diff --git a/docs/en/07-develop/03-insert-data/_cs_sql.mdx b/docs/en/08-develop/03-insert-data/_cs_sql.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_cs_sql.mdx rename to docs/en/08-develop/03-insert-data/_cs_sql.mdx diff --git a/docs/en/07-develop/03-insert-data/_cs_stmt.mdx b/docs/en/08-develop/03-insert-data/_cs_stmt.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_cs_stmt.mdx rename to docs/en/08-develop/03-insert-data/_cs_stmt.mdx diff --git a/docs/en/07-develop/03-insert-data/_go_line.mdx b/docs/en/08-develop/03-insert-data/_go_line.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_go_line.mdx rename to docs/en/08-develop/03-insert-data/_go_line.mdx diff --git a/docs/en/07-develop/03-insert-data/_go_opts_json.mdx b/docs/en/08-develop/03-insert-data/_go_opts_json.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_go_opts_json.mdx rename to docs/en/08-develop/03-insert-data/_go_opts_json.mdx diff --git a/docs/en/07-develop/03-insert-data/_go_opts_telnet.mdx b/docs/en/08-develop/03-insert-data/_go_opts_telnet.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_go_opts_telnet.mdx rename to docs/en/08-develop/03-insert-data/_go_opts_telnet.mdx diff --git a/docs/en/07-develop/03-insert-data/_go_sql.mdx b/docs/en/08-develop/03-insert-data/_go_sql.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_go_sql.mdx rename to docs/en/08-develop/03-insert-data/_go_sql.mdx diff --git a/docs/en/07-develop/03-insert-data/_go_stmt.mdx b/docs/en/08-develop/03-insert-data/_go_stmt.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_go_stmt.mdx rename to docs/en/08-develop/03-insert-data/_go_stmt.mdx diff --git a/docs/en/07-develop/03-insert-data/_java_line.mdx b/docs/en/08-develop/03-insert-data/_java_line.mdx 
similarity index 100% rename from docs/en/07-develop/03-insert-data/_java_line.mdx rename to docs/en/08-develop/03-insert-data/_java_line.mdx diff --git a/docs/en/07-develop/03-insert-data/_java_opts_json.mdx b/docs/en/08-develop/03-insert-data/_java_opts_json.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_java_opts_json.mdx rename to docs/en/08-develop/03-insert-data/_java_opts_json.mdx diff --git a/docs/en/07-develop/03-insert-data/_java_opts_telnet.mdx b/docs/en/08-develop/03-insert-data/_java_opts_telnet.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_java_opts_telnet.mdx rename to docs/en/08-develop/03-insert-data/_java_opts_telnet.mdx diff --git a/docs/en/07-develop/03-insert-data/_java_sql.mdx b/docs/en/08-develop/03-insert-data/_java_sql.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_java_sql.mdx rename to docs/en/08-develop/03-insert-data/_java_sql.mdx diff --git a/docs/en/07-develop/03-insert-data/_java_stmt.mdx b/docs/en/08-develop/03-insert-data/_java_stmt.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_java_stmt.mdx rename to docs/en/08-develop/03-insert-data/_java_stmt.mdx diff --git a/docs/en/07-develop/03-insert-data/_js_line.mdx b/docs/en/08-develop/03-insert-data/_js_line.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_js_line.mdx rename to docs/en/08-develop/03-insert-data/_js_line.mdx diff --git a/docs/en/07-develop/03-insert-data/_js_opts_json.mdx b/docs/en/08-develop/03-insert-data/_js_opts_json.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_js_opts_json.mdx rename to docs/en/08-develop/03-insert-data/_js_opts_json.mdx diff --git a/docs/en/07-develop/03-insert-data/_js_opts_telnet.mdx b/docs/en/08-develop/03-insert-data/_js_opts_telnet.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_js_opts_telnet.mdx rename to docs/en/08-develop/03-insert-data/_js_opts_telnet.mdx diff 
--git a/docs/en/07-develop/03-insert-data/_js_sql.mdx b/docs/en/08-develop/03-insert-data/_js_sql.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_js_sql.mdx rename to docs/en/08-develop/03-insert-data/_js_sql.mdx diff --git a/docs/en/07-develop/03-insert-data/_js_stmt.mdx b/docs/en/08-develop/03-insert-data/_js_stmt.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_js_stmt.mdx rename to docs/en/08-develop/03-insert-data/_js_stmt.mdx diff --git a/docs/en/07-develop/03-insert-data/_php_sql.mdx b/docs/en/08-develop/03-insert-data/_php_sql.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_php_sql.mdx rename to docs/en/08-develop/03-insert-data/_php_sql.mdx diff --git a/docs/en/07-develop/03-insert-data/_php_stmt.mdx b/docs/en/08-develop/03-insert-data/_php_stmt.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_php_stmt.mdx rename to docs/en/08-develop/03-insert-data/_php_stmt.mdx diff --git a/docs/en/07-develop/03-insert-data/_py_kafka.mdx b/docs/en/08-develop/03-insert-data/_py_kafka.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_py_kafka.mdx rename to docs/en/08-develop/03-insert-data/_py_kafka.mdx diff --git a/docs/en/07-develop/03-insert-data/_py_line.mdx b/docs/en/08-develop/03-insert-data/_py_line.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_py_line.mdx rename to docs/en/08-develop/03-insert-data/_py_line.mdx diff --git a/docs/en/07-develop/03-insert-data/_py_opts_json.mdx b/docs/en/08-develop/03-insert-data/_py_opts_json.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_py_opts_json.mdx rename to docs/en/08-develop/03-insert-data/_py_opts_json.mdx diff --git a/docs/en/07-develop/03-insert-data/_py_opts_telnet.mdx b/docs/en/08-develop/03-insert-data/_py_opts_telnet.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_py_opts_telnet.mdx rename to 
docs/en/08-develop/03-insert-data/_py_opts_telnet.mdx diff --git a/docs/en/07-develop/03-insert-data/_py_sql.mdx b/docs/en/08-develop/03-insert-data/_py_sql.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_py_sql.mdx rename to docs/en/08-develop/03-insert-data/_py_sql.mdx diff --git a/docs/en/07-develop/03-insert-data/_py_stmt.mdx b/docs/en/08-develop/03-insert-data/_py_stmt.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_py_stmt.mdx rename to docs/en/08-develop/03-insert-data/_py_stmt.mdx diff --git a/docs/en/07-develop/03-insert-data/_rust_line.mdx b/docs/en/08-develop/03-insert-data/_rust_line.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_rust_line.mdx rename to docs/en/08-develop/03-insert-data/_rust_line.mdx diff --git a/docs/en/07-develop/03-insert-data/_rust_opts_json.mdx b/docs/en/08-develop/03-insert-data/_rust_opts_json.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_rust_opts_json.mdx rename to docs/en/08-develop/03-insert-data/_rust_opts_json.mdx diff --git a/docs/en/07-develop/03-insert-data/_rust_opts_telnet.mdx b/docs/en/08-develop/03-insert-data/_rust_opts_telnet.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_rust_opts_telnet.mdx rename to docs/en/08-develop/03-insert-data/_rust_opts_telnet.mdx diff --git a/docs/en/07-develop/03-insert-data/_rust_schemaless.mdx b/docs/en/08-develop/03-insert-data/_rust_schemaless.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_rust_schemaless.mdx rename to docs/en/08-develop/03-insert-data/_rust_schemaless.mdx diff --git a/docs/en/07-develop/03-insert-data/_rust_sql.mdx b/docs/en/08-develop/03-insert-data/_rust_sql.mdx similarity index 100% rename from docs/en/07-develop/03-insert-data/_rust_sql.mdx rename to docs/en/08-develop/03-insert-data/_rust_sql.mdx diff --git a/docs/en/07-develop/03-insert-data/_rust_stmt.mdx b/docs/en/08-develop/03-insert-data/_rust_stmt.mdx 
similarity index 100% rename from docs/en/07-develop/03-insert-data/_rust_stmt.mdx rename to docs/en/08-develop/03-insert-data/_rust_stmt.mdx diff --git a/docs/en/07-develop/03-insert-data/highvolume.webp b/docs/en/08-develop/03-insert-data/highvolume.webp similarity index 100% rename from docs/en/07-develop/03-insert-data/highvolume.webp rename to docs/en/08-develop/03-insert-data/highvolume.webp diff --git a/docs/en/07-develop/03-insert-data/index.md b/docs/en/08-develop/03-insert-data/index.md similarity index 100% rename from docs/en/07-develop/03-insert-data/index.md rename to docs/en/08-develop/03-insert-data/index.md diff --git a/docs/en/07-develop/04-query-data/_c.mdx b/docs/en/08-develop/04-query-data/_c.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_c.mdx rename to docs/en/08-develop/04-query-data/_c.mdx diff --git a/docs/en/07-develop/04-query-data/_c_async.mdx b/docs/en/08-develop/04-query-data/_c_async.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_c_async.mdx rename to docs/en/08-develop/04-query-data/_c_async.mdx diff --git a/docs/en/07-develop/04-query-data/_cs.mdx b/docs/en/08-develop/04-query-data/_cs.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_cs.mdx rename to docs/en/08-develop/04-query-data/_cs.mdx diff --git a/docs/en/07-develop/04-query-data/_go.mdx b/docs/en/08-develop/04-query-data/_go.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_go.mdx rename to docs/en/08-develop/04-query-data/_go.mdx diff --git a/docs/en/07-develop/04-query-data/_go_async.mdx b/docs/en/08-develop/04-query-data/_go_async.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_go_async.mdx rename to docs/en/08-develop/04-query-data/_go_async.mdx diff --git a/docs/en/07-develop/04-query-data/_java.mdx b/docs/en/08-develop/04-query-data/_java.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_java.mdx rename to 
docs/en/08-develop/04-query-data/_java.mdx diff --git a/docs/en/07-develop/04-query-data/_js.mdx b/docs/en/08-develop/04-query-data/_js.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_js.mdx rename to docs/en/08-develop/04-query-data/_js.mdx diff --git a/docs/en/07-develop/04-query-data/_js_async.mdx b/docs/en/08-develop/04-query-data/_js_async.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_js_async.mdx rename to docs/en/08-develop/04-query-data/_js_async.mdx diff --git a/docs/en/07-develop/04-query-data/_php.mdx b/docs/en/08-develop/04-query-data/_php.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_php.mdx rename to docs/en/08-develop/04-query-data/_php.mdx diff --git a/docs/en/07-develop/04-query-data/_py.mdx b/docs/en/08-develop/04-query-data/_py.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_py.mdx rename to docs/en/08-develop/04-query-data/_py.mdx diff --git a/docs/en/07-develop/04-query-data/_py_async.mdx b/docs/en/08-develop/04-query-data/_py_async.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_py_async.mdx rename to docs/en/08-develop/04-query-data/_py_async.mdx diff --git a/docs/en/07-develop/04-query-data/_rust.mdx b/docs/en/08-develop/04-query-data/_rust.mdx similarity index 100% rename from docs/en/07-develop/04-query-data/_rust.mdx rename to docs/en/08-develop/04-query-data/_rust.mdx diff --git a/docs/en/07-develop/04-query-data/index.mdx b/docs/en/08-develop/04-query-data/index.mdx similarity index 96% rename from docs/en/07-develop/04-query-data/index.mdx rename to docs/en/08-develop/04-query-data/index.mdx index 164ec0d0a6..7bfb2d7d88 100644 --- a/docs/en/07-develop/04-query-data/index.mdx +++ b/docs/en/08-develop/04-query-data/index.mdx @@ -42,7 +42,7 @@ Query OK, 2 row(s) in set (0.001100s) To meet the requirements of varied use cases, some special functions have been added in TDengine. 
Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row). -For detailed query syntax, see [Select](../../taos-sql/select). +For detailed query syntax, see [Select](../../reference/taos-sql/select). ## Aggregation among Tables @@ -73,7 +73,7 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2; Query OK, 1 row(s) in set (0.002136s) ``` -In [Select](../../taos-sql/select), all query operations are marked as to whether they support STables or not. +In [Select](../../reference/taos-sql/select), all query operations are marked as to whether they support STables or not. ## Down Sampling and Interpolation @@ -121,7 +121,7 @@ In many use cases, it's hard to align the timestamp of the data collected by eac Interpolation can be performed in TDengine if there is no data in a time range. -For more information, see [Aggregate by Window](../../taos-sql/distinguished). +For more information, see [Aggregate by Window](../../reference/taos-sql/distinguished). ## Examples diff --git a/docs/en/07-develop/06-stream.md b/docs/en/08-develop/06-stream.md similarity index 97% rename from docs/en/07-develop/06-stream.md rename to docs/en/08-develop/06-stream.md index 59a6b815cf..fe3c859c37 100644 --- a/docs/en/07-develop/06-stream.md +++ b/docs/en/08-develop/06-stream.md @@ -12,7 +12,7 @@ The stream processing engine includes data filtering, scalar function computatio TDengine stream processing supports the aggregation of supertables that are deployed across multiple vnodes. It can also handle out-of-order writes and includes a watermark mechanism that determines the extent to which out-of-order data is accepted by the system. You can configure whether to drop or reprocess out-of-order data through the **ignore expired** parameter. -For more information, see [Stream Processing](../../taos-sql/stream). +For more information, see [Stream Processing](../../reference/taos-sql/stream). 
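As a hedged illustration of the watermark and expired-data handling described above, the sketch below writes out a stream definition; the stream name (`avg_stream`), target table (`avg_current`), and source supertable (`meters`) are illustrative placeholders, and the exact option syntax is covered in the linked Stream Processing reference.

```shell
# Sketch only: avg_stream, avg_current, and meters are illustrative names.
cat > create_stream.sql <<'EOF'
CREATE STREAM avg_stream TRIGGER AT_ONCE WATERMARK 5s IGNORE EXPIRED 1
  INTO avg_current AS
  SELECT _wstart, AVG(current) FROM meters INTERVAL(10s);
EOF
# On a host with a running TDengine instance, the script could be applied with:
# taos -f create_stream.sql
cat create_stream.sql
```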
## Create a Stream @@ -26,7 +26,7 @@ stream_options: { } ``` -For more information, see [Stream Processing](../../taos-sql/stream). +For more information, see [Stream Processing](../../reference/taos-sql/stream). ## Usage Scenario 1 diff --git a/docs/en/07-develop/07-tmq.mdx b/docs/en/08-develop/07-tmq.mdx similarity index 100% rename from docs/en/07-develop/07-tmq.mdx rename to docs/en/08-develop/07-tmq.mdx diff --git a/docs/en/07-develop/08-cache.md b/docs/en/08-develop/08-cache.md similarity index 100% rename from docs/en/07-develop/08-cache.md rename to docs/en/08-develop/08-cache.md diff --git a/docs/en/07-develop/09-udf.md b/docs/en/08-develop/09-udf.md similarity index 99% rename from docs/en/07-develop/09-udf.md rename to docs/en/08-develop/09-udf.md index dd17b7d844..4105a89bcf 100644 --- a/docs/en/07-develop/09-udf.md +++ b/docs/en/08-develop/09-udf.md @@ -887,4 +887,4 @@ The `pycumsum` function finds the cumulative sum for all data in the input colum ## Manage and Use UDF -You need to add UDF to TDengine before using it in SQL queries. For more information about how to manage UDF and how to invoke UDF, please see [Manage and Use UDF](../../taos-sql/udf/). +You need to add UDF to TDengine before using it in SQL queries. For more information about how to manage UDF and how to invoke UDF, please see [Manage and Use UDF](../../reference/taos-sql/udf/). 
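To make that registration step concrete, here is a hedged sketch that prepares the SQL for adding and invoking a scalar UDF; the function name `bit_and`, the library path, and the table name are placeholders to replace with your own.

```shell
# Sketch only: the function name, library path, and table are placeholders.
cat > create_udf.sql <<'EOF'
CREATE FUNCTION bit_and AS '/usr/local/lib/libbitand.so' OUTPUTTYPE INT;
SELECT bit_and(c1, c2) FROM t1;
EOF
# On a host with a running TDengine instance, the script could be applied with:
# taos -f create_udf.sql
cat create_udf.sql
```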
diff --git a/docs/en/07-develop/_sub_c.mdx b/docs/en/08-develop/_sub_c.mdx similarity index 100% rename from docs/en/07-develop/_sub_c.mdx rename to docs/en/08-develop/_sub_c.mdx diff --git a/docs/en/07-develop/_sub_cs.mdx b/docs/en/08-develop/_sub_cs.mdx similarity index 100% rename from docs/en/07-develop/_sub_cs.mdx rename to docs/en/08-develop/_sub_cs.mdx diff --git a/docs/en/07-develop/_sub_go.mdx b/docs/en/08-develop/_sub_go.mdx similarity index 100% rename from docs/en/07-develop/_sub_go.mdx rename to docs/en/08-develop/_sub_go.mdx diff --git a/docs/en/07-develop/_sub_java.mdx b/docs/en/08-develop/_sub_java.mdx similarity index 100% rename from docs/en/07-develop/_sub_java.mdx rename to docs/en/08-develop/_sub_java.mdx diff --git a/docs/en/07-develop/_sub_java_ws.mdx b/docs/en/08-develop/_sub_java_ws.mdx similarity index 100% rename from docs/en/07-develop/_sub_java_ws.mdx rename to docs/en/08-develop/_sub_java_ws.mdx diff --git a/docs/en/07-develop/_sub_node.mdx b/docs/en/08-develop/_sub_node.mdx similarity index 100% rename from docs/en/07-develop/_sub_node.mdx rename to docs/en/08-develop/_sub_node.mdx diff --git a/docs/en/07-develop/_sub_python.mdx b/docs/en/08-develop/_sub_python.mdx similarity index 100% rename from docs/en/07-develop/_sub_python.mdx rename to docs/en/08-develop/_sub_python.mdx diff --git a/docs/en/07-develop/_sub_rust.mdx b/docs/en/08-develop/_sub_rust.mdx similarity index 100% rename from docs/en/07-develop/_sub_rust.mdx rename to docs/en/08-develop/_sub_rust.mdx diff --git a/docs/en/07-develop/index.md b/docs/en/08-develop/index.md similarity index 87% rename from docs/en/07-develop/index.md rename to docs/en/08-develop/index.md index da020d53d5..7c922933dd 100644 --- a/docs/en/07-develop/index.md +++ b/docs/en/08-develop/index.md @@ -14,7 +14,7 @@ Before creating an application to process time-series data with TDengine, consid 7. 
In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately. 8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem. -This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](../taos-sql/). For a more in-depth understanding of the use of each client library, please read the [Client Library Reference Guide](../client-libraries/). If you also want to integrate TDengine with third-party systems, such as Grafana, please refer to the [third-party tools](../third-party/). +This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](../reference/taos-sql/). For a more in-depth understanding of the use of each client library, please read the [Client Library Reference Guide](../reference/connectors). If you also want to integrate TDengine with third-party systems, such as Grafana, please refer to the [third-party tools](../third-party/). If you encounter any problems during the development process, please click ["Submit an issue"](https://github.com/taosdata/TDengine/issues/new/choose) at the bottom of each page and submit it on GitHub right away. 
diff --git a/docs/en/10-deployment/01-deploy.md b/docs/en/10-deployment/01-deploy.md deleted file mode 100644 index c6f0f5a3a3..0000000000 --- a/docs/en/10-deployment/01-deploy.md +++ /dev/null @@ -1,189 +0,0 @@ ---- -title: Manual Deployment and Management -sidebar_label: Manual Deployment -description: This document describes how to deploy TDengine on a server. ---- - -## Prerequisites - -### Step 0 - -The FQDN of all hosts must be set up properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host; for example, you can verify this by using the `ping` command. If you have a DNS server on your network, contact your network administrator for assistance. - -### Step 1 - -If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. To clean up the data, please use `rm -rf /var/lib/taos/*` assuming the `dataDir` is configured as `/var/lib/taos`. - -:::note -FQDN information is written to file. If you have started TDengine without configuring or changing the FQDN, ensure that data is backed up or no longer needed before running the `rm -rf /var/lib/taos/*` command. -::: - -:::note -- The host where the client program runs also needs to be configured properly for FQDN, to make sure all client and server hosts can be accessed from any other host. In other words, the hosts where clients run are also considered part of the cluster. -::: - -### Step 2 - -- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on any host in the cluster. - -### Step 3 - -Now it's time to install TDengine on all hosts, but without starting `taosd`. Note that the version must be the same on all hosts. If you are prompted to input an existing TDengine cluster, simply press Enter to ignore the prompt.
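As a small sketch of how the pieces above fit together (assuming `hostname -f` returns the FQDN configured in Step 0 and the default server port 6030), each host's dnode end point can be derived like this:

```shell
# Derive this host's dnode end point (FQDN:port) for use in later steps.
# Assumes `hostname -f` returns the FQDN configured in Step 0.
FQDN=$(hostname -f)
PORT=6030
ENDPOINT="${FQDN}:${PORT}"
echo "dnode end point: ${ENDPOINT}"
```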
- -### Step 4 - -Now each physical node (referred to hereinafter as `dnode`, an abbreviation for "data node") of TDengine needs to be configured properly. - -To get the hostname of any host, execute the command `hostname -f`. - -The `ping <FQDN>` command can be executed on each host to check whether every other host is accessible from it. If any host is not accessible, the network configuration, such as /etc/hosts or the DNS configuration, needs to be checked and revised to make any two hosts accessible to each other. Hosts that are not accessible to each other cannot form a cluster. - -On the physical machine running the application, ping the dnode that is running taosd. If the dnode is not accessible, the application cannot connect to taosd. In this case, verify the DNS and hosts settings on the physical node running the application. - -The end point of each dnode is its hostname and port, such as h1.tdengine.com:6030. - -### Step 5 - -Modify the TDengine configuration file `/etc/taos/taos.cfg` on each node. Assuming the first dnode of the TDengine cluster is "h1.tdengine.com:6030", its `taos.cfg` is configured as follows. - -```text -# firstEp is the end point to connect to when any dnode starts -firstEp h1.tdengine.com:6030 - -# must be configured to the FQDN of the host where the dnode is launched -fqdn h1.tdengine.com - -# the port used by the dnode, default is 6030 -serverPort 6030 - -``` - -`firstEp` and `fqdn` must be configured properly. In the `taos.cfg` of every dnode in the TDengine cluster, `firstEp` must point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. Retain the default values for other parameters. - -For all the dnodes in a TDengine cluster, the parameters below must be configured identically; any node whose configuration differs from the dnodes already in the cluster cannot join the cluster.
- -| **#** | **Parameter** | **Definition** | -| ----- | ---------------- | ----------------------------------------------------------------------------- | -| 1 | statusInterval | The interval at which each dnode reports its status to the mnode | -| 2 | timezone | Timezone | -| 3 | locale | System region and encoding | -| 4 | charset | Character set | -| 5 | ttlChangeOnWrite | Whether the TTL expiration time changes with table modification operations | - -## Start Cluster - -The first dnode can be started following the instructions in [Get Started](../../get-started/). Then the TDengine CLI `taos` can be launched to execute the command `show dnodes`; example output is shown below: - -``` -taos> show dnodes; -id | endpoint | vnodes | support_vnodes | status | create_time | note | -============================================================================================================================================ -1 | h1.tdengine.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | | -Query OK, 1 rows affected (0.007984s) - - -``` - -The above output shows that the end point of the started dnode is "h1.tdengine.com:6030", which is the `firstEp` of the cluster. - -## Add DNODE - -There are a few steps necessary to add other dnodes to the cluster. - -First, start `taosd` on the new dnode as instructed in [Get Started](../../get-started/). - -Then, on the first dnode, i.e. h1.tdengine.com in our example, use the TDengine CLI `taos` to execute the following command: - -```sql -CREATE DNODE "h2.taos.com:6030"; -``` - -This adds the end point of the new dnode (from Step 4) into the end point list of the cluster. In the command, "fqdn:port" must be enclosed in double quotes. Change `"h2.taos.com:6030"` to the end point of your new dnode. - -Then, on the first dnode h1.tdengine.com, execute `show dnodes` in `taos` - -```sql -SHOW DNODES; -``` - -to check whether the second dnode has been added to the cluster successfully.
If the status of the newly added dnode is offline, please check: - -- Whether the `taosd` process is running properly -- The log file `taosdlog.0`, to see whether the FQDN and port are correct, and add the correct end point if not. - -The above process can be repeated to add more dnodes to the cluster. - -:::tip - -Any node that is in the cluster and online can be the firstEp of new nodes. -Nodes use the firstEp parameter only when joining a cluster for the first time. After a node has joined the cluster, it stores the latest mnode in its end point list and no longer makes use of firstEp. - -However, firstEp is used by clients that connect to the cluster. For example, if you run the TDengine CLI `taos` without arguments, it connects to the firstEp by default. - -Two dnodes that are launched without a firstEp value operate independently of each other. It is not possible to add one dnode to the other dnode and form a cluster. It is also not possible to merge two independent clusters into a new cluster. - -::: - -## Show DNODEs - -The following command can be executed in the TDengine CLI `taos` - -```sql -SHOW DNODES; -``` - -to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, and so on. We recommend executing this command after adding or removing a dnode. - -Below is example output of this command. - -``` -taos> show dnodes; - id | endpoint | vnodes | support_vnodes | status | create_time | note | -============================================================================================================================================ - 1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | | -Query OK, 1 rows affected (0.006684s) -``` - -## Show VGROUPs - -To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes.
These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used by a single database, but one database can have multiple vnodes. The allocation of vnodes is scheduled automatically by the mnode based on the system resources of the dnodes. - -Launch the TDengine CLI `taos` and execute the command below: - -```sql -USE SOME_DATABASE; -SHOW VGROUPS; -``` - -Example output is shown below: - -``` -taos> use db; -Database changed. - -taos> show vgroups; - vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | status | nfiles | file_size | tsma | -================================================================================================================================================================================================ - 2 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 | - 3 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 | - 4 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 | -Query OK, 8 row(s) in set (0.001154s) -``` - -## Drop DNODE - -To remove a dnode from the cluster, run the following command in the TDengine CLI: - -```sql -DROP DNODE dnodeId; -``` - -You can get `dnodeId` from the output of `show dnodes`. - -:::warning - -- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs. -- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
-- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept requests from the dropped dnode. -- dnodeIDs are allocated automatically and can't be manually modified. dnodeIDs are generated in ascending order without duplication. - -::: diff --git a/docs/en/10-deployment/02-docker.md b/docs/en/10-deployment/02-docker.md deleted file mode 100644 index 2c281ec408..0000000000 --- a/docs/en/10-deployment/02-docker.md +++ /dev/null @@ -1,498 +0,0 @@ ---- -title: Deploying TDengine with Docker -sidebar_label: Docker -description: This chapter describes how to start and access TDengine in a Docker container. ---- - -This chapter describes how to start the TDengine service in a container and access it. Users can control the behavior of the service in the container by using environment variables on the docker run command line or in the docker-compose file. - -## Starting TDengine - -The TDengine image starts with the HTTP service activated by default. Start a container using the following command: - -```shell -docker run -d --name tdengine \ --v ~/data/taos/dnode/data:/var/lib/taos \ --v ~/data/taos/dnode/log:/var/log/taos \ --p 6041:6041 tdengine/tdengine -``` -:::note - -* /var/lib/taos: TDengine's default data file directory. The location can be changed via the configuration file. You can also change ~/data/taos/dnode/data to any other empty local data directory -* /var/log/taos: TDengine's default log file directory. The location can be changed via the configuration file. You can also change ~/data/taos/dnode/log to any other empty local log directory - - -::: - -The above command starts a container named "tdengine" and maps the HTTP service port 6041 to the host port 6041. You can verify that the HTTP service provided in this container is available using the following command.
- -```shell -curl -u root:taosdata -d "show databases" localhost:6041/rest/sql -``` - -The TDengine client taos can be executed in this container to access TDengine using the following command. - -```shell -$ docker exec -it tdengine taos - -taos> show databases; - name | -================================= - information_schema | - performance_schema | -Query OK, 2 row(s) in set (0.002843s) -``` - -The TDengine server running in the container uses the container's hostname to establish a connection. Using TDengine CLI or various client libraries (such as JDBC-JNI) to access the TDengine inside the container from outside the container is more complicated. So the above is the simplest way to access the TDengine service in the container and is suitable for some simple scenarios. Please refer to the next section if you want to access the TDengine service in the container from outside the container using TDengine CLI or various client libraries for complex scenarios. - -## Start TDengine on the host network - -```shell -docker run -d --name tdengine --network host tdengine/tdengine -``` - -The above command starts TDengine on the host network and uses the host's FQDN to establish a connection instead of the container's hostname. It is the equivalent of using `systemctl` to start TDengine on the host. If the TDengine client is already installed on the host, you can access it directly with the following command. 
- -```shell -$ taos - -taos> show dnodes; - id | end_point | vnodes | cores | status | role | create_time | offline reason | -====================================================================================================================================== - 1 | myhost:6030 | 1 | 8 | ready | any | 2022-01-17 22:10:32.619 | | -Query OK, 1 row(s) in set (0.003233s) -``` - -## Start TDengine with the specified hostname and port - -The `TAOS_FQDN` environment variable or the `fqdn` configuration item in `taos.cfg` allows TDengine to establish a connection at the specified hostname. This approach provides greater flexibility for deployment. - -```shell -docker run -d \ - --name tdengine \ - -e TAOS_FQDN=tdengine \ - -p 6030-6049:6030-6049 \ - -p 6030-6049:6030-6049/udp \ - tdengine/tdengine -``` - -The above command starts a TDengine service in the container, which listens on the hostname tdengine, and maps the container's port range 6030 to 6049 to the host's port range 6030 to 6049 (both TCP and UDP ports need to be mapped). If this port range is already occupied on the host, you can modify the above command to specify a free port range on the host. If `rpcForceTcp` is set to `1`, you can map only the TCP protocol. - -Next, ensure the hostname "tdengine" is resolvable in `/etc/hosts`. - -```shell -echo 127.0.0.1 tdengine | sudo tee -a /etc/hosts -``` - -Finally, the TDengine service can be accessed from the TDengine CLI or any client library with "tdengine" as the server address. - -```shell -taos -h tdengine -P 6030 -``` - -If you set `TAOS_FQDN` to the host's own hostname, the effect is the same as in "Start TDengine on the host network". - -## Start TDengine on the specified network - -You can also start TDengine on a specific network. Perform the following steps: - -1. First, create a docker network named `td-net` - - ```shell - docker network create td-net - ``` - -2.
Start TDengine - - Start the TDengine service on the `td-net` network with the following command: - - ```shell - docker run -d --name tdengine --network td-net \ - -e TAOS_FQDN=tdengine \ - tdengine/tdengine - ``` - -3. Start the TDengine client in another container on the same network - - ```shell - docker run --rm -it --network td-net -e TAOS_FIRST_EP=tdengine tdengine/tdengine taos - # or - #docker run --rm -it --network td-net tdengine/tdengine taos -h tdengine - ``` - -## Launching a client application in a container - -If you want to start your application in a container, you need to add the corresponding dependencies on TDengine to the image as well, e.g. - -```docker -FROM ubuntu:20.04 -RUN apt-get update && apt-get install -y wget -ENV TDENGINE_VERSION=3.0.0.0 -RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \ - && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \ - && cd TDengine-client-${TDENGINE_VERSION} \ - && ./install_client.sh \ - && cd ../ \ - && rm -rf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz TDengine-client-${TDENGINE_VERSION} -## add your application next, eg. go, build it in builder stage, copy the binary to the runtime -#COPY --from=builder /path/to/build/app /usr/bin/ -#CMD ["app"] -``` - -Here is an example Go program: - -```go -/* - * In this test program, we'll create a database and insert 4 records then select out.
- */ -package main - -import ( - "database/sql" - "flag" - "fmt" - "time" - - _ "github.com/taosdata/driver-go/v3/taosSql" -) - -type config struct { - hostName string - serverPort string - user string - password string -} - -var configPara config -var taosDriverName = "taosSql" -var url string - -func init() { - flag.StringVar(&configPara.hostName, "h", "", "The host to connect to TDengine server.") - flag.StringVar(&configPara.serverPort, "p", "", "The TCP/IP port number to use for the connection to TDengine server.") - flag.StringVar(&configPara.user, "u", "root", "The TDengine user name to use when connecting to the server.") - flag.StringVar(&configPara.password, "P", "taosdata", "The password to use when connecting to the server.") - flag.Parse() -} - -func printAllArgs() { - fmt.Printf("============= args parse result: =============\n") - fmt.Printf("hostName: %v\n", configPara.hostName) - fmt.Printf("serverPort: %v\n", configPara.serverPort) - fmt.Printf("usr: %v\n", configPara.user) - fmt.Printf("password: %v\n", configPara.password) - fmt.Printf("================================================\n") -} - -func main() { - printAllArgs() - - url = "root:taosdata@/tcp(" + configPara.hostName + ":" + configPara.serverPort + ")/" - - taos, err := sql.Open(taosDriverName, url) - checkErr(err, "open database error") - defer taos.Close() - - taos.Exec("create database if not exists test") - taos.Exec("use test") - taos.Exec("create table if not exists tb1 (ts timestamp, a int)") - _, err = taos.Exec("insert into tb1 values(now, 0)(now+1s,1)(now+2s,2)(now+3s,3)") - checkErr(err, "failed to insert") - rows, err := taos.Query("select * from tb1") - checkErr(err, "failed to select") - - defer rows.Close() - for rows.Next() { - var r struct { - ts time.Time - a int - } - err := rows.Scan(&r.ts, &r.a) - if err != nil { - fmt.Println("scan error:\n", err) - return - } - fmt.Println(r.ts, r.a) - } -} - -func checkErr(err error, prompt string) { - if err != nil { - 
fmt.Printf("ERROR: %s\n", prompt) - panic(err) - } -} -``` - -Here is the full Dockerfile: - -```docker -FROM golang:1.17.6-buster as builder -ENV TDENGINE_VERSION=3.0.0.0 -RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \ - && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \ - && cd TDengine-client-${TDENGINE_VERSION} \ - && ./install_client.sh \ - && cd ../ \ - && rm -rf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz TDengine-client-${TDENGINE_VERSION} -WORKDIR /usr/src/app/ -ENV GOPROXY="https://goproxy.io,direct" -COPY ./main.go ./go.mod ./go.sum /usr/src/app/ -RUN go env -RUN go mod tidy -RUN go build - -FROM ubuntu:20.04 -RUN apt-get update && apt-get install -y wget -ENV TDENGINE_VERSION=3.0.0.0 -RUN wget -c https://tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \ - && tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \ - && cd TDengine-client-${TDENGINE_VERSION} \ - && ./install_client.sh \ - && cd ../ \ - && rm -rf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz TDengine-client-${TDENGINE_VERSION} - -## add your application next, eg. go, build it in builder stage, copy the binary to the runtime -COPY --from=builder /usr/src/app/app /usr/bin/ -CMD ["app"] -``` - -Now that we have `main.go`, `go.mod`, `go.sum`, and `app.dockerfile`, we can build the application and start it on the `td-net` network.
- -```shell -$ docker build -t app -f app.dockerfile . -$ docker run --rm --network td-net app -h tdengine -p 6030 -============= args parse result: ============= -hostName: tdengine -serverPort: 6030 -usr: root -password: taosdata -================================================ -2022-01-17 15:56:55.48 +0000 UTC 0 -2022-01-17 15:56:56.48 +0000 UTC 1 -2022-01-17 15:56:57.48 +0000 UTC 2 -2022-01-17 15:56:58.48 +0000 UTC 3 -2022-01-17 15:58:01.842 +0000 UTC 0 -2022-01-17 15:58:02.842 +0000 UTC 1 -2022-01-17 15:58:03.842 +0000 UTC 2 -2022-01-17 15:58:04.842 +0000 UTC 3 -2022-01-18 01:43:48.029 +0000 UTC 0 -2022-01-18 01:43:49.029 +0000 UTC 1 -2022-01-18 01:43:50.029 +0000 UTC 2 -2022-01-18 01:43:51.029 +0000 UTC 3 -``` - -## Start the TDengine cluster with docker-compose - -1. The following docker-compose file starts a TDengine cluster with three nodes. - -```yml -version: "3" -services: - td-1: - image: tdengine/tdengine:$VERSION - environment: - TAOS_FQDN: "td-1" - TAOS_FIRST_EP: "td-1" - ports: - - 6041:6041 - - 6030:6030 - volumes: - # /var/lib/taos: TDengine's default data file directory. The location can be changed via the configuration file. you can modify ~/data/taos/dnode1/data to your own data directory - - ~/data/taos/dnode1/data:/var/lib/taos - # /var/log/taos: TDengine's default log file directory. The location can be changed via the configuration file.
you can modify ~/data/taos/dnode1/log to your own log directory - - ~/data/taos/dnode1/log:/var/log/taos - td-2: - image: tdengine/tdengine:$VERSION - environment: - TAOS_FQDN: "td-2" - TAOS_FIRST_EP: "td-1" - volumes: - - ~/data/taos/dnode2/data:/var/lib/taos - - ~/data/taos/dnode2/log:/var/log/taos - td-3: - image: tdengine/tdengine:$VERSION - environment: - TAOS_FQDN: "td-3" - TAOS_FIRST_EP: "td-1" - volumes: - - ~/data/taos/dnode3/data:/var/lib/taos - - ~/data/taos/dnode3/log:/var/log/taos -``` - -:::note - -- The `VERSION` environment variable is used to set the tdengine image tag -- `TAOS_FIRST_EP` must be set on the newly created instance so that it can join the TDengine cluster; if there is a high availability requirement, `TAOS_SECOND_EP` needs to be used at the same time - - -::: - -2. Start the cluster - - ```shell - $ VERSION=3.0.0.0 docker-compose up -d - Creating network "test_default" with the default driver - Creating volume "test_taosdata-td1" with default driver - Creating volume "test_taoslog-td1" with default driver - Creating volume "test_taosdata-td2" with default driver - Creating volume "test_taoslog-td2" with default driver - Creating test_td-1_1 ... done - Creating test_arbitrator_1 ... done - Creating test_td-2_1 ... done - ``` - -3. Check the status of each node - - ```shell - $ docker-compose ps - Name Command State Ports - --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - test_arbitrator_1 /usr/bin/entrypoint.sh tar ... 
Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp - test_td-1_1 /usr/bin/entrypoint.sh taosd Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp - test_td-2_1 /usr/bin/entrypoint.sh taosd Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp - ``` - -4. Show dnodes via TDengine CLI - -```shell -$ docker-compose exec td-1 taos -s "show dnodes" - -taos> show dnodes - id | endpoint | vnodes | support_vnodes | status | create_time | note | -====================================================================================================================================== - 1 | td-1:6030 | 0 | 32 | ready | 2022-08-19 07:57:29.971 | | - 2 | td-2:6030 | 0 | 32 | ready | 2022-08-19 07:57:31.415 | | - 3 | td-3:6030 | 0 | 32 | ready | 2022-08-19 07:57:31.417 | | -Query OK, 3 rows in database (0.021262s) - -``` - -## taosAdapter - -1. taosAdapter is enabled by default in the TDengine container. If you want to disable it, specify the environment variable `TAOS_DISABLE_ADAPTER=true` at startup - -2. At the same time, for flexible deployment, taosAdapter can be started in a separate container - - ```docker - services: - # ... - adapter: - image: tdengine/tdengine:$VERSION - command: taosadapter - ``` - - Suppose you want to deploy multiple taosAdapters to improve throughput and provide high availability. In that case, the recommended configuration method uses a reverse proxy such as Nginx to offer a unified access entry. For specific configuration methods, please refer to the official documentation of Nginx. 
Here is an example: - -```yml -version: "3" - -networks: - inter: - -services: - td-1: - image: tdengine/tdengine:$VERSION - environment: - TAOS_FQDN: "td-1" - TAOS_FIRST_EP: "td-1" - volumes: - # /var/lib/taos: TDengine's default data file directory. The location can be changed via the configuration file. you can modify ~/data/taos/dnode1/data to your own data directory - - ~/data/taos/dnode1/data:/var/lib/taos - # /var/log/taos: TDengine's default log file directory. The location can be changed via the configuration file. you can modify ~/data/taos/dnode1/log to your own log directory - - ~/data/taos/dnode1/log:/var/log/taos - td-2: - image: tdengine/tdengine:$VERSION - environment: - TAOS_FQDN: "td-2" - TAOS_FIRST_EP: "td-1" - volumes: - - ~/data/taos/dnode2/data:/var/lib/taos - - ~/data/taos/dnode2/log:/var/log/taos - adapter: - image: tdengine/tdengine:$VERSION - entrypoint: "taosadapter" - networks: - - inter - environment: - TAOS_FIRST_EP: "td-1" - TAOS_SECOND_EP: "td-2" - deploy: - replicas: 4 - nginx: - image: nginx - depends_on: - - adapter - networks: - - inter - ports: - - 6041:6041 - - 6044:6044/udp - command: [ - "sh", - "-c", - "while true; - do curl -s http://adapter:6041/-/ping >/dev/null && break; - done; - printf 'server{listen 6041;location /{proxy_pass http://adapter:6041;}}' - > /etc/nginx/conf.d/rest.conf; - printf 'stream{server{listen 6044 udp;proxy_pass adapter:6044;}}' - >> /etc/nginx/nginx.conf;cat /etc/nginx/nginx.conf; - nginx -g 'daemon off;'", - ] -``` - -## Deploy with docker swarm - -If you want to deploy a container-based TDengine cluster on multiple hosts, you can use docker swarm. First, to establish a docker swarm cluster on these hosts, please refer to the official Docker documentation. - -You can reuse the docker-compose file from the previous section.
Here is the command to start TDengine with docker swarm: - -```shell -$ VERSION=3.0.0.0 docker stack deploy -c docker-compose.yml taos -Creating network taos_inter -Creating network taos_api -Creating service taos_arbitrator -Creating service taos_td-1 -Creating service taos_td-2 -Creating service taos_adapter -Creating service taos_nginx -``` - -Checking status: - -```shell -$ docker stack ps taos -ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS -79ni8temw59n taos_nginx.1 nginx:latest TM1701 Running Running about a minute ago -3e94u72msiyg taos_adapter.1 tdengine/tdengine:3.0.0.0 TM1702 Running Running 56 seconds ago -100amjkwzsc6 taos_td-2.1 tdengine/tdengine:3.0.0.0 TM1703 Running Running about a minute ago -pkjehr2vvaaa taos_td-1.1 tdengine/tdengine:3.0.0.0 TM1704 Running Running 2 minutes ago -tpzvgpsr1qkt taos_arbitrator.1 tdengine/tdengine:3.0.0.0 TM1705 Running Running 2 minutes ago -rvss3g5yg6fa taos_adapter.2 tdengine/tdengine:3.0.0.0 TM1706 Running Running 56 seconds ago -i2augxamfllf taos_adapter.3 tdengine/tdengine:3.0.0.0 TM1707 Running Running 56 seconds ago -lmjyhzccpvpg taos_adapter.4 tdengine/tdengine:3.0.0.0 TM1708 Running Running 56 seconds ago -$ docker service ls -ID NAME MODE REPLICAS IMAGE PORTS -561t4lu6nfw6 taos_adapter replicated 4/4 tdengine/tdengine:3.0.0.0 -3hk5ct3q90sm taos_arbitrator replicated 1/1 tdengine/tdengine:3.0.0.0 -d8qr52envqzu taos_nginx replicated 1/1 nginx:latest *:6041->6041/tcp, *:6044->6044/udp -2isssfvjk747 taos_td-1 replicated 1/1 tdengine/tdengine:3.0.0.0 -9pzw7u02ichv taos_td-2 replicated 1/1 tdengine/tdengine:3.0.0.0 -``` - -From the above output, you can see two dnodes, two taosAdapters, and one Nginx reverse proxy service. - -Next, we can reduce the number of taosAdapter services. 
- -```shell -$ docker service scale taos_adapter=1 -taos_adapter scaled to 1 -overall progress: 1 out of 1 tasks -1/1: running [==================================================>] -verify: Service converged - -$ docker service ls -f name=taos_adapter -ID NAME MODE REPLICAS IMAGE PORTS -561t4lu6nfw6 taos_adapter replicated 1/1 tdengine/tdengine:3.0.0.0 -``` diff --git a/docs/en/10-deployment/03-k8s.md b/docs/en/10-deployment/03-k8s.md deleted file mode 100644 index 3e5ba4c349..0000000000 --- a/docs/en/10-deployment/03-k8s.md +++ /dev/null @@ -1,544 +0,0 @@ ---- -title: Deploying a TDengine Cluster in Kubernetes -sidebar_label: Kubernetes -description: This document describes how to deploy TDengine on Kubernetes. ---- - -## Overview - -As a time series database for Cloud Native architecture design, TDengine supports Kubernetes deployment. Firstly we introduce how to use YAML files to create a highly available TDengine cluster from scratch step by step for production usage, and highlight the common operations of TDengine in Kubernetes environment. - -To meet [high availability](../../tdinternal/high-availability/) requirements, clusters need to meet the following requirements: - -- 3 or more dnodes: multiple vnodes in the same vgroup of TDengine are not allowed to be distributed in one dnode at the same time, so if you create a database with 3 replicas, the number of dnodes is greater than or equal to 3 -- 3 mnodes: mnode is responsible for the management of the entire TDengine cluster. The default number of mnode in TDengine cluster is only one. If the dnode where the mnode located is dropped, the entire cluster is unavailable. -- Database 3 replicas: The TDengine replica configuration is the database level, so 3 replicas for the database must need three dnodes in the cluster. If any one dnode is offline, does not affect the normal usage of the whole cluster. 
-**If two dnodes are offline, the cluster becomes unavailable, because the cluster can no longer complete the RAFT-based leader election.** (Enterprise edition: in a disaster recovery scenario, a dnode whose data files are damaged can be recovered by restarting that dnode.)
-
-## Prerequisites
-
-Before deploying TDengine on Kubernetes, perform the following:
-
-- This document applies to Kubernetes 1.19 and above.
-- This document uses the **kubectl** tool for installation and deployment; please install it in advance.
-- Kubernetes has been installed and deployed and can access or update the necessary container repositories or other services.
-
-You can download the configuration files in this document from [GitHub](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
-
-## Configure the service
-
-Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taosd`) for use in the next step. Then add the ports required by TDengine and record the value of the selector label `app` (in this example, `tdengine`) for use in the next step:
-
-```YAML
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: "taosd"
-  labels:
-    app: "tdengine"
-spec:
-  ports:
-    - name: tcp6030
-      protocol: "TCP"
-      port: 6030
-    - name: tcp6041
-      protocol: "TCP"
-      port: 6041
-  selector:
-    app: "tdengine"
-```
-
-## Configure the service as StatefulSet
-
-Following the Kubernetes guidelines for workload resources, we will use a StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, where `replicas` sets the number of cluster nodes to 3. The node time zone is set to Asia/Shanghai, and each node is allocated 5 Gi of `standard` storage (refer to the [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation to configure a storage class). You can adjust these values to fit your actual situation.
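The PVC template below requests the `standard` storage class, which must exist in your cluster before the manifest is applied. A minimal sketch of such a check; the helper name and the parsing of `kubectl get storageclass` output are assumptions, not part of the original instructions:

```shell
# Hypothetical helper: succeeds when the storage class named in $1 appears
# in `kubectl get storageclass --no-headers` output supplied on stdin.
has_storage_class() {
  grep -q "^$1[[:space:]]"
}

# Intended usage (assumes kubectl is configured for your cluster):
#   kubectl get storageclass --no-headers | has_storage_class standard \
#     || echo "storage class 'standard' not found"
```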
-
-Pay special attention to the startupProbe configuration. If a dnode's Pod goes down for a period of time and then restarts, the newly launched dnode Pod is temporarily unavailable. If the startupProbe window is too small, Kubernetes concludes that the Pod is in an abnormal state and keeps trying to restart it; the dnode's Pod then restarts frequently and never returns to normal status. Refer to [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
-
-```YAML
----
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: "tdengine"
-  labels:
-    app: "tdengine"
-spec:
-  serviceName: "taosd"
-  replicas: 3
-  updateStrategy:
-    type: RollingUpdate
-  selector:
-    matchLabels:
-      app: "tdengine"
-  template:
-    metadata:
-      name: "tdengine"
-      labels:
-        app: "tdengine"
-    spec:
-      containers:
-        - name: "tdengine"
-          image: "tdengine/tdengine:3.0.7.1"
-          imagePullPolicy: "IfNotPresent"
-          ports:
-            - name: tcp6030
-              protocol: "TCP"
-              containerPort: 6030
-            - name: tcp6041
-              protocol: "TCP"
-              containerPort: 6041
-          env:
-            # POD_NAME for FQDN config
-            - name: POD_NAME
-              valueFrom:
-                fieldRef:
-                  fieldPath: metadata.name
-            # SERVICE_NAME and NAMESPACE for FQDN resolution
-            - name: SERVICE_NAME
-              value: "taosd"
-            - name: STS_NAME
-              value: "tdengine"
-            - name: STS_NAMESPACE
-              valueFrom:
-                fieldRef:
-                  fieldPath: metadata.namespace
-            # TZ for timezone settings, we recommend to always set it.
-            - name: TZ
-              value: "Asia/Shanghai"
-            # Variables with the TAOS_ prefix are mapped into taos.cfg: strip the prefix and camelCase the rest.
-            - name: TAOS_SERVER_PORT
-              value: "6030"
-            # Must be set if you want a cluster.
-            - name: TAOS_FIRST_EP
-              value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
-            # TAOS_FQDN should always be set in a Kubernetes environment.
- - name: TAOS_FQDN - value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local" - volumeMounts: - - name: taosdata - mountPath: /var/lib/taos - startupProbe: - exec: - command: - - taos-check - failureThreshold: 360 - periodSeconds: 10 - readinessProbe: - exec: - command: - - taos-check - initialDelaySeconds: 5 - timeoutSeconds: 5000 - livenessProbe: - exec: - command: - - taos-check - initialDelaySeconds: 15 - periodSeconds: 20 - volumeClaimTemplates: - - metadata: - name: taosdata - spec: - accessModes: - - "ReadWriteOnce" - storageClassName: "standard" - resources: - requests: - storage: "5Gi" -``` - -## Use kubectl to deploy TDengine - -First create the corresponding namespace, and then execute the following command in sequence : - -```Bash -kubectl apply -f taosd-service.yaml -n tdengine-test -kubectl apply -f tdengine.yaml -n tdengine-test -``` - -The above configuration will generate a three-node TDengine cluster, dnode is automatically configured, you can use the **show dnodes** command to view the nodes of the current cluster: - -```Bash -kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes" -kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes" -kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes" -``` - -The output is as follows: - -```Bash -taos> show dnodes - id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code | -============================================================================================================================================================================================================================================= - 1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | | - 2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | | - 3 | tdengine-2.ta... 
| 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | | -Query OK, 3 row(s) in set (0.001853s) -``` - -View the current mnode - -```Bash -kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G" -taos> show mnodes\G -*************************** 1.row *************************** - id: 1 - endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030 - role: leader - status: ready -create_time: 2023-07-19 17:54:18.559 -reboot_time: 2023-07-19 17:54:19.520 -Query OK, 1 row(s) in set (0.001282s) -``` - -## Create mnode - -```Bash -kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2" -kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3" -``` - -View mnode - -```Bash -kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G" - -taos> show mnodes\G -*************************** 1.row *************************** - id: 1 - endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030 - role: leader - status: ready -create_time: 2023-07-19 17:54:18.559 -reboot_time: 2023-07-20 09:19:36.060 -*************************** 2.row *************************** - id: 2 - endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030 - role: follower - status: ready -create_time: 2023-07-20 09:22:05.600 -reboot_time: 2023-07-20 09:22:12.838 -*************************** 3.row *************************** - id: 3 - endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030 - role: follower - status: ready -create_time: 2023-07-20 09:22:20.042 -reboot_time: 2023-07-20 09:22:23.271 -Query OK, 3 row(s) in set (0.003108s) -``` - -## Enable port forwarding - -Kubectl port forwarding enables applications to access TDengine clusters running in Kubernetes environments. 
- -```bash -kubectl port-forward -n tdengine-test tdengine-0 6041:6041 & -``` - -Use **curl** to verify that the TDengine REST API is working on port 6041: - -```bash -curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql -{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4} -``` - -## Test cluster - -### Data preparation - -#### taosBenchmark - -Create a 3 replicas database with taosBenchmark, write 100 million data at the same time, and view the data at the same time - -```Bash -kubectl exec -it tdengine-0 -n tdengine-test -- taosBenchmark -I stmt -d test -n 10000 -t 10000 -a 3 - -# query data -kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select count(*) from test.meters;" - -taos> select count(*) from test.meters; - count(*) | -======================== - 100000000 | -Query OK, 1 row(s) in set (0.103537s) -``` - -View vnode distribution by showing dnodes - -```Bash -kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes" - -taos> show dnodes - id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code | -============================================================================================================================================================================================================================================= - 1 | tdengine-0.ta... | 8 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | | - 2 | tdengine-1.ta... | 8 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | | - 3 | tdengine-2.ta... 
| 8 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
-Query OK, 3 row(s) in set (0.001357s)
-```
-
-View vnode distribution with `show test.vgroups`:
-
-```Bash
-kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test.vgroups"
-
-taos> show test.vgroups
-  vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
-==============================================================================================================================================================
-          2 | test    |   1267 |        1 | follower  |        2 | follower  |        3 | leader    |     NULL | NULL      |         0 |             0 |    0 |
-          3 | test    |   1215 |        1 | follower  |        2 | leader    |        3 | follower  |     NULL | NULL      |         0 |             0 |    0 |
-          4 | test    |   1215 |        1 | leader    |        2 | follower  |        3 | follower  |     NULL | NULL      |         0 |             0 |    0 |
-          5 | test    |   1307 |        1 | follower  |        2 | leader    |        3 | follower  |     NULL | NULL      |         0 |             0 |    0 |
-          6 | test    |   1245 |        1 | follower  |        2 | follower  |        3 | leader    |     NULL | NULL      |         0 |             0 |    0 |
-          7 | test    |   1275 |        1 | follower  |        2 | leader    |        3 | follower  |     NULL | NULL      |         0 |             0 |    0 |
-          8 | test    |   1231 |        1 | leader    |        2 | follower  |        3 | follower  |     NULL | NULL      |         0 |             0 |    0 |
-          9 | test    |   1245 |        1 | follower  |        2 | follower  |        3 | leader    |     NULL | NULL      |         0 |             0 |    0 |
-Query OK, 8 row(s) in set (0.001488s)
-```
-
-#### Manually created
-
-Create a database `test1` with three replicas, create a table in it, and insert two rows of data:
-
-```Bash
-kubectl exec -it tdengine-0 -n tdengine-test -- \
-  taos -s \
-  "create database if not exists test1 replica 3;
-   use test1;
-   create table if not exists t1(ts timestamp, n int);
-   insert into t1 values(now, 1)(now+1s, 2);"
-```
-
-View vnode distribution with `show test1.vgroups`:
-
-```Bash
-kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test1.vgroups"
-
-taos> show test1.vgroups
-  vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
-==============================================================================================================================================================
-         10 | test1   |      1 |        1 | follower  |        2 | follower  |        3 | leader    |     NULL | NULL      |         0 |             0 |    0 |
-         11 | test1   |      0 |        1 | follower  |        2 | leader    |        3 | follower  |     NULL | NULL      |         0 |             0 |    0 |
-Query OK, 2 row(s) in set (0.001489s)
-```
-
-### Test fault tolerance
-
-Disconnect the dnode that hosts the mnode leader (in this example, dnode1):
-
-```Bash
-kubectl get pod -l app=tdengine -n tdengine-test -o wide
-NAME         READY   STATUS         RESTARTS        AGE   IP             NODE     NOMINATED NODE   READINESS GATES
-tdengine-0   0/1     ErrImagePull   2 (2s ago)      20m   10.244.2.75    node86
-tdengine-1   1/1     Running        1 (6m48s ago)   20m   10.244.0.59    node84
-tdengine-2   1/1     Running        0               21m   10.244.1.223   node85
-```
-
-At this time, the cluster re-elects the mnode leader, and the mnode on dnode2 becomes the leader.
-
-```Bash
-kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
-Welcome to the TDengine Command Line Interface, Client Version:3.0.7.1.202307190706
-Copyright (c) 2022 by TDengine, all rights reserved.
-
-taos> show mnodes\G
-*************************** 1.row ***************************
-         id: 1
-   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
-       role: offline
-     status: offline
-create_time: 2023-07-19 17:54:18.559
-reboot_time: 1970-01-01 08:00:00.000
-*************************** 2.row ***************************
-         id: 2
-   endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
-       role: leader
-     status: ready
-create_time: 2023-07-20 09:22:05.600
-reboot_time: 2023-07-20 09:32:00.227
-*************************** 3.row ***************************
-         id: 3
-   endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
-       role: follower
-     status: ready
-create_time: 2023-07-20 09:22:20.042
-reboot_time: 2023-07-20 09:32:00.026
-Query OK, 3 row(s) in set (0.001513s)
-```
-
-The cluster can still be read from and written to normally:
-
-```Bash
-# insert
-kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "insert into test1.t1 values(now, 1)(now+1s, 2);"
-
-taos> insert into test1.t1 values(now, 1)(now+1s, 2);
-Insert OK, 2 row(s) affected (0.002098s)
-
-# select
-kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select * from test1.t1"
-
-taos> select * from test1.t1
-           ts            |     n     |
-========================================
- 2023-07-19 18:04:58.104 |         1 |
- 2023-07-19 18:04:59.104 |         2 |
- 2023-07-19 18:06:00.303 |         1 |
- 2023-07-19 18:06:01.303 |         2 |
-Query OK, 4 row(s) in set (0.001994s)
-```
-
-Similarly, if a non-leader mnode goes offline, reads and writes continue to work normally; this is not demonstrated here.
-
-## Scaling Out Your Cluster
-
-A TDengine cluster can be scaled out:
-
-```Bash
-kubectl scale statefulsets tdengine --replicas=4
-```
-
-The parameter `--replicas=4` in the above command indicates that you want to scale the TDengine cluster out to 4 nodes.
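After a scale-out, the new dnode appears only once its Pod reports Ready, so it helps to check Pod readiness before querying dnodes. A hedged sketch of such a check over `kubectl get pod --no-headers` output; the helper name is an assumption:

```shell
# Hypothetical helper: succeed only when every pod line on stdin reports
# READY 1/1, as in `kubectl get pod -l app=tdengine --no-headers` output.
all_pods_ready() {
  awk '$2 != "1/1" { bad = 1 } END { exit bad }'
}

# Intended usage (assumes kubectl is configured):
#   kubectl get pod -l app=tdengine -n tdengine-test --no-headers | all_pods_ready \
#     && echo "all pods ready"
```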
After execution, first check the status of the Pod: - -```Bash -kubectl get pod -l app=tdengine -n tdengine-test -o wide -``` - -The output is as follows: - -```Plain -NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES -tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 -tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 -tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 -tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 -``` - -At this time, the state of the POD is still Running, and the dnode state in the TDengine cluster can only be seen after the Pod status is `ready `: - -```Bash -kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes" -``` - -The dnode list of the expanded four-node TDengine cluster: - -```Plain -taos> show dnodes - id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code | -============================================================================================================================================================================================================================================= - 1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | | - 2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | | - 3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | | - 4 | tdengine-3.ta... 
| 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
-Query OK, 4 row(s) in set (0.003628s)
-```
-
-## Scaling In Your Cluster
-
-Because TDengine migrates data between nodes when the cluster is scaled, you must first remove a node with the `drop dnode` command before using **kubectl** to scale in (**if the cluster contains databases with 3 replicas, the number of dnodes remaining after the reduction must still be greater than or equal to 3; otherwise the `drop dnode` operation will be aborted**). Only after the node has been removed from TDengine should the Kubernetes cluster be scaled in.
-
-Note: Since Pods in a StatefulSet can only be removed in reverse order of creation, dnodes must also be dropped in reverse order of creation; otherwise the Pod will end up in an error state.
-
-```Bash
-kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 4"
-kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
-
-taos> show dnodes
-     id      |            endpoint            | vnodes | support_vnodes |   status   |       create_time       |       reboot_time       |              note              |          active_code           |         c_active_code          |
-=============================================================================================================================================================================================================================================
-           1 | tdengine-0.ta...               |     10 |             16 | ready      | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 |                                |                                |                                |
-           2 | tdengine-1.ta...               |     10 |             16 | ready      | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 |                                |                                |                                |
-           3 | tdengine-2.ta...               |     10 |             16 | ready      | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 |                                |                                |                                |
-Query OK, 3 row(s) in set (0.003324s)
-```
-
-After confirming that the removal is successful (use `kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"` to view and confirm the dnode list), use the kubectl command to remove the Pod:
-
-```Plain
-kubectl scale statefulsets tdengine --replicas=3 -n tdengine-test
-```
-
-The last Pod will be deleted. Use the command `kubectl get pods -l app=tdengine` to check the Pod status:
-
-```Plain
-kubectl get pod -l app=tdengine -n tdengine-test -o wide
-NAME         READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
-tdengine-0   1/1     Running   4 (6h55m ago)   7h22m   10.244.2.75    node86
-tdengine-1   1/1     Running   1 (7h9m ago)    7h23m   10.244.0.59    node84
-tdengine-2   1/1     Running   0               5h45m   10.244.1.224   node85
-```
-
-After the Pod is deleted, the PVC needs to be deleted manually; otherwise, the old data will be reused at the next scale-up, and the node will be unable to join the cluster normally.
-
-```Bash
-kubectl delete pvc taosdata-tdengine-3 -n tdengine-test
-```
-
-The cluster state at this time is safe, and you can scale up again when needed.
- -```Bash -kubectl scale statefulsets tdengine --replicas=4 -n tdengine-test -statefulset.apps/tdengine scaled - -kubectl get pod -l app=tdengine -n tdengine-test -o wide -NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES -tdengine-0 1/1 Running 4 (6h59m ago) 7h27m 10.244.2.75 node86 -tdengine-1 1/1 Running 1 (7h13m ago) 7h27m 10.244.0.59 node84 -tdengine-2 1/1 Running 0 5h49m 10.244.1.224 node85 -tdengine-3 1/1 Running 0 20s 10.244.2.77 node86 - -kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes" - -taos> show dnodes - id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code | -============================================================================================================================================================================================================================================= - 1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | | - 2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | | - 3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | | - 5 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | | | | -Query OK, 4 row(s) in set (0.003881s) -``` - -## Remove a TDengine Cluster - -> **When deleting the PVC, you need to pay attention to the pv persistentVolumeReclaimPolicy policy. It is recommended to change to Delete, so that the PV will be automatically cleaned up when the PVC is deleted, and the underlying CSI storage resources will be cleaned up at the same time. 
If automatic PV cleanup on PVC deletion is not configured, the CSI storage resources behind the PV may not be released when you later delete the PVC and clean up the PV manually.**
-
-To completely remove a TDengine cluster, you need to clean up the statefulset, svc, configmap, and pvc respectively.
-
-```Bash
-kubectl delete statefulset -l app=tdengine -n tdengine-test
-kubectl delete svc -l app=tdengine -n tdengine-test
-kubectl delete pvc -l app=tdengine -n tdengine-test
-kubectl delete configmap taoscfg -n tdengine-test
-```
-
-## Troubleshooting
-
-### Error 1
-
-The cluster was scaled in directly without running `drop dnode` first. Because the node was never removed from TDengine, the deleted Pod leaves some nodes in the TDengine cluster offline.
-
-```Plain
-kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
-
-taos> show dnodes
-     id      |            endpoint            | vnodes | support_vnodes |   status   |       create_time       |       reboot_time       |              note              |          active_code           |         c_active_code          |
-=============================================================================================================================================================================================================================================
-           1 | tdengine-0.ta...               |     10 |             16 | ready      | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 |                                |                                |                                |
-           2 | tdengine-1.ta...               |     10 |             16 | ready      | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 |                                |                                |                                |
-           3 | tdengine-2.ta...               |     10 |             16 | ready      | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 |                                |                                |                                |
-           5 | tdengine-3.ta...               |      0 |             16 | offline    | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | status msg timeout             |                                |                                |
-Query OK, 4 row(s) in set (0.003862s)
-```
-
-## Finally
-
-For high availability and high reliability of TDengine in a Kubernetes environment, protection against hardware damage and disaster recovery operates at two levels:
-
-1. The disaster recovery capability of the underlying distributed block storage: popular distributed block storage such as Ceph provides multi-replica capability and can spread storage replicas across racks, cabinets, server rooms, and data centers (or you can directly use the block storage services provided by public cloud vendors).
-2. TDengine's own disaster recovery: in TDengine Enterprise, when a dnode goes permanently offline (for example, the disk of a bare-metal node is damaged and its data is lost), a blank dnode can be launched again to restore the work of the original dnode.
-
-Finally, welcome to [TDengine Cloud](https://cloud.tdengine.com/) to experience the one-stop, fully managed TDengine cloud service.
-
-> TDengine Cloud is a minimalist, fully managed time-series data processing cloud platform developed on top of the open-source time-series database TDengine. In addition to the high-performance time-series database, it provides system functions such as caching, subscription, and stream computing, and offers convenient and secure data sharing as well as numerous enterprise-grade features. It allows enterprises in the IoT, Industrial Internet, finance, and IT operations monitoring fields to significantly reduce labor and operating costs when managing time-series data.
diff --git a/docs/en/10-deployment/05-helm.md b/docs/en/10-deployment/05-helm.md
deleted file mode 100644
index aa61717669..0000000000
--- a/docs/en/10-deployment/05-helm.md
+++ /dev/null
@@ -1,299 +0,0 @@
----
-title: Use Helm to deploy TDengine
-sidebar_label: Helm
-description: This document describes how to deploy TDengine on Kubernetes by using Helm.
----
-
-Helm is a package manager for Kubernetes that provides additional capabilities for deploying applications on Kubernetes.
- -## Install Helm - -```bash -curl -fsSL -o get_helm.sh \ - https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 -chmod +x get_helm.sh -./get_helm.sh - -``` - -Helm uses the kubectl and kubeconfig configurations to perform Kubernetes operations. For more information, see the Rancher configuration for Kubernetes installation. - -## Install TDengine Chart - -To use TDengine Chart, download it from GitHub: - -```bash -wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.2.tgz - -``` - -Query the storageclass of your Kubernetes deployment: - -```bash -kubectl get storageclass - -``` - -With minikube, the default value is standard. - -Use Helm commands to install TDengine: - -```bash -helm install tdengine tdengine-3.0.2.tgz \ - --set storage.className= - -``` - -You can configure a small storage size in minikube to ensure that your deployment does not exceed your available disk space. - -```bash -helm install tdengine tdengine-3.0.2.tgz \ - --set storage.className=standard \ - --set storage.dataSize=2Gi \ - --set storage.logSize=10Mi - -``` - -After TDengine is deployed, TDengine Chart outputs information about how to use TDengine: - -```bash -export POD_NAME=$(kubectl get pods --namespace default \ - -l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \ - -o jsonpath="{.items[0].metadata.name}") -kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes" -kubectl --namespace default exec -it $POD_NAME -- taos - -``` - -You can test the deployment by creating a table: - -```bash -kubectl --namespace default exec $POD_NAME -- \ - taos -s "create database test; - use test; - create table t1 (ts timestamp, n int); - insert into t1 values(now, 1)(now + 1s, 2); - select * from t1;" - -``` - -## Configuring Values - -You can configure custom parameters in TDengine with the `values.yaml` file. - -Run the `helm show values` command to see all parameters supported by TDengine Chart. 
- -```bash -helm show values tdengine-3.0.2.tgz - -``` - -Save the output of this command as `values.yaml`. Then you can modify this file with your desired values and use it to deploy a TDengine cluster: - -```bash -helm install tdengine tdengine-3.0.2.tgz -f values.yaml - -``` - -The parameters are described as follows: - -```yaml -# Default values for tdengine. -# This is a YAML-formatted file. -# Declare variables to be passed into helm templates. - -replicaCount: 1 - -image: - prefix: tdengine/tdengine - #pullPolicy: Always - # Overrides the image tag whose default is the chart appVersion. -# tag: "3.0.2.0" - -service: - # ClusterIP is the default service type, use NodeIP only if you know what you are doing. - type: ClusterIP - ports: - # TCP range required - tcp: [6030, 6041, 6042, 6043, 6044, 6046, 6047, 6048, 6049, 6060] - # UDP range - udp: [6044, 6045] - - -# Set timezone here, not in taoscfg -timezone: "Asia/Shanghai" - -resources: - # We usually recommend not to specify default resources and to leave this as a conscious - # choice for the user. This also increases chances charts run on environments with little - # resources, such as Minikube. If you do want to specify resources, uncomment the following - # lines, adjust them as necessary, and remove the curly braces after 'resources:'. - # limits: - # cpu: 100m - # memory: 128Mi - # requests: - # cpu: 100m - # memory: 128Mi - -storage: - # Set storageClassName for pvc. K8s use default storage class if not set. - # - className: "" - dataSize: "100Gi" - logSize: "10Gi" - -nodeSelectors: - taosd: - # node selectors - -clusterDomainSuffix: "" -# Config settings in taos.cfg file. -# -# The helm/k8s support will use environment variables for taos.cfg, -# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`, -# to a camelCase taos config variable `debugFlag`. -# -# See the [Configuration Variables](../../reference/config) -# -# Note: -# 1. 
firstEp/secondEp: should not be set here, it's auto generated at scale-up. -# 2. serverPort: should not be set, we'll use the default 6030 in many places. -# 3. fqdn: will be auto generated in kubernetes, user should not care about it. -# 4. role: currently role is not supported - every node is able to be mnode and vnode. -# -# Btw, keep quotes "" around the value like below, even the value will be number or not. -taoscfg: - # Starts as cluster or not, must be 0 or 1. - # 0: all pods will start as a separate TDengine server - # 1: pods will start as TDengine server cluster. [default] - CLUSTER: "1" - - # number of replications, for cluster only - TAOS_REPLICA: "1" - - # - # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC - #TAOS_NUM_OF_RPC_THREADS: "2" - - - # - # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data - #TAOS_NUM_OF_COMMIT_THREADS: "4" - - # enable/disable installation / usage report - #TAOS_TELEMETRY_REPORTING: "1" - - # time interval of system monitor, seconds - #TAOS_MONITOR_INTERVAL: "30" - - # time interval of dnode status reporting to mnode, seconds, for cluster only - #TAOS_STATUS_INTERVAL: "1" - - # time interval of heart beat from shell to dnode, seconds - #TAOS_SHELL_ACTIVITY_TIMER: "3" - - # minimum sliding window time, milli-second - #TAOS_MIN_SLIDING_TIME: "10" - - # minimum time window, milli-second - #TAOS_MIN_INTERVAL_TIME: "1" - - # the compressed rpc message, option: - # -1 (no compression) - # 0 (all message compressed), - # > 0 (rpc message body which larger than this value will be compressed) - #TAOS_COMPRESS_MSG_SIZE: "-1" - - # max number of connections allowed in dnode - #TAOS_MAX_SHELL_CONNS: "50000" - - # stop writing logs when the disk size of the log folder is less than this value - #TAOS_MINIMAL_LOG_DIR_G_B: "0.1" - - # stop writing temporary files when the disk size of the tmp folder is less than this value - #TAOS_MINIMAL_TMP_DIR_G_B: "0.1" - - # if disk free space is less than this value, taosd service 
exit directly within startup process - #TAOS_MINIMAL_DATA_DIR_G_B: "0.1" - - # One mnode is equal to the number of vnode consumed - #TAOS_MNODE_EQUAL_VNODE_NUM: "4" - - # enbale/disable http service - #TAOS_HTTP: "1" - - # enable/disable system monitor - #TAOS_MONITOR: "1" - - # enable/disable async log - #TAOS_ASYNC_LOG: "1" - - # - # time of keeping log files, days - #TAOS_LOG_KEEP_DAYS: "0" - - # The following parameters are used for debug purpose only. - # debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR - # 131: output warning and error - # 135: output debug, warning and error - # 143: output trace, debug, warning and error to log - # 199: output debug, warning and error to both screen and file - # 207: output trace, debug, warning and error to both screen and file - # - # debug flag for all log type, take effect when non-zero value\ - #TAOS_DEBUG_FLAG: "143" - - # generate core file when service crash - #TAOS_ENABLE_CORE_FILE: "1" -``` - -## Scaling Out - -For information about scaling out your deployment, see Kubernetes. Additional Helm-specific is described as follows. - -First, obtain the name of the StatefulSet service for your deployment. - -```bash -export STS_NAME=$(kubectl get statefulset \ - -l "app.kubernetes.io/name=tdengine" \ - -o jsonpath="{.items[0].metadata.name}") - -``` - -You can scale out your deployment by adding replicas. The following command scales a deployment to three nodes: - -```bash -kubectl scale --replicas 3 statefulset/$STS_NAME - -``` - -Run the `show dnodes` and `show mnodes` commands to check whether the scale-out was successful. - -## Scaling In - -:::warning -Exercise caution when scaling in a cluster. - -::: - -Determine which dnodes you want to remove and drop them manually. 
- -```bash -kubectl --namespace default exec $POD_NAME -- \ - cat /var/lib/taos/dnode/dnodeEps.json \ - | jq '.dnodeInfos[1:] |map(.dnodeFqdn + ":" + (.dnodePort|tostring)) | .[]' -r -kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes" -kubectl --namespace default exec $POD_NAME -- taos -s 'drop dnode ""' - -``` - -## Remove a TDengine Cluster - -You can use Helm to remove your cluster: - -```bash -helm uninstall tdengine - -``` - -However, Helm does not remove PVCs automatically. After you remove your cluster, manually remove all PVCs. diff --git a/docs/en/10-deployment/_category_.yml b/docs/en/10-deployment/_category_.yml deleted file mode 100644 index 0bb1ba461b..0000000000 --- a/docs/en/10-deployment/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Deployment diff --git a/docs/en/10-deployment/index.md b/docs/en/10-deployment/index.md deleted file mode 100644 index 0079ad3740..0000000000 --- a/docs/en/10-deployment/index.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Deployment -description: This document describes how to deploy a TDengine cluster on a server, on Kubernetes, and by using Helm. ---- - -TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source. - -This document describes how to manually deploy a cluster on a host directly and deploy a cluster with Docker, Kubernetes or Helm. 
- -```mdx-code-block -import DocCardList from '@theme/DocCardList'; -import {useCurrentSidebarCategory} from '@docusaurus/theme-common'; - - -``` diff --git a/docs/en/12-taos-sql/_category_.yml b/docs/en/12-taos-sql/_category_.yml deleted file mode 100644 index 74a3b6309e..0000000000 --- a/docs/en/12-taos-sql/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: TDengine SQL diff --git a/docs/en/13-operation/_category_.yml b/docs/en/13-operation/_category_.yml deleted file mode 100644 index 3231ce910d..0000000000 --- a/docs/en/13-operation/_category_.yml +++ /dev/null @@ -1 +0,0 @@ -label: Administration diff --git a/docs/en/14-reference/04-taosadapter.md b/docs/en/14-reference/01-components/03-taosadapter.md similarity index 98% rename from docs/en/14-reference/04-taosadapter.md rename to docs/en/14-reference/01-components/03-taosadapter.md index c21a2d3a3f..b257633d3b 100644 --- a/docs/en/14-reference/04-taosadapter.md +++ b/docs/en/14-reference/01-components/03-taosadapter.md @@ -1,5 +1,5 @@ --- -title: taosAdapter +title: taosAdapter Reference Guide sidebar_label: taosAdapter description: This document describes how to use taosAdapter, a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. 
--- @@ -162,7 +162,7 @@ See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/bl ## Feature List - RESTful interface - [https://docs.tdengine.com/reference/rest-api/](https://docs.tdengine.com/reference/rest-api/) + [https://docs.tdengine.com/reference/rest-api/](../../connectors/rest-api/) - Compatible with InfluxDB v1 write interface [https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/) - Compatible with OpenTSDB JSON and telnet format writes @@ -186,7 +186,7 @@ See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/bl ### TDengine RESTful interface -You can use any client that supports the http protocol to write data to or query data from TDengine by accessing the REST interface address `http://:6041/rest/sql`. See the [official documentation](../rest-api/) for details. +You can use any client that supports the http protocol to write data to or query data from TDengine by accessing the REST interface address `http://:6041/rest/sql`. See the [official documentation](../../connectors/rest-api/) for details. ### InfluxDB @@ -327,7 +327,7 @@ This parameter controls the number of results returned by the following interfac ## Configure http return code -taosAdapter uses the parameter `httpCodeServerError` to set whether to return a non-200 http status code http status code other than when the C interface returns an error. When set to true, different http status codes will be returned according to the error code returned by C. For details, see [RESTful API](https://docs.tdengine.com/reference/rest-api/) HTTP Response Code chapter. +taosAdapter uses the parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error. When set to true, different HTTP status codes will be returned according to the error code returned by C.
For details, see [RESTful API](../../connectors/rest-api/) HTTP Response Code chapter. ## Configure whether schemaless writes automatically create DBs diff --git a/docs/en/14-reference/01-components/04-taosx.md b/docs/en/14-reference/01-components/04-taosx.md new file mode 100644 index 0000000000..8ad22dd488 --- /dev/null +++ b/docs/en/14-reference/01-components/04-taosx.md @@ -0,0 +1,305 @@ +--- +toc_max_heading_level: 4 +title: taosX Reference Guide +sidebar_label: taosX +--- + +taosX is a zero-code platform for data ingestion, replication, and backup. This article describes the command-line parameters of taosX and its two running modes: server mode and command-line mode. + +## Command Line Mode + +**Note: Some parameters cannot be configured through taosExplorer.** + +An example of taosX command-line parameters is shown as follows: + +```shell +taosx -f <from-DSN> -t <to-DSN> +``` + +Angle brackets (\<\>) are used to denote content that you input based on your system configuration. + +### Data Source Name (DSN) + +taosX refers to data sources and destinations by their DSN. A standard DSN is shown as follows: + +```bash +# url-like +<driver>[+<protocol>]://[[<username>:<password>@]<host>:<port>][/<object>][?<p1>=<v1>[&<p2>=<v2>]] +|------|------------|---|-----------|-----------|------|------|----------|-----------------------| +|driver| protocol | | username | password | host | port | object | params | + +// URL example +tmq+ws://root:taosdata@localhost:6030/db1?timeout=never +``` + +Items within brackets (\[\]) are optional. + +1. Each driver uses different parameters. taosX includes the following drivers: + +- taos: queries data from TDengine +- tmq: subscribes to data in TDengine +- local: used to back up or restore data locally +- pi: obtains data from a PI System deployment +- opc: obtains data from an OPC server +- mqtt: obtains data from an MQTT broker +- kafka: subscribes to data in Kafka topics +- influxdb: obtains data from an InfluxDB deployment +- csv: parses data from a CSV file + +2. 
taosX supports the following protocols: +- +ws: uses the REST API to connect with a TDengine server using the taos or tmq driver. If you do not specify the +ws protocol, the taos and tmq drivers use native connections to TDengine. Note that the TDengine Client must be installed on the same machine as taosX for native connections. +- +ua: uses OPC-UA to connect with an OPC server. +- +da: uses OPC-DA to connect with an OPC server. + +3. host:port indicates the IP address and port of the data source. +4. object indicates the specific item to transfer. This can be a TDengine database, supertable, or table; a local backup file; or a database on a data source. +5. username and password indicate the credentials on the data source. +6. params indicate additional parameters for the data source. + +### Other Parameters + +1. jobs indicates the number of concurrent jobs that can be run. This option is used with the tmq driver only. This parameter cannot be configured in taosExplorer. You can specify the number of concurrent jobs with the `--jobs <number>` or `-j <number>` parameter. +2. The -v parameter specifies the log level of taosX. -v indicates info, -vv indicates debug, and -vvv indicates trace. + +### Scenarios + +#### Users and Privileges Import/Export + +| Parameter | Description | +| ---- | ---- | +| -u | Includes user basic information (password, whether enabled, etc.) | +| -p | Includes permission information | +| -w | Includes whitelist information | + +When the `-u`/`-p` parameters are applied, only the specified information will be included. Without any parameters, it means all information (username, password, permissions, and whitelist) will be included. + +The `-w` parameter cannot be used alone. It is only effective when used together with `-u` (using `-u` alone will not include the whitelist). + +#### Migrating Data from Older Versions + +1. 
Synchronize historical data + +Synchronize the entire database: + +```shell +taosx run -f 'taos://root:taosdata@localhost:6030/db1' -t 'taos:///db2' -v +``` + +Synchronize a specified supertable: + +```shell +taosx run \ + -f 'taos://root:taosdata@localhost:6030/db1?stables=meters' \ + -t 'taos:///db2' -v +``` + +To synchronize subtables or regular tables, you can specify a subtable of a supertable as `{stable}.{table}` or specify the table name `{table}` directly: + +```shell +taosx run \ + -f 'taos://root:taosdata@localhost:6030/db1?tables=meters.d0,d1,table1' \ + -t 'taos:///db2' -v +``` + +2. Synchronize data for a specific time range (using RFC3339 time format with time zone): + +```shell +taosx run \ + -f 'taos:///db1?start=2022-10-10T00:00:00Z' \ + -t 'taos:///db2' -v +``` + +3. Continuous synchronization: `restro` specifies synchronizing data from the last 5 minutes and then continuing to synchronize new data. In this example, `interval=1s` checks for new data every second, and `excursion` allows for 500 ms of delayed or out-of-order data. + +```shell +taosx run \ + -f 'taos:///db1?mode=realtime&restro=5m&interval=1s&excursion=500ms' \ + -t 'taos:///db2' -v +``` + +4. Synchronize historical data + real-time data: + +```shell +taosx run -f 'taos:///db1?mode=all' -t 'taos:///db2' -v +``` + +5. Configure data synchronization through --transform or -T (only supported for synchronization from 2.6 to 3.0 and within 3.0) to perform operations on table names and table fields during the synchronization process. It cannot be set through Explorer yet. Configuration instructions are as follows: + + ```shell + 1. AddTag: adds a tag to a table. Example: `-T add-tag:<tag>=<value>` + 2. Table renaming: + 2.1 Conditions + 2.1.1 RenameTable: renames all tables that match the specified conditions + 2.1.2 
RenameChildTable: renames all subtables that match the specified conditions + 2.1.3 RenameSuperTable: renames all supertables that match the specified conditions + 2.2 Options + 2.2.1 Prefix: adds a prefix + 2.2.2 Suffix: adds a suffix + 2.2.3 Template: template mode + 2.2.4 ReplaceWithRegex: replaces with a regular expression + 2.2.5 Map: use `old,new` pairs in a csv file to rename tables + Operations are performed as follows: + :
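+
+   # Sketch (hypothetical values): a complete invocation that combines a
+   # migration with an AddTag transform. The DSNs, database names, and the
+   # tag name/value below are placeholders, not prescribed values; the
+   # add-tag form follows the AddTag example above.
+   #   taosx run \
+   #     -f 'taos://root:taosdata@localhost:6030/db1?stables=meters' \
+   #     -t 'taos:///db2' \
+   #     -T 'add-tag:location=beijing' -v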