Merge pull request #29078 from taosdata/docs/dclow-codeblocks
docs: unify codeblock language strings
commit e173ca3da1
@@ -112,14 +112,14 @@ Fill in the example data from the MQTT message body in **Message Body**.
 
 JSON data supports JSONObject or JSONArray, and the json parser can parse the following data:
 
-``` json
+```json
 {"id": 1, "message": "hello-word"}
 {"id": 2, "message": "hello-word"}
 ```
 
 or
 
-``` json
+```json
 [{"id": 1, "message": "hello-word"},{"id": 2, "message": "hello-word"}]
 ```
@@ -109,7 +109,7 @@ In addition, the [Kerberos](https://web.mit.edu/kerberos/) authentication servic
 
 After configuration, you can use the [kcat](https://github.com/edenhill/kcat) tool to verify Kafka topic consumption:
 
-```bash
+```shell
 kcat <topic> \
 -b <kafka-server:port> \
 -G kcat \
@@ -171,14 +171,14 @@ Enter sample data from the Kafka message body in **Message Body**.
 
 JSON data supports JSONObject or JSONArray, and the following data can be parsed using a JSON parser:
 
-``` json
+```json
 {"id": 1, "message": "hello-word"}
 {"id": 2, "message": "hello-word"}
 ```
 
 or
 
-``` json
+```json
 [{"id": 1, "message": "hello-word"},{"id": 2, "message": "hello-word"}]
 ```
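A note on the two message-body shapes named in the hunks above: both JSONObject and JSONArray decode with any standard JSON parser. A minimal Python check, using sample strings from the diffed docs:

```python
import json

# A single JSONObject decodes to a dict...
obj = json.loads('{"id": 1, "message": "hello-word"}')
# ...and a JSONArray decodes to a list of dicts.
arr = json.loads('[{"id": 1, "message": "hello-word"},{"id": 2, "message": "hello-word"}]')

print(type(obj).__name__, type(arr).__name__, arr[1]["id"])  # -> dict list 2
```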
@@ -83,7 +83,7 @@ Parsing is the process of parsing unstructured strings into structured data. The
 
 JSON parsing supports JSONObject or JSONArray. The following JSON sample data can automatically parse fields: `groupid`, `voltage`, `current`, `ts`, `inuse`, `location`.
 
-``` json
+```json
 {"groupid": 170001, "voltage": "221V", "current": 12.3, "ts": "2023-12-18T22:12:00", "inuse": true, "location": "beijing.chaoyang.datun"}
 {"groupid": 170001, "voltage": "220V", "current": 12.2, "ts": "2023-12-18T22:12:02", "inuse": true, "location": "beijing.chaoyang.datun"}
 {"groupid": 170001, "voltage": "216V", "current": 12.5, "ts": "2023-12-18T22:12:04", "inuse": false, "location": "beijing.chaoyang.datun"}
@@ -91,7 +91,7 @@ JSON parsing supports JSONObject or JSONArray. The following JSON sample data ca
 
 Or
 
-``` json
+```json
 [{"groupid": 170001, "voltage": "221V", "current": 12.3, "ts": "2023-12-18T22:12:00", "inuse": true, "location": "beijing.chaoyang.datun"},
 {"groupid": 170001, "voltage": "220V", "current": 12.2, "ts": "2023-12-18T22:12:02", "inuse": true, "location": "beijing.chaoyang.datun"},
 {"groupid": 170001, "voltage": "216V", "current": 12.5, "ts": "2023-12-18T22:12:04", "inuse": false, "location": "beijing.chaoyang.datun"}]
@@ -101,7 +101,7 @@ Subsequent examples will only explain using JSONObject.
 
 The following nested JSON data can automatically parse fields `groupid`, `data_voltage`, `data_current`, `ts`, `inuse`, `location_0_province`, `location_0_city`, `location_0_datun`, and you can also choose which fields to parse and set aliases for the parsed fields.
 
-``` json
+```json
 {"groupid": 170001, "data": { "voltage": "221V", "current": 12.3 }, "ts": "2023-12-18T22:12:00", "inuse": true, "location": [{"province": "beijing", "city":"chaoyang", "street": "datun"}]}
 ```
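As a side note on the nested-JSON parsing covered in the hunk above: the underscore-joined field names (`data_voltage`, `location_0_province`, …) follow a simple flattening rule. The `flatten` helper below is a hypothetical sketch of that rule for illustration, not taosX's actual implementation:

```python
import json

def flatten(obj, prefix=""):
    # Join nested object keys and array indices with underscores,
    # e.g. {"data": {"voltage": "221V"}} -> {"data_voltage": "221V"}.
    out = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            out.update(flatten(value, prefix + key + "_"))
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            out.update(flatten(value, prefix + str(index) + "_"))
    else:
        out[prefix[:-1]] = obj  # drop the trailing underscore
    return out

sample = ('{"groupid": 170001, "data": { "voltage": "221V", "current": 12.3 }, '
          '"ts": "2023-12-18T22:12:00", "inuse": true, '
          '"location": [{"province": "beijing", "city":"chaoyang", "street": "datun"}]}')
fields = flatten(json.loads(sample))
print(sorted(fields))
```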
@@ -114,7 +114,7 @@ The following nested JSON data can automatically parse fields `groupid`, `data_v
 
 You can use **named capture groups** in regular expressions to extract multiple fields from any string (text) field. As shown in the figure, extract fields such as access IP, timestamp, and accessed URL from nginx logs.
 
-``` re
+```regex
 (?<ip>\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b)\s-\s-\s\[(?<ts>\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}\s\+\d{4})\]\s"(?<method>[A-Z]+)\s(?<url>[^\s"]+).*(?<status>\d{3})\s(?<length>\d+)
 ```
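For reference, the named-capture-group extraction in the hunk above can be reproduced in Python. Note that Python spells named groups `(?P<name>...)` rather than `(?<name>...)`; the access-log line here is an invented sample:

```python
import re

# Same pattern as in the diffed docs, rewritten with Python's (?P<name>...) group syntax.
pattern = re.compile(
    r'(?P<ip>\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b)\s-\s-\s'
    r'\[(?P<ts>\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}\s\+\d{4})\]\s'
    r'"(?P<method>[A-Z]+)\s(?P<url>[^\s"]+).*(?P<status>\d{3})\s(?P<length>\d+)'
)

# Hypothetical nginx access-log line.
line = '192.168.0.1 - - [18/Dec/2023:22:12:00 +0800] "GET /api/v1/meters HTTP/1.1" 200 512'
match = pattern.search(line)
print(match.groupdict())
```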
@@ -133,7 +133,7 @@ Custom rhai syntax scripts for parsing input data (refer to `https://rhai.rs/boo
 
 For example, for data reporting three-phase voltage values, which are entered into three subtables respectively, such data needs to be parsed
 
-``` json
+```json
 {
     "ts": "2024-06-27 18:00:00",
     "voltage": "220.1,220.3,221.1",
@@ -164,7 +164,7 @@ The final parsing result is shown below:
 
 The parsed data may still not meet the data requirements of the target table. For example, the original data collected by a smart meter is as follows (in json format):
 
-``` json
+```json
 {"groupid": 170001, "voltage": "221V", "current": 12.3, "ts": "2023-12-18T22:12:00", "inuse": true, "location": "beijing.chaoyang.datun"}
 {"groupid": 170001, "voltage": "220V", "current": 12.2, "ts": "2023-12-18T22:12:02", "inuse": true, "location": "beijing.chaoyang.datun"}
 {"groupid": 170001, "voltage": "216V", "current": 12.5, "ts": "2023-12-18T22:12:04", "inuse": false, "location": "beijing.chaoyang.datun"}
@@ -83,14 +83,14 @@ Next, create a supertable (STABLE) named `meters`, whose table structure include
 
 Create Database
 
-```bash
+```shell
 curl --location -uroot:taosdata 'http://127.0.0.1:6041/rest/sql' \
 --data 'CREATE DATABASE IF NOT EXISTS power'
 ```
 
 Create Table, specify the database as `power` in the URL
 
-```bash
+```shell
 curl --location -uroot:taosdata 'http://127.0.0.1:6041/rest/sql/power' \
 --data 'CREATE STABLE IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))'
 ```
@@ -167,7 +167,7 @@ NOW is an internal system function, defaulting to the current time of the client
 
 Write data
 
-```bash
+```shell
 curl --location -uroot:taosdata 'http://127.0.0.1:6041/rest/sql' \
 --data 'INSERT INTO power.d1001 USING power.meters TAGS(2,'\''California.SanFrancisco'\'') VALUES (NOW + 1a, 10.30000, 219, 0.31000) (NOW + 2a, 12.60000, 218, 0.33000) (NOW + 3a, 12.30000, 221, 0.31000) power.d1002 USING power.meters TAGS(3, '\''California.SanFrancisco'\'') VALUES (NOW + 1a, 10.30000, 218, 0.25000)'
 ```
@@ -247,7 +247,7 @@ Rust connector also supports using **serde** for deserializing to get structured
 
 Query Data
 
-```bash
+```shell
 curl --location -uroot:taosdata 'http://127.0.0.1:6041/rest/sql' \
 --data 'SELECT ts, current, location FROM power.meters limit 100'
 ```
@@ -329,7 +329,7 @@ Below are code examples of setting reqId to execute SQL in various language conn
 
 Query data, specify reqId as 3
 
-```bash
+```shell
 curl --location -uroot:taosdata 'http://127.0.0.1:6041/rest/sql?req_id=3' \
 --data 'SELECT ts, current, location FROM power.meters limit 1'
 ```
@@ -273,19 +273,19 @@ To better operate the above data structures, some convenience functions are prov
 
 Create table:
 
-```bash
+```shell
 create table battery(ts timestamp, vol1 float, vol2 float, vol3 float, deviceId varchar(16));
 ```
 
 Create custom function:
 
-```bash
+```shell
 create aggregate function max_vol as '/root/udf/libmaxvol.so' outputtype binary(64) bufsize 10240 language 'C';
 ```
 
 Use custom function:
 
-```bash
+```shell
 select max_vol(vol1, vol2, vol3, deviceid) from battery;
 ```
@@ -334,7 +334,7 @@ When developing UDFs in Python, you need to implement the specified interface fu
 
 The interface for scalar functions is as follows.
 
-```Python
+```python
 def process(input: datablock) -> tuple[output_type]:
 ```
@@ -347,7 +347,7 @@ The main parameters are as follows:
 
 The interface for aggregate functions is as follows.
 
-```Python
+```python
 def start() -> bytes:
 def reduce(inputs: datablock, buf: bytes) -> bytes
 def finish(buf: bytes) -> output_type:
@@ -365,7 +365,7 @@ Finally, when all row data blocks have been processed, the finish function is ca
 
 The interfaces for initialization and destruction are as follows.
 
-```Python
+```python
 def init()
 def destroy()
 ```
@@ -381,7 +381,7 @@ Parameter description:
 
 The template for developing scalar functions in Python is as follows.
 
-```Python
+```python
 def init():
     # initialization
 def destroy():
@@ -393,7 +393,7 @@ def process(input: datablock) -> tuple[output_type]:
 
 The template for developing aggregate functions in Python is as follows.
 
-```Python
+```python
 def init():
     #initialization
 def destroy():
@@ -828,7 +828,7 @@ Through this example, we learned how to define aggregate functions and print cus
 <details>
 <summary>pybitand.py</summary>
 
-```Python
+```python
 {{#include tests/script/sh/pybitand.py}}
 ```
@@ -15,7 +15,7 @@ TDengine is designed for various writing scenarios, and many of these scenarios
 
 ### Syntax
 
-```SQL
+```sql
 COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY'];
 SHOW COMPACTS [compact_id];
 KILL COMPACT compact_id;
@@ -41,7 +41,7 @@ KILL COMPACT compact_id;
 
 When one or more nodes in a multi-replica cluster restart due to upgrades or other reasons, it may lead to an imbalance in the load among the various dnodes in the cluster. In extreme cases, all vgroup leaders may be located on the same dnode. To solve this problem, you can use the following commands, which were first released in version 3.0.4.0. It is recommended to use the latest version as much as possible.
 
-```SQL
+```sql
 balance vgroup leader; # Rebalance all vgroup leaders
 balance vgroup leader on <vgroup_id>; # Rebalance a vgroup leader
 balance vgroup leader database <database_name>; # Rebalance all vgroup leaders within a database
@@ -121,7 +121,7 @@ The cost of using object storage services is related to the amount of data store
 
 When the TSDB time-series data exceeds the time specified by the `s3_keeplocal` parameter, the related data files will be split into multiple file blocks, each with a default size of 512 MB (`s3_chunkpages * tsdb_pagesize`). Except for the last file block, which is retained on the local file system, the rest of the file blocks are uploaded to the object storage service.
 
-```math
+```text
 Upload Count = Data File Size / (s3_chunkpages * tsdb_pagesize) - 1
 ```
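The upload-count formula in the hunk above is easy to sanity-check. In this sketch, the 2 GB data-file size is an assumed example, with the default 512 MB block size (`s3_chunkpages * tsdb_pagesize`):

```python
BLOCK_MB = 512  # default s3_chunkpages * tsdb_pagesize, in MB

def upload_count(data_file_mb):
    # Every block except the last one (kept on the local file system) is uploaded.
    return data_file_mb // BLOCK_MB - 1

print(upload_count(2048))  # a hypothetical 2 GB data file -> 3 uploaded blocks
```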
@@ -135,7 +135,7 @@ During query operations, if data in object storage needs to be accessed, TSDB do
 
 Adjacent multiple data pages are downloaded as a single data block from object storage to reduce the number of downloads. The size of each data page is specified by the `tsdb_pagesize` parameter when creating the database, with a default of 4 KB.
 
-```math
+```text
 Download Count = Number of Data Blocks Needed for Query - Number of Cached Data Blocks
 ```
@@ -155,7 +155,7 @@ For deployment methods, please refer to the [Flexify](https://azuremarketplace.m
 
 In the configuration file /etc/taos/taos.cfg, add parameters for S3 access:
 
-```cfg
+```text
 s3EndPoint http //20.191.157.23,http://20.191.157.24,http://20.191.157.25
 s3AccessKey FLIOMMNL0:uhRNdeZMLD4wo,ABCIOMMN:uhRNdeZMD4wog,DEFOMMNL049ba:uhRNdeZMLD4wogXd
 s3BucketName td-test
@@ -140,7 +140,7 @@ Finally, click the "Create" button at the bottom left to save the rule.
 
 ## Write a Mock Test Program
 
-```javascript
+```js
 {{#include docs/examples/other/mock.js}}
 ```
@@ -95,7 +95,7 @@ curl http://localhost:8083/connectors
 
 If all components have started successfully, the following output will be displayed:
 
-```txt
+```text
 []
 ```
@@ -181,7 +181,7 @@ If the above command is executed successfully, the following output will be disp
 
 Prepare a text file with test data, content as follows:
 
-```txt title="test-data.txt"
+```text title="test-data.txt"
 meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000000
 meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250000000
 meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249000000
@@ -303,7 +303,7 @@ kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --t
 
 Output:
 
-```txt
+```text
 ......
 meters,location="California.SanFrancisco",groupid=2i32 current=10.3f32,voltage=219i32,phase=0.31f32 1538548685000000000
 meters,location="California.SanFrancisco",groupid=2i32 current=12.6f32,voltage=218i32,phase=0.33f32 1538548695000000000
@@ -60,7 +60,7 @@ Click `Save & Test` to test, if successful, it will prompt: `TDengine Data sourc
 
 For users using Grafana version 7.x or configuring with [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/), you can use the installation script on the Grafana server to automatically install the plugin and add the data source Provisioning configuration file.
 
-```sh
+```shell
 bash -c "$(curl -fsSL \
   https://raw.githubusercontent.com/taosdata/grafanaplugin/master/install.sh)" -- \
   -a http://localhost:6041 \
@@ -77,7 +77,7 @@ Save the script and execute `./install.sh --help` to view detailed help document
 
 Use the [`grafana-cli` command line tool](https://grafana.com/docs/grafana/latest/administration/cli/) to install the plugin [installation](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation).
 
-```bash
+```shell
 grafana-cli plugins install tdengine-datasource
 # with sudo
 sudo -u grafana grafana-cli plugins install tdengine-datasource
@@ -85,7 +85,7 @@ sudo -u grafana grafana-cli plugins install tdengine-datasource
 
 Alternatively, download the .zip file from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) to your local machine and unzip it into the Grafana plugins directory. Example command line download is as follows:
 
-```bash
+```shell
 GF_VERSION=3.5.1
 # from GitHub
 wget https://github.com/taosdata/grafanaplugin/releases/download/v$GF_VERSION/tdengine-datasource-$GF_VERSION.zip
@@ -95,13 +95,13 @@ wget -O tdengine-datasource-$GF_VERSION.zip https://grafana.com/api/plugins/tden
 
 For CentOS 7.2 operating system, unzip the plugin package into the /var/lib/grafana/plugins directory and restart Grafana.
 
-```bash
+```shell
 sudo unzip tdengine-datasource-$GF_VERSION.zip -d /var/lib/grafana/plugins/
 ```
 
 If Grafana is running in a Docker environment, you can use the following environment variable to set up automatic installation of the TDengine data source plugin:
 
-```bash
+```shell
 GF_INSTALL_PLUGINS=tdengine-datasource
 ```
@@ -120,7 +120,7 @@ Click `Save & Test` to test, if successful, it will prompt: `TDengine Data sourc
 
 Refer to [Grafana containerized installation instructions](https://grafana.com/docs/grafana/next/setup-grafana/installation/docker/#install-plugins-in-the-docker-container). Use the following command to start a container and automatically install the TDengine plugin:
 
-```bash
+```shell
 docker run -d \
   -p 3000:3000 \
   --name=grafana \
@@ -31,7 +31,7 @@ The following parameter descriptions and examples use `<content>` as a placehold
 
 In command line mode, taosX uses DSN to represent a data source (source or destination), a typical DSN is as follows:
 
-```bash
+```shell
 # url-like
 <driver>[+<protocol>]://[[<username>:<password>@]<host>:<port>][/<object>][?<p1>=<v1>[&<p2>=<v2>]]
 |------|------------|---|-----------|-----------|------|------|----------|-----------------------|
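The url-like DSN shape shown in the hunk above follows standard URL structure, so a stock URL parser pulls it apart; the concrete `taos+ws` DSN below is an invented example:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical DSN matching <driver>[+<protocol>]://[<username>:<password>@]<host>:<port>/<object>?<p1>=<v1>
dsn = "taos+ws://root:taosdata@localhost:6041/db1?timeout=5s"
parts = urlsplit(dsn)
params = parse_qs(parts.query)
print(parts.scheme, parts.username, parts.hostname, parts.port, parts.path, params)
```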
@@ -390,7 +390,7 @@ You can view the log files or use the `journalctl` command to view the logs of `
 
 The command to view logs under Linux using `journalctl` is as follows:
 
-```bash
+```shell
 journalctl -u taosx [-f]
 ```
@@ -572,7 +572,7 @@ uint32_t len: The binary length of this string (excluding `\0`).
 
 **Return Value**:
 
-``` c
+```c
 struct parser_resp_t {
   int e; // 0 if success.
   void* p; // Success if contains.
@@ -589,7 +589,7 @@ When creation is successful, e = 0, p is the parser object.
 
 Parse the input payload and return the result in JSON format [u8]. The returned JSON will be fully decoded using the default JSON parser (expanding the root array and all objects).
 
-``` c
+```c
 const char* parser_mutate(
   void* parser,
   const uint8_t* in_ptr, uint32_t in_len,
@@ -26,7 +26,7 @@ The default configuration file for `Agent` is located at `/etc/taos/agent.toml`,
 
 As shown below:
 
-```TOML
+```toml
 # taosX service endpoint
 #
 #endpoint = "http://localhost:6055"
@@ -83,7 +83,7 @@ You don't need to be confused about how to set up the configuration file. Read a
 
 On Linux systems, the `Agent` can be started with the Systemd command:
 
-```bash
+```shell
 systemctl start taosx-agent
 ```
@@ -95,6 +95,6 @@ You can view the log files or use the `journalctl` command to view the logs of t
 
 The command to view logs with `journalctl` on Linux is as follows:
 
-```bash
+```shell
 journalctl -u taosx-agent [-f]
 ```
@@ -143,13 +143,13 @@ For details on TDengine monitoring configuration, please refer to: [TDengine Mon
 
 After installation, please use the `systemctl` command to start the taoskeeper service process.
 
-```bash
+```shell
 systemctl start taoskeeper
 ```
 
 Check if the service is working properly:
 
-```bash
+```shell
 systemctl status taoskeeper
 ```
@@ -261,7 +261,7 @@ Query OK, 14 row(s) in set (0.006542s)
 
 You can view the most recent report record of a supertable, such as:
 
-``` shell
+```shell
 taos> select last_row(*) from taosd_dnodes_info;
 last_row(_ts) | last_row(disk_engine) | last_row(system_net_in) | last_row(vnodes_num) | last_row(system_net_out) | last_row(uptime) | last_row(has_mnode) | last_row(io_read_disk) | last_row(error_log_count) | last_row(io_read) | last_row(cpu_cores) | last_row(has_qnode) | last_row(has_snode) | last_row(disk_total) | last_row(mem_engine) | last_row(info_log_count) | last_row(cpu_engine) | last_row(io_write_disk) | last_row(debug_log_count) | last_row(disk_used) | last_row(mem_total) | last_row(io_write) | last_row(masters) | last_row(cpu_system) | last_row(trace_log_count) | last_row(mem_free) |
 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
@@ -14,7 +14,7 @@ taosExplorer does not require separate installation. Starting from TDengine vers
 
 Before starting taosExplorer, please make sure the content in the configuration file is correct.
 
-```TOML
+```toml
 # This is an automatically generated configuration file for Explorer in [TOML](https://toml.io/) format.
 #
 # Here is a full list of available options.
@@ -148,7 +148,7 @@ Description:
 
 Then start taosExplorer, you can directly execute taos-explorer in the command line or use the systemctl command:
 
-```bash
+```shell
 systemctl start taos-explorer # Linux
 sc.exe start taos-explorer # Windows
 ```
@@ -248,13 +248,13 @@ The new version of the plugin uses the Grafana unified alerting feature, the `-E
 
 Assuming you start the TDengine database on the host `tdengine` with HTTP API port `6041`, user `root1`, and password `pass5ord`. Execute the script:
 
-```bash
+```shell
 ./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord
 ```
 
 If you want to monitor multiple TDengine clusters, you need to set up multiple TDinsight dashboards. Setting up a non-default TDinsight requires some changes: the `-n` `-i` `-t` options need to be changed to non-default names, and if using the built-in SMS alert feature, `-N` and `-L` should also be changed.
 
-```bash
+```shell
 sudo ./TDengine.sh -n TDengine-Env1 -a http://another:6041 -u root -p taosdata -i tdinsight-env1 -t 'TDinsight Env1'
 ```
@@ -10,7 +10,7 @@ The TDengine command line program (hereinafter referred to as TDengine CLI) is t
 
 To enter the TDengine CLI, simply execute `taos` in the terminal.
 
-```bash
+```shell
 taos
 ```
@@ -81,7 +81,7 @@ There are many other parameters:
 
 Example:
 
-```bash
+```shell
 taos -h h1.taos.com -s "use db; show tables;"
 ```
@@ -28,7 +28,7 @@ taosBenchmark supports comprehensive performance testing for TDengine, and the T
 
 Execute the following command to quickly experience taosBenchmark performing a write performance test on TDengine based on the default configuration.
 
-```bash
+```shell
 taosBenchmark
 ```
@@ -38,7 +38,7 @@ When running without parameters, taosBenchmark by default connects to the TDengi
 
 When running taosBenchmark using command line parameters and controlling its behavior, the `-f <json file>` parameter cannot be used. All configuration parameters must be specified through the command line. Below is an example of using command line mode to test the write performance of taosBenchmark.
 
-```bash
+```shell
 taosBenchmark -I stmt -n 200 -t 100
 ```
@@ -50,7 +50,7 @@ The taosBenchmark installation package includes examples of configuration files,
 
 Use the following command line to run taosBenchmark and control its behavior through a configuration file.
 
-```bash
+```shell
 taosBenchmark -f <json file>
 ```
@@ -210,19 +210,19 @@ However, renaming individual columns is not supported for `first(*)`, `last(*)`,
 
 Retrieve all subtable names and related tag information from a supertable:
 
-```mysql
+```sql
 SELECT TAGS TBNAME, location FROM meters;
 ```
 
 It is recommended that users query the subtable tag information of supertables using the INS_TAGS system table under INFORMATION_SCHEMA, for example, to get all subtable names and tag values of the supertable meters:
 
-```mysql
+```sql
 SELECT table_name, tag_name, tag_type, tag_value FROM information_schema.ins_tags WHERE stable_name='meters';
 ```
 
 Count the number of subtables under a supertable:
 
-```mysql
+```sql
 SELECT COUNT(*) FROM (SELECT DISTINCT TBNAME FROM meters);
 ```
@@ -385,7 +385,7 @@ SELECT CURRENT_USER();
 
 ### Syntax
 
-```txt
+```text
 WHERE (column|tbname) match/MATCH/nmatch/NMATCH _regex_
 ```
@@ -403,7 +403,7 @@ The length of the regular match string cannot exceed 128 bytes. You can set and
 
 ### Syntax
 
-```txt
+```text
 CASE value WHEN compare_value THEN result [WHEN compare_value THEN result ...] [ELSE result] END
 CASE WHEN condition THEN result [WHEN condition THEN result ...] [ELSE result] END
 ```
@@ -493,7 +493,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
 
 ## UNION ALL Clause
 
-```txt title=Syntax
+```text title=Syntax
 SELECT ...
 UNION ALL SELECT ...
 [UNION ALL SELECT ...]
@@ -417,7 +417,7 @@ MOD(expr1, expr2)
 
 **Example**:
 
-``` sql
+```sql
 taos> select mod(10,3);
 mod(10,3) |
 ============================
@@ -454,7 +454,7 @@ RAND([seed])
 
 **Example**:
 
-``` sql
+```sql
 taos> select rand();
 rand() |
 ============================
@@ -108,7 +108,7 @@ For the source code of the example programs, please refer to: [Example Programs]
 
 The Data Source Name has a generic format, similar to [PEAR DB](http://pear.php.net/manual/en/package.database.db.intro-dsn.php), but without the type prefix (brackets indicate optional):
 
-``` text
+```text
 [username[:password]@][protocol[(address)]]/[dbname][?param1=value1&...&paramN=valueN]
 ```
@@ -21,7 +21,7 @@ Below is an example using the `curl` tool in an Ubuntu environment (please confi
 
 The following example lists all databases, please replace `h1.tdengine.com` and 6041 (default value) with the actual running TDengine service FQDN and port number:
 
-```bash
+```shell
 curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" \
   -d "select name, ntables, status from information_schema.ins_databases;" \
   h1.tdengine.com:6041/rest/sql
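A note on the `Authorization: Basic cm9vdDp0YW9zZGF0YQ==` header appearing in these curl examples: the token is just the base64 encoding of `username:password`, here the default `root:taosdata`:

```python
import base64

# Base64-encode the default credentials to reproduce the docs' token.
token = base64.b64encode(b"root:taosdata").decode("ascii")
print(token)  # cm9vdDp0YW9zZGF0YQ==
```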
@@ -100,13 +100,13 @@ The BODY of the HTTP request contains a complete SQL statement. The data table i
 
 Use `curl` to initiate an HTTP Request with custom authentication as follows:
 
-```bash
+```shell
 curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id][&row_with_meta=true]]
 ```
 
 Or,
 
-```bash
+```shell
 curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id][&row_with_meta=true]]
 ```
@@ -279,7 +279,7 @@ Column types use the following strings:
 
 Prepare data
 
-```bash
+```shell
 create database demo
 use demo
 create table t(ts timestamp,c1 varbinary(20),c2 geometry(100))
@@ -288,7 +288,7 @@ insert into t values(now,'\x7f8290','point(100 100)')
 
 Execute query
 
-```bash
+```shell
 curl --location 'http://<fqdn>:<port>/rest/sql' \
 --header 'Content-Type: text/plain' \
 --header 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' \
@@ -428,7 +428,7 @@ Data Query Return Example
 
 HTTP requests need to include an authorization code `<TOKEN>`, used for identity verification. The authorization code is usually provided by the administrator and can be simply obtained by sending an `HTTP GET` request as follows:
 
-```bash
+```shell
 curl http://<fqnd>:<port>/rest/login/<username>/<password>
 ```
@@ -440,7 +440,7 @@ Here, `fqdn` is the FQDN or IP address of the TDengine database, `port` is the p
 
 Example of obtaining an authorization code:
 
-```bash
+```shell
 curl http://192.168.0.1:6041/rest/login/root/taosdata
 ```
@@ -457,7 +457,7 @@ Return value:
 
 - Query all records of table d1001 in the demo database:
 
-```bash
+```shell
 curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "select * from demo.d1001" 192.168.0.1:6041/rest/sql
 curl -L -H "Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04" -d "select * from demo.d1001" 192.168.0.1:6041/rest/sql
 ```
@@ -509,7 +509,7 @@ Return value:
 
 - Create database demo:
 
-```bash
+```shell
 curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "create database demo" 192.168.0.1:6041/rest/sql
 curl -L -H "Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04" -d "create database demo" 192.168.0.1:6041/rest/sql
 ```
@@ -560,7 +560,7 @@ Return value:
 
 #### TDengine 2.x response codes and message bodies
 
-```JSON
+```json
 {
   "status": "succ",
   "head": [
@@ -624,7 +624,7 @@ Return value:
 
 #### TDengine 3.0 Response Codes and Message Body
 
-```JSON
+```json
 {
   "code": 0,
   "column_meta": [
@@ -90,7 +90,7 @@ Batch insertion. Each insert statement can insert multiple records into one tabl
 
 When inserting nchar type data containing Chinese characters on Windows, first ensure that the system's regional settings are set to China (this can be set in the Control Panel). At this point, the `taos` client in cmd should already be working properly; if developing a Java application in an IDE, such as Eclipse or IntelliJ, ensure that the file encoding in the IDE is set to GBK (which is the default encoding type for Java), then initialize the client configuration when creating the Connection, as follows:
 
-```JAVA
+```java
 Class.forName("com.taosdata.jdbc.TSDBDriver");
 Properties properties = new Properties();
 properties.setProperty(TSDBDriver.LOCALE_KEY, "UTF-8");
@@ -145,7 +145,7 @@ Version 3.0 of TDengine includes a standalone component developed in Go called `
 
 The Go language version requirement is 1.14 or higher. If there are Go compilation errors, often due to issues accessing Go mod in China, they can be resolved by setting Go environment variables:
 
-```sh
+```shell
 go env -w GO111MODULE=on
 go env -w GOPROXY=https://goproxy.cn,direct
 ```
@@ -196,7 +196,7 @@ Here are the solutions:
 
 1. Create a file /Library/LaunchDaemons/limit.maxfiles.plist, write the following content (the example changes limit and maxfiles to 100,000, modify as needed):
 
-```plist
+```xml
 <?xml version="1.0" encoding="UTF-8"?>
 <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
   "http://www.apple.com/DTDs/PropertyList-1.0.dtd">