doc: refine kafka connector
parent 5183e5dd51
commit bd9c726e6d
@@ -314,7 +314,6 @@ connection.backoff.ms=5000
topic.prefix=tdengine-source-
poll.interval.ms=1000
fetch.max.rows=100
-out.format=line
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
```
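For orientation, the source-connector properties above are the ones loaded in the next hunk. Below is a minimal usage sketch; the file name `source-demo.properties` is an assumption for illustration, while the `load` and `unload` commands themselves appear verbatim in the hunk headers of this diff.

```
# Load the TDengine source connector from the example properties file
# (the file name source-demo.properties is assumed for illustration).
confluent local services connect connector load TDengineSourceConnector --config source-demo.properties

# Unload the connector when the test is finished.
confluent local services connect connector unload TDengineSourceConnector
```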
@@ -353,7 +352,7 @@ confluent local services connect connector load TDengineSourceConnector --config

### View topic data

-Use the kafka-console-consumer command-line tool to monitor data in the topic tdengine-source-test. In the beginning, all historical data will be output. After inserting two new data into TDengine, kafka-console-consumer immediately outputs the two new data.
+Use the kafka-console-consumer command-line tool to monitor data in the topic tdengine-source-test. At first, all historical data is output; after two new rows are inserted into TDengine, kafka-console-consumer immediately outputs them as well. The output is in InfluxDB line protocol format (illustrated after this hunk).

```
kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic tdengine-source-test
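The revised sentence above notes that the records are emitted in InfluxDB line protocol format, i.e. `measurement,tag_set field_set timestamp`. The measurement, tag, field, and timestamp values in the sketch below are hypothetical and only illustrate the shape of the output:

```
meters,location=California.SanFrancisco,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000000
meters,location=California.SanFrancisco,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250000000
```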
@@ -428,9 +427,8 @@ The following configuration items apply to TDengine Sink Connector and TDengine
3. `timestamp.initial`: Data synchronization start time. The format is 'yyyy-MM-dd HH:mm:ss'. If it is not set, the data importing to Kafka will be started from the first/oldest row in the database.
4. `poll.interval.ms`: The time interval for checking newly created tables or removed tables, default value is 1000.
5. `fetch.max.rows`: The maximum number of rows retrieved when retrieving the database, default is 100.
-6. `out.format`: The data format. The value could be `line`, which represents the InfluxDB Line protocol format.
-7. 7. `query.interval.ms`: The time range of reading data from TDengine each time, its unit is millisecond. It should be adjusted according to the data flow in rate, the default value is 1000.
-8. `topic.per.stable`: If it's set to true, it means one super table in TDengine corresponds to a topic in Kafka, the topic naming rule is `<topic.prefix>-<connection.database>-<stable.name>`; if it's set to false, it means the whole DB corresponds to a topic in Kafka, the topic naming rule is `<topic.prefix>-<connection.database>`.
+6. `query.interval.ms`: The time span of data read from TDengine in each query, in milliseconds. It should be adjusted according to the rate at which data flows in; the default value is 1000. (A configuration sketch follows this hunk.)
+7. `topic.per.stable`: If set to true, each super table in TDengine corresponds to one Kafka topic, and the topic naming rule is `<topic.prefix>-<connection.database>-<stable.name>`; if set to false, all data in the database goes into a single Kafka topic, and the topic naming rule is `<topic.prefix>-<connection.database>`.
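As referenced in item 6 above, here is a minimal configuration sketch that puts the renumbered source-connector items together. The database name, start time, and `topic.per.stable` choice are illustrative assumptions; the remaining values are defaults or example values that already appear in this diff.

```
# Illustrative source-connector settings corresponding to items 3-7 above.

# Assumed database name (the example topic used earlier is tdengine-source-test).
connection.database=test

# Arbitrary synchronization start time in 'yyyy-MM-dd HH:mm:ss' format (item 3).
timestamp.initial=2022-01-01 00:00:00

# Check for newly created or removed tables every 1000 ms (item 4, default).
poll.interval.ms=1000

# Retrieve at most 100 rows per fetch (item 5, default).
fetch.max.rows=100

# Read data from TDengine in 1000 ms spans per query (item 6, default).
query.interval.ms=1000

# One Kafka topic per super table (item 7, example choice).
topic.per.stable=true
topic.prefix=tdengine-source-
```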
@@ -318,7 +318,6 @@ connection.backoff.ms=5000
topic.prefix=tdengine-source-
poll.interval.ms=1000
fetch.max.rows=100
-out.format=line
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
```
@@ -357,7 +356,7 @@ confluent local services connect connector load TDengineSourceConnector --config

### View topic data

-Use the kafka-console-consumer command-line tool to monitor data in the topic tdengine-source-test. At first, all historical data is output; after two new rows are inserted into TDengine, kafka-console-consumer immediately outputs the two new rows as well.
+Use the kafka-console-consumer command-line tool to monitor data in the topic tdengine-source-test. At first, all historical data is output; after two new rows are inserted into TDengine, kafka-console-consumer immediately outputs the two new rows as well. The output is in InfluxDB line protocol format.

```
kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic tdengine-source-test
@@ -438,9 +437,8 @@ confluent local services connect connector unload TDengineSourceConnector
3. `timestamp.initial`: Data synchronization start time, in the format 'yyyy-MM-dd HH:mm:ss'. If not specified, synchronization starts from the earliest record in the specified database.
4. `poll.interval.ms`: The interval, in ms, at which to check for newly created or deleted tables. Default is 1000.
5. `fetch.max.rows`: The maximum number of rows fetched per query against the database. Default is 100.
-6. `out.format`: The data format. The value `line` means the InfluxDB Line protocol format.
-7. `query.interval.ms`: The time span of data read from TDengine in each query. It should be configured appropriately for the characteristics of the data in the tables, so that a single query returns neither too much nor too little data; in a given environment it is best to find a suitable value through testing. The default value is 1000.
-8. `topic.per.stable`: If set to true, each super table corresponds to one Kafka topic, named `<topic.prefix>-<connection.database>-<stable.name>`; if set to false, all data in the specified database goes into a single Kafka topic, named `<topic.prefix>-<connection.database>`.
+6. `query.interval.ms`: The time span of data read from TDengine in each query. It should be configured appropriately for the characteristics of the data in the tables, so that a single query returns neither too much nor too little data; in a given environment it is best to find a suitable value through testing. The default value is 1000.
+7. `topic.per.stable`: If set to true, each super table corresponds to one Kafka topic, named `<topic.prefix>-<connection.database>-<stable.name>`; if set to false, all data in the specified database goes into a single Kafka topic, named `<topic.prefix>-<connection.database>`. (Topic naming is illustrated at the end of this diff.)

## Other Notes
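To make the `topic.per.stable` naming rules concrete (as referenced in item 7 of the previous hunk): assuming `topic.prefix=tdengine-source-`, an assumed `connection.database=test` (implied by the example topic tdengine-source-test used earlier), and a hypothetical super table named meters, the resulting topic names would look roughly as follows:

```
# topic.per.stable=true   ->  one topic per super table, e.g. tdengine-source-test-meters
# topic.per.stable=false  ->  one topic for the whole database:  tdengine-source-test
```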