fix:merge from 3.0

wangmm0220 2022-08-17 11:09:49 +08:00
commit d5a8b397f0
52 changed files with 1292 additions and 727 deletions

View File

@ -21,17 +21,17 @@
TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series Database, TSDB). TDengine can be widely used in IoT, Industrial Internet, Connected Vehicles, IT operations, finance, and other fields. Beyond the core time-series database functionality, TDengine also provides caching, data subscription, stream processing, and other features. It is a minimalist time-series data processing platform that minimizes the complexity of system design and reduces R&D and operating costs. Compared with other time-series databases, the main advantages of TDengine are as follows:
- High Performance: thanks to an innovative storage engine design, TDengine is more than 10x faster than general-purpose databases for both data ingestion and queries, also outperforming other time-series databases, while using less than 1/10 of their storage space.
- **High Performance**: thanks to an innovative storage engine design, TDengine is more than 10x faster than general-purpose databases for both data ingestion and queries, also outperforming other time-series databases, while using less than 1/10 of their storage space.
- Cloud Native: with a natively distributed design that takes full advantage of cloud platforms, TDengine offers horizontal scalability along with elasticity, resilience, and observability; it supports Kubernetes deployment and runs on public, private, and hybrid clouds.
- **Cloud Native**: with a natively distributed design that takes full advantage of cloud platforms, TDengine offers horizontal scalability along with elasticity, resilience, and observability; it supports Kubernetes deployment and runs on public, private, and hybrid clouds.
- Simplified Time-Series Data Platform: TDengine has built-in message queuing, caching, and stream processing, so applications no longer need to integrate Kafka/Redis/HBase/Spark, which greatly reduces system complexity as well as development and operating costs.
- **Simplified Time-Series Data Platform**: TDengine has built-in message queuing, caching, and stream processing, so applications no longer need to integrate Kafka/Redis/HBase/Spark, which greatly reduces system complexity as well as development and operating costs.
- Analytics: TDengine supports SQL and provides SQL extensions for time-series-specific analysis; super tables, separation of storage and compute, partitioning and sharding, pre-computation, and user-defined functions give it powerful analytical capabilities.
- **Analytics**: TDengine supports SQL and provides SQL extensions for time-series-specific analysis; super tables, separation of storage and compute, partitioning and sharding, pre-computation, and user-defined functions give it powerful analytical capabilities.
- Ease of Use: no dependencies, with installation and clustering done in seconds; provides a REST interface and connectors for many languages, seamless integration with numerous third-party tools, a command-line program for administration and ad-hoc queries, and a variety of operations tools.
- **Ease of Use**: no dependencies, with installation and clustering done in seconds; provides a REST interface and connectors for many languages, seamless integration with numerous third-party tools, a command-line program for administration and ad-hoc queries, and a variety of operations tools.
- Open Source Core: TDengine's core code, including the clustering feature, is fully open source; as of August 1, 2022 there were more than 135.9k running instances worldwide, 18.7k GitHub stars, 4.4k forks, and an active community.
- **Open Source Core**: TDengine's core code, including the clustering feature, is fully open source; as of August 1, 2022 there were more than 135.9k running instances worldwide, 18.7k GitHub stars, 4.4k forks, and an active community.
# 文档

View File

@ -20,23 +20,19 @@ English | [简体中文](README-CN.md) | We are hiring, check [here](https://tde
# What is TDengine
TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
TDengine is an open source, high-performance, cloud native time-series database (Time-Series Database, TSDB).
- **High-Performance**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
TDengine can be widely used in Internet of Things (IoT), Connected Cars, Industrial IoT, IT operation and maintenance, finance, and other fields. In addition to the core time-series database functions, TDengine also provides functions such as caching, data subscription, and stream computing. It is a minimalist time-series data processing platform that minimizes the complexity of system design and reduces R&D and operating costs. Compared with other time-series databases, the main advantages of TDengine are as follows:
- **Simplified Solution**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
- **Cloud Native**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
- High-Performance: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
- **Ease of Use**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
- Simplified Solution: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
- **Easy Data Analytics**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- Cloud Native: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
- Ease of Use: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
- Easy Data Analytics: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- Open Source: TDengine's core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub, an active developer community, and over 137k running instances worldwide.
- **Open Source**: TDengine's core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
# Documentation
@ -44,15 +40,10 @@ For user manual, system design and architecture, please refer to [TDengine Docum
# Building
At the moment, TDengine server supports running on Linux and Windows systems. Applications on any OS can also use the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
You can install from source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/), according to your needs. This quick guide only applies to installing from source.
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump, which are part of taosTools. By default, compiling TDengine does not include taosTools; you can use `cmake .. -DBUILD_TOOLS=true` to have them compiled together with TDengine.
To build TDengine, use [CMake](https://cmake.org/) 3.0.2 or a higher version in the project directory.

View File

@ -81,7 +81,7 @@ ENDIF ()
IF (TD_WINDOWS)
MESSAGE("${Yellow} set compiler flag for Windows! ${ColourReset}")
SET(COMMON_FLAGS "/w /D_WIN32 /DWIN32 /Zi")
SET(COMMON_FLAGS "/w /D_WIN32 /DWIN32 /Zi /MTd")
SET(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /MANIFEST:NO")
# IF (MSVC AND (MSVC_VERSION GREATER_EQUAL 1900))
# SET(COMMON_FLAGS "${COMMON_FLAGS} /Wv:18")
@ -92,6 +92,12 @@ IF (TD_WINDOWS)
IF (CMAKE_DEPFILE_FLAGS_CXX)
SET(CMAKE_DEPFILE_FLAGS_CXX "")
ENDIF ()
IF (CMAKE_C_FLAGS_DEBUG)
SET(CMAKE_C_FLAGS_DEBUG "" CACHE STRING "" FORCE)
ENDIF ()
IF (CMAKE_CXX_FLAGS_DEBUG)
SET(CMAKE_CXX_FLAGS_DEBUG "" CACHE STRING "" FORCE)
ENDIF ()
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${COMMON_FLAGS}")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COMMON_FLAGS}")

View File

@ -273,7 +273,7 @@ endif(${BUILD_WITH_NURAFT})
# pthread
if(${BUILD_PTHREAD})
set(CMAKE_BUILD_TYPE release)
set(CMAKE_BUILD_TYPE debug)
add_definitions(-DPTW32_STATIC_LIB)
add_subdirectory(pthread EXCLUDE_FROM_ALL)
set_target_properties(libpthreadVC3 PROPERTIES OUTPUT_NAME pthread)

View File

@ -1,5 +1,7 @@
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}

View File

@ -130,7 +130,7 @@ The configuration parameters in the URL are as follows:
- charset: The character set used by the client, the default value is the system character set.
- locale: Client locale, by default, use the system's current locale.
- timezone: The time zone used by the client, the default value is the system's current time zone.
- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is: false. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is: true. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
- batchErrorIgnore: true: When executing a Statement's executeBatch, if one SQL statement fails in the middle, the following SQL statements continue to be executed; false: no statements after the failed SQL are executed. The default value is: false.
For more information about JDBC native connections, see [Video Tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).

View File

@ -1,59 +1,6 @@
-import taos
-from taos.tmq import *
-conn = taos.connect()
-# create database
-conn.execute("drop database if exists py_tmq")
-conn.execute("create database if not exists py_tmq vgroups 2")
-# create table and stables
-conn.select_db("py_tmq")
-conn.execute("create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 binary(10)) tags(t1 int)")
-conn.execute("create table if not exists tb1 using stb1 tags(1)")
-conn.execute("create table if not exists tb2 using stb1 tags(2)")
-conn.execute("create table if not exists tb3 using stb1 tags(3)")
-# create topic
-conn.execute("drop topic if exists topic_ctb_column")
-conn.execute("create topic if not exists topic_ctb_column as select ts, c1, c2, c3 from stb1")
-# set consumer configure options
-conf = TaosTmqConf()
-conf.set("group.id", "tg2")
-conf.set("td.connect.user", "root")
-conf.set("td.connect.pass", "taosdata")
-conf.set("enable.auto.commit", "true")
-conf.set("msg.with.table.name", "true")
-def tmq_commit_cb_print(tmq, resp, offset, param=None):
-    print(f"commit: {resp}, tmq: {tmq}, offset: {offset}, param: {param}")
-conf.set_auto_commit_cb(tmq_commit_cb_print, None)
-# build consumer
-tmq = conf.new_consumer()
-# build topic list
-topic_list = TaosTmqList()
-topic_list.append("topic_ctb_column")
-# subscribe consumer
-tmq.subscribe(topic_list)
-# check subscriptions
-sub_list = tmq.subscription()
-print("subscribed topics: ",sub_list)
-# start subscribe
-while 1:
-    res = tmq.poll(1000)
-    if res:
-        topic = res.get_topic_name()
-        vg = res.get_vgroup_id()
-        db = res.get_db_name()
-        print(f"topic: {topic}\nvgroup id: {vg}\ndb: {db}")
-        for row in res:
-            print(row)
-        tb = res.get_table_name()
-        print(f"from table: {tb}")
+from taos.tmq import TaosConsumer
+consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
+for msg in consumer:
+    for row in msg:
+        print(row)

View File

@ -5,12 +5,74 @@ title: Get Started with an Installation Package
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PkgListV3 from "/components/PkgListV3";
The complete TDengine software package includes the server (taosd), taosAdapter (which interfaces with third-party systems and provides the RESTful interface), the application driver (taosc), the command-line program (CLI, taos), and several tools. Currently taosAdapter can only be installed and run on Linux; support for Windows, macOS, and other systems will follow. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](../../reference/rest-api/) through [taosAdapter](../../reference/taosadapter/).
For convenience, the standard server package includes taos, taosd, taosAdapter, taosdump, taosBenchmark, the TDinsight installation script, and sample code; if you only need the server program and C/C++ client support, you can download the lite package instead.
On Linux, the TDengine open-source edition provides deb and rpm packages; choose the one that fits your environment (deb supports Debian/Ubuntu and derivatives, rpm supports CentOS/RHEL/SUSE and derivatives). A tar.gz package is also provided for enterprise users, and online installation via the `apt-get` tool is supported as well. TDengine also provides an installer for the Windows x64 platform. You can also [try it out immediately with Docker](../../get-started/docker/). Note that the rpm and deb packages do not include taosdump or the TDinsight installation script; these tools are obtained by installing the taosTools package. If you want to contribute code to TDengine or are interested in its internals, please visit the [TDengine GitHub page](https://github.com/taosdata/TDengine) to download the source code and build it.
On Linux, the TDengine open-source edition provides deb and rpm packages; choose the one that fits your environment (deb supports Debian/Ubuntu and derivatives, rpm supports CentOS/RHEL/SUSE and derivatives). A tar.gz package is also provided for enterprise users, and online installation via the `apt-get` tool is supported as well. TDengine also provides an installer for the Windows x64 platform. You can also [try it out immediately with Docker](../../get-started/docker/). If you want to contribute code to TDengine or are interested in its internals, please visit the [TDengine GitHub page](https://github.com/taosdata/TDengine) to download the source code and build it.
## Installation
<Tabs>
<TabItem label="Deb Installation" value="debinst">
1. Download the deb installation package from the list:
<PkgListV3 type={6}/>
2. Go to the directory where the package is located and run the following installation command:
```bash
# replace <version> with the version of the downloaded package
sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
```
</TabItem>
<TabItem label="RPM Installation" value="rpminst">
1. Download the rpm installation package from the list:
<PkgListV3 type={5}/>
2. Go to the directory where the package is located and run the following installation command:
```bash
# replace <version> with the version of the downloaded package
sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
```
</TabItem>
<TabItem label="tar.gz Installation" value="tarinst">
1. Download the tar.gz installation package from the list:
<PkgListV3 type={0}/>
2. Go to the directory where the package is located, extract the file, enter the subdirectory, and run the install.sh script inside:
```bash
# replace <version> with the version of the downloaded package
tar -zxvf TDengine-server-<version>-Linux-x64.tar.gz
```
After extracting, enter the corresponding path and run
```bash
sudo ./install.sh
```
:::info
While running, the install.sh script asks for some configuration information through an interactive command-line interface. For a non-interactive installation, run install.sh with the -e no parameter. Run `./install.sh -h` for detailed information about all parameters.
:::
</TabItem>
<TabItem label="Windows Installation" value="windows">
1. Download the exe installer from the list:
<PkgListV3 type={3}/>
2. Run the executable to install TDengine.
</TabItem>
<TabItem value="apt-get" label="apt-get">
You can use the apt-get tool to install TDengine from the official repository.
@ -40,58 +102,12 @@ sudo apt-get install tdengine
The apt-get method only applies to Debian and Ubuntu systems.
::::
</TabItem>
<TabItem label="Deb Installation" value="debinst">
1. Download the deb installation package, e.g. TDengine-server-3.0.0.0-Linux-x64.deb, from the [release history page](../../releases);
2. Go to the directory containing TDengine-server-3.0.0.0-Linux-x64.deb and run the following installation command:
```bash
sudo dpkg -i TDengine-server-3.0.0.0-Linux-x64.deb
```
</TabItem>
<TabItem label="RPM Installation" value="rpminst">
1. Download the rpm installation package, e.g. TDengine-server-3.0.0.0-Linux-x64.rpm, from the [release history page](../../releases);
2. Go to the directory containing TDengine-server-3.0.0.0-Linux-x64.rpm and run the following installation command:
```bash
sudo rpm -ivh TDengine-server-3.0.0.0-Linux-x64.rpm
```
</TabItem>
<TabItem label="tar.gz Installation" value="tarinst">
1. Download the tar.gz installation package, e.g. TDengine-server-3.0.0.0-Linux-x64.tar.gz, from the [release history page](../../releases);
2. Go to the directory containing TDengine-server-3.0.0.0-Linux-x64.tar.gz, extract the file, enter the subdirectory, and run the install.sh script inside:
```bash
tar -zxvf TDengine-server-3.0.0.0-Linux-x64.tar.gz
```
After extracting, enter the corresponding path and run
```bash
sudo ./install.sh
```
</Tabs>
:::info
While running, the install.sh script asks for some configuration information through an interactive command-line interface. For a non-interactive installation, run install.sh with the -e no parameter. Run `./install.sh -h` for detailed information about all parameters.
To download other components, the latest beta, or installation packages of previous versions, visit the [release history page](../../releases).
:::
</TabItem>
<TabItem label="Windows Installation" value="windows">
1. Download the exe installer, e.g. TDengine-server-3.0.0.0-Windows-x64.exe, from the [release history page](../../releases);
2. Run TDengine-server-3.0.0.0-Windows-x64.exe to install TDengine.
</TabItem>
</Tabs>
:::note
When installing the first node and the "Enter FQDN:" prompt appears, you do not need to enter anything. Only when installing the second or a later node do you need to enter the FQDN of any available node in the existing cluster, so that the new node can join it. Alternatively, leave it blank and configure it in the new node's configuration file before the node starts.
@ -189,7 +205,7 @@ Query OK, 2 row(s) in set (0.003128s)
## Experience Data Ingestion Speed with taosBenchmark
Start the TDengine service and run `taosBenchmark` (formerly named `taosdemo`) in a Linux terminal:
Start the TDengine service and run `taosBenchmark` (formerly named `taosdemo`) in a Linux or Windows terminal:
```bash
taosBenchmark

View File

@ -88,6 +88,30 @@ void close() throws SQLException;
```
</TabItem>
<TabItem value="Python" label="Python">
```python
class TaosConsumer():
def __init__(self, *topics, **configs)
def __iter__(self)
def __next__(self)
def sync_next(self)
def subscription(self)
def unsubscribe(self)
def close(self)
def __del__(self)
```
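For reference, a minimal sketch of how this Python interface is typically driven; the topic name and group ID below are placeholders, not fixed names:
```python
from taos.tmq import TaosConsumer

# Iterate over the messages of a topic; each msg yields rows.
# 'topic_ctb_column' and 'tg1' are example values.
consumer = TaosConsumer('topic_ctb_column', group_id='tg1')
try:
    for msg in consumer:        # __iter__/__next__ poll the topic
        for row in msg:
            print(row)
finally:
    consumer.close()            # release the underlying TMQ handle
```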
</TabItem>
<TabItem label="Go" value="Go">
```go
@ -106,6 +130,84 @@ func (c *Consumer) Subscribe(topics []string) error
func (c *Consumer) Unsubscribe() error
```
</TabItem>
<TabItem label="Rust" value="Rust">
```rust
impl TBuilder for TmqBuilder
fn from_dsn<D: IntoDsn>(dsn: D) -> Result<Self, Self::Error>
fn build(&self) -> Result<Self::Target, Self::Error>
impl AsAsyncConsumer for Consumer
async fn subscribe<T: Into<String>, I: IntoIterator<Item = T> + Send>(
&mut self,
topics: I,
) -> Result<(), Self::Error>;
fn stream(
&self,
) -> Pin<
Box<
dyn '_
+ Send
+ futures::Stream<
Item = Result<(Self::Offset, MessageSet<Self::Meta, Self::Data>), Self::Error>,
>,
>,
>;
async fn commit(&self, offset: Self::Offset) -> Result<(), Self::Error>;
async fn unsubscribe(self);
```
See <https://docs.rs/taos> for detailed API documentation.
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
function TMQConsumer(config)
function subscribe(topic)
function consume(timeout)
function subscription()
function unsubscribe()
function commit(msg)
function close()
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
ConsumerBuilder(IEnumerable<KeyValuePair<string, string>> config)
virtual IConsumer Build()
Consumer(ConsumerBuilder builder)
void Subscribe(IEnumerable<string> topics)
void Subscribe(string topic)
ConsumeResult Consume(int millisecondsTimeout)
List<string> Subscription()
void Unsubscribe()
void Commit(ConsumeResult consumerResult)
void Close()
```
</TabItem>
</Tabs>
@ -249,6 +351,7 @@ public class MetersDeserializer extends ReferenceDeserializer<Meters> {
```
</TabItem>
<TabItem label="Go" value="Go">
```go
@ -299,6 +402,91 @@ if err != nil {
```
</TabItem>
<TabItem label="Rust" value="Rust">
```rust
let mut dsn: Dsn = "taos://".parse()?;
dsn.set("group.id", "group1");
dsn.set("client.id", "test");
dsn.set("auto.offset.reset", "earliest");
let tmq = TmqBuilder::from_dsn(dsn)?;
let mut consumer = tmq.build()?;
```
</TabItem>
<TabItem value="Python" label="Python">
Python uses the following configuration options to create a Consumer instance.
| Parameter | Type | Description | Notes |
| :----------------------------: | :----: | -------------------------------------------------------- | ------------------------------------------- |
| `td_connect_ip` | string | Used to create the connection; same as `taos_connect` | |
| `td_connect_user` | string | Used to create the connection; same as `taos_connect` | |
| `td_connect_pass` | string | Used to create the connection; same as `taos_connect` | |
| `td_connect_port` | string | Used to create the connection; same as `taos_connect` | |
| `group_id` | string | Consumer group ID; consumers in the same group share progress | **Required**. Maximum length: 192. |
| `client_id` | string | Client ID | Maximum length: 192. |
| `auto_offset_reset` | string | Initial position of the consumer group subscription | Options: `earliest`, `latest`, `none` (default) |
| `enable_auto_commit` | string | Enable automatic commit | Valid values: `true`, `false`. |
| `auto_commit_interval_ms` | string | Automatic commit interval in milliseconds | |
| `enable_heartbeat_background` | string | Enable background heartbeat; when enabled the consumer does not go offline even if it does not poll for a long time | Valid values: `true`, `false` |
| `experimental_snapshot_enable` | string | Whether to consume from the WAL or from the TSDB | Valid values: `true`, `false` |
| `msg_with_table_name` | string | Whether table names can be parsed from messages | Valid values: `true`, `false` |
| `timeout` | int | Consumer poll timeout | |
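Passing these options to the `TaosConsumer` constructor might look like the following sketch; the keyword names mirror the table above and the values are illustrative only:
```python
from taos.tmq import TaosConsumer

# Keyword arguments correspond to the configuration table above.
consumer = TaosConsumer(
    'topic_ctb_column',             # topic to subscribe to (example name)
    group_id='tg2',                 # required; a group shares progress
    td_connect_ip='127.0.0.1',
    enable_auto_commit='true',
    auto_offset_reset='earliest',
    msg_with_table_name='true',
)
```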
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
// set the consumer group (group.id), automatic commit (enable.auto.commit),
// commit interval (auto.commit.interval.ms), user name (td.connect.user), password (td.connect.pass) and other parameters as needed
let consumer = taos.consumer({
  'enable.auto.commit': 'true',
  'auto.commit.interval.ms': '1000',
  'group.id': 'tg2',
  'td.connect.user': 'root',
  'td.connect.pass': 'taosdata',
  'auto.offset.reset': 'earliest',
  'msg.with.table.name': 'true',
  'td.connect.ip': '127.0.0.1',
  'td.connect.port': '6030'
});
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
using TDengineTMQ;
// set the consumer group (GourpId), automatic commit (EnableAutoCommit),
// commit interval (AutoCommitIntervalMs), user name (TDConnectUser), password (TDConnectPasswd) and other parameters as needed
var cfg = new ConsumerConfig
{
    EnableAutoCommit = "true",
    AutoCommitIntervalMs = "1000",
    GourpId = "TDengine-TMQ-C#",
    TDConnectUser = "root",
    TDConnectPasswd = "taosdata",
    AutoOffsetReset = "earliest",
    MsgWithTableName = "true",
    TDConnectIp = "127.0.0.1",
    TDConnectPort = "6030"
};
var consumer = new ConsumerBuilder(cfg).Build();
```
</TabItem>
</Tabs>
The configuration above includes the consumer group ID. If multiple consumers specify the same consumer group ID, they automatically form a consumer group and share consumption progress, as sketched below.
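For example, two consumers created with the same group ID form one consumer group and split the topic's vgroups between them; a sketch using the Python interface above (the topic name is illustrative):
```python
from taos.tmq import TaosConsumer

# Both consumers specify group_id='group1', so they form one consumer
# group: consumption progress is shared and the topic's vgroups are
# balanced across the two instances.
c1 = TaosConsumer('tmq_meters', group_id='group1')
c2 = TaosConsumer('tmq_meters', group_id='group1')
```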
@ -343,6 +531,45 @@ if err != nil {
}
```
</TabItem>
<TabItem value="Rust" label="Rust">
```rust
consumer.subscribe(["tmq_meters"]).await?;
```
</TabItem>
<TabItem value="Python" label="Python">
```python
consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
```
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
// create the list of topics to subscribe to
let topics = ['topic_test']
// start the subscription
consumer.subscribe(topics);
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
// create the list of topics to subscribe to
List<string> topics = new List<string>();
topics.Add("tmq_topic");
// start the subscription
consumer.Subscribe(topics);
```
</TabItem>
</Tabs>
@ -377,6 +604,7 @@ while(running){
```
</TabItem>
<TabItem value="Go" label="Go">
```go
@ -392,6 +620,80 @@ for {
```
</TabItem>
<TabItem value="Rust" label="Rust">
```rust
{
let mut stream = consumer.stream();
while let Some((offset, message)) = stream.try_next().await? {
// get information from offset
// the topic
let topic = offset.topic();
// the vgroup id, like partition id in kafka.
let vgroup_id = offset.vgroup_id();
println!("* in vgroup id {vgroup_id} of topic {topic}\n");
if let Some(data) = message.into_data() {
while let Some(block) = data.fetch_raw_block().await? {
// one block for one table, get table name if needed
let name = block.table_name();
let records: Vec<Record> = block.deserialize().try_collect()?;
println!(
"** table: {}, got {} records: {:#?}\n",
name.unwrap(),
records.len(),
records
);
}
}
consumer.commit(offset).await?;
}
}
```
</TabItem>
<TabItem value="Python" label="Python">
```python
for msg in consumer:
for row in msg:
print(row)
```
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
while(true){
msg = consumer.consume(200);
// process message(consumeResult)
console.log(msg.topicPartition);
console.log(msg.block);
console.log(msg.fields)
}
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
// consume data
while (true)
{
var consumerRes = consumer.Consume(100);
// process ConsumeResult
ProcessMsg(consumerRes);
consumer.Commit(consumerRes);
}
```
</TabItem>
</Tabs>
## End Consumption
@ -421,6 +723,8 @@ consumer.close();
```
</TabItem>
<TabItem value="Go" label="Go">
```go
@ -428,6 +732,45 @@ consumer.Close()
```
</TabItem>
<TabItem value="Rust" label="Rust">
```rust
consumer.unsubscribe().await;
```
</TabItem>
<TabItem value="Python" label="Python">
```py
# unsubscribe
consumer.unsubscribe()
# close the consumer
consumer.close()
```
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
consumer.unsubscribe();
consumer.close();
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
// unsubscribe
consumer.Unsubscribe();
// close the consumer
consumer.Close();
```
</TabItem>
</Tabs>
## Delete *topic*
@ -775,65 +1118,15 @@ int main(int argc, char* argv[]) {
<TabItem label="Python" value="Python">
```python
+import taos
+from taos.tmq import TaosConsumer
-import taos
-from taos.tmq import *
-conn = taos.connect()
-# create database
-conn.execute("drop database if exists py_tmq")
-conn.execute("create database if not exists py_tmq vgroups 2")
-# create table and stables
-conn.select_db("py_tmq")
-conn.execute("create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 binary(10)) tags(t1 int)")
-conn.execute("create table if not exists tb1 using stb1 tags(1)")
-conn.execute("create table if not exists tb2 using stb1 tags(2)")
-conn.execute("create table if not exists tb3 using stb1 tags(3)")
-# create topic
-conn.execute("drop topic if exists topic_ctb_column")
-conn.execute("create topic if not exists topic_ctb_column as select ts, c1, c2, c3 from stb1")
-# set consumer configure options
-conf = TaosTmqConf()
-conf.set("group.id", "tg2")
-conf.set("td.connect.user", "root")
-conf.set("td.connect.pass", "taosdata")
-conf.set("enable.auto.commit", "true")
-conf.set("msg.with.table.name", "true")
-def tmq_commit_cb_print(tmq, resp, offset, param=None):
-    print(f"commit: {resp}, tmq: {tmq}, offset: {offset}, param: {param}")
-conf.set_auto_commit_cb(tmq_commit_cb_print, None)
-# build consumer
-tmq = conf.new_consumer()
-# build topic list
-topic_list = TaosTmqList()
-topic_list.append("topic_ctb_column")
-# subscribe consumer
-tmq.subscribe(topic_list)
-# check subscriptions
-sub_list = tmq.subscription()
-print("subscribed topics: ",sub_list)
-# start subscribe
-while 1:
-    res = tmq.poll(1000)
-    if res:
-        topic = res.get_topic_name()
-        vg = res.get_vgroup_id()
-        db = res.get_db_name()
-        print(f"topic: {topic}\nvgroup id: {vg}\ndb: {db}")
-        for row in res:
-            print(row)
-        tb = res.get_table_name()
-        print(f"from table: {tb}")
+consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
+for msg in consumer:
+    for row in msg:
+        print(row)
```

View File

@ -1,5 +1,7 @@
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}

View File

@ -1,10 +1,10 @@
import PkgList from "/components/PkgList";
import PkgListV3 from "/components/PkgListV3";
1. Download the client installation package
<PkgList type={1} sys="Linux" />
<PkgListV3 type={1} sys="Linux" />
[All downloads](https://www.taosdata.com/cn/all-downloads/)
[All downloads](../../releases)
2. Extract the package

View File

@ -1,11 +1,10 @@
import PkgList from "/components/PkgList";
import PkgListV3 from "/components/PkgListV3";
1. Download the client installation package
<PkgList type={1} sys="Windows" />
[All downloads](https://www.taosdata.com/cn/all-downloads/)
<PkgListV3 type={4} sys="Windows" />
[All downloads](../../releases)
2. Run the installer and accept the default settings to complete the installation
3. Installation path

View File

@ -131,7 +131,7 @@ The configuration parameters in the URL are as follows:
- charset: the character set used by the client; the default is the system character set.
- locale: the client locale; the default is the current system locale.
- timezone: the time zone used by the client; the default is the current system time zone.
- batchfetch: true: pull result sets in batches when executing queries; false: pull result sets row by row. The default value is false. Batch pulling, which fetches a batch of data at a time, can effectively improve query performance when the result set is large.
- batchfetch: true: pull result sets in batches when executing queries; false: pull result sets row by row. The default value is true. Batch pulling, which fetches a batch of data at a time, can effectively improve query performance when the result set is large.
- batchErrorIgnore: true: when executing a Statement's executeBatch, if one SQL statement fails in the middle, the following SQL statements continue to be executed; false: no statements after the failed SQL are executed. The default value is false.
For JDBC native connections, see the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).

View File

@ -0,0 +1,134 @@
---
sidebar_label: taosKeeper
title: taosKeeper
description: Instructions for using TDengine taosKeeper
---
## Introduction
taosKeeper is the exporter of monitoring metrics for TDengine 3.0; with just a few configuration options it reports the running status of TDengine. taosKeeper uses the TDengine RESTful interface, so the TDengine client does not need to be installed.
## Installation
<!-- taosKeeper can be installed in two ways: -->
taosKeeper installation methods:
<!-- - taosKeeper is installed automatically together with the official TDengine installation package; see [TDengine installation](/operation/pkg-install) for details. -->
<!-- - Build taosKeeper separately and install it; see the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details. -->
- Build taosKeeper separately and install it; see the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details.
## Running
### Configuration and Startup
<!-- taosKeeper must be run from an operating system terminal. It supports two configuration methods: [command-line parameters](#命令行参数启动) and a [configuration file](#配置文件启动). Command-line parameters take precedence over configuration-file parameters. -->
taosKeeper must be run from an operating system terminal; it supports [starting with a configuration file](#配置文件启动).
**Before running taosKeeper, make sure the TDengine cluster and taosAdapter are already running correctly.**
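Since taosKeeper only talks to TDengine over the RESTful interface, one quick way to verify that prerequisite is to probe the taosAdapter REST endpoint directly. A minimal sketch, assuming the default port 6041 and the default root/taosdata credentials:
```python
import base64
import urllib.request

# Probe the taosAdapter REST endpoint that taosKeeper depends on.
# Port 6041 and root/taosdata are the defaults; adjust as needed.
token = base64.b64encode(b"root:taosdata").decode()
req = urllib.request.Request(
    "http://127.0.0.1:6041/rest/sql",
    data=b"show databases",
    headers={"Authorization": "Basic " + token},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:200])
```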
<!--
### Starting with Command-Line Parameters
Run taosKeeper with command-line parameters to control its behavior.
```shell
taosKeeper
```
-->
### Starting with a Configuration File
Run the following command for a quick taosKeeper experience. When no configuration file is specified, taosKeeper first uses the `/etc/taos/keeper.toml` configuration if present; otherwise it uses the default configuration.
```shell
taoskeeper -c <keeper config file>
```
**Below is a sample configuration file:**
```toml
# whether to enable debug mode in the gin framework
debug = false
# service listening port, default 6043
port = 6043
# log level: panic, error, info, debug, or trace
loglevel = "info"
# size of the goroutine pool used by the program
gopoolsize = 50000
# polling interval for querying TDengine monitoring data
RotationInterval = "15s"
[tdengine]
host = "127.0.0.1"
port = 6041
username = "root"
password = "taosdata"
# taosAdapter instances to be monitored
[taosAdapter]
address = ["127.0.0.1:6041","192.168.1.95:6041"]
[metrics]
# prefix of the monitoring metrics
prefix = "taos"
# identifier of the cluster data
cluster = "production"
# database in which monitoring data is stored
database = "log"
# normal tables to be monitored
tables = ["normal_table"]
```
### Obtaining Monitoring Metrics
taosKeeper, as the exporter of TDengine monitoring metrics, records the monitoring data generated by TDengine in a specified database and provides an export interface.
#### Viewing the Monitoring Result Set
```shell
$ taos
#
> use log;
> select * from cluster_info limit 1;
```
Sample result:
```shell
ts | first_ep | first_ep_dnode_id | version | master_uptime | monitor_interval | dbs_total | tbs_total | stbs_total | dnodes_total | dnodes_alive | mnodes_total | mnodes_alive | vgroups_total | vgroups_alive | vnodes_total | vnodes_alive | connections_total | protocol | cluster_id |
===============================================================================================================================================================================================================================================================================================================================================================================
2022-08-16 17:37:01.629 | hlb:6030 | 1 | 3.0.0.0 | 0.27250 | 15 | 2 | 27 | 38 | 1 | 1 | 1 | 1 | 4 | 4 | 4 | 4 | 14 | 1 | 5981392874047724755 |
Query OK, 1 rows in database (0.036162s)
```
#### Exporting Monitoring Metrics
```shell
curl http://127.0.0.1:6043/metrics
```
Partial result set:
```shell
# HELP taos_cluster_info_connections_total
# TYPE taos_cluster_info_connections_total counter
taos_cluster_info_connections_total{cluster_id="5981392874047724755"} 16
# HELP taos_cluster_info_dbs_total
# TYPE taos_cluster_info_dbs_total counter
taos_cluster_info_dbs_total{cluster_id="5981392874047724755"} 2
# HELP taos_cluster_info_dnodes_alive
# TYPE taos_cluster_info_dnodes_alive counter
taos_cluster_info_dnodes_alive{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_dnodes_total
# TYPE taos_cluster_info_dnodes_total counter
taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_first_ep
# TYPE taos_cluster_info_first_ep gauge
taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1
```
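Beyond Prometheus scraping, the same endpoint can be read with any HTTP client; for instance, a small Python sketch that filters the cluster_info series (assuming the default port 6043):
```python
import urllib.request

# Fetch the taosKeeper metrics endpoint and print cluster_info series.
with urllib.request.urlopen("http://127.0.0.1:6043/metrics") as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("taos_cluster_info_"):
            print(line)
```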

View File

@ -53,7 +53,7 @@
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.14.1</version>
<version>2.17.1</version>
</dependency>
<!-- proxool -->
<dependency>

View File

@ -10,7 +10,7 @@
<description>Demo project for TDengine</description>
<properties>
<spring.version>5.3.2</spring.version>
<spring.version>5.3.20</spring.version>
</properties>
<dependencies>
@ -75,20 +75,20 @@
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.75</version>
<version>1.2.83</version>
</dependency>
<!-- mysql: just for test -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.16</version>
<version>8.0.28</version>
<scope>test</scope>
</dependency>
<!-- log4j -->
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.14.1</version>
<version>2.17.1</version>
</dependency>
<!-- junit -->
<dependency>

View File

@ -139,7 +139,6 @@ int32_t taosInitCfg(const char *cfgDir, const char **envCmd, const char *envFile
bool tsc);
void taosCleanupCfg();
void taosCfgDynamicOptions(const char *option, const char *value);
void taosAddDataDir(int32_t index, char *v1, int32_t level, int32_t primary);
struct SConfig *taosGetCfg();

View File

@ -5,137 +5,54 @@
# #
########################################################
# first fully qualified domain name (FQDN) for TDengine system
# The end point of the first dnode in the cluster to be connected to when `taosd` or `taos` is started
# firstEp hostname:6030
# local fully qualified domain name (FQDN)
# The end point of the second dnode to be connected to if the firstEp is not available when `taosd` or `taos` is started
# secondEp
# The FQDN of the host where `taosd` will be started. It can be IP address
# fqdn hostname
# first port number for the connection (12 continuous UDP/TCP port numbers are used)
# The port for external access after `taosd` is started
# serverPort 6030
# log file's directory
# The maximum number of connections a dnode can accept
# maxShellConns 5000
# The directory for writing log files
# logDir /var/log/taos
# data file's directory
# All data files are stored in this directory
# dataDir /var/lib/taos
# temporary file's directory
# tempDir /tmp/
# the arbitrator's fully qualified domain name (FQDN) for TDengine system, for cluster only
# arbitrator arbitrator_hostname:6042
# number of threads per CPU core
# numOfThreadsPerCore 1.0
# number of threads to commit cache data
# numOfCommitThreads 4
# the proportion of total CPU cores available for query processing
# 2.0: the query threads will be set to double of the CPU cores.
# 1.0: all CPU cores are available for query processing [default].
# 0.5: only half of the CPU cores are available for query.
# 0.0: only one core available.
# ratioOfQueryCores 1.0
# the last_row/first/last aggregator will not change the original column name in the result fields
keepColumnName 1
# number of management nodes in the system
# numOfMnodes 1
# enable/disable backuping vnode directory when removing vnode
# vnodeBak 1
# enable/disable installation / usage report
# Switch for allowing TDengine to collect and report service usage information
# telemetryReporting 1
# enable/disable load balancing
# balance 1
# The maximum number of vnodes supported by dnode
# supportVnodes 0
# role for dnode. 0 - any, 1 - mnode, 2 - dnode
# role 0
# max timer control blocks
# maxTmrCtrl 512
# time interval of system monitor, seconds
# monitorInterval 30
# number of seconds allowed for a dnode to be offline, for cluster only
# offlineThreshold 864000
# RPC re-try timer, millisecond
# rpcTimer 300
# RPC maximum time for ack, seconds.
# rpcMaxTime 600
# time interval of dnode status reporting to mnode, seconds, for cluster only
# The interval of dnode reporting status to mnode
# statusInterval 1
# time interval of heart beat from shell to dnode, seconds
# The interval for taos shell to send heartbeat to mnode
# shellActivityTimer 3
# minimum sliding window time, milli-second
# The minimum sliding window time, milli-second
# minSlidingTime 10
# minimum time window, milli-second
# The minimum time window, milli-second
# minIntervalTime 10
# maximum delay before launching a stream computation, milli-second
# maxStreamCompDelay 20000
# The maximum allowed query buffer size in MB during query processing for each data node
# -1 no limit (default)
# 0 no query allowed, queries are disabled
# queryBufferSize -1
# maximum delay before launching a stream computation for the first time, milli-second
# maxFirstStreamCompDelay 10000
# retry delay when a stream computation fails, milli-second
# retryStreamCompDelay 10
# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9
# streamCompDelayRatio 0.1
# max number of vgroups per db, 0 means configured automatically
# maxVgroupsPerDb 0
# max number of tables per vnode
# maxTablesPerVnode 1000000
# cache block size (Mbyte)
# cache 16
# number of cache blocks per vnode
# blocks 6
# number of days per DB file
# days 10
# number of days to keep DB file
# keep 3650
# minimum rows of records in file block
# minRows 100
# maximum rows of records in file block
# maxRows 4096
# the number of acknowledgments required for successful data writing
# quorum 1
# enable/disable compression
# comp 2
# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
# walLevel 1
# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away
# fsync 3000
# number of replications, for cluster only
# replica 1
# the compressed rpc message, option:
# The compressed rpc message, option:
# -1 (no compression)
# 0 (all message compressed),
# > 0 (rpc message body which larger than this value will be compressed)
@ -147,15 +64,6 @@ keepColumnName 1
# > 0 (any retrieved column size greater than this value all data will be compressed.)
# compressColData -1
# max length of an SQL
# maxSQLLength 65480
# max length of WildCards
# maxWildCardsLength 100
# the maximum number of records allowed for super table time sorting
# maxNumOfOrderedRes 100000
# system time zone
# timezone Asia/Shanghai (CST, +0800)
# system time zone (for windows 10)
@ -167,12 +75,6 @@ keepColumnName 1
# default system charset
# charset UTF-8
# max number of connections allowed in dnode
# maxShellConns 5000
# max number of connections allowed in client
# maxConnections 5000
# stop writing logs when the disk size of the log folder is less than this value
# minimalLogDirGB 1.0
@ -182,30 +84,9 @@ keepColumnName 1
# if disk free space is less than this value, taosd service exit directly within startup process
# minimalDataDirGB 2.0
# One mnode is equal to the number of vnode consumed
# mnodeEqualVnodeNum 4
# enable/disable http service
# http 1
# enable/disable system monitor
# monitor 1
# enable/disable recording the SQL statements via restful interface
# httpEnableRecordSql 0
# number of threads used to process http requests
# httpMaxThreads 2
# maximum number of rows returned by the restful interface
# restfulRowLimit 10240
# database name must be specified in restful interface if the following parameter is set, off by default
# httpDbNameMandatory 1
# http keep alive, default is 30 seconds
# httpKeepAlive 30000
# The following parameter is used to limit the maximum number of lines in log files.
# max number of lines per log filters
# numOfLogLines 10000000
@ -216,7 +97,6 @@ keepColumnName 1
# time of keeping log files, days
# logKeepDays 0
# The following parameters are used for debug purpose only.
# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
# 131: output warning and error
@ -228,85 +108,62 @@ keepColumnName 1
# debug flag for all log type, take effect when non-zero value
# debugFlag 0
# debug flag for meta management messages
# mDebugFlag 135
# debug flag for dnode messages
# dDebugFlag 135
# debug flag for sync module
# sDebugFlag 135
# debug flag for WAL
# wDebugFlag 135
# debug flag for SDB
# sdbDebugFlag 135
# debug flag for RPC
# rpcDebugFlag 131
# debug flag for TAOS TIMER
# debug flag for timer
# tmrDebugFlag 131
# debug flag for TDengine client
# cDebugFlag 131
# debug flag for JNI
# jniDebugFlag 131
# debug flag for storage
# debug flag for util
# uDebugFlag 131
# debug flag for http server
# httpDebugFlag 131
# debug flag for rpc
# rpcDebugFlag 131
# debug flag for monitor
# monDebugFlag 131
# debug flag for jni
# jniDebugFlag 131
# debug flag for query
# qDebugFlag 131
# debug flag for taosc driver
# cDebugFlag 131
# debug flag for dnode messages
# dDebugFlag 135
# debug flag for vnode
# vDebugFlag 131
# debug flag for TSDB
# debug flag for meta management messages
# mDebugFlag 135
# debug flag for wal
# wDebugFlag 135
# debug flag for sync module
# sDebugFlag 135
# debug flag for tsdb
# tsdbDebugFlag 131
# debug flag for continue query
# cqDebugFlag 131
# debug flag for tq
# tqDebugFlag 131
# enable/disable recording the SQL in taos client
# enableRecordSql 0
# debug flag for fs
# fsDebugFlag 131
# debug flag for udf
# udfDebugFlag 131
# debug flag for sma
# smaDebugFlag 131
# debug flag for index
# idxDebugFlag 131
# debug flag for tdb
# tdbDebugFlag 131
# debug flag for meta
# metaDebugFlag 131
# generate core file when service crash
# enableCoreFile 1
# maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
# maxBinaryDisplayWidth 30
# enable/disable stream (continuous query)
# stream 1
# in retrieve blocking model, only 50% of query threads will be used in query processing in dnode
# retrieveBlockingModel 0
# the maximum allowed query buffer size in MB during query processing for each data node
# -1 no limit (default)
# 0 no query allowed, queries are disabled
# queryBufferSize -1
# percent of redundant data in tsdb meta that triggers meta data compaction, 0 means do not compact
# tsdbMetaCompactRatio 0
# default string type used for storing JSON String, options can be binary/nchar, default is nchar
# defaultJSONStrType nchar
# force TCP transmission
# rpcForceTcp 0
# unit MB. Flush vnode wal file if walSize > walFlushSize and walSize > cache*0.5*blocks
# walFlushSize 1024
# unit Hour. Latency of data migration
# keepTimeOffset 0

View File

@ -29,6 +29,7 @@ else
# Remove all links
${csudo}rm -f ${bin_link_dir}/taos || :
${csudo}rm -f ${bin_link_dir}/taosd || :
${csudo}rm -f ${bin_link_dir}/udfd || :
${csudo}rm -f ${bin_link_dir}/taosadapter || :
${csudo}rm -f ${bin_link_dir}/taosdemo || :
${csudo}rm -f ${cfg_link_dir}/* || :

View File

@ -60,6 +60,7 @@ cp ${compile_dir}/../packaging/tools/set_core.sh ${pkg_dir}${install_home_pat
cp ${compile_dir}/../packaging/tools/taosd-dump-cfg.gdb ${pkg_dir}${install_home_path}/bin
cp ${compile_dir}/build/bin/taosd ${pkg_dir}${install_home_path}/bin
cp ${compile_dir}/build/bin/udfd ${pkg_dir}${install_home_path}/bin
cp ${compile_dir}/build/bin/taosBenchmark ${pkg_dir}${install_home_path}/bin
if [ -f "${compile_dir}/build/bin/taosadapter" ]; then

View File

@ -69,6 +69,7 @@ cp %{_compiledir}/../packaging/tools/set_core.sh %{buildroot}%{homepath}/bin
cp %{_compiledir}/../packaging/tools/taosd-dump-cfg.gdb %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taos %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taosd %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/udfd %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taosBenchmark %{buildroot}%{homepath}/bin
if [ -f %{_compiledir}/build/bin/taosadapter ]; then
@ -204,6 +205,7 @@ if [ $1 -eq 0 ];then
# Remove all links
${csudo}rm -f ${bin_link_dir}/taos || :
${csudo}rm -f ${bin_link_dir}/taosd || :
${csudo}rm -f ${bin_link_dir}/udfd || :
${csudo}rm -f ${bin_link_dir}/taosadapter || :
${csudo}rm -f ${cfg_link_dir}/* || :
${csudo}rm -f ${inc_link_dir}/taos.h || :

View File

@ -18,6 +18,7 @@ script_dir=$(dirname $(readlink -f "$0"))
clientName="taos"
serverName="taosd"
udfdName="udfd"
configFile="taos.cfg"
productName="TDengine"
emailName="taosdata.com"
@ -192,6 +193,7 @@ function install_bin() {
# Remove links
${csudo}rm -f ${bin_link_dir}/${clientName} || :
${csudo}rm -f ${bin_link_dir}/${serverName} || :
${csudo}rm -f ${bin_link_dir}/${udfdName} || :
${csudo}rm -f ${bin_link_dir}/${adapterName} || :
${csudo}rm -f ${bin_link_dir}/${uninstallScript} || :
${csudo}rm -f ${bin_link_dir}/${demoName} || :
@ -205,6 +207,7 @@ function install_bin() {
#Make link
[ -x ${install_main_dir}/bin/${clientName} ] && ${csudo}ln -s ${install_main_dir}/bin/${clientName} ${bin_link_dir}/${clientName} || :
[ -x ${install_main_dir}/bin/${serverName} ] && ${csudo}ln -s ${install_main_dir}/bin/${serverName} ${bin_link_dir}/${serverName} || :
[ -x ${install_main_dir}/bin/${udfdName} ] && ${csudo}ln -s ${install_main_dir}/bin/${udfdName} ${bin_link_dir}/${udfdName} || :
[ -x ${install_main_dir}/bin/${adapterName} ] && ${csudo}ln -s ${install_main_dir}/bin/${adapterName} ${bin_link_dir}/${adapterName} || :
[ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -s ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${demoName} || :
[ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -s ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${benchmarkName} || :
@ -742,7 +745,7 @@ function is_version_compatible() {
fi
exist_version=$(${installDir}/bin/${serverName} -V | head -1 | cut -d ' ' -f 3)
vercomp $exist_version "2.0.16.0"
vercomp $exist_version "3.0.0.0"
case $? in
2)
prompt_force=1

View File

@ -85,6 +85,7 @@ else
${build_dir}/bin/${clientName} \
${taostools_bin_files} \
${build_dir}/bin/taosadapter \
${build_dir}/bin/udfd \
${script_dir}/remove.sh \
${script_dir}/set_core.sh \
${script_dir}/startPre.sh \
@ -318,7 +319,7 @@ if [ "$verMode" == "cluster" ]; then
fi
# Copy release note
cp ${script_dir}/release_note ${install_dir}
# cp ${script_dir}/release_note ${install_dir}
# exit 1

View File

@ -118,6 +118,7 @@ function install_bin() {
# Remove links
${csudo}rm -f ${bin_link_dir}/taos || :
${csudo}rm -f ${bin_link_dir}/taosd || :
${csudo}rm -f ${bin_link_dir}/udfd || :
${csudo}rm -f ${bin_link_dir}/taosadapter || :
${csudo}rm -f ${bin_link_dir}/taosBenchmark || :
${csudo}rm -f ${bin_link_dir}/taosdemo || :
@ -130,6 +131,7 @@ function install_bin() {
#Make link
[ -x ${bin_dir}/taos ] && ${csudo}ln -s ${bin_dir}/taos ${bin_link_dir}/taos || :
[ -x ${bin_dir}/taosd ] && ${csudo}ln -s ${bin_dir}/taosd ${bin_link_dir}/taosd || :
[ -x ${bin_dir}/udfd ] && ${csudo}ln -s ${bin_dir}/udfd ${bin_link_dir}/udfd || :
[ -x ${bin_dir}/taosadapter ] && ${csudo}ln -s ${bin_dir}/taosadapter ${bin_link_dir}/taosadapter || :
[ -x ${bin_dir}/taosBenchmark ] && ${csudo}ln -sf ${bin_dir}/taosBenchmark ${bin_link_dir}/taosdemo || :
[ -x ${bin_dir}/taosBenchmark ] && ${csudo}ln -sf ${bin_dir}/taosBenchmark ${bin_link_dir}/taosBenchmark || :

View File

@ -51,7 +51,7 @@ Source: taos.bat; DestDir: "{app}\include"; Flags: igNoreversion;
;Source: taosdemo.png; DestDir: "{app}\include"; Flags: igNoreversion;
;Source: taosShell.png; DestDir: "{app}\include"; Flags: igNoreversion;
Source: favicon.ico; DestDir: "{app}\include"; Flags: igNoreversion;
Source: {#MyAppSourceDir}{#MyAppDLLName}; DestDir: "{win}\System32"; Flags: igNoreversion;
Source: {#MyAppSourceDir}{#MyAppDLLName}; DestDir: "{win}\System32"; Flags: 64bit;Check:IsWin64;
Source: {#MyAppSourceDir}{#MyAppCfgName}; DestDir: "{app}\cfg"; Flags: igNoreversion recursesubdirs createallsubdirs onlyifdoesntexist uninsneveruninstall
Source: {#MyAppSourceDir}{#MyAppDriverName}; DestDir: "{app}\driver"; Flags: igNoreversion recursesubdirs createallsubdirs
;Source: {#MyAppSourceDir}{#MyAppConnectorName}; DestDir: "{app}\connector"; Flags: igNoreversion recursesubdirs createallsubdirs

View File

@ -9,6 +9,11 @@ IF (TD_GRANT)
ADD_DEFINITIONS(-D_GRANT)
ENDIF ()
IF (TD_STORAGE)
ADD_DEFINITIONS(-D_STORAGE)
TARGET_LINK_LIBRARIES(common PRIVATE storage)
ENDIF ()
target_include_directories(
common
PUBLIC "${TD_SOURCE_DIR}/include/common"

View File

@ -165,58 +165,26 @@ int32_t tsTtlUnit = 86400;
int32_t tsTtlPushInterval = 86400;
int32_t tsGrantHBInterval = 60;
void taosAddDataDir(int32_t index, char *v1, int32_t level, int32_t primary) {
tstrncpy(tsDiskCfg[index].dir, v1, TSDB_FILENAME_LEN);
tsDiskCfg[index].level = level;
tsDiskCfg[index].primary = primary;
uTrace("dataDir:%s, level:%d primary:%d is configured", v1, level, primary);
}
static int32_t taosSetTfsCfg(SConfig *pCfg) {
#ifndef _STORAGE
int32_t taosSetTfsCfg(SConfig *pCfg) {
SConfigItem *pItem = cfgGetItem(pCfg, "dataDir");
memset(tsDataDir, 0, PATH_MAX);
int32_t size = taosArrayGetSize(pItem->array);
if (size <= 0) {
tsDiskCfgNum = 1;
taosAddDataDir(0, pItem->str, 0, 1);
tstrncpy(tsDiskCfg[0].dir, pItem->str, TSDB_FILENAME_LEN);
tsDiskCfg[0].level = 0;
tsDiskCfg[0].primary = 1;
tstrncpy(tsDataDir, pItem->str, PATH_MAX);
if (taosMulMkDir(tsDataDir) != 0) {
uError("failed to create dataDir:%s since %s", tsDataDir, terrstr());
uError("failed to create dataDir:%s", tsDataDir);
return -1;
}
} else {
tsDiskCfgNum = size < TFS_MAX_DISKS ? size : TFS_MAX_DISKS;
for (int32_t index = 0; index < tsDiskCfgNum; ++index) {
SDiskCfg *pCfg = taosArrayGet(pItem->array, index);
memcpy(&tsDiskCfg[index], pCfg, sizeof(SDiskCfg));
if (pCfg->level == 0 && pCfg->primary == 1) {
tstrncpy(tsDataDir, pCfg->dir, PATH_MAX);
}
if (taosMulMkDir(pCfg->dir) != 0) {
uError("failed to create tfsDir:%s since %s", tsDataDir, terrstr());
return -1;
}
}
}
if (tsDataDir[0] == 0) {
if (pItem->str != NULL) {
taosAddDataDir(tsDiskCfgNum, pItem->str, 0, 1);
tstrncpy(tsDataDir, pItem->str, PATH_MAX);
if (taosMulMkDir(tsDataDir) != 0) {
uError("failed to create tfsDir:%s since %s", tsDataDir, terrstr());
return -1;
}
tsDiskCfgNum++;
} else {
uError("datadir not set");
return -1;
}
}
return 0;
}
#else
int32_t taosSetTfsCfg(SConfig *pCfg);
#endif
struct SConfig *taosGetCfg() {
return tsCfg;

View File

@ -356,31 +356,44 @@ static int32_t mndDoRebalance(SMnode *pMnode, const SMqRebInputObj *pInput, SMqR
taosArrayPush(pConsumerEp->vgs, &pRebVg->pVgEp);
pRebVg->newConsumerId = pConsumerEp->consumerId;
taosArrayPush(pOutput->rebVgs, pRebVg);
mInfo("mq rebalance: add vgId:%d to consumer:%" PRId64 ",(second scan)", pRebVg->pVgEp->vgId,
mInfo("mq rebalance: add vgId:%d to consumer:%" PRId64 " (second scan) (not enough)", pRebVg->pVgEp->vgId,
pConsumerEp->consumerId);
}
}
ASSERT(pIter == NULL);
// 7. handle unassigned vg
if (taosHashGetSize(pOutput->pSub->consumerHash) != 0) {
// if has consumer, assign all left vg
while (1) {
SMqConsumerEp *pConsumerEp = NULL;
pRemovedIter = taosHashIterate(pHash, pRemovedIter);
if (pRemovedIter == NULL) break;
if (pRemovedIter == NULL) {
if (pIter != NULL) {
taosHashCancelIterate(pOutput->pSub->consumerHash, pIter);
pIter = NULL;
}
break;
}
while (1) {
pIter = taosHashIterate(pOutput->pSub->consumerHash, pIter);
ASSERT(pIter);
pRebVg = (SMqRebOutputVg *)pRemovedIter;
SMqConsumerEp *pConsumerEp = (SMqConsumerEp *)pIter;
pConsumerEp = (SMqConsumerEp *)pIter;
ASSERT(pConsumerEp->consumerId > 0);
if (taosArrayGetSize(pConsumerEp->vgs) == minVgCnt) {
break;
}
}
pRebVg = (SMqRebOutputVg *)pRemovedIter;
taosArrayPush(pConsumerEp->vgs, &pRebVg->pVgEp);
pRebVg->newConsumerId = pConsumerEp->consumerId;
if (pRebVg->newConsumerId == pRebVg->oldConsumerId) {
mInfo("mq rebalance: skip vg %d for same consumer:%" PRId64 ",(second scan)", pRebVg->pVgEp->vgId,
mInfo("mq rebalance: skip vg %d for same consumer:%" PRId64 " (second scan)", pRebVg->pVgEp->vgId,
pConsumerEp->consumerId);
continue;
}
taosArrayPush(pOutput->rebVgs, pRebVg);
mInfo("mq rebalance: add vgId:%d to consumer:%" PRId64 ",(second scan)", pRebVg->pVgEp->vgId,
mInfo("mq rebalance: add vgId:%d to consumer:%" PRId64 " (second scan) (unassigned)", pRebVg->pVgEp->vgId,
pConsumerEp->consumerId);
}
} else {
@ -571,7 +584,7 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
SMqTopicObj *pTopic = mndAcquireTopic(pMnode, topic);
/*ASSERT(pTopic);*/
if (pTopic == NULL) {
mError("rebalance %s failed since topic %s was dropped, abort", pRebInfo->key, topic);
mError("mq rebalance %s failed since topic %s not exist, abort", pRebInfo->key, topic);
continue;
}
taosRLockLatch(&pTopic->lock);
@ -601,7 +614,7 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
// TODO replace assert with error check
if (mndPersistRebResult(pMnode, pMsg, &rebOutput) < 0) {
mError("persist rebalance output error, possibly vnode splitted or dropped");
mError("mq rebalance persist rebalance output error, possibly vnode splitted or dropped");
}
taosArrayDestroy(pRebInfo->lostConsumers);
taosArrayDestroy(pRebInfo->newConsumers);

View File

@ -24,6 +24,7 @@ target_sources(
"src/meta/metaCommit.c"
"src/meta/metaEntry.c"
"src/meta/metaSnapshot.c"
"src/meta/metaCache.c"
# sma
"src/sma/smaEnv.c"

View File

@ -25,6 +25,7 @@ extern "C" {
typedef struct SMetaIdx SMetaIdx;
typedef struct SMetaDB SMetaDB;
typedef struct SMetaCache SMetaCache;
// metaDebug ==================
// clang-format off
@ -60,6 +61,13 @@ static FORCE_INLINE tb_uid_t metaGenerateUid(SMeta* pMeta) { return tGenIdPI64()
// metaTable ==================
int metaHandleEntry(SMeta* pMeta, const SMetaEntry* pME);
// metaCache ==================
int32_t metaCacheOpen(SMeta* pMeta);
void metaCacheClose(SMeta* pMeta);
int32_t metaCacheUpsert(SMeta* pMeta, SMetaInfo* pInfo);
int32_t metaCacheDrop(SMeta* pMeta, int64_t uid);
int32_t metaCacheGet(SMeta* pMeta, int64_t uid, SMetaInfo* pInfo);
struct SMeta {
TdThreadRwlock lock;
@ -84,6 +92,8 @@ struct SMeta {
TTB* pStreamDb;
SMetaIdx* pIdx;
SMetaCache* pCache;
};
typedef struct {
@ -92,6 +102,12 @@ typedef struct {
} STbDbKey;
#pragma pack(push, 1)
typedef struct {
tb_uid_t suid;
int64_t version;
int32_t skmVer;
} SUidIdxVal;
typedef struct {
tb_uid_t uid;
int32_t sver;

View File

@ -130,6 +130,14 @@ int metaTtlSmaller(SMeta* pMeta, uint64_t time, SArray* uidList);
int32_t metaCreateTSma(SMeta* pMeta, int64_t version, SSmaCfg* pCfg);
int32_t metaDropTSma(SMeta* pMeta, int64_t indexUid);
typedef struct SMetaInfo {
int64_t uid;
int64_t suid;
int64_t version;
int32_t skmVer;
} SMetaInfo;
int32_t metaGetInfo(SMeta* pMeta, int64_t uid, SMetaInfo* pInfo);
// tsdb
int tsdbOpen(SVnode* pVnode, STsdb** ppTsdb, const char* dir, STsdbKeepCfg* pKeepCfg);
int tsdbClose(STsdb** pTsdb);

View File

@ -0,0 +1,206 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "meta.h"
#define META_CACHE_BASE_BUCKET 1024
// (uid , suid) : child table
// (uid, 0) : normal table
// (suid, suid) : super table
typedef struct SMetaCacheEntry SMetaCacheEntry;
struct SMetaCacheEntry {
SMetaCacheEntry* next;
SMetaInfo info;
};
struct SMetaCache {
int32_t nEntry;
int32_t nBucket;
SMetaCacheEntry** aBucket;
};
int32_t metaCacheOpen(SMeta* pMeta) {
int32_t code = 0;
SMetaCache* pCache = NULL;
pCache = (SMetaCache*)taosMemoryMalloc(sizeof(SMetaCache));
if (pCache == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _err;
}
pCache->nEntry = 0;
pCache->nBucket = META_CACHE_BASE_BUCKET;
pCache->aBucket = (SMetaCacheEntry**)taosMemoryCalloc(pCache->nBucket, sizeof(SMetaCacheEntry*));
if (pCache->aBucket == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
taosMemoryFree(pCache);
goto _err;
}
pMeta->pCache = pCache;
_exit:
return code;
_err:
metaError("vgId:%d meta open cache failed since %s", TD_VID(pMeta->pVnode), tstrerror(code));
return code;
}
void metaCacheClose(SMeta* pMeta) {
if (pMeta->pCache) {
for (int32_t iBucket = 0; iBucket < pMeta->pCache->nBucket; iBucket++) {
SMetaCacheEntry* pEntry = pMeta->pCache->aBucket[iBucket];
while (pEntry) {
SMetaCacheEntry* tEntry = pEntry->next;
taosMemoryFree(pEntry);
pEntry = tEntry;
}
}
taosMemoryFree(pMeta->pCache->aBucket);
taosMemoryFree(pMeta->pCache);
pMeta->pCache = NULL;
}
}
static int32_t metaRehashCache(SMetaCache* pCache, int8_t expand) {
int32_t code = 0;
int32_t nBucket;
if (expand) {
nBucket = pCache->nBucket * 2;
} else {
nBucket = pCache->nBucket / 2;
}
SMetaCacheEntry** aBucket = (SMetaCacheEntry**)taosMemoryCalloc(nBucket, sizeof(SMetaCacheEntry*));
if (aBucket == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _exit;
}
// rehash
for (int32_t iBucket = 0; iBucket < pCache->nBucket; iBucket++) {
SMetaCacheEntry* pEntry = pCache->aBucket[iBucket];
while (pEntry) {
SMetaCacheEntry* pTEntry = pEntry->next;
pEntry->next = aBucket[TABS(pEntry->info.uid) % nBucket];
aBucket[TABS(pEntry->info.uid) % nBucket] = pEntry;
pEntry = pTEntry;
}
}
// final set
taosMemoryFree(pCache->aBucket);
pCache->nBucket = nBucket;
pCache->aBucket = aBucket;
_exit:
return code;
}
int32_t metaCacheUpsert(SMeta* pMeta, SMetaInfo* pInfo) {
int32_t code = 0;
// ASSERT(metaIsWLocked(pMeta));
// search
SMetaCache* pCache = pMeta->pCache;
int32_t iBucket = TABS(pInfo->uid) % pCache->nBucket;
SMetaCacheEntry** ppEntry = &pCache->aBucket[iBucket];
while (*ppEntry && (*ppEntry)->info.uid != pInfo->uid) {
ppEntry = &(*ppEntry)->next;
}
if (*ppEntry) { // update
ASSERT(pInfo->suid == (*ppEntry)->info.suid);
if (pInfo->version > (*ppEntry)->info.version) {
(*ppEntry)->info.version = pInfo->version;
(*ppEntry)->info.skmVer = pInfo->skmVer;
}
} else { // insert
if (pCache->nEntry >= pCache->nBucket) {
code = metaRehashCache(pCache, 1);
if (code) goto _exit;
iBucket = TABS(pInfo->uid) % pCache->nBucket;
}
SMetaCacheEntry* pEntryNew = (SMetaCacheEntry*)taosMemoryMalloc(sizeof(*pEntryNew));
if (pEntryNew == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _exit;
}
pEntryNew->info = *pInfo;
pEntryNew->next = pCache->aBucket[iBucket];
pCache->aBucket[iBucket] = pEntryNew;
pCache->nEntry++;
}
_exit:
return code;
}
int32_t metaCacheDrop(SMeta* pMeta, int64_t uid) {
int32_t code = 0;
SMetaCache* pCache = pMeta->pCache;
int32_t iBucket = TABS(uid) % pCache->nBucket;
SMetaCacheEntry** ppEntry = &pCache->aBucket[iBucket];
while (*ppEntry && (*ppEntry)->info.uid != uid) {
ppEntry = &(*ppEntry)->next;
}
SMetaCacheEntry* pEntry = *ppEntry;
if (pEntry) {
*ppEntry = pEntry->next;
taosMemoryFree(pEntry);
pCache->nEntry--;
if (pCache->nEntry < pCache->nBucket / 4 && pCache->nBucket > META_CACHE_BASE_BUCKET) {
code = metaRehashCache(pCache, 0);
if (code) goto _exit;
}
} else {
code = TSDB_CODE_NOT_FOUND;
}
_exit:
return code;
}
int32_t metaCacheGet(SMeta* pMeta, int64_t uid, SMetaInfo* pInfo) {
int32_t code = 0;
SMetaCache* pCache = pMeta->pCache;
int32_t iBucket = TABS(uid) % pCache->nBucket;
SMetaCacheEntry* pEntry = pCache->aBucket[iBucket];
while (pEntry && pEntry->info.uid != uid) {
pEntry = pEntry->next;
}
if (pEntry) {
*pInfo = pEntry->info;
} else {
code = TSDB_CODE_NOT_FOUND;
}
return code;
}

View File

@ -73,7 +73,7 @@ int metaOpen(SVnode *pVnode, SMeta **ppMeta) {
}
// open pUidIdx
ret = tdbTbOpen("uid.idx", sizeof(tb_uid_t), sizeof(int64_t), uidIdxKeyCmpr, pMeta->pEnv, &pMeta->pUidIdx);
ret = tdbTbOpen("uid.idx", sizeof(tb_uid_t), sizeof(SUidIdxVal), uidIdxKeyCmpr, pMeta->pEnv, &pMeta->pUidIdx);
if (ret < 0) {
metaError("vgId:%d, failed to open meta uid idx since %s", TD_VID(pVnode), tstrerror(terrno));
goto _err;
@ -143,6 +143,13 @@ int metaOpen(SVnode *pVnode, SMeta **ppMeta) {
goto _err;
}
int32_t code = metaCacheOpen(pMeta);
if (code) {
terrno = code;
metaError("vgId:%d, failed to open meta cache since %s", TD_VID(pVnode), tstrerror(terrno));
goto _err;
}
metaDebug("vgId:%d, meta is opened", TD_VID(pVnode));
*ppMeta = pMeta;
@ -169,6 +176,7 @@ _err:
int metaClose(SMeta *pMeta) {
if (pMeta) {
if (pMeta->pCache) metaCacheClose(pMeta);
if (pMeta->pIdx) metaCloseIdx(pMeta);
if (pMeta->pStreamDb) tdbTbClose(pMeta->pStreamDb);
if (pMeta->pSmaIdx) tdbTbClose(pMeta->pSmaIdx);

View File

@ -137,7 +137,7 @@ int metaGetTableEntryByUid(SMetaReader *pReader, tb_uid_t uid) {
return -1;
}
version = *(int64_t *)pReader->pBuf;
version = ((SUidIdxVal *)pReader->pBuf)[0].version;
return metaGetTableEntryByVersion(pReader, version, uid);
}
@ -234,7 +234,7 @@ int metaTbCursorNext(SMTbCursor *pTbCur) {
tDecoderClear(&pTbCur->mr.coder);
metaGetTableEntryByVersion(&pTbCur->mr, *(int64_t *)pTbCur->pVal, *(tb_uid_t *)pTbCur->pKey);
metaGetTableEntryByVersion(&pTbCur->mr, ((SUidIdxVal *)pTbCur->pVal)[0].version, *(tb_uid_t *)pTbCur->pKey);
if (pTbCur->mr.me.type == TSDB_SUPER_TABLE) {
continue;
}
@ -259,7 +259,7 @@ _query:
goto _err;
}
version = *(int64_t *)pData;
version = ((SUidIdxVal *)pData)[0].version;
tdbTbGet(pMeta->pTbDb, &(STbDbKey){.uid = uid, .version = version}, sizeof(STbDbKey), &pData, &nData);
SMetaEntry me = {0};
@ -967,8 +967,8 @@ int32_t metaGetTableTags(SMeta *pMeta, uint64_t suid, SArray *uidList, SArray *t
SHashObj *uHash = taosHashInit(32, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
size_t len = taosArrayGetSize(uidList);
if(len > 0){
for(int i = 0; i < len; i++){
if (len > 0) {
for (int i = 0; i < len; i++) {
int64_t *uid = taosArrayGet(uidList, i);
taosHashPut(uHash, uid, sizeof(int64_t), &i, sizeof(i));
}
@ -979,15 +979,15 @@ int32_t metaGetTableTags(SMeta *pMeta, uint64_t suid, SArray *uidList, SArray *t
break;
}
if(len > 0 && taosHashGet(uHash, &id, sizeof(int64_t)) == NULL){
if (len > 0 && taosHashGet(uHash, &id, sizeof(int64_t)) == NULL) {
continue;
}
void* tag = taosMemoryMalloc(pCur->vLen);
void *tag = taosMemoryMalloc(pCur->vLen);
memcpy(tag, pCur->pVal, pCur->vLen);
taosArrayPush(tags, &tag);
if(len == 0){
if (len == 0) {
taosArrayPush(uidList, &id);
}
}
@ -996,3 +996,41 @@ int32_t metaGetTableTags(SMeta *pMeta, uint64_t suid, SArray *uidList, SArray *t
metaCloseCtbCursor(pCur);
return TSDB_CODE_SUCCESS;
}
int32_t metaGetInfo(SMeta *pMeta, int64_t uid, SMetaInfo *pInfo) {
int32_t code = 0;
void *pData = NULL;
int nData = 0;
metaRLock(pMeta);
// search cache
if (metaCacheGet(pMeta, uid, pInfo) == 0) {
metaULock(pMeta);
goto _exit;
}
// search TDB
if (tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData) < 0) {
// not found
metaULock(pMeta);
code = TSDB_CODE_NOT_FOUND;
goto _exit;
}
metaULock(pMeta);
pInfo->uid = uid;
pInfo->suid = ((SUidIdxVal *)pData)->suid;
pInfo->version = ((SUidIdxVal *)pData)->version;
pInfo->skmVer = ((SUidIdxVal *)pData)->skmVer;
// upsert the cache
metaWLock(pMeta);
metaCacheUpsert(pMeta, pInfo);
metaULock(pMeta);
_exit:
tdbFree(pData);
return code;
}
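
metaGetInfo() is a classic cache-aside read: probe the cache under the read lock, fall back to the uid.idx TDB table on a miss, then repopulate the cache under the write lock. The value type it decodes, SUidIdxVal, is not defined anywhere in this diff; from the fields accessed here and in metaUpdateUidIdx() it is presumably shaped like:

    /* Presumed layout of SUidIdxVal -- an assumption inferred from the
     * .suid/.version/.skmVer accesses in this diff, not the actual header. */
    typedef struct {
      tb_uid_t suid;     // super table uid; 0 for normal tables
      int64_t  version;  // latest meta entry version for this uid
      int32_t  skmVer;   // row schema version; 0 for child tables
    } SUidIdxVal;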

View File

@ -28,9 +28,9 @@ int32_t metaCreateTSma(SMeta *pMeta, int64_t version, SSmaCfg *pCfg) {
int vLen = 0;
const void *pKey = NULL;
const void *pVal = NULL;
void * pBuf = NULL;
void *pBuf = NULL;
int32_t szBuf = 0;
void * p = NULL;
void *p = NULL;
SMetaReader mr = {0};
// validate req
@ -83,8 +83,8 @@ int32_t metaDropTSma(SMeta *pMeta, int64_t indexUid) {
static int metaSaveSmaToDB(SMeta *pMeta, const SMetaEntry *pME) {
STbDbKey tbDbKey;
void * pKey = NULL;
void * pVal = NULL;
void *pKey = NULL;
void *pVal = NULL;
int kLen = 0;
int vLen = 0;
SEncoder coder = {0};
@ -130,7 +130,8 @@ _err:
}
static int metaUpdateUidIdx(SMeta *pMeta, const SMetaEntry *pME) {
return tdbTbInsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &pME->version, sizeof(int64_t), &pMeta->txn);
SUidIdxVal uidIdxVal = {.suid = pME->smaEntry.tsma->indexUid, .version = pME->version, .skmVer = 0};
return tdbTbInsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &uidIdxVal, sizeof(uidIdxVal), &pMeta->txn);
}
static int metaUpdateNameIdx(SMeta *pMeta, const SMetaEntry *pME) {

View File

@ -27,6 +27,23 @@ static int metaUpdateSuidIdx(SMeta *pMeta, const SMetaEntry *pME);
static int metaUpdateTagIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry);
static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type);
static void metaGetEntryInfo(const SMetaEntry *pEntry, SMetaInfo *pInfo) {
pInfo->uid = pEntry->uid;
pInfo->version = pEntry->version;
if (pEntry->type == TSDB_SUPER_TABLE) {
pInfo->suid = pEntry->uid;
pInfo->skmVer = pEntry->stbEntry.schemaRow.version;
} else if (pEntry->type == TSDB_CHILD_TABLE) {
pInfo->suid = pEntry->ctbEntry.suid;
pInfo->skmVer = 0;
} else if (pEntry->type == TSDB_NORMAL_TABLE) {
pInfo->suid = 0;
pInfo->skmVer = pEntry->ntbEntry.schemaRow.version;
} else {
ASSERT(0);
}
}
static int metaUpdateMetaRsp(tb_uid_t uid, char *tbName, SSchemaWrapper *pSchema, STableMetaRsp *pMetaRsp) {
pMetaRsp->pSchemas = taosMemoryMalloc(pSchema->nCols * sizeof(SSchema));
if (NULL == pMetaRsp->pSchemas) {
@ -164,29 +181,11 @@ int metaDelJsonVarFromIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const SSche
int metaCreateSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
SMetaEntry me = {0};
int kLen = 0;
int vLen = 0;
const void *pKey = NULL;
const void *pVal = NULL;
void *pBuf = NULL;
int32_t szBuf = 0;
void *p = NULL;
SMetaReader mr = {0};
// validate req
metaReaderInit(&mr, pMeta, 0);
if (metaGetTableEntryByName(&mr, pReq->name) == 0) {
// TODO: just for pass case
#if 0
terrno = TSDB_CODE_TDB_STB_ALREADY_EXIST;
metaReaderClear(&mr);
return -1;
#else
metaReaderClear(&mr);
if (tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name), NULL, NULL) == 0) {
return 0;
#endif
}
metaReaderClear(&mr);
// set structs
me.version = version;
@ -265,8 +264,8 @@ int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq, SArray *tb
// drop super table
_drop_super_table:
tdbTbGet(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), &pData, &nData);
tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = *(int64_t *)pData, .uid = pReq->suid}, sizeof(STbDbKey),
&pMeta->txn);
tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = ((SUidIdxVal *)pData)[0].version, .uid = pReq->suid},
sizeof(STbDbKey), &pMeta->txn);
tdbTbDelete(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pMeta->txn);
tdbTbDelete(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), &pMeta->txn);
tdbTbDelete(pMeta->pSuidIdx, &pReq->suid, sizeof(tb_uid_t), &pMeta->txn);
@ -309,7 +308,7 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
return -1;
}
oversion = *(int64_t *)pData;
oversion = ((SUidIdxVal *)pData)[0].version;
tdbTbcOpen(pMeta->pTbDb, &pTbDbc, &pMeta->txn);
ret = tdbTbcMoveTo(pTbDbc, &((STbDbKey){.uid = pReq->suid, .version = oversion}), sizeof(STbDbKey), &c);
@ -336,15 +335,14 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
metaSaveToSkmDb(pMeta, &nStbEntry);
}
// if (oStbEntry.stbEntry.schemaTag.sver != pReq->schemaTag.sver) {
// // change tag schema
// }
// update table.db
metaSaveToTbDb(pMeta, &nStbEntry);
// update uid index
tdbTbcUpsert(pUidIdxc, &pReq->suid, sizeof(tb_uid_t), &version, sizeof(version), 0);
SMetaInfo info;
metaGetEntryInfo(&nStbEntry, &info);
tdbTbcUpsert(pUidIdxc, &pReq->suid, sizeof(tb_uid_t),
&(SUidIdxVal){.suid = info.suid, .version = info.version, .skmVer = info.skmVer}, sizeof(SUidIdxVal), 0);
if (oStbEntry.pBuf) taosMemoryFree(oStbEntry.pBuf);
metaULock(pMeta);
@ -503,7 +501,7 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type) {
SDecoder dc = {0};
rc = tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData);
int64_t version = *(int64_t *)pData;
int64_t version = ((SUidIdxVal *)pData)[0].version;
tdbTbGet(pMeta->pTbDb, &(STbDbKey){.version = version, .uid = uid}, sizeof(STbDbKey), &pData, &nData);
@ -517,7 +515,7 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type) {
int tLen = 0;
if (tdbTbGet(pMeta->pUidIdx, &e.ctbEntry.suid, sizeof(tb_uid_t), &tData, &tLen) == 0) {
version = *(int64_t *)tData;
version = ((SUidIdxVal *)tData)[0].version;
STbDbKey tbDbKey = {.uid = e.ctbEntry.suid, .version = version};
if (tdbTbGet(pMeta->pTbDb, &tbDbKey, sizeof(tbDbKey), &tData, &tLen) == 0) {
SDecoder tdc = {0};
@ -556,6 +554,8 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type) {
--pMeta->pVnode->config.vndStats.numOfSTables;
}
metaCacheDrop(pMeta, uid);
tDecoderClear(&dc);
tdbFree(pData);
@ -594,7 +594,7 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
ASSERT(c == 0);
tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData);
oversion = *(int64_t *)pData;
oversion = ((SUidIdxVal *)pData)[0].version;
// search table.db
TBC *pTbDbc = NULL;
@ -708,7 +708,7 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
// save to table db
metaSaveToTbDb(pMeta, &entry);
tdbTbcUpsert(pUidIdxc, &entry.uid, sizeof(tb_uid_t), &version, sizeof(version), 0);
metaUpdateUidIdx(pMeta, &entry);
metaSaveToSkmDb(pMeta, &entry);
@ -764,7 +764,7 @@ static int metaUpdateTableTagVal(SMeta *pMeta, int64_t version, SVAlterTbReq *pA
ASSERT(c == 0);
tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData);
oversion = *(int64_t *)pData;
oversion = ((SUidIdxVal *)pData)[0].version;
// search table.db
TBC *pTbDbc = NULL;
@ -784,8 +784,8 @@ static int metaUpdateTableTagVal(SMeta *pMeta, int64_t version, SVAlterTbReq *pA
/* get stbEntry*/
tdbTbGet(pMeta->pUidIdx, &ctbEntry.ctbEntry.suid, sizeof(tb_uid_t), &pVal, &nVal);
tdbTbGet(pMeta->pTbDb, &((STbDbKey){.uid = ctbEntry.ctbEntry.suid, .version = *(int64_t *)pVal}), sizeof(STbDbKey),
(void **)&stbEntry.pBuf, &nVal);
tdbTbGet(pMeta->pTbDb, &((STbDbKey){.uid = ctbEntry.ctbEntry.suid, .version = ((SUidIdxVal *)pVal)[0].version}),
sizeof(STbDbKey), (void **)&stbEntry.pBuf, &nVal);
tdbFree(pVal);
tDecoderInit(&dc2, stbEntry.pBuf, nVal);
metaDecodeEntry(&dc2, &stbEntry);
@ -859,7 +859,7 @@ static int metaUpdateTableTagVal(SMeta *pMeta, int64_t version, SVAlterTbReq *pA
metaSaveToTbDb(pMeta, &ctbEntry);
// save to uid.idx
tdbTbUpsert(pMeta->pUidIdx, &ctbEntry.uid, sizeof(tb_uid_t), &version, sizeof(version), &pMeta->txn);
metaUpdateUidIdx(pMeta, &ctbEntry);
if (iCol == 0) {
metaUpdateTagIdx(pMeta, &ctbEntry);
@ -917,7 +917,7 @@ static int metaUpdateTableOptions(SMeta *pMeta, int64_t version, SVAlterTbReq *p
ASSERT(c == 0);
tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData);
oversion = *(int64_t *)pData;
oversion = ((SUidIdxVal *)pData)[0].version;
// search table.db
TBC *pTbDbc = NULL;
@ -962,7 +962,7 @@ static int metaUpdateTableOptions(SMeta *pMeta, int64_t version, SVAlterTbReq *p
// save to table db
metaSaveToTbDb(pMeta, &entry);
tdbTbcUpsert(pUidIdxc, &entry.uid, sizeof(tb_uid_t), &version, sizeof(version), 0);
metaUpdateUidIdx(pMeta, &entry);
metaULock(pMeta);
tdbTbcClose(pTbDbc);
@ -1045,7 +1045,14 @@ _err:
}
static int metaUpdateUidIdx(SMeta *pMeta, const SMetaEntry *pME) {
return tdbTbInsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &pME->version, sizeof(int64_t), &pMeta->txn);
// upsert cache
SMetaInfo info;
metaGetEntryInfo(pME, &info);
metaCacheUpsert(pMeta, &info);
SUidIdxVal uidIdxVal = {.suid = info.suid, .version = info.version, .skmVer = info.skmVer};
return tdbTbUpsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &uidIdxVal, sizeof(uidIdxVal), &pMeta->txn);
}
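
With this rewrite, metaUpdateUidIdx() becomes a write-through point: it refreshes the cache before persisting to uid.idx, so within one meta write lock the cache never lags the index. That is why the alter paths in the hunks above swap their raw tdbTbcUpsert() calls for this helper. A hedged caller-side sketch (the entry fields are illustrative):

    metaWLock(pMeta);
    SMetaEntry entry = {.version = version, .type = TSDB_NORMAL_TABLE, .uid = uid};
    (void)metaSaveToTbDb(pMeta, &entry);    // 1. append the versioned entry
    (void)metaUpdateUidIdx(pMeta, &entry);  // 2. refresh cache, then upsert uid.idx
    metaULock(pMeta);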
static int metaUpdateSuidIdx(SMeta *pMeta, const SMetaEntry *pME) {
@ -1122,7 +1129,7 @@ static int metaUpdateTagIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry) {
return -1;
}
tbDbKey.uid = pCtbEntry->ctbEntry.suid;
tbDbKey.version = *(int64_t *)pData;
tbDbKey.version = ((SUidIdxVal *)pData)[0].version;
tdbTbGet(pMeta->pTbDb, &tbDbKey, sizeof(tbDbKey), &pData, &nData);
tDecoderInit(&dc, pData, nData);

View File

@ -17,7 +17,7 @@
#include "tmsg.h"
#include "tq.h"
int32_t tdBuildDeleteReq(SVnode* pVnode, const char* stbFullName, const SSDataBlock* pDataBlock,
int32_t tqBuildDeleteReq(SVnode* pVnode, const char* stbFullName, const SSDataBlock* pDataBlock,
SBatchDeleteReq* deleteReq) {
ASSERT(pDataBlock->info.type == STREAM_DELETE_RESULT);
int32_t totRow = pDataBlock->info.rows;
@ -68,9 +68,10 @@ SSubmitReq* tqBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchem
SSDataBlock* pDataBlock = taosArrayGet(pBlocks, i);
if (pDataBlock->info.type == STREAM_DELETE_RESULT) {
int32_t padding1 = 0;
void* padding2 = taosMemoryMalloc(1);
void* padding2 = NULL;
taosArrayPush(schemaReqSz, &padding1);
taosArrayPush(schemaReqs, &padding2);
continue;
}
STagVal tagVal = {
@ -138,8 +139,7 @@ SSubmitReq* tqBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchem
continue;
}
int32_t rows = pDataBlock->info.rows;
// TODO min
int32_t rowSize = pDataBlock->info.rowSize;
/*int32_t rowSize = pDataBlock->info.rowSize;*/
int32_t maxLen = TD_ROW_MAX_BYTES_FROM_SCHEMA(pTSchema);
int32_t schemaLen = 0;
@ -150,7 +150,6 @@ SSubmitReq* tqBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchem
}
// assign data
// TODO
ret = rpcMallocCont(cap);
ret->header.vgId = pVnode->config.vgId;
ret->length = sizeof(SSubmitReq);
@ -161,13 +160,12 @@ SSubmitReq* tqBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchem
SSDataBlock* pDataBlock = taosArrayGet(pBlocks, i);
if (pDataBlock->info.type == STREAM_DELETE_RESULT) {
pDeleteReq->suid = suid;
tdBuildDeleteReq(pVnode, stbFullName, pDataBlock, pDeleteReq);
tqBuildDeleteReq(pVnode, stbFullName, pDataBlock, pDeleteReq);
continue;
}
blkHead->numOfRows = htonl(pDataBlock->info.rows);
blkHead->sversion = htonl(pTSchema->version);
// TODO
blkHead->suid = htobe64(suid);
// uid is assigned by vnode
blkHead->uid = 0;

View File

@ -108,29 +108,21 @@ int32_t tsdbInsertTableData(STsdb *pTsdb, int64_t version, SSubmitMsgIter *pMsgI
STbData *pTbData = NULL;
tb_uid_t suid = pMsgIter->suid;
tb_uid_t uid = pMsgIter->uid;
int32_t sverNew;
// check if table exists (todo: refact)
SMetaReader mr = {0};
// SMetaEntry me = {0};
metaReaderInit(&mr, pTsdb->pVnode->pMeta, 0);
if (metaGetTableEntryByUid(&mr, pMsgIter->uid) < 0) {
metaReaderClear(&mr);
code = TSDB_CODE_PAR_TABLE_NOT_EXIST;
SMetaInfo info;
code = metaGetInfo(pTsdb->pVnode->pMeta, uid, &info);
if (code) {
code = TSDB_CODE_TDB_TABLE_NOT_EXIST;
goto _err;
}
if (pRsp->tblFName) strcat(pRsp->tblFName, mr.me.name);
if (mr.me.type == TSDB_NORMAL_TABLE) {
sverNew = mr.me.ntbEntry.schemaRow.version;
} else {
tDecoderClear(&mr.coder);
metaGetTableEntryByUid(&mr, mr.me.ctbEntry.suid);
sverNew = mr.me.stbEntry.schemaRow.version;
if (info.suid != suid) {
code = TSDB_CODE_INVALID_MSG;
goto _err;
}
metaReaderClear(&mr);
pRsp->sver = sverNew;
if (info.suid) {
metaGetInfo(pTsdb->pVnode->pMeta, info.suid, &info);
}
pRsp->sver = info.skmVer;
// create/get STbData to op
code = tsdbGetOrCreateTbData(pMemTable, suid, uid, &pTbData);
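
Both tsdbInsertTableData() here and tsdbDeleteTableData() in the next hunk now perform the same cheap existence/ownership check through metaGetInfo() instead of a full SMetaReader decode. The shared shape, sketched with a hypothetical helper:

    /* Hedged sketch; checkTable is hypothetical, the checks mirror the diff. */
    static int32_t checkTable(SMeta *pMeta, tb_uid_t suid, tb_uid_t uid) {
      SMetaInfo info;
      if (metaGetInfo(pMeta, uid, &info) != 0) return TSDB_CODE_TDB_TABLE_NOT_EXIST;
      if (info.suid != suid) return TSDB_CODE_INVALID_MSG;  // uid not under this suid
      return 0;
    }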
@ -157,7 +149,17 @@ int32_t tsdbDeleteTableData(STsdb *pTsdb, int64_t version, tb_uid_t suid, tb_uid
SVBufPool *pPool = pTsdb->pVnode->inUse;
TSDBKEY lastKey = {.version = version, .ts = eKey};
// check if table exists (todo)
// check if table exists
SMetaInfo info;
code = metaGetInfo(pTsdb->pVnode->pMeta, uid, &info);
if (code) {
code = TSDB_CODE_TDB_TABLE_NOT_EXIST;
goto _err;
}
if (info.suid != suid) {
code = TSDB_CODE_INVALID_MSG;
goto _err;
}
code = tsdbGetOrCreateTbData(pMemTable, suid, uid, &pTbData);
if (code) {
@ -196,7 +198,7 @@ int32_t tsdbDeleteTableData(STsdb *pTsdb, int64_t version, tb_uid_t suid, tb_uid
tsdbCacheDeleteLast(pTsdb->lruCache, pTbData->uid, eKey);
}
tsdbError("vgId:%d, delete data from table suid:%" PRId64 " uid:%" PRId64 " skey:%" PRId64 " eKey:%" PRId64
tsdbInfo("vgId:%d, delete data from table suid:%" PRId64 " uid:%" PRId64 " skey:%" PRId64 " eKey:%" PRId64
" since %s",
TD_VID(pTsdb->pVnode), suid, uid, sKey, eKey, tstrerror(code));
return code;

View File

@ -1467,6 +1467,10 @@ void tsdbCalcColDataSMA(SColData *pColData, SColumnDataAgg *pColAgg) {
SColVal colVal;
SColVal *pColVal = &colVal;
memset(pColAgg, 0, sizeof(*pColAgg));
bool minAssigned = false;
bool maxAssigned = false;
*pColAgg = (SColumnDataAgg){.colId = pColData->cid};
for (int32_t iVal = 0; iVal < pColData->nVal; iVal++) {
tColDataGetValue(pColData, iVal, pColVal);
@ -1481,72 +1485,86 @@ void tsdbCalcColDataSMA(SColData *pColData, SColumnDataAgg *pColAgg) {
break;
case TSDB_DATA_TYPE_TINYINT: {
pColAgg->sum += colVal.value.i8;
if (pColAgg->min > colVal.value.i8) {
if (!minAssigned || pColAgg->min > colVal.value.i8) {
pColAgg->min = colVal.value.i8;
minAssigned = true;
}
if (pColAgg->max < colVal.value.i8) {
if (!maxAssigned || pColAgg->max < colVal.value.i8) {
pColAgg->max = colVal.value.i8;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_SMALLINT: {
pColAgg->sum += colVal.value.i16;
if (pColAgg->min > colVal.value.i16) {
if (!minAssigned || pColAgg->min > colVal.value.i16) {
pColAgg->min = colVal.value.i16;
minAssigned = true;
}
if (pColAgg->max < colVal.value.i16) {
if (!maxAssigned || pColAgg->max < colVal.value.i16) {
pColAgg->max = colVal.value.i16;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_INT: {
pColAgg->sum += colVal.value.i32;
if (pColAgg->min > colVal.value.i32) {
if (!minAssigned || pColAgg->min > colVal.value.i32) {
pColAgg->min = colVal.value.i32;
minAssigned = true;
}
if (pColAgg->max < colVal.value.i32) {
if (!maxAssigned || pColAgg->max < colVal.value.i32) {
pColAgg->max = colVal.value.i32;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_BIGINT: {
pColAgg->sum += colVal.value.i64;
if (pColAgg->min > colVal.value.i64) {
if (!minAssigned || pColAgg->min > colVal.value.i64) {
pColAgg->min = colVal.value.i64;
minAssigned = true;
}
if (pColAgg->max < colVal.value.i64) {
if (!maxAssigned || pColAgg->max < colVal.value.i64) {
pColAgg->max = colVal.value.i64;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_FLOAT: {
*(double*)(&pColAgg->sum) += colVal.value.f;
if (*(double*)(&pColAgg->min) > colVal.value.f) {
if (!minAssigned || *(double*)(&pColAgg->min) > colVal.value.f) {
*(double*)(&pColAgg->min) = colVal.value.f;
minAssigned = true;
}
if (*(double*)(&pColAgg->max) < colVal.value.f) {
if (!maxAssigned || *(double*)(&pColAgg->max) < colVal.value.f) {
*(double*)(&pColAgg->max) = colVal.value.f;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_DOUBLE: {
*(double*)(&pColAgg->sum) += colVal.value.d;
if (*(double*)(&pColAgg->min) > colVal.value.d) {
if (!minAssigned || *(double*)(&pColAgg->min) > colVal.value.d) {
*(double*)(&pColAgg->min) = colVal.value.d;
minAssigned = true;
}
if (*(double*)(&pColAgg->max) < colVal.value.d) {
if (!maxAssigned || *(double*)(&pColAgg->max) < colVal.value.d) {
*(double*)(&pColAgg->max) = colVal.value.d;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_VARCHAR:
break;
case TSDB_DATA_TYPE_TIMESTAMP: {
if (pColAgg->min > colVal.value.i64) {
if (!minAssigned || pColAgg->min > colVal.value.i64) {
pColAgg->min = colVal.value.i64;
minAssigned = true;
}
if (pColAgg->max < colVal.value.i64) {
if (!maxAssigned || pColAgg->max < colVal.value.i64) {
pColAgg->max = colVal.value.i64;
maxAssigned = true;
}
break;
}
@ -1554,41 +1572,49 @@ void tsdbCalcColDataSMA(SColData *pColData, SColumnDataAgg *pColAgg) {
break;
case TSDB_DATA_TYPE_UTINYINT: {
pColAgg->sum += colVal.value.u8;
if (pColAgg->min > colVal.value.u8) {
if (!minAssigned || pColAgg->min > colVal.value.u8) {
pColAgg->min = colVal.value.u8;
minAssigned = true;
}
if (pColAgg->max < colVal.value.u8) {
if (!maxAssigned || pColAgg->max < colVal.value.u8) {
pColAgg->max = colVal.value.u8;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_USMALLINT: {
pColAgg->sum += colVal.value.u16;
if (pColAgg->min > colVal.value.u16) {
if (!minAssigned || pColAgg->min > colVal.value.u16) {
pColAgg->min = colVal.value.u16;
minAssigned = true;
}
if (pColAgg->max < colVal.value.u16) {
if (!maxAssigned || pColAgg->max < colVal.value.u16) {
pColAgg->max = colVal.value.u16;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_UINT: {
pColAgg->sum += colVal.value.u32;
if (pColAgg->min > colVal.value.u32) {
if (!minAssigned || pColAgg->min > colVal.value.u32) {
pColAgg->min = colVal.value.u32;
minAssigned = true;
}
if (pColAgg->max < colVal.value.u32) {
if (!maxAssigned || pColAgg->max < colVal.value.u32) {
pColAgg->max = colVal.value.u32;
maxAssigned = true;
}
break;
}
case TSDB_DATA_TYPE_UBIGINT: {
pColAgg->sum += colVal.value.u64;
if (pColAgg->min > colVal.value.u64) {
if (!minAssigned || pColAgg->min > colVal.value.u64) {
pColAgg->min = colVal.value.u64;
minAssigned = true;
}
if (pColAgg->max < colVal.value.u64) {
if (!maxAssigned || pColAgg->max < colVal.value.u64) {
pColAgg->max = colVal.value.u64;
maxAssigned = true;
}
break;
}
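
The pattern repeated through tsdbCalcColDataSMA() addresses a real defect: *pColAgg is zero-initialized, so seeding min/max from 0 makes an all-positive column report min = 0 and an all-negative column report max = 0. The assigned flags force the first non-NULL value to seed both bounds. Reduced to its core:

    /* Hedged reduction of the min/max-with-flags fix above
     * (nVal and val[] stand in for the column data). */
    int64_t vmin = 0, vmax = 0;
    bool    minAssigned = false, maxAssigned = false;
    for (int32_t i = 0; i < nVal; i++) {
      if (!minAssigned || vmin > val[i]) { vmin = val[i]; minAssigned = true; }
      if (!maxAssigned || vmax < val[i]) { vmax = val[i]; maxAssigned = true; }
    }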

View File

@ -869,7 +869,7 @@ static int32_t vnodeProcessSubmitReq(SVnode *pVnode, int64_t version, void *pReq
submitBlkRsp.uid = createTbReq.uid;
submitBlkRsp.tblFName = taosMemoryMalloc(strlen(pVnode->config.dbname) + strlen(createTbReq.name) + 2);
sprintf(submitBlkRsp.tblFName, "%s.", pVnode->config.dbname);
sprintf(submitBlkRsp.tblFName, "%s.%s", pVnode->config.dbname, createTbReq.name);
msgIter.uid = createTbReq.uid;
if (createTbReq.type == TSDB_CHILD_TABLE) {

View File

@ -3315,7 +3315,7 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
taosFillSetStartInfo(pInfo->pFillInfo, 0, pInfo->win.ekey);
} else {
blockDataUpdateTsWindow(pBlock, pInfo->primaryTsCol);
blockDataUpdateTsWindow(pBlock, pInfo->primarySrcSlotId);
doApplyScalarCalculation(pOperator, pBlock, order, scanFlag);
if (pInfo->curGroupId == 0 || pInfo->curGroupId == pInfo->pRes->info.groupId) {

View File

@ -353,6 +353,7 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca
pBlockInfo->window.skey, pBlockInfo->window.ekey, pBlockInfo->rows);
pCost->skipBlocks += 1;
*status = FUNC_DATA_REQUIRED_FILTEROUT;
return TSDB_CODE_SUCCESS;
}
@ -1174,19 +1175,19 @@ static void checkUpdateData(SStreamScanInfo* pInfo, bool invertible, SSDataBlock
SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, pInfo->primaryTsIndex);
ASSERT(pColDataInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP);
TSKEY* tsCol = (TSKEY*)pColDataInfo->pData;
bool tableInserted = updateInfoIsTableInserted(pInfo->pUpdateInfo, pBlock->info.uid);
for (int32_t rowId = 0; rowId < pBlock->info.rows; rowId++) {
SResultRowInfo dumyInfo;
dumyInfo.cur.pageId = -1;
bool isClosed = false;
STimeWindow win = {.skey = INT64_MIN, .ekey = INT64_MAX};
if (isOverdue(tsCol[rowId], &pInfo->twAggSup)) {
if (tableInserted && isOverdue(tsCol[rowId], &pInfo->twAggSup)) {
win = getActiveTimeWindow(NULL, &dumyInfo, tsCol[rowId], &pInfo->interval, TSDB_ORDER_ASC);
isClosed = isCloseWindow(&win, &pInfo->twAggSup);
}
bool inserted = updateInfoIsTableInserted(pInfo->pUpdateInfo, pBlock->info.uid);
// must check update info first.
bool update = updateInfoIsUpdated(pInfo->pUpdateInfo, pBlock->info.uid, tsCol[rowId]);
bool closedWin = isClosed && inserted && isSignleIntervalWindow(pInfo) &&
bool closedWin = isClosed && isSignleIntervalWindow(pInfo) &&
isDeletedWindow(&win, pBlock->info.groupId, pInfo->sessionSup.pIntervalAggSup);
if ((update || closedWin) && out) {
appendOneRow(pInfo->pUpdateDataRes, tsCol + rowId, tsCol + rowId, &pBlock->info.uid);

View File

@ -1210,7 +1210,7 @@ int32_t doMinMaxHelper(SqlFunctionCtx* pCtx, int32_t isMinFunc) {
int64_t val = GET_INT64_VAL(tval);
if ((prev < val) ^ isMinFunc) {
pBuf->v = val;
*(int64_t*)&pBuf->v = val;
if (pCtx->subsidiaries.num > 0) {
index = findRowIndex(pInput->startRowIndex, pInput->numOfRows, pCol, tval);
doSaveTupleData(pCtx, index, pCtx->pSrcBlock, &pBuf->tuplePos);
@ -1223,7 +1223,7 @@ int32_t doMinMaxHelper(SqlFunctionCtx* pCtx, int32_t isMinFunc) {
uint64_t val = GET_UINT64_VAL(tval);
if ((prev < val) ^ isMinFunc) {
pBuf->v = val;
*(uint64_t*)&pBuf->v = val;
if (pCtx->subsidiaries.num > 0) {
index = findRowIndex(pInput->startRowIndex, pInput->numOfRows, pCol, tval);
doSaveTupleData(pCtx, index, pCtx->pSrcBlock, &pBuf->tuplePos);
@ -1231,11 +1231,11 @@ int32_t doMinMaxHelper(SqlFunctionCtx* pCtx, int32_t isMinFunc) {
}
} else if (type == TSDB_DATA_TYPE_DOUBLE) {
double prev = 0;
GET_TYPED_DATA(prev, int64_t, type, &pBuf->v);
GET_TYPED_DATA(prev, double, type, &pBuf->v);
double val = GET_DOUBLE_VAL(tval);
if ((prev < val) ^ isMinFunc) {
pBuf->v = val;
*(double*)&pBuf->v = val;
if (pCtx->subsidiaries.num > 0) {
index = findRowIndex(pInput->startRowIndex, pInput->numOfRows, pCol, tval);
doSaveTupleData(pCtx, index, pCtx->pSrcBlock, &pBuf->tuplePos);
@ -1243,11 +1243,11 @@ int32_t doMinMaxHelper(SqlFunctionCtx* pCtx, int32_t isMinFunc) {
}
} else if (type == TSDB_DATA_TYPE_FLOAT) {
double prev = 0;
GET_TYPED_DATA(prev, int64_t, type, &pBuf->v);
GET_TYPED_DATA(prev, double, type, &pBuf->v);
double val = GET_DOUBLE_VAL(tval);
if ((prev < val) ^ isMinFunc) {
pBuf->v = val;
*(double*)&pBuf->v = val;
}
if (pCtx->subsidiaries.num > 0) {
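
The doMinMaxHelper() changes replace plain assignments to pBuf->v with explicit type-punned stores. pBuf->v appears to be a single 64-bit slot reused for int64, uint64, and double results; assigning a double through an int64 lvalue converts the value instead of preserving its bits. Illustrated under that assumption:

    int64_t v;            /* assumed: the declared type of pBuf->v */
    double  d = 1.5;
    v = d;                /* converts: v becomes 1, the fraction is lost */
    *(double *)&v = d;    /* stores the raw bit pattern, as the fix does */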
@ -4918,6 +4918,16 @@ int32_t mavgFunction(SqlFunctionCtx* pCtx) {
return numOfElems;
}
static SSampleInfo* getSampleOutputInfo(SqlFunctionCtx* pCtx) {
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
SSampleInfo* pInfo = GET_ROWCELL_INTERBUF(pResInfo);
pInfo->data = (char*)pInfo + sizeof(SSampleInfo);
pInfo->tuplePos = (STuplePos*)((char*)pInfo + sizeof(SSampleInfo) + pInfo->samples * pInfo->colBytes);
return pInfo;
}
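
getSampleOutputInfo() re-derives the data and tuplePos pointers on every fetch because the result-row interbuf can be spilled and reloaded at a different address, which would leave stored pointers stale. The pointer arithmetic implies this buffer layout (an inference, not a quoted header):

    /* Implied layout of the sample interbuf:
     *   [SSampleInfo][samples * colBytes raw values][samples STuplePos entries]
     * so both pointers can always be recomputed from the buffer base. */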
bool getSampleFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
SColumnNode* pCol = (SColumnNode*)nodesListGetNode(pFunc->pParameterList, 0);
SValueNode* pVal = (SValueNode*)nodesListGetNode(pFunc->pParameterList, 1);
@ -4972,7 +4982,7 @@ static void doReservoirSample(SqlFunctionCtx* pCtx, SSampleInfo* pInfo, char* da
int32_t sampleFunction(SqlFunctionCtx* pCtx) {
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
SSampleInfo* pInfo = GET_ROWCELL_INTERBUF(pResInfo);
SSampleInfo* pInfo = getSampleOutputInfo(pCtx);
SInputColumnInfoData* pInput = &pCtx->input;
@ -4998,7 +5008,7 @@ int32_t sampleFunction(SqlFunctionCtx* pCtx) {
int32_t sampleFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SResultRowEntryInfo* pEntryInfo = GET_RES_INFO(pCtx);
SSampleInfo* pInfo = GET_ROWCELL_INTERBUF(pEntryInfo);
SSampleInfo* pInfo = getSampleOutputInfo(pCtx);
pEntryInfo->complete = true;
int32_t slotId = pCtx->pExpr->base.resSchema.slotId;

View File

@ -37,9 +37,13 @@ uint32_t taosRandR(uint32_t *pSeed) {
uint32_t taosSafeRand(void) {
#ifdef WINDOWS
uint32_t seed;
uint32_t seed = taosRand();
HCRYPTPROV hCryptProv;
if (!CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL, 0)) return seed;
if (!CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL, 0)) {
if (!CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL, CRYPT_NEWKEYSET)) {
return seed;
}
}
if (hCryptProv != NULL) {
if (!CryptGenRandom(hCryptProv, 4, &seed)) return seed;
}

View File

@ -237,8 +237,8 @@
./test.sh -f tsim/stream/distributeInterval0.sim
./test.sh -f tsim/stream/distributeIntervalRetrive0.sim
./test.sh -f tsim/stream/distributeSession0.sim
#./test.sh -f tsim/stream/session0.sim
#./test.sh -f tsim/stream/session1.sim
./test.sh -f tsim/stream/session0.sim
./test.sh -f tsim/stream/session1.sim
./test.sh -f tsim/stream/state0.sim
./test.sh -f tsim/stream/triggerInterval0.sim
./test.sh -f tsim/stream/triggerSession0.sim

View File

@ -76,17 +76,16 @@ if $data03 != NULL then
return -1
endi
sql insert into mt_unsigned_1 values(now, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now+1s, 1, 2, 3, 4, 5, 6, 7, 8, 9);
sql_error insert into mt_unsigned_1 values(now, -1, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql_error insert into mt_unsigned_1 values(now, NULL, -1, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql_error insert into mt_unsigned_1 values(now, NULL, NULL, -1, NULL, NULL, NULL, NULL, NULL, NULL);
sql_error insert into mt_unsigned_1 values(now, NULL, NULL, NULL, -1, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now, 255, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now, NULL, 65535, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now, NULL, NULL, 4294967295, NULL, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now, NULL, NULL, NULL, 18446744073709551615, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now+1s, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now+2s, 1, 2, 3, 4, 5, 6, 7, 8, 9);
sql_error insert into mt_unsigned_1 values(now+3s, -1, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql_error insert into mt_unsigned_1 values(now+4s, NULL, -1, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql_error insert into mt_unsigned_1 values(now+5s, NULL, NULL, -1, NULL, NULL, NULL, NULL, NULL, NULL);
sql_error insert into mt_unsigned_1 values(now+6s, NULL, NULL, NULL, -1, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now+7s, 255, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now+8s, NULL, 65535, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now+9s, NULL, NULL, 4294967295, NULL, NULL, NULL, NULL, NULL, NULL);
sql insert into mt_unsigned_1 values(now+10s, NULL, NULL, NULL, 18446744073709551615, NULL, NULL, NULL, NULL, NULL);
sql select count(a),count(b),count(c),count(d), count(e) from mt_unsigned_1
if $rows != 1 then

View File

@ -83,22 +83,22 @@ if $data11 != 3 then
goto loop0
endi
if $data12 != NULL then
if $data12 != 10 then
print ======data12=$data12
goto loop0
endi
if $data13 != NULL then
if $data13 != 10 then
print ======data13=$data13
goto loop0
endi
if $data14 != NULL then
if $data14 != 1.100000000 then
print ======data14=$data14
return -1
endi
if $data15 != NULL then
if $data15 != 0.000000000 then
print ======data15=$data15
return -1
endi
@ -141,38 +141,38 @@ if $data01 != 7 then
goto loop1
endi
if $data02 != NULL then
if $data02 != 18 then
print =====data02=$data02
goto loop1
endi
if $data03 != NULL then
if $data03 != 4 then
print =====data03=$data03
goto loop1
endi
if $data04 != NULL then
print ======$data04
if $data04 != 1.000000000 then
print ======data04=$data04
return -1
endi
if $data05 != NULL then
print ======$data05
if $data05 != 1.154700538 then
print ======data05=$data05
return -1
endi
if $data06 != 4 then
print ======$data06
print ======data06=$data06
return -1
endi
if $data07 != 1.000000000 then
print ======$data07
print ======data07=$data07
return -1
endi
if $data08 != 13 then
print ======$data08
print ======data08=$data08
return -1
endi

View File

@ -69,7 +69,7 @@ class TDTestCase:
comput_irate_value = origin_result[1][0]*1000/( origin_result[1][-1] - origin_result[0][-1])
else:
comput_irate_value = (origin_result[1][0] - origin_result[0][0])*1000/( origin_result[1][-1] - origin_result[0][-1])
if abs(comput_irate_value - irate_value) <= 0.0000001:
if abs(comput_irate_value - irate_value) <= 0.001: # a tolerance of 0.001 avoids floating point precision errors
tdLog.info(" irate work as expected , sql is %s "% irate_sql)
else:
tdLog.exit(" irate work not as expected , sql is %s "% irate_sql)

View File

@ -25,13 +25,13 @@ from util.cases import *
from util.sql import *
from util.dnodes import *
dbname = 'db'
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
def mavg_query_form(self, sel="select", func="mavg(", col="c1", m_comm =",", k=1,r_comm=")", alias="", fr="from",table_expr="t1", condition=""):
def mavg_query_form(self, sel="select", func="mavg(", col="c1", m_comm =",", k=1,r_comm=")", alias="", fr="from",table_expr=f"{dbname}.t1", condition=""):
'''
mavg function:
@ -50,7 +50,7 @@ class TDTestCase:
return f"{sel} {func} {col} {m_comm} {k} {r_comm} {alias} {fr} {table_expr} {condition}"
def checkmavg(self,sel="select", func="mavg(", col="c1", m_comm =",", k=1,r_comm=")", alias="", fr="from",table_expr="t1", condition=""):
def checkmavg(self,sel="select", func="mavg(", col="c1", m_comm =",", k=1,r_comm=")", alias="", fr="from",table_expr=f"{dbname}.t1", condition=""):
# print(self.mavg_query_form(sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
# table_expr=table_expr, condition=condition))
line = sys._getframe().f_back.f_lineno
@ -62,7 +62,7 @@ class TDTestCase:
table_expr=table_expr, condition=condition
))
sql = "select * from t1"
sql = f"select * from {dbname}.t1"
collist = tdSql.getColNameList(sql)
if not isinstance(col, str):
@ -326,9 +326,9 @@ class TDTestCase:
self.checkmavg(**case6)
# # case7~8: nested query
# case7 = {"table_expr": "(select c1 from stb1)"}
# case7 = {"table_expr": f"(select c1 from {dbname}.stb1)"}
# self.checkmavg(**case7)
# case8 = {"table_expr": "(select mavg(c1, 1) c1 from stb1 group by tbname)"}
# case8 = {"table_expr": f"(select mavg(c1, 1) c1 from {dbname}.stb1 group by tbname)"}
# self.checkmavg(**case8)
# case9~10: mix with tbname/ts/tag/col
@ -362,7 +362,7 @@ class TDTestCase:
self.checkmavg(**case17)
# # case18~19: with group by
# case19 = {
# "table_expr": "stb1",
# "table_expr": f"{dbname}.stb1",
# "condition": "partition by tbname"
# }
# self.checkmavg(**case19)
@ -371,14 +371,14 @@ class TDTestCase:
# case20 = {"condition": "order by ts"}
# self.checkmavg(**case20)
#case21 = {
# "table_expr": "stb1",
# "table_expr": f"{dbname}.stb1",
# "condition": "group by tbname order by tbname"
#}
#self.checkmavg(**case21)
# # case22: with union
# case22 = {
# "condition": "union all select mavg( c1 , 1 ) from t2"
# "condition": f"union all select mavg( c1 , 1 ) from {dbname}.t2"
# }
# self.checkmavg(**case22)
@ -486,32 +486,33 @@ class TDTestCase:
#tdSql.query(" select mavg( c1 , 1 ) + 2 from t1 ")
err41 = {"alias": "+ avg(c1)"}
self.checkmavg(**err41) # mix with arithmetic 2
err42 = {"alias": ", c1"}
self.checkmavg(**err42) # mix with other col
# err43 = {"table_expr": "stb1"}
# err42 = {"alias": ", c1"}
# self.checkmavg(**err42) # mix with other col
# err43 = {"table_expr": f"{dbname}.stb1"}
# self.checkmavg(**err43) # select stb directly
err44 = {
"col": "stb1.c1",
"table_expr": "stb1, stb2",
"condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
}
self.checkmavg(**err44) # stb join
# err44 = {
# "col": "stb1.c1",
# "table_expr": "stb1, stb2",
# "condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
# }
# self.checkmavg(**err44) # stb join
tdSql.query("select mavg( stb1.c1 , 1 ) from stb1, stb2 where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts;")
err45 = {
"condition": "where ts>0 and ts < now interval(1h) fill(next)"
}
self.checkmavg(**err45) # interval
err46 = {
"table_expr": "t1",
"table_expr": f"{dbname}.t1",
"condition": "group by c6"
}
self.checkmavg(**err46) # group by normal col
err47 = {
"table_expr": "stb1",
"table_expr": f"{dbname}.stb1",
"condition": "group by tbname slimit 1 "
}
# self.checkmavg(**err47) # with slimit
err48 = {
"table_expr": "stb1",
"table_expr": f"{dbname}.stb1",
"condition": "group by tbname slimit 1 soffset 1"
}
# self.checkmavg(**err48) # with soffset
@ -554,8 +555,8 @@ class TDTestCase:
err67 = {"k": 0.999999}
self.checkmavg(**err67) # k: left out of [1, 1000]
err68 = {
"table_expr": "stb1",
"condition": "group by tbname order by tbname" # order by tbname not supported
"table_expr": f"{dbname}.stb1",
"condition": f"group by tbname order by tbname" # order by tbname not supported
}
self.checkmavg(**err68)
@ -565,42 +566,42 @@ class TDTestCase:
for i in range(tbnum):
for j in range(data_row):
tdSql.execute(
f"insert into t{i} values ("
f"insert into {dbname}.t{i} values ("
f"{basetime + (j+1)*10}, {random.randint(-200, -1)}, {random.uniform(200, -1)}, {basetime + random.randint(-200, -1)}, "
f"'binary_{j}', {random.uniform(-200, -1)}, {random.choice([0,1])}, {random.randint(-200,-1)}, "
f"{random.randint(-200, -1)}, {random.randint(-127, -1)}, 'nchar_{j}' )"
)
tdSql.execute(
f"insert into t{i} values ("
f"insert into {dbname}.t{i} values ("
f"{basetime - (j+1) * 10}, {random.randint(1, 200)}, {random.uniform(1, 200)}, {basetime - random.randint(1, 200)}, "
f"'binary_{j}_1', {random.uniform(1, 200)}, {random.choice([0, 1])}, {random.randint(1,200)}, "
f"{random.randint(1,200)}, {random.randint(1,127)}, 'nchar_{j}_1' )"
)
tdSql.execute(
f"insert into tt{i} values ( {basetime-(j+1) * 10}, {random.randint(1, 200)} )"
f"insert into {dbname}.tt{i} values ( {basetime-(j+1) * 10}, {random.randint(1, 200)} )"
)
pass
def mavg_test_table(self,tbnum: int) -> None :
tdSql.execute("drop database if exists db")
tdSql.execute("create database if not exists db keep 3650")
tdSql.execute("use db")
tdSql.execute(f"drop database if exists {dbname}")
tdSql.execute(f"create database if not exists {dbname} keep 3650")
tdSql.execute(f"use {dbname}")
tdSql.execute(
"create stable db.stb1 (\
f"create stable {dbname}.stb1 (\
ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool, \
c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)\
) \
tags(st1 int)"
)
tdSql.execute(
"create stable db.stb2 (ts timestamp, c1 int) tags(st2 int)"
f"create stable {dbname}.stb2 (ts timestamp, c1 int) tags(st2 int)"
)
for i in range(tbnum):
tdSql.execute(f"create table t{i} using stb1 tags({i})")
tdSql.execute(f"create table tt{i} using stb2 tags({i})")
tdSql.execute(f"create table {dbname}.t{i} using {dbname}.stb1 tags({i})")
tdSql.execute(f"create table {dbname}.tt{i} using {dbname}.stb2 tags({i})")
pass
@ -617,25 +618,25 @@ class TDTestCase:
tdLog.printNoPrefix("######## insert only NULL test:")
for i in range(tbnum):
tdSql.execute(f"insert into t{i}(ts) values ({nowtime - 5})")
tdSql.execute(f"insert into t{i}(ts) values ({nowtime + 5})")
tdSql.execute(f"insert into {dbname}.t{i}(ts) values ({nowtime - 5})")
tdSql.execute(f"insert into {dbname}.t{i}(ts) values ({nowtime + 5})")
self.mavg_current_query()
self.mavg_error_query()
tdLog.printNoPrefix("######## insert data in the range near the max(bigint/double):")
# self.mavg_test_table(tbnum)
# tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
# tdSql.execute(f"insert into {dbname}.t1(ts, c1,c2,c5,c7) values "
# f"({nowtime - (per_table_rows + 1) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
# tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
# tdSql.execute(f"insert into {dbname}.t1(ts, c1,c2,c5,c7) values "
# f"({nowtime - (per_table_rows + 2) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
# self.mavg_current_query()
# self.mavg_error_query()
tdLog.printNoPrefix("######## insert data in the range near the min(bigint/double):")
# self.mavg_test_table(tbnum)
# tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
# tdSql.execute(f"insert into {dbname}.t1(ts, c1,c2,c5,c7) values "
# f"({nowtime - (per_table_rows + 1) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {1-2**63})")
# tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
# tdSql.execute(f"insert into {dbname}.t1(ts, c1,c2,c5,c7) values "
# f"({nowtime - (per_table_rows + 2) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {512-2**63})")
# self.mavg_current_query()
# self.mavg_error_query()
@ -649,9 +650,9 @@ class TDTestCase:
tdLog.printNoPrefix("######## insert data mix with NULL test:")
for i in range(tbnum):
tdSql.execute(f"insert into t{i}(ts) values ({nowtime})")
tdSql.execute(f"insert into t{i}(ts) values ({nowtime-(per_table_rows+3)*10})")
tdSql.execute(f"insert into t{i}(ts) values ({nowtime+(per_table_rows+3)*10})")
tdSql.execute(f"insert into {dbname}.t{i}(ts) values ({nowtime})")
tdSql.execute(f"insert into {dbname}.t{i}(ts) values ({nowtime-(per_table_rows+3)*10})")
tdSql.execute(f"insert into {dbname}.t{i}(ts) values ({nowtime+(per_table_rows+3)*10})")
self.mavg_current_query()
self.mavg_error_query()
@ -664,67 +665,64 @@ class TDTestCase:
tdDnodes.start(index)
self.mavg_current_query()
self.mavg_error_query()
tdSql.query("select mavg(1,1) from t1")
tdSql.query(f"select mavg(1,1) from {dbname}.t1")
tdSql.checkRows(7)
tdSql.checkData(0,0,1.000000000)
tdSql.checkData(1,0,1.000000000)
tdSql.checkData(5,0,1.000000000)
tdSql.query("select mavg(abs(c1),1) from t1")
tdSql.query(f"select mavg(abs(c1),1) from {dbname}.t1")
tdSql.checkRows(4)
def mavg_support_stable(self):
tdSql.query(" select mavg(1,3) from stb1 ")
tdSql.query(f" select mavg(1,3) from {dbname}.stb1 ")
tdSql.checkRows(68)
tdSql.checkData(0,0,1.000000000)
tdSql.query("select mavg(c1,3) from stb1 partition by tbname ")
tdSql.query(f"select mavg(c1,3) from {dbname}.stb1 partition by tbname ")
tdSql.checkRows(20)
# tdSql.query("select mavg(st1,3) from stb1 partition by tbname")
# tdSql.checkRows(38)
tdSql.query("select mavg(st1+c1,3) from stb1 partition by tbname")
tdSql.query(f"select mavg(st1,3) from {dbname}.stb1 partition by tbname")
tdSql.checkRows(50)
tdSql.query(f"select mavg(st1+c1,3) from {dbname}.stb1 partition by tbname")
tdSql.checkRows(20)
tdSql.query("select mavg(st1+c1,3) from stb1 partition by tbname")
tdSql.query(f"select mavg(st1+c1,3) from {dbname}.stb1 partition by tbname")
tdSql.checkRows(20)
tdSql.query("select mavg(st1+c1,3) from stb1 partition by tbname")
tdSql.query(f"select mavg(st1+c1,3) from {dbname}.stb1 partition by tbname")
tdSql.checkRows(20)
# # bug need fix
# tdSql.query("select mavg(st1+c1,3) from stb1 partition by tbname slimit 1 ")
# tdSql.checkRows(2)
# tdSql.error("select mavg(st1+c1,3) from stb1 partition by tbname limit 1 ")
# bug need fix
tdSql.query("select mavg(st1+c1,3) from stb1 partition by tbname")
tdSql.query(f"select mavg(st1+c1,3) from {dbname}.stb1 partition by tbname")
tdSql.checkRows(20)
# bug need fix
# tdSql.query("select tbname , mavg(c1,3) from stb1 partition by tbname")
# tdSql.checkRows(38)
# tdSql.query("select tbname , mavg(st1,3) from stb1 partition by tbname")
# tdSql.checkRows(38)
# tdSql.query("select tbname , mavg(st1,3) from stb1 partition by tbname slimit 1")
# tdSql.checkRows(2)
tdSql.query(f"select tbname , mavg(c1,3) from {dbname}.stb1 partition by tbname")
tdSql.checkRows(20)
tdSql.query(f"select tbname , mavg(st1,3) from {dbname}.stb1 partition by tbname")
tdSql.checkRows(50)
tdSql.query(f"select tbname , mavg(st1,3) from {dbname}.stb1 partition by tbname slimit 1")
tdSql.checkRows(5)
# partition by tags
# tdSql.query("select st1 , mavg(c1,3) from stb1 partition by st1")
# tdSql.checkRows(38)
# tdSql.query("select mavg(c1,3) from stb1 partition by st1")
# tdSql.checkRows(38)
# tdSql.query("select st1 , mavg(c1,3) from stb1 partition by st1 slimit 1")
# tdSql.checkRows(2)
# tdSql.query("select mavg(c1,3) from stb1 partition by st1 slimit 1")
# tdSql.checkRows(2)
tdSql.query(f"select st1 , mavg(c1,3) from {dbname}.stb1 partition by st1")
tdSql.checkRows(20)
tdSql.query(f"select mavg(c1,3) from {dbname}.stb1 partition by st1")
tdSql.checkRows(20)
tdSql.query(f"select st1 , mavg(c1,3) from {dbname}.stb1 partition by st1 slimit 1")
tdSql.checkRows(2)
tdSql.query(f"select mavg(c1,3) from {dbname}.stb1 partition by st1 slimit 1")
tdSql.checkRows(2)
# partition by col
# tdSql.query("select c1 , mavg(c1,3) from stb1 partition by c1")
# tdSql.checkRows(38)
# tdSql.query("select mavg(c1 ,3) from stb1 partition by c1")
# tdSql.checkRows(38)
# tdSql.query("select c1 , mavg(c1,3) from stb1 partition by st1 slimit 1")
# tdSql.checkRows(2)
# tdSql.query("select diff(c1) from stb1 partition by st1 slimit 1")
# tdSql.checkRows(2)
tdSql.query(f"select c1 , mavg(c1,3) from {dbname}.stb1 partition by c1")
tdSql.checkRows(0)
tdSql.query(f"select c1 , mavg(c1,1) from {dbname}.stb1 partition by c1")
tdSql.checkRows(40)
tdSql.query(f"select c1, c2, c3, c4, mavg(c1,3) from {dbname}.stb1 partition by tbname ")
tdSql.checkRows(20)
tdSql.query(f"select c1, c2, c3, c4, mavg(123,3) from {dbname}.stb1 partition by tbname ")
tdSql.checkRows(50)
def run(self):
import traceback

View File

@ -873,7 +873,7 @@ class TDTestCase:
# bug need fix
tdSql.query("select c1 ,t1, sample(c1,2) from db.stb1 partition by c1 ")
tdSql.query("select sample(c1,2) from db.stb1 partition by c1 ")
# tdSql.query("select c1 ,ind, sample(c1,2) from sample_db.st partition by c1 ")
tdSql.query("select c1 ,ind, sample(c1,2) from sample_db.st partition by c1 ")
def run(self):
import traceback

View File

@ -113,7 +113,7 @@ python3 ./test.py -f 2-query/hyperloglog.py -R
python3 ./test.py -f 2-query/interp.py
python3 ./test.py -f 2-query/interp.py -R
python3 ./test.py -f 2-query/irate.py
# python3 ./test.py -f 2-query/irate.py -R
python3 ./test.py -f 2-query/irate.py -R
python3 ./test.py -f 2-query/join.py
python3 ./test.py -f 2-query/join.py -R
python3 ./test.py -f 2-query/last_row.py
@ -169,7 +169,7 @@ python3 ./test.py -f 2-query/query_cols_tags_and_or.py
python3 ./test.py -f 2-query/elapsed.py
python3 ./test.py -f 2-query/csum.py
#python3 ./test.py -f 2-query/mavg.py
python3 ./test.py -f 2-query/mavg.py
python3 ./test.py -f 2-query/sample.py
python3 ./test.py -f 2-query/function_diff.py
python3 ./test.py -f 2-query/unique.py
@ -358,7 +358,7 @@ python3 ./test.py -f 2-query/interp.py -Q 2
python3 ./test.py -f 2-query/avg.py -Q 2
# python3 ./test.py -f 2-query/elapsed.py -Q 2
python3 ./test.py -f 2-query/csum.py -Q 2
#python3 ./test.py -f 2-query/mavg.py -Q 2
python3 ./test.py -f 2-query/mavg.py -Q 2
python3 ./test.py -f 2-query/sample.py -Q 2
python3 ./test.py -f 2-query/function_diff.py -Q 2
python3 ./test.py -f 2-query/unique.py -Q 2
@ -445,7 +445,7 @@ python3 ./test.py -f 2-query/query_cols_tags_and_or.py -Q 3
# python3 ./test.py -f 2-query/avg.py -Q 3
# python3 ./test.py -f 2-query/elapsed.py -Q 3
python3 ./test.py -f 2-query/csum.py -Q 3
#python3 ./test.py -f 2-query/mavg.py -Q 3
python3 ./test.py -f 2-query/mavg.py -Q 3
python3 ./test.py -f 2-query/sample.py -Q 3
python3 ./test.py -f 2-query/function_diff.py -Q 3
python3 ./test.py -f 2-query/unique.py -Q 3

View File

@ -4,9 +4,9 @@ python3 .\test.py -f 0-others\taosShellError.py
python3 .\test.py -f 0-others\taosShellNetChk.py
python3 .\test.py -f 0-others\telemetry.py
python3 .\test.py -f 0-others\taosdMonitor.py
python3 .\test.py -f 0-others\udfTest.py
python3 .\test.py -f 0-others\udf_create.py
python3 .\test.py -f 0-others\udf_restart_taosd.py
@REM python3 .\test.py -f 0-others\udfTest.py
@REM python3 .\test.py -f 0-others\udf_create.py
@REM python3 .\test.py -f 0-others\udf_restart_taosd.py
@REM python3 .\test.py -f 0-others\cachelast.py
@REM python3 .\test.py -f 0-others\user_control.py