Merge branch '3.0' into feature/compressData
This commit is contained in commit a02426fd2f.
@@ -1,7 +1,7 @@
 ---
 title: Insert
 sidebar_label: Insert
-description: This document describes how to insert data into TDengine.
+description: This document describes the SQL commands and syntax for inserting data into TDengine.
 ---

 ## Syntax
@@ -1,5 +1,5 @@
 ---
-title: Data Subscription
+title: Data Subscription SQL Reference
 sidebar_label: Data Subscription
 description: This document describes the SQL statements related to the data subscription component of TDengine.
 ---
@@ -1,5 +1,5 @@
 ---
-title: Stream Processing
+title: Stream Processing SQL Reference
 sidebar_label: Stream Processing
 description: This document describes the SQL statements related to the stream processing component of TDengine.
 ---
@@ -148,7 +148,7 @@ T = latest event time - watermark

 The window closing time for each batch of data that arrives at the system is updated using the preceding formula, and all windows whose closing time is less than T are closed. If the triggering method is WINDOW_CLOSE or MAX_DELAY, the aggregate result for the window is pushed.

-Stream processing strategy for expired data
+## Stream processing strategy for expired data

 The data in expired windows is tagged as expired. TDengine stream processing provides two methods for handling such data:

 1. Drop the data. This is the default and often the only handling method for most stream processing engines.

@@ -157,6 +157,14 @@ The data in expired windows is tagged as expired. TDengine stream processing pro

 In both of these methods, configuring the watermark is essential for obtaining accurate results (if expired data is dropped) and avoiding repeated triggers that affect system performance (if expired data is recalculated).

+## Stream processing strategy for modified data
+
+TDengine provides two ways to handle modified data, specified by the IGNORE UPDATE option:
+
+1. Check whether the data has been modified, i.e. IGNORE UPDATE 0, and recalculate the corresponding window if the data has been modified.
+
+2. Do not check whether the data has been modified, and calculate all the data as incremental data, i.e. IGNORE UPDATE 1, the default configuration.
+
 ## Supported functions

 All [scalar functions](../function/#scalar-functions) are available in stream processing. All [aggregate functions](../function/#aggregate-functions) and [selection functions](../function/#selection-functions) are available in stream processing, except the following:
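The window-closing rule above (T = latest event time - watermark; a window is expired once its closing time falls below T) can be sketched in plain Rust. This is an illustrative model only, not TDengine source; the function names are hypothetical.

```rust
// Sketch of the watermark rule described above (not TDengine source).
// T = latest event time - watermark; a window whose closing time is
// less than T is closed, and data arriving for it is tagged expired.
fn close_threshold(latest_event_time: i64, watermark: i64) -> i64 {
    latest_event_time - watermark
}

fn is_window_closed(window_end: i64, latest_event_time: i64, watermark: i64) -> bool {
    window_end < close_threshold(latest_event_time, watermark)
}

fn main() {
    // With a watermark of 5, once the latest event time reaches 100,
    // T = 95: a window ending at 94 is closed, one ending at 96 stays open.
    assert!(is_window_closed(94, 100, 5));
    assert!(!is_window_closed(96, 100, 5));
}
```

A larger watermark keeps windows open longer, trading result latency for tolerance of out-of-order data.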
@@ -1,5 +1,5 @@
 ---
-title: User-Defined Functions (UDF)
+title: User-Defined Functions (UDF) SQL Reference
 sidebar_label: User-Defined Functions
 description: This document describes the SQL statements related to user-defined functions (UDF) in TDengine.
 ---
@@ -1,5 +1,5 @@
 ---
-title: TDinsight - Grafana-based Zero-Dependency Monitoring Solution for TDengine
+title: TDinsight
 sidebar_label: TDinsight
 description: This document describes TDinsight, a monitoring solution for TDengine.
 ---
@@ -1,5 +1,5 @@
 ---
-title: Quickly Build IT DevOps Visualization System with TDengine + Telegraf + Grafana
+title: IT Visualization with TDengine + Telegraf + Grafana
 sidebar_label: TDengine + Telegraf + Grafana
 description: This document describes how to create an IT visualization system by integrating TDengine with Telegraf and Grafana.
 ---
@@ -0,0 +1,66 @@
use taos::*;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let dsn = "taos://localhost:6030";
    let builder = TaosBuilder::from_dsn(dsn)?;

    let taos = builder.build()?;

    // ANCHOR: create_db_and_table
    let db = "power";
    // create database
    taos.exec_many([
        format!("DROP DATABASE IF EXISTS `{db}`"),
        format!("CREATE DATABASE `{db}`"),
        format!("USE `{db}`"),
    ])
    .await?;

    // create table
    taos.exec_many([
        // create super table
        "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \
         TAGS (`groupid` INT, `location` BINARY(24))",
    ]).await?;
    // ANCHOR_END: create_db_and_table

    // ANCHOR: insert_data
    // NOTE: string literals cannot be joined with `+` in Rust; use a single
    // literal with line continuations instead.
    let inserted = taos.exec(
        "INSERT INTO \
         power.d1001 USING power.meters TAGS(2, 'California.SanFrancisco') \
         VALUES \
         (NOW + 1a, 10.30000, 219, 0.31000) \
         (NOW + 2a, 12.60000, 218, 0.33000) \
         (NOW + 3a, 12.30000, 221, 0.31000) \
         power.d1002 USING power.meters TAGS(3, 'California.SanFrancisco') \
         VALUES \
         (NOW + 1a, 10.30000, 218, 0.25000)",
    ).await?;

    println!("inserted: {} rows", inserted);
    // ANCHOR_END: insert_data

    // ANCHOR: query_data
    let mut result = taos.query("SELECT * FROM power.meters").await?;

    for field in result.fields() {
        println!("got field: {}", field.name());
    }

    let mut rows = result.rows();
    let mut nrows = 0;
    while let Some(row) = rows.try_next().await? {
        for (col, (name, value)) in row.enumerate() {
            println!(
                "[{}] got value in col {} (named `{:>8}`): {}",
                nrows, col, name, value
            );
        }
        nrows += 1;
    }
    // ANCHOR_END: query_data

    // ANCHOR: query_with_req_id
    let _result = taos.query_with_req_id("SELECT * FROM power.meters", 0).await?;
    // ANCHOR_END: query_with_req_id

    Ok(())
}
@@ -0,0 +1,80 @@
use taos_query::common::SchemalessPrecision;
use taos_query::common::SchemalessProtocol;
use taos_query::common::SmlDataBuilder;

use crate::AsyncQueryable;
use crate::AsyncTBuilder;
use crate::TaosBuilder;

async fn put() -> anyhow::Result<()> {
    std::env::set_var("RUST_LOG", "taos=debug");
    pretty_env_logger::init();
    let dsn =
        std::env::var("TDENGINE_CLOUD_DSN").unwrap_or("http://localhost:6041".to_string());
    log::debug!("dsn: {:?}", &dsn);

    let client = TaosBuilder::from_dsn(dsn)?.build().await?;

    let db = "power";

    client.exec(format!("drop database if exists {db}")).await?;

    client
        .exec(format!("create database if not exists {db}"))
        .await?;

    // should specify database before insert
    client.exec(format!("use {db}")).await?;

    // SchemalessProtocol::Line
    let data = [
        "meters,groupid=2,location=California.SanFrancisco current=10.3000002f64,voltage=219i32,phase=0.31f64 1626006833639000000",
    ]
    .map(String::from)
    .to_vec();

    let sml_data = SmlDataBuilder::default()
        .protocol(SchemalessProtocol::Line)
        .precision(SchemalessPrecision::Millisecond)
        .data(data.clone())
        .ttl(1000)
        .req_id(100u64)
        .build()?;
    assert_eq!(client.put(&sml_data).await?, ());

    // SchemalessProtocol::Telnet
    let data = [
        "meters.current 1648432611249 10.3 location=California.SanFrancisco group=2",
    ]
    .map(String::from)
    .to_vec();

    let sml_data = SmlDataBuilder::default()
        .protocol(SchemalessProtocol::Telnet)
        .precision(SchemalessPrecision::Millisecond)
        .data(data.clone())
        .ttl(1000)
        .req_id(200u64)
        .build()?;
    assert_eq!(client.put(&sml_data).await?, ());

    // SchemalessProtocol::Json
    let data = [
        r#"[{"metric": "meters.current", "timestamp": 1681345954000, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}}, {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}}, {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}}, {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]"#
    ]
    .map(String::from)
    .to_vec();

    let sml_data = SmlDataBuilder::default()
        .protocol(SchemalessProtocol::Json)
        .precision(SchemalessPrecision::Millisecond)
        .data(data.clone())
        .ttl(1000)
        .req_id(300u64)
        .build()?;
    assert_eq!(client.put(&sml_data).await?, ());

    client.exec(format!("drop database if exists {db}")).await?;

    Ok(())
}
@@ -0,0 +1,37 @@
use taos::*;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let taos = TaosBuilder::from_dsn("taos://")?.build().await?;

    taos.exec("DROP DATABASE IF EXISTS power").await?;
    taos.create_database("power").await?;
    taos.use_database("power").await?;
    taos.exec("CREATE STABLE IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)").await?;

    let mut stmt = Stmt::init(&taos).await?;
    stmt.prepare("INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)").await?;

    const NUM_TABLES: usize = 10;
    const NUM_ROWS: usize = 10;
    for i in 0..NUM_TABLES {
        let table_name = format!("d{}", i);
        let tags = vec![Value::VarChar("California.SanFrancisco".into()), Value::Int(2)];
        stmt.set_tbname_tags(&table_name, &tags).await?;
        for j in 0..NUM_ROWS {
            let values = vec![
                ColumnView::from_millis_timestamp(vec![1648432611249 + j as i64]),
                ColumnView::from_floats(vec![10.3 + j as f32]),
                ColumnView::from_ints(vec![219 + j as i32]),
                ColumnView::from_floats(vec![0.31 + j as f32]),
            ];
            stmt.bind(&values).await?;
        }
        stmt.add_batch().await?;
    }

    // execute
    let rows = stmt.execute().await?;
    assert_eq!(rows, NUM_TABLES * NUM_ROWS);
    Ok(())
}
@@ -0,0 +1,166 @@
use std::str::FromStr;
use std::time::Duration;

use taos::*;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    pretty_env_logger::formatted_timed_builder()
        .filter_level(log::LevelFilter::Info)
        .init();
    use taos_query::prelude::*;
    let dsn = "taos://localhost:6030".to_string();
    log::info!("dsn: {}", dsn);
    let mut dsn = Dsn::from_str(&dsn)?;

    let taos = TaosBuilder::from_dsn(&dsn)?.build().await?;

    // prepare database and table
    taos.exec_many([
        "drop topic if exists topic_meters",
        "drop database if exists power",
        "create database if not exists power WAL_RETENTION_PERIOD 86400",
        "use power",

        "CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))",

        "create table if not exists power.d001 using power.meters tags(1,'location')",
    ])
    .await?;

    taos.exec_many([
        "drop database if exists db2",
        "create database if not exists db2 wal_retention_period 3600",
        "use db2",
    ])
    .await?;

    // ANCHOR: create_topic
    taos.exec_many([
        "CREATE TOPIC IF NOT EXISTS topic_meters AS SELECT ts, current, voltage, phase, groupid, location FROM power.meters",
    ])
    .await?;
    // ANCHOR_END: create_topic

    // ANCHOR: create_consumer
    dsn.params.insert("group.id".to_string(), "abc".to_string());
    dsn.params.insert("auto.offset.reset".to_string(), "earliest".to_string());

    let builder = TmqBuilder::from_dsn(&dsn)?;
    let mut consumer = builder.build().await?;
    // ANCHOR_END: create_consumer

    // ANCHOR: subscribe
    consumer.subscribe(["topic_meters"]).await?;
    // ANCHOR_END: subscribe

    // ANCHOR: consume
    {
        let mut stream = consumer.stream_with_timeout(Timeout::from_secs(1));

        while let Some((offset, message)) = stream.try_next().await? {
            let topic: &str = offset.topic();
            let database = offset.database();
            let vgroup_id = offset.vgroup_id();
            log::debug!(
                "topic: {}, database: {}, vgroup_id: {}",
                topic,
                database,
                vgroup_id
            );

            match message {
                MessageSet::Meta(meta) => {
                    log::info!("Meta");
                    let raw = meta.as_raw_meta().await?;
                    taos.write_raw_meta(&raw).await?;

                    let json = meta.as_json_meta().await?;
                    let sql = json.to_string();
                    if let Err(err) = taos.exec(sql).await {
                        println!("maybe error: {}", err);
                    }
                }
                MessageSet::Data(mut data) => {
                    log::info!("Data");
                    while let Some(block) = data.fetch_raw_block().await? {
                        log::debug!("data: {:?}", block);
                    }
                }
                MessageSet::MetaData(meta, mut data) => {
                    log::info!("MetaData");
                    let raw = meta.as_raw_meta().await?;
                    taos.write_raw_meta(&raw).await?;

                    let json = meta.as_json_meta().await?;
                    let sql = json.to_string();
                    if let Err(err) = taos.exec(sql).await {
                        println!("maybe error: {}", err);
                    }

                    while let Some(block) = data.fetch_raw_block().await? {
                        log::debug!("data: {:?}", block);
                    }
                }
            }
            consumer.commit(offset).await?;
        }
    }
    // ANCHOR_END: consume

    // ANCHOR: assignments
    let assignments = consumer.assignments().await.unwrap();
    log::info!("assignments: {:?}", assignments);
    // ANCHOR_END: assignments

    // seek offset
    for topic_vec_assignment in assignments {
        let topic = &topic_vec_assignment.0;
        let vec_assignment = topic_vec_assignment.1;
        for assignment in vec_assignment {
            let vgroup_id = assignment.vgroup_id();
            let current = assignment.current_offset();
            let begin = assignment.begin();
            let end = assignment.end();
            log::debug!(
                "topic: {}, vgroup_id: {}, current offset: {}, begin: {}, end: {}",
                topic,
                vgroup_id,
                current,
                begin,
                end
            );
            // ANCHOR: seek_offset
            let res = consumer.offset_seek(topic, vgroup_id, end).await;
            if res.is_err() {
                log::error!("seek offset error: {:?}", res);
                let a = consumer.assignments().await.unwrap();
                log::error!("assignments: {:?}", a);
            }
            // ANCHOR_END: seek_offset
        }

        let topic_assignment = consumer.topic_assignment(topic).await;
        log::debug!("topic assignment: {:?}", topic_assignment);
    }

    // after seek offset
    let assignments = consumer.assignments().await.unwrap();
    log::info!("after seek offset assignments: {:?}", assignments);

    // ANCHOR: unsubscribe
    consumer.unsubscribe().await;
    // ANCHOR_END: unsubscribe

    tokio::time::sleep(Duration::from_secs(1)).await;

    taos.exec_many([
        "drop database db2",
        "drop topic topic_meters",
        "drop database power",
    ])
    .await?;
    Ok(())
}
@@ -201,9 +201,9 @@ TDengine provides two ways to handle expired data, specified by the IGNORE EXPIRED option

 TDengine provides two ways to handle modified data, specified by the IGNORE UPDATE option:

-1. Check whether the data has been modified, i.e. IGNORE UPDATE 0: the default configuration; if the data has been modified, recalculate the corresponding window.
+1. Check whether the data has been modified, i.e. IGNORE UPDATE 0; if the data has been modified, recalculate the corresponding window.

-2. Do not check whether the data has been modified, and calculate all data as incremental data, i.e. IGNORE UPDATE 1.
+2. Do not check whether the data has been modified, and calculate all data as incremental data, i.e. IGNORE UPDATE 1, the default configuration.

 ## Writing to an existing supertable
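The two IGNORE UPDATE modes in the doc hunk above can be modeled as a small decision function. This is an illustrative sketch, not TDengine source; the enum and function names are hypothetical.

```rust
// Sketch of the IGNORE UPDATE semantics described above (not TDengine source).
// IGNORE UPDATE 0 checks for modified rows and recalculates their window;
// IGNORE UPDATE 1 (the default) treats every row as new incremental data.
#[derive(PartialEq)]
enum IgnoreUpdate {
    CheckAndRecalculate, // IGNORE UPDATE 0
    TreatAsIncremental,  // IGNORE UPDATE 1 (default)
}

fn must_recalculate_window(mode: &IgnoreUpdate, row_was_modified: bool) -> bool {
    // Only mode 0 ever triggers a window recalculation, and only for
    // rows that were actually modified.
    *mode == IgnoreUpdate::CheckAndRecalculate && row_was_modified
}

fn main() {
    assert!(must_recalculate_window(&IgnoreUpdate::CheckAndRecalculate, true));
    assert!(!must_recalculate_window(&IgnoreUpdate::TreatAsIncremental, true));
}
```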
@@ -355,6 +355,8 @@ typedef struct SMetaHbInfo SMetaHbInfo;
 typedef struct SDispatchMsgInfo {
   SStreamDispatchReq* pData; // current dispatch data
   int8_t  dispatchMsgType;
+  int64_t checkpointId; // checkpoint id msg
+  int32_t transId;      // transId for current checkpoint
   int16_t msgType;      // dispatch msg type
   int32_t retryCount;   // retry send data count
   int64_t startTs;      // dispatch start time, record total elapsed time for dispatch
@@ -496,6 +496,11 @@ static int32_t hbAsyncCallBack(void *param, SDataBuf *pMsg, int32_t code) {
   if (code != 0) {
     pInst->onlineDnodes = pInst->totalDnodes ? 0 : -1;
     tscDebug("hb rsp error %s, update server status %d/%d", tstrerror(code), pInst->onlineDnodes, pInst->totalDnodes);
+    taosThreadMutexUnlock(&clientHbMgr.lock);
+    taosMemoryFree(pMsg->pData);
+    taosMemoryFree(pMsg->pEpSet);
+    tFreeClientHbBatchRsp(&pRsp);
+    return -1;
   }

   if (rspNum) {
@@ -1626,6 +1626,22 @@ void changeByteEndian(char* pData){
   }
 }
+
+static void tmqGetRawDataRowsPrecisionFromRes(void *pRetrieve, void** rawData, int64_t *rows, int32_t *precision){
+  if(*(int64_t*)pRetrieve == 0){
+    *rawData = ((SRetrieveTableRsp*)pRetrieve)->data;
+    *rows = htobe64(((SRetrieveTableRsp*)pRetrieve)->numOfRows);
+    if(precision != NULL){
+      *precision = ((SRetrieveTableRsp*)pRetrieve)->precision;
+    }
+  }else if(*(int64_t*)pRetrieve == 1){
+    *rawData = ((SRetrieveTableRspForTmq*)pRetrieve)->data;
+    *rows = htobe64(((SRetrieveTableRspForTmq*)pRetrieve)->numOfRows);
+    if(precision != NULL){
+      *precision = ((SRetrieveTableRspForTmq*)pRetrieve)->precision;
+    }
+  }
+}

 static void tmqBuildRspFromWrapperInner(SMqPollRspWrapper* pWrapper, SMqClientVg* pVg, int64_t* numOfRows, SMqRspObj* pRspObj) {
   (*numOfRows) = 0;
   tstrncpy(pRspObj->topic, pWrapper->topicHandle->topicName, TSDB_TOPIC_FNAME_LEN);

@@ -1648,13 +1664,7 @@ static void tmqBuildRspFromWrapperInner(SMqPollRspWrapper* pWrapper, SMqClientVg
     void*   rawData = NULL;
     int64_t rows = 0;
     // deal with compatibility
-    if(*(int64_t*)pRetrieve == 0){
-      rawData = ((SRetrieveTableRsp*)pRetrieve)->data;
-      rows = htobe64(((SRetrieveTableRsp*)pRetrieve)->numOfRows);
-    }else if(*(int64_t*)pRetrieve == 1){
-      rawData = ((SRetrieveTableRspForTmq*)pRetrieve)->data;
-      rows = htobe64(((SRetrieveTableRspForTmq*)pRetrieve)->numOfRows);
-    }
+    tmqGetRawDataRowsPrecisionFromRes(pRetrieve, &rawData, &rows, NULL);

     pVg->numOfRows += rows;
     (*numOfRows) += rows;
@@ -2625,18 +2635,22 @@ SReqResultInfo* tmqGetNextResInfo(TAOS_RES* res, bool convertUcs4) {
   pRspObj->resIter++;

   if (pRspObj->resIter < pRspObj->rsp.blockNum) {
-    SRetrieveTableRspForTmq* pRetrieveTmq =
-        (SRetrieveTableRspForTmq*)taosArrayGetP(pRspObj->rsp.blockData, pRspObj->resIter);
     if (pRspObj->rsp.withSchema) {
       doFreeReqResultInfo(&pRspObj->resInfo);
       SSchemaWrapper* pSW = (SSchemaWrapper*)taosArrayGetP(pRspObj->rsp.blockSchema, pRspObj->resIter);
       setResSchemaInfo(&pRspObj->resInfo, pSW->pSchema, pSW->nCols);
     }

-    pRspObj->resInfo.pData = (void*)pRetrieveTmq->data;
-    pRspObj->resInfo.numOfRows = htobe64(pRetrieveTmq->numOfRows);
+    void*   pRetrieve = taosArrayGetP(pRspObj->rsp.blockData, pRspObj->resIter);
+    void*   rawData = NULL;
+    int64_t rows = 0;
+    int32_t precision = 0;
+    tmqGetRawDataRowsPrecisionFromRes(pRetrieve, &rawData, &rows, &precision);
+
+    pRspObj->resInfo.pData = rawData;
+    pRspObj->resInfo.numOfRows = rows;
     pRspObj->resInfo.current = 0;
-    pRspObj->resInfo.precision = pRetrieveTmq->precision;
+    pRspObj->resInfo.precision = precision;

     // TODO handle the compressed case
     pRspObj->resInfo.totalRows += pRspObj->resInfo.numOfRows;
@@ -160,6 +160,7 @@ typedef struct {
   ETrnConflct conflict;
   ETrnExec    exec;
   EOperType   oper;
+  bool        changeless;
   int32_t     code;
   int32_t     failedTimes;
   void*       rpcRsp;
@@ -81,6 +81,7 @@ void mndTransSetDbName(STrans *pTrans, const char *dbname, const char *stbnam
 void    mndTransSetArbGroupId(STrans *pTrans, int32_t groupId);
 void    mndTransSetSerial(STrans *pTrans);
 void    mndTransSetParallel(STrans *pTrans);
+void    mndTransSetChangeless(STrans *pTrans);
 void    mndTransSetOper(STrans *pTrans, EOperType oper);
 int32_t mndTransCheckConflict(SMnode *pMnode, STrans *pTrans);
 #ifndef BUILD_NO_CALL
@@ -292,16 +292,16 @@ static void storeOffsetRows(SMnode *pMnode, SMqHbReq *req, SMqConsumerObj *pCons
 static int32_t buildMqHbRsp(SRpcMsg *pMsg, SMqHbRsp *rsp){
   int32_t tlen = tSerializeSMqHbRsp(NULL, 0, rsp);
   if (tlen <= 0){
-    return TSDB_CODE_OUT_OF_MEMORY;
+    return TSDB_CODE_TMQ_INVALID_MSG;
   }
   void *buf = rpcMallocCont(tlen);
   if (buf == NULL) {
     return TSDB_CODE_OUT_OF_MEMORY;
   }

-  if(tSerializeSMqHbRsp(buf, tlen, rsp) != 0){
+  if(tSerializeSMqHbRsp(buf, tlen, rsp) <= 0){
     rpcFreeCont(buf);
-    return TSDB_CODE_OUT_OF_MEMORY;
+    return TSDB_CODE_TMQ_INVALID_MSG;
   }
   pMsg->info.rsp = buf;
   pMsg->info.rspLen = tlen;
@@ -316,7 +316,7 @@ static int32_t mndProcessMqHbReq(SRpcMsg *pMsg) {
   SMqConsumerObj *pConsumer = NULL;

   if (tDeserializeSMqHbReq(pMsg->pCont, pMsg->contLen, &req) < 0) {
-    code = TSDB_CODE_OUT_OF_MEMORY;
+    code = TSDB_CODE_TMQ_INVALID_MSG;
     goto end;
   }
@@ -822,7 +822,7 @@ _OVER:
          "msg:%p, type:%s failed to process since %s, mnode restored:%d stopped:%d, sync restored:%d "
          "role:%s, redirect numOfEps:%d inUse:%d, type:%s",
          pMsg, TMSG_INFO(pMsg->msgType), terrstr(), pMnode->restored, pMnode->stopped, state.restored,
-         syncStr(state.restored), epSet.numOfEps, epSet.inUse, TMSG_INFO(pMsg->msgType));
+         syncStr(state.state), epSet.numOfEps, epSet.inUse, TMSG_INFO(pMsg->msgType));

     if (epSet.numOfEps <= 0) return -1;
@@ -739,6 +739,8 @@ void mndTransSetSerial(STrans *pTrans) { pTrans->exec = TRN_EXEC_SERIAL; }

 void mndTransSetParallel(STrans *pTrans) { pTrans->exec = TRN_EXEC_PARALLEL; }

+void mndTransSetChangeless(STrans *pTrans) { pTrans->changeless = true; }
+
 void mndTransSetOper(STrans *pTrans, EOperType oper) { pTrans->oper = oper; }

 static int32_t mndTransSync(SMnode *pMnode, STrans *pTrans) {
@@ -855,6 +857,58 @@ int32_t mndTransCheckConflict(SMnode *pMnode, STrans *pTrans) {
   return 0;
 }

+static bool mndTransActionsOfSameType(SArray *pActions) {
+  int32_t size = taosArrayGetSize(pActions);
+  ETrnAct lastActType = TRANS_ACTION_NULL;
+  bool    same = true;
+  for (int32_t i = 0; i < size; ++i) {
+    STransAction *pAction = taosArrayGet(pActions, i);
+    if (i > 0) {
+      if (lastActType != pAction->actionType) {
+        same = false;
+        break;
+      }
+    }
+    lastActType = pAction->actionType;
+  }
+  return same;
+}
+
+static int32_t mndTransCheckParallelActions(SMnode *pMnode, STrans *pTrans) {
+  if (pTrans->exec == TRN_EXEC_PARALLEL) {
+    if (mndTransActionsOfSameType(pTrans->redoActions) == false) {
+      terrno = TSDB_CODE_MND_TRANS_INVALID_STAGE;
+      mError("trans:%d, types of parallel redo actions are not the same", pTrans->id);
+      return -1;
+    }
+
+    if (pTrans->policy == TRN_POLICY_ROLLBACK) {
+      if (mndTransActionsOfSameType(pTrans->undoActions) == false) {
+        terrno = TSDB_CODE_MND_TRANS_INVALID_STAGE;
+        mError("trans:%d, types of parallel undo actions are not the same", pTrans->id);
+        return -1;
+      }
+    }
+  }
+
+  return 0;
+}
+
+static int32_t mndTransCheckCommitActions(SMnode *pMnode, STrans *pTrans) {
+  if (!pTrans->changeless && taosArrayGetSize(pTrans->commitActions) <= 0) {
+    terrno = TSDB_CODE_MND_TRANS_CLOG_IS_NULL;
+    mError("trans:%d, commit actions of non-changeless trans are empty", pTrans->id);
+    return -1;
+  }
+  if (mndTransActionsOfSameType(pTrans->commitActions) == false) {
+    terrno = TSDB_CODE_MND_TRANS_INVALID_STAGE;
+    mError("trans:%d, types of commit actions are not the same", pTrans->id);
+    return -1;
+  }
+
+  return 0;
+}
+
 int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans) {
   if (pTrans == NULL) return -1;
@@ -862,9 +916,11 @@ int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans) {
     return -1;
   }

-  if (taosArrayGetSize(pTrans->commitActions) <= 0) {
-    terrno = TSDB_CODE_MND_TRANS_CLOG_IS_NULL;
-    mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
+  if (mndTransCheckParallelActions(pMnode, pTrans) != 0) {
+    return -1;
+  }
+
+  if (mndTransCheckCommitActions(pMnode, pTrans) != 0) {
     return -1;
   }
@@ -1281,24 +1337,25 @@ static int32_t mndTransExecuteActions(SMnode *pMnode, STrans *pTrans, SArray *pA
 
 static int32_t mndTransExecuteRedoActions(SMnode *pMnode, STrans *pTrans, bool topHalf) {
   int32_t code = mndTransExecuteActions(pMnode, pTrans, pTrans->redoActions, topHalf);
-  if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS) {
-    mError("failed to execute redoActions since:%s, code:0x%x", terrstr(), terrno);
+  if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS && code != TSDB_CODE_MND_TRANS_CTX_SWITCH) {
+    mError("trans:%d, failed to execute redoActions since:%s, code:0x%x, topHalf:%d", pTrans->id, terrstr(), terrno,
+           topHalf);
   }
   return code;
 }
 
 static int32_t mndTransExecuteUndoActions(SMnode *pMnode, STrans *pTrans, bool topHalf) {
   int32_t code = mndTransExecuteActions(pMnode, pTrans, pTrans->undoActions, topHalf);
-  if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS) {
-    mError("failed to execute undoActions since %s", terrstr());
+  if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS && code != TSDB_CODE_MND_TRANS_CTX_SWITCH) {
+    mError("trans:%d, failed to execute undoActions since %s. topHalf:%d", pTrans->id, terrstr(), topHalf);
   }
   return code;
 }
 
 static int32_t mndTransExecuteCommitActions(SMnode *pMnode, STrans *pTrans, bool topHalf) {
   int32_t code = mndTransExecuteActions(pMnode, pTrans, pTrans->commitActions, topHalf);
-  if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS) {
-    mError("failed to execute commitActions since %s", terrstr());
+  if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS && code != TSDB_CODE_MND_TRANS_CTX_SWITCH) {
+    mError("trans:%d, failed to execute commitActions since %s. topHalf:%d", pTrans->id, terrstr(), topHalf);
   }
   return code;
 }
@@ -2342,24 +2342,7 @@ int32_t mndAddVgroupBalanceToTrans(SMnode *pMnode, SVgObj *pVgroup, STrans *pTra
       return -1;
     }
 
-    if (mndAddAlterVnodeConfirmAction(pMnode, pTrans, pDb, pVgroup) != 0) {
-      mError("trans:%d, vgid:%d failed to be balanced to dnode:%d", pTrans->id, vgid, dnodeId);
-      return -1;
-    }
-
     mndReleaseDb(pMnode, pDb);
-
-    SSdbRaw *pRaw = mndVgroupActionEncode(pVgroup);
-    if (pRaw == NULL) {
-      mError("trans:%d, vgid:%d failed to encode action to dnode:%d", pTrans->id, vgid, dnodeId);
-      return -1;
-    }
-    if (mndTransAppendCommitlog(pTrans, pRaw) != 0) {
-      sdbFreeRaw(pRaw);
-      mError("trans:%d, vgid:%d failed to append commit log dnode:%d", pTrans->id, vgid, dnodeId);
-      return -1;
-    }
-    (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY);
   } else {
     mInfo("trans:%d, vgid:%d cant be balanced to dnode:%d, exist:%d, online:%d", pTrans->id, vgid, dnodeId, exist,
           online);
@@ -79,6 +79,7 @@ int32_t metaSnapRead(SMetaSnapReader* pReader, uint8_t** ppData) {
   int32_t   nKey = 0;
   int32_t   nData = 0;
   STbDbKey  key;
+  SMetaInfo info;
 
   *ppData = NULL;
   for (;;) {
@@ -91,7 +92,8 @@ int32_t metaSnapRead(SMetaSnapReader* pReader, uint8_t** ppData) {
       goto _exit;
     }
 
-    if (key.version < pReader->sver) {
+    if (key.version < pReader->sver  //
+        || metaGetInfo(pReader->pMeta, key.uid, &info, NULL) == TSDB_CODE_NOT_FOUND) {
       tdbTbcMoveToNext(pReader->pTbc);
       continue;
     }
@@ -923,13 +923,14 @@ int32_t handleStep2Async(SStreamTask* pStreamTask, void* param) {
   STaskId      hId = pStreamTask->hTaskInfo.id;
   SStreamTask* pTask = streamMetaAcquireTask(pStreamTask->pMeta, hId.streamId, hId.taskId);
   if (pTask == NULL) {
-    // todo handle error
+    tqWarn("s-task:0x%x failed to acquired it to exec step 2, scan wal quit", (int32_t) hId.taskId);
+    return TSDB_CODE_SUCCESS;
   }
 
   doStartFillhistoryStep2(pTask, pStreamTask, pTq);
 
   streamMetaReleaseTask(pMeta, pTask);
-  return 0;
+  return TSDB_CODE_SUCCESS;
 }
 
 // this function should be executed by only one thread, so we set an sentinel to protect this function
@@ -311,7 +311,7 @@ int32_t tqTaosxScanLog(STQ* pTq, STqHandle* pHandle, SPackedData submit, STaosxR
       SSDataBlock* pBlock = taosArrayGet(pBlocks, i);
       tqAddBlockDataToRsp(pBlock, (SMqDataRsp*)pRsp, taosArrayGetSize(pBlock->pDataBlock),
                           pTq->pVnode->config.tsdbCfg.precision);
-      totalRows += pBlock->info.rows;
+      *totalRows += pBlock->info.rows;
       blockDataFreeRes(pBlock);
       SSchemaWrapper* pSW = taosArrayGetP(pSchemas, i);
       taosArrayPush(pRsp->blockSchema, &pSW);
@@ -850,12 +850,18 @@ int32_t tqStreamTaskProcessTaskResetReq(SStreamMeta* pMeta, SRpcMsg* pMsg) {
 
   tqDebug("s-task:%s receive task-reset msg from mnode, reset status and ready for data processing", pTask->id.idStr);
 
+  taosThreadMutexLock(&pTask->lock);
+
   // clear flag set during do checkpoint, and open inputQ for all upstream tasks
   if (streamTaskGetStatus(pTask)->state == TASK_STATUS__CK) {
+    tqDebug("s-task:%s reset task status from checkpoint, current checkpointingId:%" PRId64 ", transId:%d",
+            pTask->id.idStr, pTask->chkInfo.checkpointingId, pTask->chkInfo.transId);
     streamTaskClearCheckInfo(pTask, true);
     streamTaskSetStatusReady(pTask);
   }
 
+  taosThreadMutexUnlock(&pTask->lock);
+
   streamMetaReleaseTask(pMeta, pTask);
   return TSDB_CODE_SUCCESS;
 }
@@ -2140,6 +2140,9 @@ int32_t buildGroupIdMapForAllTables(STableListInfo* pTableListInfo, SReadHandle*
   }
 
   pTableListInfo->oneTableForEachGroup = groupByTbname;
+  if (numOfTables == 1 && pTableListInfo->idInfo.tableType == TSDB_CHILD_TABLE) {
+    pTableListInfo->oneTableForEachGroup = true;
+  }
 
   if (groupSort && groupByTbname) {
     taosArraySort(pTableListInfo->pTableList, orderbyGroupIdComparFn);
@@ -889,14 +889,15 @@ static SSDataBlock* doGroupedTableScan(SOperatorInfo* pOperator) {
 
   if (pTableScanInfo->countState < TABLE_COUNT_STATE_END) {
     STableListInfo* pTableListInfo = pTableScanInfo->base.pTableListInfo;
     if (pTableListInfo->oneTableForEachGroup || pTableListInfo->groupOffset) {  // group by tbname, group by tag + sort
       if (pTableScanInfo->countState < TABLE_COUNT_STATE_PROCESSED) {
         pTableScanInfo->countState = TABLE_COUNT_STATE_PROCESSED;
         STableKeyInfo* pStart =
             (STableKeyInfo*)tableListGetInfo(pTableScanInfo->base.pTableListInfo, pTableScanInfo->tableStartIndex);
+        if (NULL == pStart) return NULL;
         return getBlockForEmptyTable(pOperator, pStart);
       }
     } else {  // group by tag + no sort
       int32_t numOfTables = tableListGetSize(pTableListInfo);
       if (pTableScanInfo->tableEndIndex + 1 >= numOfTables) {
         // get empty group, mark processed & rm from hash
@@ -983,7 +983,10 @@ static int32_t sysTableUserTagsFillOneTableTags(const SSysTableScanInfo* pInfo,
                             : (3 + DBL_MANT_DIG - DBL_MIN_EXP + VARSTR_HEADER_SIZE);
         tagVarChar = taosMemoryCalloc(1, bufSize + 1);
         int32_t len = -1;
-        convertTagDataToStr(varDataVal(tagVarChar), tagType, tagData, tagLen, &len);
+        if (tagLen > 0)
+          convertTagDataToStr(varDataVal(tagVarChar), tagType, tagData, tagLen, &len);
+        else
+          len = 0;
         varDataSetLen(tagVarChar, len);
       }
     }
@@ -2018,9 +2018,17 @@ static int32_t translateCast(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
     if (IS_STR_DATA_TYPE(para2Type)) {
       para2Bytes -= VARSTR_HEADER_SIZE;
     }
-    if (para2Bytes <= 0 || para2Bytes > 4096) {  // cast dst var type length limits to 4096 bytes
-      return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
-                             "CAST function converted length should be in range (0, 4096] bytes");
+    if (para2Bytes <= 0 ||
+        para2Bytes > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) {  // cast dst var type length limits to 4096 bytes
+      if (TSDB_DATA_TYPE_NCHAR == para2Type) {
+        return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
+                               "CAST function converted length should be in range (0, %d] NCHARS",
+                               (TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE)/TSDB_NCHAR_SIZE);
+      } else {
+        return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
+                               "CAST function converted length should be in range (0, %d] bytes",
+                               TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE);
+      }
     }
 
     // add database precision as param
@@ -3598,15 +3598,15 @@ static int32_t doSaveTupleData(SSerializeDataHandle* pHandle, const void* pBuf,
 int32_t saveTupleData(SqlFunctionCtx* pCtx, int32_t rowIndex, const SSDataBlock* pSrcBlock, STuplePos* pPos) {
   prepareBuf(pCtx);
 
-  SWinKey key;
+  SWinKey key = {0};
   if (pCtx->saveHandle.pBuf == NULL) {
-    SColumnInfoData* pColInfo = taosArrayGet(pSrcBlock->pDataBlock, 0);
-    if (pColInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
-      int64_t skey = *(int64_t*)colDataGetData(pColInfo, rowIndex);
-
-      key.groupId = pSrcBlock->info.id.groupId;
-      key.ts = skey;
+    SColumnInfoData* pColInfo = pCtx->input.pPTS;
+    if (!pColInfo || pColInfo->info.type != TSDB_DATA_TYPE_TIMESTAMP) {
+      pColInfo = taosArrayGet(pSrcBlock->pDataBlock, 0);
     }
+    ASSERT(pColInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP);
+    key.groupId = pSrcBlock->info.id.groupId;
+    key.ts = *(int64_t*)colDataGetData(pColInfo, rowIndex);;
   }
 
   char* buf = serializeTupleData(pSrcBlock, rowIndex, &pCtx->subsidiaries, pCtx->subsidiaries.buf);
@@ -423,6 +423,13 @@ type_name(A) ::= DECIMAL.
 type_name(A) ::= DECIMAL NK_LP NK_INTEGER NK_RP. { A = createDataType(TSDB_DATA_TYPE_DECIMAL); }
 type_name(A) ::= DECIMAL NK_LP NK_INTEGER NK_COMMA NK_INTEGER NK_RP. { A = createDataType(TSDB_DATA_TYPE_DECIMAL); }
 
+%type type_name_default_len { SDataType }
+%destructor type_name_default_len { }
+type_name_default_len(A) ::= BINARY. { A = createVarLenDataType(TSDB_DATA_TYPE_BINARY, NULL); }
+type_name_default_len(A) ::= NCHAR. { A = createVarLenDataType(TSDB_DATA_TYPE_NCHAR, NULL); }
+type_name_default_len(A) ::= VARCHAR. { A = createVarLenDataType(TSDB_DATA_TYPE_VARCHAR, NULL); }
+type_name_default_len(A) ::= VARBINARY. { A = createVarLenDataType(TSDB_DATA_TYPE_VARBINARY, NULL); }
+
 %type tags_def_opt { SNodeList* }
 %destructor tags_def_opt { nodesDestroyList($$); }
 tags_def_opt(A) ::= . { A = NULL; }
@@ -1119,6 +1126,9 @@ function_expression(A) ::= function_name(B) NK_LP expression_list(C) NK_RP(D).
 function_expression(A) ::= star_func(B) NK_LP star_func_para_list(C) NK_RP(D). { A = createRawExprNodeExt(pCxt, &B, &D, createFunctionNode(pCxt, &B, C)); }
 function_expression(A) ::=
   CAST(B) NK_LP expr_or_subquery(C) AS type_name(D) NK_RP(E). { A = createRawExprNodeExt(pCxt, &B, &E, createCastFunctionNode(pCxt, releaseRawExprNode(pCxt, C), D)); }
+function_expression(A) ::=
+  CAST(B) NK_LP expr_or_subquery(C) AS type_name_default_len(D) NK_RP(E). { A = createRawExprNodeExt(pCxt, &B, &E, createCastFunctionNode(pCxt, releaseRawExprNode(pCxt, C), D)); }
+
 function_expression(A) ::= literal_func(B). { A = B; }
 
 literal_func(A) ::= noarg_func(B) NK_LP NK_RP(C). { A = createRawExprNodeExt(pCxt, &B, &C, createFunctionNode(pCxt, &B, NULL)); }
@@ -1641,7 +1641,10 @@ SDataType createDataType(uint8_t type) {
 }
 
 SDataType createVarLenDataType(uint8_t type, const SToken* pLen) {
-  SDataType dt = {.type = type, .precision = 0, .scale = 0, .bytes = taosStr2Int32(pLen->z, NULL, 10)};
+  int32_t len = TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE;
+  if (type == TSDB_DATA_TYPE_NCHAR) len /= TSDB_NCHAR_SIZE;
+  if(pLen) len = taosStr2Int32(pLen->z, NULL, 10);
+  SDataType dt = {.type = type, .precision = 0, .scale = 0, .bytes = len};
  return dt;
 }
@@ -1566,11 +1566,13 @@ static int32_t parseValueTokenImpl(SInsertParseContext* pCxt, const char** pSql,
     case TSDB_DATA_TYPE_NCHAR: {
       // if the converted output len is over than pColumnModel->bytes, return error: 'Argument list too long'
       int32_t len = 0;
-      char*   pUcs4 = taosMemoryCalloc(1, pSchema->bytes - VARSTR_HEADER_SIZE);
+      int64_t realLen = pToken->n << 2;
+      if (realLen > pSchema->bytes - VARSTR_HEADER_SIZE) realLen = pSchema->bytes - VARSTR_HEADER_SIZE;
+      char*   pUcs4 = taosMemoryMalloc(realLen);
       if (NULL == pUcs4) {
         return TSDB_CODE_OUT_OF_MEMORY;
       }
-      if (!taosMbsToUcs4(pToken->z, pToken->n, (TdUcs4*)pUcs4, pSchema->bytes - VARSTR_HEADER_SIZE, &len)) {
+      if (!taosMbsToUcs4(pToken->z, pToken->n, (TdUcs4*)pUcs4, realLen, &len)) {
         taosMemoryFree(pUcs4);
         if (errno == E2BIG) {
           return generateSyntaxErrMsg(&pCxt->msg, TSDB_CODE_PAR_VALUE_TOO_LONG, pSchema->name);
@@ -2121,7 +2121,6 @@ static int32_t translateMultiResFunc(STranslateContext* pCxt, SFunctionNode* pFu
   }
   if (tsKeepColumnName && 1 == LIST_LENGTH(pFunc->pParameterList) && !pFunc->node.asAlias && !pFunc->node.asParam) {
     strcpy(pFunc->node.userAlias, ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->userAlias);
-    strcpy(pFunc->node.aliasName, pFunc->node.userAlias);
   }
   return TSDB_CODE_SUCCESS;
 }
@@ -2713,6 +2712,29 @@ static EDealRes rewriteExprToGroupKeyFunc(STranslateContext* pCxt, SNode** pNode
   return (TSDB_CODE_SUCCESS == pCxt->errCode ? DEAL_RES_IGNORE_CHILD : DEAL_RES_ERROR);
 }
 
+static bool isTbnameFuction(SNode* pNode) {
+  return QUERY_NODE_FUNCTION == nodeType(pNode) && FUNCTION_TYPE_TBNAME == ((SFunctionNode*)pNode)->funcType;
+}
+
+static bool hasTbnameFunction(SNodeList* pPartitionByList) {
+  SNode* pPartKey = NULL;
+  FOREACH(pPartKey, pPartitionByList) {
+    if (isTbnameFuction(pPartKey)) {
+      return true;
+    }
+  }
+  return false;
+}
+
+static bool fromSubtable(SNode* table) {
+  if (NULL == table) return false;
+  if (table->type == QUERY_NODE_REAL_TABLE && ((SRealTableNode*)table)->pMeta &&
+      ((SRealTableNode*)table)->pMeta->tableType == TSDB_CHILD_TABLE) {
+    return true;
+  }
+  return false;
+}
+
 static EDealRes doCheckExprForGroupBy(SNode** pNode, void* pContext) {
   STranslateContext* pCxt = (STranslateContext*)pContext;
   SSelectStmt*       pSelect = (SSelectStmt*)pCxt->pCurrStmt;
@@ -2724,15 +2746,25 @@ static EDealRes doCheckExprForGroupBy(SNode** pNode, void* pContext) {
   }
   SNode* pGroupNode = NULL;
   FOREACH(pGroupNode, getGroupByList(pCxt)) {
-    if (nodesEqualNode(getGroupByNode(pGroupNode), *pNode)) {
+    SNode* pActualNode = getGroupByNode(pGroupNode);
+    if (nodesEqualNode(pActualNode, *pNode)) {
       return DEAL_RES_IGNORE_CHILD;
     }
+    if (isTbnameFuction(pActualNode) && QUERY_NODE_COLUMN == nodeType(*pNode) &&
+        ((SColumnNode*)*pNode)->colType == COLUMN_TYPE_TAG) {
+      return rewriteExprToGroupKeyFunc(pCxt, pNode);
+    }
   }
   SNode* pPartKey = NULL;
+  bool   partionByTbname = hasTbnameFunction(pSelect->pPartitionByList);
   FOREACH(pPartKey, pSelect->pPartitionByList) {
     if (nodesEqualNode(pPartKey, *pNode)) {
       return rewriteExprToGroupKeyFunc(pCxt, pNode);
     }
+    if ((partionByTbname) && QUERY_NODE_COLUMN == nodeType(*pNode) &&
+        ((SColumnNode*)*pNode)->colType == COLUMN_TYPE_TAG) {
+      return rewriteExprToGroupKeyFunc(pCxt, pNode);
+    }
   }
   if (NULL != pSelect->pWindow && QUERY_NODE_STATE_WINDOW == nodeType(pSelect->pWindow)) {
     if (nodesEqualNode(((SStateWindowNode*)pSelect->pWindow)->pExpr, *pNode)) {
@@ -2796,11 +2828,19 @@ static EDealRes doCheckAggColCoexist(SNode** pNode, void* pContext) {
     return DEAL_RES_IGNORE_CHILD;
   }
   SNode* pPartKey = NULL;
+  bool   partionByTbname = false;
+  if (fromSubtable(((SSelectStmt*)pCxt->pTranslateCxt->pCurrStmt)->pFromTable) ||
+      hasTbnameFunction(((SSelectStmt*)pCxt->pTranslateCxt->pCurrStmt)->pPartitionByList)) {
+    partionByTbname = true;
+  }
   FOREACH(pPartKey, ((SSelectStmt*)pCxt->pTranslateCxt->pCurrStmt)->pPartitionByList) {
     if (nodesEqualNode(pPartKey, *pNode)) {
       return rewriteExprToGroupKeyFunc(pCxt->pTranslateCxt, pNode);
     }
   }
+  if (partionByTbname && QUERY_NODE_COLUMN == nodeType(*pNode) && ((SColumnNode*)*pNode)->colType == COLUMN_TYPE_TAG) {
+    return rewriteExprToGroupKeyFunc(pCxt->pTranslateCxt, pNode);
+  }
   if (isScanPseudoColumnFunc(*pNode) || QUERY_NODE_COLUMN == nodeType(*pNode)) {
     pCxt->existCol = true;
   }
@@ -3970,22 +4010,12 @@ static int32_t checkStateExpr(STranslateContext* pCxt, SNode* pNode) {
   return TSDB_CODE_SUCCESS;
 }
 
-static bool hasPartitionByTbname(SNodeList* pPartitionByList) {
-  SNode* pPartKey = NULL;
-  FOREACH(pPartKey, pPartitionByList) {
-    if (QUERY_NODE_FUNCTION == nodeType(pPartKey) && FUNCTION_TYPE_TBNAME == ((SFunctionNode*)pPartKey)->funcType) {
-      return true;
-    }
-  }
-  return false;
-}
-
 static int32_t checkStateWindowForStream(STranslateContext* pCxt, SSelectStmt* pSelect) {
   if (!pCxt->createStream) {
     return TSDB_CODE_SUCCESS;
   }
   if (TSDB_SUPER_TABLE == ((SRealTableNode*)pSelect->pFromTable)->pMeta->tableType &&
-      !hasPartitionByTbname(pSelect->pPartitionByList)) {
+      !hasTbnameFunction(pSelect->pPartitionByList)) {
     return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STREAM_QUERY, "Unsupported stream query");
   }
   return TSDB_CODE_SUCCESS;
@@ -7706,12 +7736,12 @@ static int32_t translateKillTransaction(STranslateContext* pCxt, SKillStmt* pStm
 static bool crossTableWithoutAggOper(SSelectStmt* pSelect) {
   return NULL == pSelect->pWindow && !pSelect->hasAggFuncs && !pSelect->hasIndefiniteRowsFunc &&
          !pSelect->hasInterpFunc && TSDB_SUPER_TABLE == ((SRealTableNode*)pSelect->pFromTable)->pMeta->tableType &&
-         !hasPartitionByTbname(pSelect->pPartitionByList);
+         !hasTbnameFunction(pSelect->pPartitionByList);
 }
 
 static bool crossTableWithUdaf(SSelectStmt* pSelect) {
   return pSelect->hasUdaf && TSDB_SUPER_TABLE == ((SRealTableNode*)pSelect->pFromTable)->pMeta->tableType &&
-         !hasPartitionByTbname(pSelect->pPartitionByList);
+         !hasTbnameFunction(pSelect->pPartitionByList);
 }
 
 static int32_t checkCreateStream(STranslateContext* pCxt, SCreateStreamStmt* pStmt) {
|
@ -7967,10 +7997,10 @@ static int32_t subtableExprHasColumnOrPseudoColumn(SNode* pNode) {
|
||||||
|
|
||||||
static int32_t checkStreamQuery(STranslateContext* pCxt, SCreateStreamStmt* pStmt) {
|
static int32_t checkStreamQuery(STranslateContext* pCxt, SCreateStreamStmt* pStmt) {
|
||||||
SSelectStmt* pSelect = (SSelectStmt*)pStmt->pQuery;
|
SSelectStmt* pSelect = (SSelectStmt*)pStmt->pQuery;
|
||||||
if ((SRealTableNode*)pSelect->pFromTable && ((SRealTableNode*)pSelect->pFromTable)->pMeta &&
|
if ( (SRealTableNode*)pSelect->pFromTable && ((SRealTableNode*)pSelect->pFromTable)->pMeta
|
||||||
TSDB_SUPER_TABLE == ((SRealTableNode*)pSelect->pFromTable)->pMeta->tableType &&
|
&& TSDB_SUPER_TABLE == ((SRealTableNode*)pSelect->pFromTable)->pMeta->tableType
|
||||||
!hasPartitionByTbname(pSelect->pPartitionByList) && pSelect->pWindow != NULL &&
|
&& !hasTbnameFunction(pSelect->pPartitionByList)
|
||||||
pSelect->pWindow->type == QUERY_NODE_EVENT_WINDOW) {
|
&& pSelect->pWindow != NULL && pSelect->pWindow->type == QUERY_NODE_EVENT_WINDOW) {
|
||||||
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STREAM_QUERY,
|
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STREAM_QUERY,
|
||||||
"Event window for stream on super table must patitioned by table name");
|
"Event window for stream on super table must patitioned by table name");
|
||||||
}
|
}
|
||||||
|
@ -7995,9 +8025,9 @@ static int32_t checkStreamQuery(STranslateContext* pCxt, SCreateStreamStmt* pStm
|
||||||
}
|
}
|
||||||
|
|
||||||
if (pSelect->pWindow != NULL && pSelect->pWindow->type == QUERY_NODE_COUNT_WINDOW) {
|
if (pSelect->pWindow != NULL && pSelect->pWindow->type == QUERY_NODE_COUNT_WINDOW) {
|
||||||
if ((SRealTableNode*)pSelect->pFromTable && ((SRealTableNode*)pSelect->pFromTable)->pMeta &&
|
if ( (SRealTableNode*)pSelect->pFromTable && ((SRealTableNode*)pSelect->pFromTable)->pMeta
|
||||||
TSDB_SUPER_TABLE == ((SRealTableNode*)pSelect->pFromTable)->pMeta->tableType &&
|
&& TSDB_SUPER_TABLE == ((SRealTableNode*)pSelect->pFromTable)->pMeta->tableType
|
||||||
!hasPartitionByTbname(pSelect->pPartitionByList)) {
|
&& !hasTbnameFunction(pSelect->pPartitionByList) ) {
|
||||||
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STREAM_QUERY,
|
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STREAM_QUERY,
|
||||||
"Count window for stream on super table must patitioned by table name");
|
"Count window for stream on super table must patitioned by table name");
|
||||||
}
|
}
|
||||||
|
|
(File diff suppressed because it is too large)
@@ -521,6 +521,9 @@ static int32_t createScanLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
     } else if (pSelect->pPartitionByList) {
       isCountByTag = !keysHasCol(pSelect->pPartitionByList);
     }
+    if (pScan->tableType == TSDB_CHILD_TABLE) {
+      isCountByTag = true;
+    }
   }
   pScan->isCountByTag = isCountByTag;
 
@@ -486,6 +486,7 @@ int32_t schHandleNotifyCallback(void *param, SDataBuf *pMsg, int32_t code) {
   SSchTaskCallbackParam *pParam = (SSchTaskCallbackParam *)param;
   qDebug("QID:0x%" PRIx64 ",TID:0x%" PRIx64 " task notify rsp received, code:0x%x", pParam->queryId, pParam->taskId,
          code);
+  rpcReleaseHandle(pMsg->handle, TAOS_CONN_CLIENT);
   if (pMsg) {
     taosMemoryFree(pMsg->pData);
     taosMemoryFree(pMsg->pEpSet);
@@ -526,6 +527,7 @@ int32_t schHandleHbCallback(void *param, SDataBuf *pMsg, int32_t code) {
 
   if (code) {
     qError("hb rsp error:%s", tstrerror(code));
+    rpcReleaseHandle(pMsg->handle, TAOS_CONN_CLIENT);
     SCH_ERR_JRET(code);
   }
 
@@ -1181,7 +1183,7 @@ int32_t schBuildAndSendMsg(SSchJob *pJob, SSchTask *pTask, SQueryNodeAddr *addr,
       qMsg.queryId = pJob->queryId;
       qMsg.taskId = pTask->taskId;
      qMsg.refId = pJob->refId;
-      qMsg.execId = pTask->execId;
+      qMsg.execId = *(int32_t*)param;
 
       msgSize = tSerializeSTaskDropReq(NULL, 0, &qMsg);
       if (msgSize < 0) {
@@ -371,14 +371,13 @@ int32_t schChkUpdateRedirectCtx(SSchJob *pJob, SSchTask *pTask, SEpSet *pEpSet,
     pCtx->roundTotal = pEpSet->numOfEps;
   }

-
   if (pCtx->roundTimes >= pCtx->roundTotal) {
     int64_t nowTs = taosGetTimestampMs();
     int64_t lastTime = nowTs - pCtx->startTs;
     if (lastTime > tsMaxRetryWaitTime) {
       SCH_TASK_DLOG("task no more redirect retry since timeout, now:%" PRId64 ", start:%" PRId64 ", max:%d, total:%d",
                     nowTs, pCtx->startTs, tsMaxRetryWaitTime, pCtx->totalTimes);
       pJob->noMoreRetry = true;
       SCH_ERR_RET(SCH_GET_REDIRECT_CODE(pJob, rspCode));
     }

@@ -418,7 +417,7 @@ void schResetTaskForRetry(SSchJob *pJob, SSchTask *pTask) {
   taosMemoryFreeClear(pTask->msg);
   pTask->msgLen = 0;
   pTask->lastMsgType = 0;
   pTask->childReady = 0;
   memset(&pTask->succeedAddr, 0, sizeof(pTask->succeedAddr));
 }

@@ -505,11 +504,11 @@ int32_t schHandleTaskSetRetry(SSchJob *pJob, SSchTask *pTask, SDataBuf *pData, i
     pLevel->taskExecDoneNum = 0;
     pLevel->taskLaunchedNum = 0;
   }

   SCH_RESET_JOB_LEVEL_IDX(pJob);

   code = schDoTaskRedirect(pJob, pTask, pData, rspCode);

   taosMemoryFreeClear(pData->pData);
   taosMemoryFreeClear(pData->pEpSet);

@@ -627,7 +626,7 @@ int32_t schTaskCheckSetRetry(SSchJob *pJob, SSchTask *pTask, int32_t errCode, bo
                   pTask->maxRetryTimes);
     return TSDB_CODE_SUCCESS;
   }

   if (TSDB_CODE_SCH_TIMEOUT_ERROR == errCode) {
     pTask->maxExecTimes++;
     pTask->maxRetryTimes++;
@@ -862,7 +861,9 @@ void schDropTaskOnExecNode(SSchJob *pJob, SSchTask *pTask) {
   while (nodeInfo) {
     if (nodeInfo->handle) {
       SCH_SET_TASK_HANDLE(pTask, nodeInfo->handle);
-      schBuildAndSendMsg(pJob, pTask, &nodeInfo->addr, TDMT_SCH_DROP_TASK, NULL);
+      void *pExecId = taosHashGetKey(nodeInfo, NULL);
+      schBuildAndSendMsg(pJob, pTask, &nodeInfo->addr, TDMT_SCH_DROP_TASK, pExecId);
+
       SCH_TASK_DLOG("start to drop task's %dth execNode", i);
     } else {
       SCH_TASK_DLOG("no need to drop task %dth execNode", i);
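The hunk above changes the drop path so each exec node receives the execId of its own execution attempt (taken from the hash key) rather than the task's latest execId. A minimal standalone sketch of that idea follows; all types and names here are hypothetical stand-ins, not the TDengine scheduler API:

```c
#include <assert.h>
#include <stdint.h>

// One entry per exec node: which node ran the task, and the id of the
// execution attempt on that node.
typedef struct {
  int32_t execId;
  int32_t nodeId;
} ExecNode;

// Record the execId carried by each drop request in `out`; returns the
// number of drop requests "sent".
static int dropTaskOnAllNodes(const ExecNode *nodes, int n, int32_t *out) {
  int sent = 0;
  for (int i = 0; i < n; i++) {
    // Previously a single shared "current" execId would be used here; using
    // nodes[i].execId targets the exact attempt each node is running.
    out[sent++] = nodes[i].execId;
  }
  return sent;
}
```

With two nodes running attempts 3 and 5, the drops carry 3 and 5 respectively, instead of the same latest id twice.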
@@ -901,7 +902,6 @@ int32_t schNotifyTaskOnExecNode(SSchJob *pJob, SSchTask *pTask, ETaskNotifyType
   return TSDB_CODE_SUCCESS;
 }

-
 int32_t schProcessOnTaskStatusRsp(SQueryNodeEpId *pEpId, SArray *pStatusList) {
   int32_t taskNum = (int32_t)taosArrayGetSize(pStatusList);
   SSchTask *pTask = NULL;

@@ -1269,7 +1269,7 @@ int32_t schNotifyTaskInHashList(SSchJob *pJob, SHashObj *list, ETaskNotifyType t
   int32_t code = TSDB_CODE_SUCCESS;

   SCH_ERR_RET(schNotifyTaskOnExecNode(pJob, pCurrTask, type));

   void *pIter = taosHashIterate(list, NULL);
   while (pIter) {
     SSchTask *pTask = *(SSchTask **)pIter;

@@ -1277,7 +1277,7 @@ int32_t schNotifyTaskInHashList(SSchJob *pJob, SHashObj *list, ETaskNotifyType t
     SCH_LOCK_TASK(pTask);
     code = schNotifyTaskOnExecNode(pJob, pTask, type);
     SCH_UNLOCK_TASK(pTask);

     if (TSDB_CODE_SUCCESS != code) {
       break;
     }

@@ -1289,7 +1289,6 @@ int32_t schNotifyTaskInHashList(SSchJob *pJob, SHashObj *list, ETaskNotifyType t
   SCH_RET(code);
 }

-
 int32_t schExecRemoteFetch(SSchJob *pJob, SSchTask *pTask) {
   SCH_RET(schBuildAndSendMsg(pJob, pJob->fetchTask, &pJob->resNode, SCH_FETCH_TYPE(pJob->fetchTask), NULL));
 }
@@ -278,6 +278,7 @@ void streamTaskClearCheckInfo(SStreamTask* pTask, bool clearChkpReadyMsg) {
   pTask->chkInfo.numOfNotReady = 0;
   pTask->chkInfo.transId = 0;
   pTask->chkInfo.dispatchCheckpointTrigger = false;
+  pTask->chkInfo.downstreamAlignNum = 0;

   streamTaskOpenAllUpstreamInput(pTask);  // open inputQ for all upstream tasks
   if (clearChkpReadyMsg) {

@@ -321,6 +321,8 @@ void clearBufferedDispatchMsg(SStreamTask* pTask) {
     destroyDispatchMsg(pMsgInfo->pData, getNumOfDispatchBranch(pTask));
   }

+  pMsgInfo->checkpointId = -1;
+  pMsgInfo->transId = -1;
   pMsgInfo->pData = NULL;
   pMsgInfo->dispatchMsgType = 0;
 }
@@ -332,6 +334,12 @@ static int32_t doBuildDispatchMsg(SStreamTask* pTask, const SStreamDataBlock* pD

   pTask->msgInfo.dispatchMsgType = pData->type;

+  if (pData->type == STREAM_INPUT__CHECKPOINT_TRIGGER) {
+    SSDataBlock* p = taosArrayGet(pData->blocks, 0);
+    pTask->msgInfo.checkpointId = p->info.version;
+    pTask->msgInfo.transId = p->info.window.ekey;
+  }
+
   if (pTask->outputInfo.type == TASK_OUTPUT__FIXED_DISPATCH) {
     SStreamDispatchReq* pReq = taosMemoryCalloc(1, sizeof(SStreamDispatchReq));

@@ -950,9 +958,21 @@ void streamClearChkptReadyMsg(SStreamTask* pTask) {
 // this message has been sent successfully, let's try next one.
 static int32_t handleDispatchSuccessRsp(SStreamTask* pTask, int32_t downstreamId) {
   stDebug("s-task:%s destroy dispatch msg:%p", pTask->id.idStr, pTask->msgInfo.pData);

   bool delayDispatch = (pTask->msgInfo.dispatchMsgType == STREAM_INPUT__CHECKPOINT_TRIGGER);
   if (delayDispatch) {
-    pTask->chkInfo.dispatchCheckpointTrigger = true;
+    taosThreadMutexLock(&pTask->lock);
+    // we only set the dispatch msg info for current checkpoint trans
+    if (streamTaskGetStatus(pTask)->state == TASK_STATUS__CK && pTask->chkInfo.checkpointingId == pTask->msgInfo.checkpointId) {
+      ASSERT(pTask->chkInfo.transId == pTask->msgInfo.transId);
+      pTask->chkInfo.dispatchCheckpointTrigger = true;
+      stDebug("s-task:%s checkpoint-trigger msg rsp for checkpointId:%" PRId64 " transId:%d confirmed",
+              pTask->id.idStr, pTask->msgInfo.checkpointId, pTask->msgInfo.transId);
+    } else {
+      stWarn("s-task:%s checkpoint-trigger msg rsp for checkpointId:%" PRId64 " transId:%d discard, since expired",
+             pTask->id.idStr, pTask->msgInfo.checkpointId, pTask->msgInfo.transId);
+    }
+    taosThreadMutexUnlock(&pTask->lock);
   }

   clearBufferedDispatchMsg(pTask);
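The hunk above only acknowledges a checkpoint-trigger dispatch response while the matching checkpoint transaction is still active; a stale response is discarded. A minimal standalone model of that guard follows. The real code takes pTask->lock around the check; locking is elided here, and all names are illustrative rather than the TDengine API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
  bool    inCheckpoint;        // stand-in for state == TASK_STATUS__CK
  int64_t activeCheckpointId;  // checkpoint currently being taken
  bool    triggerDispatched;
} Task;

// Returns true when the response was accepted, false when it is expired.
static bool confirmTriggerRsp(Task* t, int64_t rspCheckpointId) {
  if (t->inCheckpoint && t->activeCheckpointId == rspCheckpointId) {
    t->triggerDispatched = true;  // only the current transaction may confirm
    return true;
  }
  return false;  // response belongs to an older, abandoned checkpoint
}
```

A response for checkpoint 7 is accepted while checkpoint 7 is in progress; a late response for checkpoint 6 leaves the state untouched.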
@@ -119,7 +119,11 @@ int32_t streamReExecScanHistoryFuture(SStreamTask* pTask, int32_t idleDuration)

   // add ref for task
   SStreamTask* p = streamMetaAcquireTask(pTask->pMeta, pTask->id.streamId, pTask->id.taskId);
-  ASSERT(p != NULL);
+  if (p == NULL) {
+    stError("s-task:0x%x failed to acquire task, status:%s, not exec scan-history data", pTask->id.taskId,
+            streamTaskGetStatus(pTask)->name);
+    return TSDB_CODE_SUCCESS;
+  }

   pTask->schedHistoryInfo.numOfTicks = numOfTicks;

@@ -380,12 +380,12 @@ void tFreeStreamTask(SStreamTask* pTask) {
   }

   if (pTask->hTaskInfo.pTimer != NULL) {
-    taosTmrStop(pTask->hTaskInfo.pTimer);
+    /*bool ret = */taosTmrStop(pTask->hTaskInfo.pTimer);
     pTask->hTaskInfo.pTimer = NULL;
   }

   if (pTask->msgInfo.pTimer != NULL) {
-    taosTmrStop(pTask->msgInfo.pTimer);
+    /*bool ret = */taosTmrStop(pTask->msgInfo.pTimer);
     pTask->msgInfo.pTimer = NULL;
   }

@@ -543,8 +543,6 @@ void streamTaskSetStatusReady(SStreamTask* pTask) {
     return;
   }

-  taosThreadMutexLock(&pTask->lock);
-
   pSM->prev.state = pSM->current;
   pSM->prev.evt = 0;

@@ -552,8 +550,6 @@ void streamTaskSetStatusReady(SStreamTask* pTask) {
   pSM->startTs = taosGetTimestampMs();
   pSM->pActiveTrans = NULL;
   taosArrayClear(pSM->pWaitingEventList);

-  taosThreadMutexUnlock(&pTask->lock);
-
 }

 STaskStateTrans createStateTransform(ETaskStatus current, ETaskStatus next, EStreamTaskEvent event, __state_trans_fn fn,
@@ -276,16 +276,18 @@ int32_t syncForceBecomeFollower(SSyncNode* ths, const SRpcMsg* pRpcMsg) {

 int32_t syncBecomeAssignedLeader(SSyncNode* ths, SRpcMsg* pRpcMsg) {
   int32_t ret = -1;
+  int32_t errcode = TSDB_CODE_MND_ARB_TOKEN_MISMATCH;
+  void*   pHead = NULL;
+  int32_t contLen = 0;

   SVArbSetAssignedLeaderReq req = {0};
   if (tDeserializeSVArbSetAssignedLeaderReq((char*)pRpcMsg->pCont + sizeof(SMsgHead), pRpcMsg->contLen, &req) != 0) {
     sError("vgId:%d, failed to deserialize SVArbSetAssignedLeaderReq", ths->vgId);
     terrno = TSDB_CODE_INVALID_MSG;
-    return -1;
+    errcode = terrno;
+    goto _OVER;
   }

-  int32_t errcode = TSDB_CODE_MND_ARB_TOKEN_MISMATCH;
-
   if (ths->arbTerm > req.arbTerm) {
     sInfo("vgId:%d, skip to set assigned leader, msg with lower term, local:%" PRId64 "msg:%" PRId64, ths->vgId,
           ths->arbTerm, req.arbTerm);
@@ -294,50 +296,58 @@ int32_t syncBecomeAssignedLeader(SSyncNode* ths, SRpcMsg* pRpcMsg) {

   ths->arbTerm = TMAX(req.arbTerm, ths->arbTerm);

-  if (strncmp(req.memberToken, ths->arbToken, TSDB_ARB_TOKEN_SIZE) == 0) {
-    if (ths->state != TAOS_SYNC_STATE_ASSIGNED_LEADER) {
-      raftStoreNextTerm(ths);
-      if (terrno != TSDB_CODE_SUCCESS) {
-        sError("vgId:%d, failed to set next term since:%s", ths->vgId, terrstr());
-        goto _OVER;
-      }
-      syncNodeBecomeAssignedLeader(ths);
-
-      if (syncNodeAppendNoop(ths) < 0) {
-        sError("vgId:%d, assigned leader failed to append noop entry since %s", ths->vgId, terrstr());
-      }
-    }
-    errcode = TSDB_CODE_SUCCESS;
-  } else {
+  if (strncmp(req.memberToken, ths->arbToken, TSDB_ARB_TOKEN_SIZE) != 0) {
     sInfo("vgId:%d, skip to set assigned leader, token mismatch, local:%s, msg:%s", ths->vgId, ths->arbToken,
           req.memberToken);
     goto _OVER;
   }

+  if (ths->state != TAOS_SYNC_STATE_ASSIGNED_LEADER) {
+    terrno = TSDB_CODE_SUCCESS;
+    raftStoreNextTerm(ths);
+    if (terrno != TSDB_CODE_SUCCESS) {
+      sError("vgId:%d, failed to set next term since:%s", ths->vgId, terrstr());
+      errcode = terrno;
+      goto _OVER;
+    }
+    syncNodeBecomeAssignedLeader(ths);
+
+    if (syncNodeAppendNoop(ths) < 0) {
+      sError("vgId:%d, assigned leader failed to append noop entry since %s", ths->vgId, terrstr());
+    }
+  }

   SVArbSetAssignedLeaderRsp rsp = {0};
   rsp.arbToken = req.arbToken;
   rsp.memberToken = req.memberToken;
   rsp.vgId = ths->vgId;

-  int32_t contLen = tSerializeSVArbSetAssignedLeaderRsp(NULL, 0, &rsp);
+  contLen = tSerializeSVArbSetAssignedLeaderRsp(NULL, 0, &rsp);
   if (contLen <= 0) {
     sError("vgId:%d, failed to serialize SVArbSetAssignedLeaderRsp", ths->vgId);
     terrno = TSDB_CODE_OUT_OF_MEMORY;
+    errcode = terrno;
     goto _OVER;
   }
-  void* pHead = rpcMallocCont(contLen);
+  pHead = rpcMallocCont(contLen);
   if (!pHead) {
     sError("vgId:%d, failed to malloc memory for SVArbSetAssignedLeaderRsp", ths->vgId);
     terrno = TSDB_CODE_OUT_OF_MEMORY;
+    errcode = terrno;
     goto _OVER;
   }
   if (tSerializeSVArbSetAssignedLeaderRsp(pHead, contLen, &rsp) <= 0) {
     sError("vgId:%d, failed to serialize SVArbSetAssignedLeaderRsp", ths->vgId);
     terrno = TSDB_CODE_OUT_OF_MEMORY;
+    errcode = terrno;
     rpcFreeCont(pHead);
     goto _OVER;
   }

+  errcode = TSDB_CODE_SUCCESS;
+  ret = 0;
+
+_OVER:;
   SRpcMsg rspMsg = {
       .code = errcode,
       .pCont = pHead,

@@ -347,9 +357,6 @@ int32_t syncBecomeAssignedLeader(SSyncNode* ths, SRpcMsg* pRpcMsg) {

   tmsgSendRsp(&rspMsg);

-  ret = 0;
-
-_OVER:
   tFreeSVArbSetAssignedLeaderReq(&req);
   return ret;
 }
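The refactor above moves the `_OVER:` label before the response is built, so the function replies exactly once with whatever error code has accumulated, instead of returning early without answering. A minimal sketch of that single-exit pattern follows; the names are illustrative, not the TDengine API:

```c
#include <assert.h>

enum { OK = 0, ERR_BAD_MSG = 1, ERR_TOKEN_MISMATCH = 2 };

static int g_lastRspCode;                 // stands in for tmsgSendRsp()
static void sendRsp(int code) { g_lastRspCode = code; }

static int handleRequest(int msgValid, int tokenMatches) {
  int ret = -1;
  int errcode = ERR_TOKEN_MISMATCH;       // pessimistic default, like the diff

  if (!msgValid) {
    errcode = ERR_BAD_MSG;
    goto _OVER;
  }
  if (!tokenMatches) {
    goto _OVER;                           // keeps the token-mismatch default
  }

  errcode = OK;
  ret = 0;

_OVER:
  sendRsp(errcode);                       // reply exactly once, on every path
  return ret;
}
```

Every early `goto _OVER` now still produces a response carrying the specific error, which is the behavior the hunk restores for the deserialize-failure path.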
@@ -19,16 +19,12 @@ extern "C" {
 #endif

 #include <uv.h>
-#include "os.h"
-#include "taoserror.h"
 #include "theap.h"
-#include "tmisce.h"
 #include "tmsg.h"
 #include "transLog.h"
 #include "transportInt.h"
 #include "trpc.h"
 #include "ttrace.h"
-#include "tutil.h"

 typedef bool (*FilteFunc)(void* arg);

@@ -115,9 +111,12 @@ typedef SRpcConnInfo STransHandleInfo;

 // ref mgt handle
 typedef struct SExHandle {
   void*    handle;
   int64_t  refId;
   void*    pThrd;
+  queue    q;
+  int8_t   inited;
+  SRWLatch latch;
 } SExHandle;

 typedef struct {
@@ -92,6 +92,7 @@ typedef struct SCliMsg {
   int64_t  refId;
   uint64_t st;
   int      sent;  //(0: no send, 1: alread sent)
+  queue    seqq;
 } SCliMsg;

 typedef struct SCliThrd {

@@ -121,11 +122,7 @@ typedef struct SCliThrd {
   SHashObj* batchCache;

   SCliMsg* stopMsg;
-  bool     quit;
-
-  int       newConnCount;
-  SHashObj* msgCount;
+  bool     quit;
 } SCliThrd;

 typedef struct SCliObj {
@@ -262,10 +259,8 @@ static void cliWalkCb(uv_handle_t* handle, void* arg);
     }                                                \
     if (i == sz) {                                   \
       pMsg = NULL;                                   \
-      tDebug("msg not found, %" PRIu64 "", ahandle); \
     } else {                                         \
       pMsg = transQueueRm(&conn->cliMsgs, i);        \
-      tDebug("msg found, %" PRIu64 "", ahandle);     \
     }                                                \
   } while (0)

@@ -343,6 +338,34 @@ bool cliMaySendCachedMsg(SCliConn* conn) {
 _RETURN:
   return false;
 }
+bool cliConnSendSeqMsg(int64_t refId, SCliConn* conn) {
+  if (refId == 0) return false;
+  SExHandle* exh = transAcquireExHandle(transGetRefMgt(), refId);
+  if (exh == NULL) {
+    tDebug("release conn %p, refId: %" PRId64 "", conn, refId);
+    return false;
+  }
+  taosWLockLatch(&exh->latch);
+  if (exh->handle == NULL) exh->handle = conn;
+  exh->inited = 1;
+  if (!QUEUE_IS_EMPTY(&exh->q)) {
+    queue* h = QUEUE_HEAD(&exh->q);
+    QUEUE_REMOVE(h);
+    taosWUnLockLatch(&exh->latch);
+    SCliMsg* t = QUEUE_DATA(h, SCliMsg, seqq);
+    transCtxMerge(&conn->ctx, &t->ctx->appCtx);
+    transQueuePush(&conn->cliMsgs, t);
+    tDebug("pop from conn %p, refId: %" PRId64 "", conn, refId);
+    transReleaseExHandle(transGetRefMgt(), refId);
+    cliSend(conn);
+    return true;
+  }
+  taosWUnLockLatch(&exh->latch);
+  tDebug("empty conn %p, refId: %" PRId64 "", conn, refId);
+  transReleaseExHandle(transGetRefMgt(), refId);
+  return false;
+}

 void cliHandleResp(SCliConn* conn) {
   SCliThrd* pThrd = conn->hostThrd;
   STrans* pTransInst = pThrd->pTransInst;
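The new cliConnSendSeqMsg above sequences requests on one handle: later messages wait in a FIFO, and each completed send pops exactly one successor. A simplified model of that discipline, without the latch and ref-handle machinery (all names here are illustrative, not the trans-cli API):

```c
#include <assert.h>
#include <stddef.h>

typedef struct Msg {
  int         id;
  struct Msg* next;
} Msg;

typedef struct {
  Msg* head;  // waiting messages; head is the next one allowed to send
  Msg* tail;
} SeqQueue;

static void enqueue(SeqQueue* q, Msg* m) {
  m->next = NULL;
  if (q->tail) q->tail->next = m; else q->head = m;
  q->tail = m;
}

// Called when the in-flight message finished; returns the next message to
// send, or NULL when the queue is drained.
static Msg* onSendDone(SeqQueue* q) {
  Msg* m = q->head;
  if (m) {
    q->head = m->next;
    if (q->head == NULL) q->tail = NULL;
  }
  return m;
}
```

Completions release queued messages strictly in arrival order, which is the ordering guarantee the seqq field on SCliMsg exists to provide.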
@@ -439,8 +462,14 @@ void cliHandleResp(SCliConn* conn) {
       return;
     }
   }
+  int64_t refId = (pMsg == NULL ? 0 : (int64_t)(pMsg->msg.info.handle));
+  tDebug("conn %p msg refId: %" PRId64 "", conn, refId);
   destroyCmsg(pMsg);

+  if (cliConnSendSeqMsg(refId, conn)) {
+    return;
+  }
+
   if (cliMaySendCachedMsg(conn) == true) {
     return;
   }
@@ -451,6 +480,21 @@ void cliHandleResp(SCliConn* conn) {

   uv_read_start((uv_stream_t*)conn->stream, cliAllocRecvBufferCb, cliRecvCb);
 }
+static void cliDestroyMsgInExhandle(int64_t refId) {
+  if (refId == 0) return;
+  SExHandle* exh = transAcquireExHandle(transGetRefMgt(), refId);
+  if (exh) {
+    taosWLockLatch(&exh->latch);
+    while (!QUEUE_IS_EMPTY(&exh->q)) {
+      queue* h = QUEUE_HEAD(&exh->q);
+      QUEUE_REMOVE(h);
+      SCliMsg* t = QUEUE_DATA(h, SCliMsg, seqq);
+      destroyCmsg(t);
+    }
+    taosWUnLockLatch(&exh->latch);
+    transReleaseExHandle(transGetRefMgt(), refId);
+  }
+}

 void cliHandleExceptImpl(SCliConn* pConn, int32_t code) {
   if (transQueueEmpty(&pConn->cliMsgs)) {

@@ -510,6 +554,8 @@ void cliHandleExceptImpl(SCliConn* pConn, int32_t code) {
   }

   if (pMsg == NULL || (pMsg && pMsg->type != Release)) {
+    int64_t refId = (pMsg == NULL ? 0 : (int64_t)(pMsg->msg.info.handle));
+    cliDestroyMsgInExhandle(refId);
     if (cliAppCb(pConn, &transMsg, pMsg) != 0) {
       return;
     }
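cliDestroyMsgInExhandle above drains every message still waiting behind a failed connection so that none of them leaks when the exception path runs. A minimal standalone model of that drain (hypothetical names, not the trans-cli API):

```c
#include <assert.h>
#include <stddef.h>

typedef struct Node {
  struct Node* next;
} Node;

// Unlink and "destroy" all queued nodes; returns how many were drained.
static int drainQueue(Node** head) {
  int n = 0;
  while (*head) {
    Node* h = *head;
    *head = h->next;  // unlink before destroying, like QUEUE_REMOVE
    h->next = NULL;
    n++;
  }
  return n;
}
```

After a failure, the whole backlog is released in one pass and the queue ends empty, mirroring the `while (!QUEUE_IS_EMPTY(...))` loop in the hunk.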
@@ -678,7 +724,7 @@ static SCliConn* getConnFromPool2(SCliThrd* pThrd, char* key, SCliMsg** pMsg) {
     }
     list->numOfConn++;
   }
-  tTrace("%s numOfConn: %d, limit: %d", pTransInst->label, list->numOfConn, pTransInst->connLimitNum);
+  tDebug("%s numOfConn: %d, limit: %d, dst:%s", pTransInst->label, list->numOfConn, pTransInst->connLimitNum, key);
   return NULL;
 }

@@ -742,13 +788,13 @@ static void addConnToPool(void* pool, SCliConn* conn) {
   QUEUE_PUSH(&conn->list->conns, &conn->q);
   conn->list->size += 1;

-  if (conn->list->size >= 20) {
+  if (conn->list->size >= 10) {
     STaskArg* arg = taosMemoryCalloc(1, sizeof(STaskArg));
     arg->param1 = conn;
     arg->param2 = thrd;

     STrans* pTransInst = thrd->pTransInst;
-    conn->task = transDQSched(thrd->timeoutQueue, doCloseIdleConn, arg, CONN_PERSIST_TIME(pTransInst->idleTime));
+    conn->task = transDQSched(thrd->timeoutQueue, doCloseIdleConn, arg, 10 * CONN_PERSIST_TIME(pTransInst->idleTime));
   }
 }
 static int32_t allocConnRef(SCliConn* conn, bool update) {
@@ -761,8 +807,10 @@ static int32_t allocConnRef(SCliConn* conn, bool update) {
   exh->handle = conn;
   exh->pThrd = conn->hostThrd;
   exh->refId = transAddExHandle(transGetRefMgt(), exh);
-  conn->refId = exh->refId;
+  QUEUE_INIT(&exh->q);
+  taosInitRWLatch(&exh->latch);
+
+  conn->refId = exh->refId;
   if (conn->refId == -1) {
     taosMemoryFree(exh);
   }
@@ -779,9 +827,11 @@ static int32_t specifyConnRef(SCliConn* conn, bool update, int64_t handle) {
   if (exh == NULL) {
     return -1;
   }
+  taosWLockLatch(&exh->latch);
   exh->handle = conn;
   exh->pThrd = conn->hostThrd;
   conn->refId = exh->refId;
+  taosWUnLockLatch(&exh->latch);

   transReleaseExHandle(transGetRefMgt(), handle);
   return 0;
@@ -882,7 +932,6 @@ static void cliDestroyConn(SCliConn* conn, bool clear) {
   }

   conn->list = NULL;
-  pThrd->newConnCount--;

   transReleaseExHandle(transGetRefMgt(), conn->refId);
   transRemoveExHandle(transGetRefMgt(), conn->refId);
@@ -1190,7 +1239,6 @@ static void cliHandleBatchReq(SCliBatch* pBatch, SCliThrd* pThrd) {
   addr.sin_port = (uint16_t)htons(pList->port);

   tTrace("%s conn %p try to connect to %s", pTransInst->label, conn, pList->dst);
-  pThrd->newConnCount++;
   int32_t fd = taosCreateSocketWithTimeout(TRANS_CONN_TIMEOUT * 10);
   if (fd == -1) {
     tError("%s conn %p failed to create socket, reason:%s", transLabel(pTransInst), conn,
@@ -1392,7 +1440,10 @@ static void cliHandleRelease(SCliMsg* pMsg, SCliThrd* pThrd) {
     return;
   }

+  taosRLockLatch(&exh->latch);
   SCliConn* conn = exh->handle;
+  taosRUnLockLatch(&exh->latch);
+
   transReleaseExHandle(transGetRefMgt(), refId);
   tDebug("%s conn %p start to release to inst", CONN_GET_INST_LABEL(conn), conn);

@@ -1425,7 +1476,9 @@ SCliConn* cliGetConn(SCliMsg** pMsg, SCliThrd* pThrd, bool* ignore, char* addr)
     *ignore = true;
     return NULL;
   } else {
+    taosRLockLatch(&exh->latch);
     conn = exh->handle;
+    taosRUnLockLatch(&exh->latch);
     if (conn == NULL) {
       conn = getConnFromPool2(pThrd, addr, pMsg);
       if (conn != NULL) specifyConnRef(conn, true, refId);
@@ -1439,7 +1492,7 @@ SCliConn* cliGetConn(SCliMsg** pMsg, SCliThrd* pThrd, bool* ignore, char* addr)
   if (conn != NULL) {
     tTrace("%s conn %p get from conn pool:%p", CONN_GET_INST_LABEL(conn), conn, pThrd->pool);
   } else {
-    tTrace("%s not found conn in conn pool:%p", ((STrans*)pThrd->pTransInst)->label, pThrd->pool);
+    tTrace("%s not found conn in conn pool:%p, dst:%s", ((STrans*)pThrd->pTransInst)->label, pThrd->pool, addr);
   }
   return conn;
 }
@@ -1598,7 +1651,6 @@ void cliHandleReq(SCliMsg* pMsg, SCliThrd* pThrd) {
   addr.sin_port = (uint16_t)htons(port);

   tGTrace("%s conn %p try to connect to %s", pTransInst->label, conn, conn->dstAddr);
-  pThrd->newConnCount++;
   int32_t fd = taosCreateSocketWithTimeout(TRANS_CONN_TIMEOUT * 10);
   if (fd == -1) {
     tGError("%s conn %p failed to create socket, reason:%s", transLabel(pTransInst), conn,
@@ -1858,9 +1910,10 @@ void cliIteraConnMsgs(SCliConn* conn) {
 bool cliRecvReleaseReq(SCliConn* conn, STransMsgHead* pHead) {
   if (pHead->release == 1 && (pHead->msgLen) == sizeof(*pHead)) {
     uint64_t ahandle = pHead->ahandle;
-    tDebug("ahandle = %" PRIu64 "", ahandle);
     SCliMsg* pMsg = NULL;
     CONN_GET_MSGCTX_BY_AHANDLE(conn, ahandle);
+    tDebug("%s conn %p receive release request, refId:%" PRId64 ", may ignore", CONN_GET_INST_LABEL(conn), conn,
+           conn->refId);

     transClearBuffer(&conn->readBuf);
     transFreeMsg(transContFromHead((char*)pHead));
@@ -1869,6 +1922,9 @@ bool cliRecvReleaseReq(SCliConn* conn, STransMsgHead* pHead) {
       SCliMsg* cliMsg = transQueueGet(&conn->cliMsgs, i);
       if (cliMsg->type == Release) {
         ASSERTS(pMsg == NULL, "trans-cli recv invaid release-req");
+        tDebug("%s conn %p receive release request, refId:%" PRId64 ", ignore msg", CONN_GET_INST_LABEL(conn), conn,
+               conn->refId);
+        cliDestroyConn(conn, true);
         return true;
       }
     }
@@ -1984,11 +2040,9 @@ static SCliThrd* createThrdObj(void* trans) {
     taosMemoryFree(pThrd);
     return NULL;
   }
-  if (pTransInst->supportBatch) {
-    pThrd->asyncPool = transAsyncPoolCreate(pThrd->loop, 4, pThrd, cliAsyncCb);
-  } else {
-    pThrd->asyncPool = transAsyncPoolCreate(pThrd->loop, 8, pThrd, cliAsyncCb);
-  }
+  int32_t nSync = pTransInst->supportBatch ? 4 : 8;
+  pThrd->asyncPool = transAsyncPoolCreate(pThrd->loop, nSync, pThrd, cliAsyncCb);
   if (pThrd->asyncPool == NULL) {
     tError("failed to init async pool");
     uv_loop_close(pThrd->loop);
@@ -2029,8 +2083,6 @@ static SCliThrd* createThrdObj(void* trans) {

   pThrd->quit = false;

-  pThrd->newConnCount = 0;
-  pThrd->msgCount = taosHashInit(8, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
   return pThrd;
 }
 static void destroyThrdObj(SCliThrd* pThrd) {
@@ -2076,7 +2128,6 @@ static void destroyThrdObj(SCliThrd* pThrd) {
     pIter = (void**)taosHashIterate(pThrd->batchCache, pIter);
   }
   taosHashCleanup(pThrd->batchCache);
-  taosHashCleanup(pThrd->msgCount);
   taosMemoryFree(pThrd);
 }

@@ -2095,14 +2146,7 @@ void cliSendQuit(SCliThrd* thrd) {
 void cliWalkCb(uv_handle_t* handle, void* arg) {
   if (!uv_is_closing(handle)) {
     if (uv_handle_get_type(handle) == UV_TIMER) {
-      // SCliConn* pConn = handle->data;
-      // if (pConn != NULL && pConn->timer != NULL) {
-      //   SCliThrd* pThrd = pConn->hostThrd;
-      //   uv_timer_stop((uv_timer_t*)handle);
-      //   handle->data = NULL;
-      //   taosArrayPush(pThrd->timerList, &pConn->timer);
-      //   pConn->timer = NULL;
-      // }
+      // do nothing
     } else {
       uv_read_stop((uv_stream_t*)handle);
     }
@@ -2137,18 +2181,23 @@ static void doCloseIdleConn(void* param) {
   cliDestroyConn(conn, true);
   taosMemoryFree(arg);
 }
+static void cliSchedMsgToDebug(SCliMsg* pMsg, char* label) {
+  if (!(rpcDebugFlag & DEBUG_DEBUG)) {
+    return;
+  }
+  STransConnCtx* pCtx = pMsg->ctx;
+  STraceId*      trace = &pMsg->msg.info.traceId;
+  char           tbuf[512] = {0};
+  EPSET_TO_STR(&pCtx->epSet, tbuf);
+  tGDebug("%s retry on next node,use:%s, step: %d,timeout:%" PRId64 "", label, tbuf, pCtx->retryStep,
+          pCtx->retryNextInterval);
+  return;
+}

 static void cliSchedMsgToNextNode(SCliMsg* pMsg, SCliThrd* pThrd) {
   STrans*        pTransInst = pThrd->pTransInst;
   STransConnCtx* pCtx = pMsg->ctx;
-  if (rpcDebugFlag & DEBUG_DEBUG) {
-    STraceId* trace = &pMsg->msg.info.traceId;
-    char      tbuf[512] = {0};
-    EPSET_TO_STR(&pCtx->epSet, tbuf);
-    tGDebug("%s retry on next node,use:%s, step: %d,timeout:%" PRId64 "", transLabel(pThrd->pTransInst), tbuf,
-            pCtx->retryStep, pCtx->retryNextInterval);
-  }
+  cliSchedMsgToDebug(pMsg, transLabel(pThrd->pTransInst));

   STaskArg* arg = taosMemoryMalloc(sizeof(STaskArg));
   arg->param1 = pMsg;
@@ -2157,12 +2206,6 @@ static void cliSchedMsgToNextNode(SCliMsg* pMsg, SCliThrd* pThrd) {
   transDQSched(pThrd->delayQueue, doDelayTask, arg, pCtx->retryNextInterval);
 }

-FORCE_INLINE void cliCompareAndSwap(int8_t* val, int8_t exp, int8_t newVal) {
-  if (*val != exp) {
-    *val = newVal;
-  }
-}
-
 FORCE_INLINE bool cliTryExtractEpSet(STransMsg* pResp, SEpSet* dst) {
   if ((pResp == NULL || pResp->info.hasEpSet == 0)) {
     return false;
@@ -2504,21 +2547,7 @@ int transReleaseCliHandle(void* handle) {
   }
   return 0;
 }
-int transSendRequest(void* shandle, const SEpSet* pEpSet, STransMsg* pReq, STransCtx* ctx) {
-  STrans* pTransInst = (STrans*)transAcquireExHandle(transGetInstMgt(), (int64_t)shandle);
-  if (pTransInst == NULL) {
-    transFreeMsg(pReq->pCont);
-    return TSDB_CODE_RPC_BROKEN_LINK;
-  }
-
-  SCliThrd* pThrd = transGetWorkThrd(pTransInst, (int64_t)pReq->info.handle);
-  if (pThrd == NULL) {
-    transFreeMsg(pReq->pCont);
-    transReleaseExHandle(transGetInstMgt(), (int64_t)shandle);
-    return TSDB_CODE_RPC_BROKEN_LINK;
-  }
-
+static SCliMsg* transInitMsg(void* shandle, const SEpSet* pEpSet, STransMsg* pReq, STransCtx* ctx) {
   TRACE_SET_MSGID(&pReq->info.traceId, tGenIdPI64());
   STransConnCtx* pCtx = taosMemoryCalloc(1, sizeof(STransConnCtx));
   epsetAssign(&pCtx->epSet, pEpSet);
@@ -2535,12 +2564,48 @@ int transSendRequest(void* shandle, const SEpSet* pEpSet, STransMsg* pReq, STransCtx* ctx) {
   cliMsg->st = taosGetTimestampUs();
   cliMsg->type = Normal;
   cliMsg->refId = (int64_t)shandle;
+  QUEUE_INIT(&cliMsg->seqq);
+  return cliMsg;
+}
+
+int transSendRequest(void* shandle, const SEpSet* pEpSet, STransMsg* pReq, STransCtx* ctx) {
+  STrans* pTransInst = (STrans*)transAcquireExHandle(transGetInstMgt(), (int64_t)shandle);
+  if (pTransInst == NULL) {
+    transFreeMsg(pReq->pCont);
+    return TSDB_CODE_RPC_BROKEN_LINK;
+  }
+
+  int64_t   handle = (int64_t)pReq->info.handle;
+  SCliThrd* pThrd = transGetWorkThrd(pTransInst, handle);
+  if (pThrd == NULL) {
+    transFreeMsg(pReq->pCont);
+    transReleaseExHandle(transGetInstMgt(), (int64_t)shandle);
+    return TSDB_CODE_RPC_BROKEN_LINK;
+  }
+  if (handle != 0) {
+    SExHandle* exh = transAcquireExHandle(transGetRefMgt(), handle);
+    if (exh != NULL) {
+      taosWLockLatch(&exh->latch);
+      if (exh->handle == NULL && exh->inited != 0) {
+        SCliMsg* pCliMsg = transInitMsg(shandle, pEpSet, pReq, ctx);
+        QUEUE_PUSH(&exh->q, &pCliMsg->seqq);
+        taosWUnLockLatch(&exh->latch);
+        tDebug("msg refId: %" PRId64 "", handle);
+        transReleaseExHandle(transGetInstMgt(), (int64_t)shandle);
+        return 0;
+      }
+      exh->inited = 1;
+      taosWUnLockLatch(&exh->latch);
+      transReleaseExHandle(transGetRefMgt(), handle);
+    }
+  }
+  SCliMsg* pCliMsg = transInitMsg(shandle, pEpSet, pReq, ctx);

   STraceId* trace = &pReq->info.traceId;
   tGDebug("%s send request at thread:%08" PRId64 ", dst:%s:%d, app:%p", transLabel(pTransInst), pThrd->pid,
-          EPSET_GET_INUSE_IP(&pCtx->epSet), EPSET_GET_INUSE_PORT(&pCtx->epSet), pReq->info.ahandle);
+          EPSET_GET_INUSE_IP(pEpSet), EPSET_GET_INUSE_PORT(pEpSet), pReq->info.ahandle);
-  if (0 != transAsyncSend(pThrd->asyncPool, &(cliMsg->q))) {
+  if (0 != transAsyncSend(pThrd->asyncPool, &(pCliMsg->q))) {
-    destroyCmsg(cliMsg);
+    destroyCmsg(pCliMsg);
     transReleaseExHandle(transGetInstMgt(), (int64_t)shandle);
     return TSDB_CODE_RPC_BROKEN_LINK;
   }
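The `transSendRequest` hunk above extracts message construction into a `transInitMsg` helper so that a request arriving on a pre-allocated handle whose connection is still being set up can be parked on the handle's queue instead of racing it: the first caller claims the handle (`exh->inited = 1`) and sends directly, later callers queue behind it. A minimal standalone sketch of that claim-or-park pattern; all type and function names here are illustrative stand-ins, not the TDengine API:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative claim-or-park pattern: the first request through a handle
 * claims it and is sent immediately; requests that arrive while the
 * connection is not yet live are parked on the handle's queue. */
typedef struct Msg {
  int         payload;
  struct Msg *next; /* intrusive queue link, like cliMsg->seqq */
} Msg;

typedef struct {
  int  inited; /* set by the first request that claims the handle */
  Msg *queue;  /* parked messages, drained once the connection is up */
} ExHandle;

static Msg *initMsg(int payload) { /* stand-in for transInitMsg() */
  Msg *m = calloc(1, sizeof(Msg));
  m->payload = payload;
  return m;
}

/* Returns 0 when the message may be sent now, 1 when it was parked. */
static int sendRequest(ExHandle *h, int payload, Msg **toSend) {
  if (h->inited) { /* handle already claimed, connection pending: park */
    Msg *m = initMsg(payload);
    m->next = h->queue;
    h->queue = m;
    *toSend = NULL;
    return 1;
  }
  h->inited = 1; /* first caller claims the handle and sends directly */
  *toSend = initMsg(payload);
  return 0;
}
```

The real code additionally guards the check with a reader-writer latch (`taosWLockLatch`) since multiple threads may race on the same handle.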
@@ -2726,6 +2791,8 @@ int transSetDefaultAddr(void* shandle, const char* ip, const char* fqdn) {
 int64_t transAllocHandle() {
   SExHandle* exh = taosMemoryCalloc(1, sizeof(SExHandle));
   exh->refId = transAddExHandle(transGetRefMgt(), exh);
+  QUEUE_INIT(&exh->q);
+  taosInitRWLatch(&exh->latch);
   tDebug("pre alloc refId %" PRId64 "", exh->refId);

   return exh->refId;
@@ -70,7 +70,7 @@ int32_t transDecompressMsg(char** msg, int32_t len) {
   char*          buf = taosMemoryCalloc(1, oriLen + sizeof(STransMsgHead));
   STransMsgHead* pNewHead = (STransMsgHead*)buf;
   int32_t        decompLen = LZ4_decompress_safe(pCont + sizeof(STransCompMsg), (char*)pNewHead->content,
                                                  len - sizeof(STransMsgHead) - sizeof(STransCompMsg), oriLen);
   memcpy((char*)pNewHead, (char*)pHead, sizeof(STransMsgHead));

   pNewHead->msgLen = htonl(oriLen + sizeof(STransMsgHead));
@@ -158,6 +158,10 @@ int transResetBuffer(SConnBuffer* connBuf) {
     p->left = -1;
     p->total = 0;
     p->len = 0;
+    if (p->cap > BUFFER_CAP) {
+      p->cap = BUFFER_CAP;
+      p->buf = taosMemoryRealloc(p->buf, p->cap);
+    }
   } else {
     ASSERTS(0, "invalid read from sock buf");
     return -1;
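The `transResetBuffer` change above shrinks a read buffer that grew past its default capacity back down once its contents are consumed, so one oversized payload does not pin memory for the connection's lifetime. A hedged sketch of the same shrink-on-reset idea; the `BUFFER_CAP` value and field names are illustrative stand-ins:

```c
#include <assert.h>
#include <stdlib.h>

#define BUFFER_CAP 4096 /* illustrative default capacity */

typedef struct {
  char  *buf;
  size_t cap; /* current allocation size */
  size_t len; /* bytes currently buffered */
} ConnBuffer;

/* Reset for reuse; return memory if a large message grew the buffer. */
static void resetBuffer(ConnBuffer *p) {
  p->len = 0;
  if (p->cap > BUFFER_CAP) { /* shrink only oversized buffers */
    p->cap = BUFFER_CAP;
    p->buf = realloc(p->buf, p->cap);
  }
}
```

A `realloc` to a smaller size is cheap and keeps the steady-state footprint of every idle connection bounded by `BUFFER_CAP`.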
@@ -761,9 +761,12 @@ static bool uvRecvReleaseReq(SSvrConn* pConn, STransMsgHead* pHead) {
   tTrace("conn %p received release request", pConn);

   STraceId traceId = pHead->traceId;
-  pConn->status = ConnRelease;
   transClearBuffer(&pConn->readBuf);
   transFreeMsg(transContFromHead((char*)pHead));
+  if (pConn->status != ConnAcquire) {
+    return true;
+  }
+  pConn->status = ConnRelease;

   STransMsg tmsg = {.code = 0, .info.handle = (void*)pConn, .info.traceId = traceId, .info.ahandle = (void*)0x9527};
   SSvrMsg* srvMsg = taosMemoryCalloc(1, sizeof(SSvrMsg));
@@ -1090,6 +1093,7 @@ static FORCE_INLINE SSvrConn* createConn(void* hThrd) {

   STrans* pTransInst = pThrd->pTransInst;
   pConn->refId = exh->refId;
+  QUEUE_INIT(&exh->q);
   transRefSrvHandle(pConn);
   tTrace("%s handle %p, conn %p created, refId:%" PRId64, transLabel(pTransInst), exh, pConn, pConn->refId);
   return pConn;
@@ -1121,6 +1125,7 @@ static int reallocConnRef(SSvrConn* conn) {
   exh->handle = conn;
   exh->pThrd = conn->hostThrd;
   exh->refId = transAddExHandle(transGetRefMgt(), exh);
+  QUEUE_INIT(&exh->q);
   transAcquireExHandle(transGetRefMgt(), exh->refId);
   conn->refId = exh->refId;

@@ -289,7 +289,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_MND_DNODE_IN_DROPPING,      "Dnode in dropping state")
 // mnode-trans
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_TRANS_ALREADY_EXIST,    "Transaction already exists")
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_TRANS_NOT_EXIST,        "Transaction not exists")
-TAOS_DEFINE_ERROR(TSDB_CODE_MND_TRANS_INVALID_STAGE,    "Invalid stage to kill")
+TAOS_DEFINE_ERROR(TSDB_CODE_MND_TRANS_INVALID_STAGE,    "Invalid transaction stage")
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_TRANS_CONFLICT,         "Conflict transaction not completed")
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_TRANS_CLOG_IS_NULL,     "Transaction commitlog is null")
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_TRANS_NETWORK_UNAVAILL, "Unable to establish connection While execute transaction and will continue in the background")
@@ -242,6 +242,7 @@ void taosCloseLog() {
     taosMemoryFreeClear(tsLogObj.logHandle->buffer);
     taosThreadMutexDestroy(&tsLogObj.logMutex);
     taosMemoryFreeClear(tsLogObj.logHandle);
+    tsLogObj.logHandle = NULL;
   }
 }

@@ -285,8 +286,11 @@ static void taosKeepOldLog(char *oldName) {
     taosRemoveOldFiles(tsLogDir, tsLogKeepDays);
   }
 }
-static void *taosThreadToOpenNewFile(void *param) {
+typedef struct {
+  TdFilePtr pOldFile;
+  char      keepName[LOG_FILE_NAME_LEN + 20];
+} OldFileKeeper;
+static OldFileKeeper *taosOpenNewFile() {
   char keepName[LOG_FILE_NAME_LEN + 20];
   sprintf(keepName, "%s.%d", tsLogObj.logName, tsLogObj.flag);

@@ -312,13 +316,26 @@ static void *taosThreadToOpenNewFile(void *param) {
   tsLogObj.logHandle->pFile = pFile;
   tsLogObj.lines = 0;
   tsLogObj.openInProgress = 0;
-  taosSsleep(20);
-  taosCloseLogByFd(pOldFile);
+  OldFileKeeper* oldFileKeeper = taosMemoryMalloc(sizeof(OldFileKeeper));
+  if (oldFileKeeper == NULL) {
+    uError("create old log keep info faild! mem is not enough.");
+    return NULL;
+  }
+  oldFileKeeper->pOldFile = pOldFile;
+  memcpy(oldFileKeeper->keepName, keepName, LOG_FILE_NAME_LEN + 20);
+
   uInfo("   new log file:%d is opened", tsLogObj.flag);
   uInfo("==================================");
-  taosKeepOldLog(keepName);
+  return oldFileKeeper;
+}
+
+static void *taosThreadToCloseOldFile(void* param) {
+  if(!param) return NULL;
+  OldFileKeeper* oldFileKeeper = (OldFileKeeper*)param;
+  taosSsleep(20);
+  taosCloseLogByFd(oldFileKeeper->pOldFile);
+  taosKeepOldLog(oldFileKeeper->keepName);
+  taosMemoryFree(oldFileKeeper);
   return NULL;
 }

@@ -334,7 +351,8 @@ static int32_t taosOpenNewLogFile() {
     taosThreadAttrInit(&attr);
     taosThreadAttrSetDetachState(&attr, PTHREAD_CREATE_DETACHED);

-    taosThreadCreate(&thread, &attr, taosThreadToOpenNewFile, NULL);
+    OldFileKeeper* oldFileKeeper = taosOpenNewFile();
+    taosThreadCreate(&thread, &attr, taosThreadToCloseOldFile, oldFileKeeper);
     taosThreadAttrDestroy(&attr);
   }

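The log-rotation hunks above split the old single worker thread (open the new file, sleep 20 s, close the old one) into a synchronous `taosOpenNewFile` that switches files immediately and returns an `OldFileKeeper` token, plus a detached `taosThreadToCloseOldFile` that sleeps and then closes and archives the old file. A simplified sketch of handing cleanup state to a deferred worker through a heap-allocated token; names and fields are stand-ins, and the sleep is omitted:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Token carrying everything the deferred worker needs to finish cleanup. */
typedef struct {
  int  oldFd;        /* stand-in for the old TdFilePtr */
  char keepName[64]; /* archive name for the rotated file */
} OldFileKeeper;

/* Synchronous part: switch to the new file, package the old one's state. */
static OldFileKeeper *openNewFile(int oldFd, const char *keepName) {
  OldFileKeeper *k = malloc(sizeof(*k));
  if (k == NULL) return NULL; /* mirrors the allocation-failure check */
  k->oldFd = oldFd;
  strncpy(k->keepName, keepName, sizeof(k->keepName) - 1);
  k->keepName[sizeof(k->keepName) - 1] = '\0';
  return k;
}

/* Deferred part: in the real code this runs in a detached thread after a
 * 20-second sleep, so in-flight writers drain before the old file closes. */
static int closeOldFile(OldFileKeeper *k, int *closedFd) {
  if (k == NULL) return -1;
  *closedFd = k->oldFd; /* stand-in for close + archive of the old file */
  free(k);
  return 0;
}
```

The token owns its own memory and is freed by whichever side consumes it, so the opener never has to wait for the closer.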
@@ -347,10 +365,11 @@ void taosResetLog() {
   // force create a new log file
   tsLogObj.lines = tsNumOfLogLines + 10;

-  taosOpenNewLogFile();
+  if (tsLogObj.logHandle) {
+    taosOpenNewLogFile();
   uInfo("==================================");
   uInfo("   reset log file  ");
+  }
 }

 static bool taosCheckFileIsOpen(char *logFileName) {
@@ -66,10 +66,10 @@ class TDTestCase:
         tdSql.query('select * from (select tbname, avg(f) from st partition by tbname) a partition by a.tbname order by a.tbname');
         tdSql.checkRows(2)
         tdSql.checkCols(2)
-        tdSql.checkData(0, 0, 'ct1');
-        tdSql.checkData(0, 1, 6.0);
-        tdSql.checkData(1, 0, 'ct2');
-        tdSql.checkData(1, 1, 12.0);
+        tdSql.checkData(0, 0, 'ct1')
+        tdSql.checkData(0, 1, 6.0)
+        tdSql.checkData(1, 0, 'ct2')
+        tdSql.checkData(1, 1, 12.0)

         tdSql.error('select tbname from (select * from st)')
         tdSql.error('select st.tbname from (select st.tbname from st)')
@@ -870,7 +870,7 @@ sql_error select stddev(c2), tbname from select_tags_mt0;
 sql_error select twa(c2), tbname from select_tags_mt0;
 sql_error select interp(c2), tbname from select_tags_mt0 where ts=100001;

-sql_error select t1,t2,tbname from select_tags_mt0 group by tbname;
+
 sql select count(tbname) from select_tags_mt0 interval(1d);
 sql select count(tbname) from select_tags_mt0 group by t1;
 sql select count(tbname),SUM(T1) from select_tags_mt0 interval(1d);
@@ -888,16 +888,16 @@ sql_error select tbname, t1 from select_tags_mt0 interval(1y);
 print ==================================>TD-4231
 sql select t1,tbname from select_tags_mt0 where c1<0
 sql select t1,tbname from select_tags_mt0 where c1<0 and tbname in ('select_tags_tb12')

 sql select tbname from select_tags_mt0 where tbname in ('select_tags_tb12');

-sql_error select first(c1), last(c2), t1 from select_tags_mt0 group by tbname;
-sql_error select first(c1), last(c2), tbname, t2 from select_tags_mt0 group by tbname;
-sql_error select first(c1), count(*), t2, t1, tbname from select_tags_mt0 group by tbname;
-#valid sql: select first(c1), t2 from select_tags_mt0 group by tbname;
-
-#sql select first(ts), tbname from select_tags_mt0 group by tbname;
-#sql select count(c1) from select_tags_mt0 where c1=99 group by tbname;
-#sql select count(*),tbname from select_tags_mt0 group by tbname
+sql select first(ts), tbname from select_tags_mt0 group by tbname;
+sql select count(c1) from select_tags_mt0 where c1=99 group by tbname;
+sql select count(*),tbname from select_tags_mt0 group by tbname
+
+print ==================================> tag supported in group
+sql select t1,t2,tbname from select_tags_mt0 group by tbname;
+sql select first(c1), last(c2), t1 from select_tags_mt0 group by tbname;
+sql select first(c1), last(c2), tbname, t2 from select_tags_mt0 group by tbname;
+sql select first(c1), count(*), t2, t1, tbname from select_tags_mt0 group by tbname;

 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -3,15 +3,24 @@ system sh/deploy.sh -n dnode1 -i 1
 system sh/exec.sh -n dnode1 -s start
 sql connect

-sql create database test;
+sql create database test KEEP 36500;
 sql use test;
 sql create table st(ts timestamp, f int) tags(t int);
-sql insert into ct1 using st tags(1) values(now, 0)(now+1s, 1)(now+2s, 10)(now+3s, 11)
-sql insert into ct2 using st tags(2) values(now+2s, 2)(now+3s, 3)
-sql insert into ct3 using st tags(3) values(now+4s, 4)(now+5s, 5)
-sql insert into ct4 using st tags(4) values(now+6s, 6)(now+7s, 7)
-
-sql select count(*), spread(ts) from st where tbname='ct1'
+$ms = 1712135244502
+$ms1 = $ms + 1000
+$ms2 = $ms + 2000
+$ms3 = $ms + 3000
+$ms4 = $ms + 4000
+$ms5 = $ms + 5000
+$ms6 = $ms + 6000
+$ms7 = $ms + 7000
+sql insert into ct1 using st tags(1) values($ms , 0)($ms1 , 1)($ms2 , 10)($ms3 , 11)
+sql insert into ct2 using st tags(2) values($ms2 , 2)($ms3 , 3)
+sql insert into ct3 using st tags(3) values($ms4 , 4)($ms5 , 5)
+sql insert into ct4 using st tags(4) values($ms6 , 6)($ms7 , 7)
+
+sql select count(*), spread(ts) from st where tbname='ct1'
 print $data00, $data01
 if $data00 != @4@ then
   return -1
@@ -15,6 +15,8 @@ sql use test3;
 sql create table t1(ts timestamp, a int, b int , c int, d double);
 sql create stream streams3 trigger at_once ignore expired 0 ignore update 0 into streamt3 as select _wstart, count(*) c1 from t1 state_window(a);

+sleep 1000
+
 sql insert into t1 values(1648791211000,1,2,3,1.0);
 sql insert into t1 values(1648791213000,2,2,3,1.1);
 sql insert into t1 values(1648791215000,3,2,3,1.1);
@ -214,4 +216,232 @@ if $data[29][1] != 2 then
|
||||||
goto loop11
|
goto loop11
|
||||||
endi
|
endi
|
||||||
|
|
||||||
|
print step2=============
|
||||||
|
|
||||||
|
sql create database test4 vgroups 4;
|
||||||
|
sql use test4;
|
||||||
|
sql create stable st(ts timestamp,a int,b int,c int,d double) tags(ta int,tb int,tc int);
|
||||||
|
sql create table t1 using st tags(1,1,1);
|
||||||
|
sql create table t2 using st tags(2,2,2);
|
||||||
|
sql create stream streams4 trigger at_once ignore expired 0 ignore update 0 into streamt4 as select _wstart, first(a), b, c, ta, tb from st interval(1s);
|
||||||
|
|
||||||
|
sleep 1000
|
||||||
|
|
||||||
|
sql insert into t1 values(1648791211000,1,2,3,1.0);
|
||||||
|
sql insert into t1 values(1648791213000,2,3,4,1.1);
|
||||||
|
sql insert into t2 values(1648791215000,3,4,5,1.1);
|
||||||
|
sql insert into t2 values(1648791217000,4,5,6,1.1);
|
||||||
|
|
||||||
|
$loop_count = 0
|
||||||
|
|
||||||
|
loop12:
|
||||||
|
|
||||||
|
sleep 200
|
||||||
|
|
||||||
|
$loop_count = $loop_count + 1
|
||||||
|
if $loop_count == 10 then
|
||||||
|
return -1
|
||||||
|
endi
|
||||||
|
|
||||||
|
print 1 select * from streamt4 order by 1;
|
||||||
|
sql select * from streamt4 order by 1;
|
||||||
|
|
||||||
|
if $rows != 4 then
|
||||||
|
print ======rows=$rows
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data02 != 2 then
|
||||||
|
print ======data02=$data02
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data03 != 3 then
|
||||||
|
print ======data03=$data03
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data04 != 1 then
|
||||||
|
print ======data04=$data04
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data05 != 1 then
|
||||||
|
print ======data05=$data05
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
|
||||||
|
if $data22 != 4 then
|
||||||
|
print ======data22=$data22
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data23 != 5 then
|
||||||
|
print ======data23=$data23
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data24 != 2 then
|
||||||
|
print ======data24=$data24
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data25 != 2 then
|
||||||
|
print ======data25=$data25
|
||||||
|
goto loop12
|
||||||
|
endi
|
||||||
|
|
||||||
|
print step3=============
|
||||||
|
|
||||||
|
sql create database test5 vgroups 4;
|
||||||
|
sql use test5;
|
||||||
|
sql create stable st(ts timestamp,a int,b int,c int,d double) tags(ta int,tb int,tc int);
|
||||||
|
sql create table t1 using st tags(1,1,1);
|
||||||
|
sql create table t2 using st tags(2,2,2);
|
||||||
|
sql create stream streams5 trigger at_once ignore expired 0 ignore update 0 into streamt5 as select _wstart, b, c, ta, tb, max(b) from t1 interval(1s);
|
||||||
|
|
||||||
|
sleep 1000
|
||||||
|
|
||||||
|
sql insert into t1 values(1648791211000,1,2,3,1.0);
|
||||||
|
sql insert into t1 values(1648791213000,2,3,4,1.1);
|
||||||
|
sql insert into t1 values(1648791215000,3,4,5,1.1);
|
||||||
|
sql insert into t1 values(1648791217000,4,5,6,1.1);
|
||||||
|
|
||||||
|
$loop_count = 0
|
||||||
|
|
||||||
|
loop13:
|
||||||
|
|
||||||
|
sleep 200
|
||||||
|
|
||||||
|
$loop_count = $loop_count + 1
|
||||||
|
if $loop_count == 10 then
|
||||||
|
return -1
|
||||||
|
endi
|
||||||
|
|
||||||
|
print 1 select * from streamt5 order by 1;
|
||||||
|
sql select * from streamt5 order by 1;
|
||||||
|
|
||||||
|
if $rows != 4 then
|
||||||
|
print ======rows=$rows
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data01 != 2 then
|
||||||
|
print ======data02=$data02
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data02 != 3 then
|
||||||
|
print ======data03=$data03
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data03 != 1 then
|
||||||
|
print ======data04=$data04
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data04 != 1 then
|
||||||
|
print ======data05=$data05
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
|
||||||
|
if $data21 != 4 then
|
||||||
|
print ======data22=$data22
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data22 != 5 then
|
||||||
|
print ======data23=$data23
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data23 != 1 then
|
||||||
|
print ======data24=$data24
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
if $data24 != 1 then
|
||||||
|
print ======data25=$data25
|
||||||
|
goto loop13
|
||||||
|
endi
|
||||||
|
|
||||||
|
print step4=============
|
||||||
|
|
||||||
|
sql create database test6 vgroups 4;
|
+sql use test6;
+sql create stable st(ts timestamp,a int,b int,c int,d double) tags(ta int,tb int,tc int);
+sql create table t1 using st tags(1,1,1);
+sql create table t2 using st tags(2,2,2);
+sql create stream streams6 trigger at_once ignore expired 0 ignore update 0 into streamt6 as select _wstart, b, c,min(c), ta, tb from st interval(1s);
+
+sleep 1000
+
+sql insert into t1 values(1648791211000,1,2,3,1.0);
+sql insert into t1 values(1648791213000,2,3,4,1.1);
+sql insert into t2 values(1648791215000,3,4,5,1.1);
+sql insert into t2 values(1648791217000,4,5,6,1.1);
+
+$loop_count = 0
+
+loop14:
+
+sleep 200
+
+$loop_count = $loop_count + 1
+if $loop_count == 10 then
+return -1
+endi
+
+print 1 select * from streamt6 order by 1;
+sql select * from streamt6 order by 1;
+
+if $rows != 4 then
+print ======rows=$rows
+goto loop14
+endi
+
+if $data01 != 2 then
+print ======data01=$data01
+goto loop14
+endi
+
+if $data02 != 3 then
+print ======data02=$data02
+goto loop14
+endi
+
+if $data04 != 1 then
+print ======data04=$data04
+goto loop14
+endi
+
+if $data05 != 1 then
+print ======data05=$data05
+goto loop14
+endi
+
+if $data21 != 4 then
+print ======data21=$data21
+goto loop14
+endi
+
+if $data22 != 5 then
+print ======data22=$data22
+goto loop14
+endi
+
+if $data24 != 2 then
+print ======data24=$data24
+goto loop14
+endi
+
+if $data25 != 2 then
+print ======data25=$data25
+goto loop14
+endi
+
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
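The `loop14` retry pattern above (sleep, re-query, give up after 10 attempts) is how TSIM scripts wait for eventually consistent stream results. A minimal Python sketch of the same idea, with hypothetical names standing in for the re-run query:

```python
import time

def wait_until(check, attempts=10, delay_s=0.2):
    """Re-run `check` until it returns True or the attempt budget runs out,
    mirroring the sleep / goto loop14 / endi pattern in the TSIM script."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay_s)
    return False  # equivalent to the script's `return -1`

# Example: a result set that becomes complete after a few polls.
state = {"rows": 0}
def poll():
    state["rows"] += 2          # stand-in for re-running the SELECT
    return state["rows"] >= 4   # the script expects $rows == 4

print(wait_until(poll, delay_s=0))
```

The attempt budget keeps a broken stream from hanging the whole test run, at the cost of occasional flakiness on slow machines.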
|
@@ -50,10 +50,11 @@ class TDTestCase:
         self.tbnum = 20
         self.rowNum = 10
         self.tag_dict = {
-            't0':'int'
+            't0':'int',
+            't1':f'nchar({self.nchar_length})'
         }
         self.tag_values = [
-            f'1'
+            f'1', '""'
         ]
         self.binary_str = 'taosdata'
         self.nchar_str = '涛思数据'
@@ -72,7 +73,7 @@ class TDTestCase:
         tdSql.execute(f'use {self.dbname}')
         tdSql.execute(self.setsql.set_create_stable_sql(self.stbname,self.column_dict,self.tag_dict))
         for i in range(self.tbnum):
-            tdSql.execute(f"create table {self.stbname}_{i} using {self.stbname} tags({self.tag_values[0]})")
+            tdSql.execute(f"create table {self.stbname}_{i} using {self.stbname} tags({self.tag_values[0]}, {self.tag_values[1]})")
             self.insert_data(self.column_dict,f'{self.stbname}_{i}',self.rowNum)
     def count_check(self):
         tdSql.query('select count(*) from information_schema.ins_tables')
@@ -313,6 +314,11 @@ class TDTestCase:
         tdSql.error('alter cluster "activeCode" ""')
         tdSql.execute('alter cluster "activeCode" "revoked"')
+
+    def test_query_ins_tags(self):
+        sql = f'select tag_name, tag_value from information_schema.ins_tags where table_name = "{self.stbname}_0"'
+        tdSql.query(sql)
+        tdSql.checkRows(2)
+
     def run(self):
         self.prepare_data()
         self.count_check()
@@ -322,6 +328,7 @@ class TDTestCase:
         self.ins_stable_check2()
         self.ins_dnodes_check()
         self.ins_grants_check()
+        self.test_query_ins_tags()


     def stop(self):
@@ -79,6 +79,10 @@ class TDTestCase:
         tdSql.query(f"select cast(c1 as binary(32)) as b from {self.dbname}.t1")
         for i in range(len(data_t1_c1)):
             tdSql.checkData( i, 0, str(data_t1_c1[i]) )

+        tdSql.query(f"select cast(c1 as binary) as b from {self.dbname}.t1")
+        for i in range(len(data_t1_c1)):
+            tdSql.checkData( i, 0, str(data_t1_c1[i]) )
+
         tdLog.printNoPrefix("==========step6: cast int to nchar, expect changes to str(int) ")

@@ -130,6 +134,13 @@ class TDTestCase:
         tdSql.query(f"select cast(c2 as binary(32)) as b from {self.dbname}.t1")
         for i in range(len(data_t1_c2)):
             tdSql.checkData( i, 0, str(data_t1_c2[i]) )

+        tdSql.query(f"select cast(c2 as binary) as b from {self.dbname}.ct4")
+        for i in range(len(data_ct4_c2)):
+            tdSql.checkData( i, 0, str(data_ct4_c2[i]) )
+        tdSql.query(f"select cast(c2 as binary) as b from {self.dbname}.t1")
+        for i in range(len(data_t1_c2)):
+            tdSql.checkData( i, 0, str(data_t1_c2[i]) )
+
         tdLog.printNoPrefix("==========step10: cast bigint to nchar, expect changes to str(int) ")

@@ -184,6 +195,13 @@ class TDTestCase:
         tdSql.query(f"select cast(c3 as binary(32)) as b from {self.dbname}.t1")
         for i in range(len(data_t1_c3)):
             tdSql.checkData( i, 0, str(data_t1_c3[i]) )

+        tdSql.query(f"select cast(c3 as binary) as b from {self.dbname}.ct4")
+        for i in range(len(data_ct4_c3)):
+            tdSql.checkData( i, 0, str(data_ct4_c3[i]) )
+        tdSql.query(f"select cast(c3 as binary) as b from {self.dbname}.t1")
+        for i in range(len(data_t1_c3)):
+            tdSql.checkData( i, 0, str(data_t1_c3[i]) )
+
         tdLog.printNoPrefix("==========step14: cast smallint to nchar, expect changes to str(int) ")

@@ -235,6 +253,13 @@ class TDTestCase:
         tdSql.query(f"select cast(c4 as binary(32)) as b from {self.dbname}.t1")
         for i in range(len(data_t1_c4)):
             tdSql.checkData( i, 0, str(data_t1_c4[i]) )

+        tdSql.query(f"select cast(c4 as binary) as b from {self.dbname}.ct4")
+        for i in range(len(data_ct4_c4)):
+            tdSql.checkData( i, 0, str(data_ct4_c4[i]) )
+        tdSql.query(f"select cast(c4 as binary) as b from {self.dbname}.t1")
+        for i in range(len(data_t1_c4)):
+            tdSql.checkData( i, 0, str(data_t1_c4[i]) )
+
         tdLog.printNoPrefix("==========step18: cast tinyint to nchar, expect changes to str(int) ")

@@ -282,6 +307,12 @@ class TDTestCase:
         for i in range(len(data_ct4_c5)):
             tdSql.checkData( i, 0, str(data_ct4_c5[i]) ) if data_ct4_c5[i] is None else tdSql.checkData( i, 0, f'{data_ct4_c5[i]:.6f}' )
         tdSql.query(f"select cast(c5 as binary(32)) as b from {self.dbname}.t1")
+        for i in range(len(data_t1_c5)):
+            tdSql.checkData( i, 0, str(data_t1_c5[i]) ) if data_t1_c5[i] is None else tdSql.checkData( i, 0, f'{data_t1_c5[i]:.6f}' )
+        tdSql.query(f"select cast(c5 as binary) as b from {self.dbname}.ct4")
+        for i in range(len(data_ct4_c5)):
+            tdSql.checkData( i, 0, str(data_ct4_c5[i]) ) if data_ct4_c5[i] is None else tdSql.checkData( i, 0, f'{data_ct4_c5[i]:.6f}' )
+        tdSql.query(f"select cast(c5 as binary) as b from {self.dbname}.t1")
         for i in range(len(data_t1_c5)):
             tdSql.checkData( i, 0, str(data_t1_c5[i]) ) if data_t1_c5[i] is None else tdSql.checkData( i, 0, f'{data_t1_c5[i]:.6f}' )

@@ -290,6 +321,12 @@ class TDTestCase:
         for i in range(len(data_ct4_c5)):
             tdSql.checkData( i, 0, None ) if data_ct4_c5[i] is None else tdSql.checkData( i, 0, f'{data_ct4_c5[i]:.6f}' )
         tdSql.query(f"select cast(c5 as nchar(32)) as b from {self.dbname}.t1")
+        for i in range(len(data_t1_c5)):
+            tdSql.checkData( i, 0, None ) if data_t1_c5[i] is None else tdSql.checkData( i, 0, f'{data_t1_c5[i]:.6f}' )
+        tdSql.query(f"select cast(c5 as nchar) as b from {self.dbname}.t1")
+        for i in range(len(data_t1_c5)):
+            tdSql.checkData( i, 0, None ) if data_t1_c5[i] is None else tdSql.checkData( i, 0, f'{data_t1_c5[i]:.6f}' )
+        tdSql.query(f"select cast(c5 as varchar) as b from {self.dbname}.t1")
         for i in range(len(data_t1_c5)):
             tdSql.checkData( i, 0, None ) if data_t1_c5[i] is None else tdSql.checkData( i, 0, f'{data_t1_c5[i]:.6f}' )

@@ -580,6 +617,10 @@ class TDTestCase:
         ( tdSql.checkData(i, 0, '12121.233231') for i in range(tdSql.queryRows) )
         tdSql.query(f"select cast(12121.23323131 + 'test~!@`#$%^&*(){'}'}{'{'}][;><.,' as binary(2)) as b from {self.dbname}.ct4")
         ( tdSql.checkData(i, 0, '12') for i in range(tdSql.queryRows) )
+        tdSql.query(f"select cast(12121.23323131 + 'test~!@`#$%^&*(){'}'}{'{'}][;><.,' as binary) as b from {self.dbname}.ct4")
+        ( tdSql.checkData(i, 0, '12121.233231') for i in range(tdSql.queryRows) )
+        tdSql.query(f"select cast(12121.23323131 + 'test~!@`#$%^&*(){'}'}{'{'}][;><.,' as binary) as b from {self.dbname}.ct4")
+        ( tdSql.checkData(i, 0, '12') for i in range(tdSql.queryRows) )
         tdSql.query(f"select cast(12121.23323131 + 'test~!@`#$%^&*(){'}'}{'{'}][;><.,' as nchar(16)) as b from {self.dbname}.ct4")
         ( tdSql.checkData(i, 0, '12121.233231') for i in range(tdSql.queryRows) )
         tdSql.query(f"select cast(12121.23323131 + 'test~!@`#$%^&*(){'}'}{'{'}][;><.,' as nchar(2)) as b from {self.dbname}.ct4")
@@ -103,6 +103,10 @@ class TDTestCase:
         tdSql.checkRows(row)
         tdSql.query(f'select {function_name}(c1),sum(c1) from {self.stbname} partition by tbname')
         tdSql.checkRows(row)
+        tdSql.query(f'select t0, {function_name}(c1),sum(c1) from {self.stbname} partition by tbname')
+        tdSql.checkRows(row)
+        tdSql.query(f'select cast(t0 as binary(12)), {function_name}(c1),sum(c1) from {self.stbname} partition by tbname')
+        tdSql.checkRows(row)
         tdSql.query(f'select {function_name}(c1),sum(c1) from {self.stbname} partition by c1')
         tdSql.checkRows(0)
         tdSql.query(f'select {function_name}(c1),sum(c1) from {self.stbname} partition by t0')
@@ -470,7 +470,9 @@ class TDTestCase:
         tdSql.checkRows(40)

         # bug need fix
-        tdSql.query("select tbname , csum(c1), csum(c12) from db.stb1 partition by tbname")
+        tdSql.query("select tbname , st1, csum(c1), csum(c12) from db.stb1 partition by tbname")
+        tdSql.checkRows(40)
+        tdSql.query("select tbname , cast(st1 as binary(24)), csum(c1), csum(c12) from db.stb1 partition by tbname")
         tdSql.checkRows(40)
         tdSql.query("select tbname , csum(st1) from db.stb1 partition by tbname")
         tdSql.checkRows(70)
@@ -91,15 +91,71 @@ class TDTestCase:
         tdSql.query(f"select t2, t3, c1, count(*) from {self.dbname}.{self.stable} {keyword} by t2, t3, c1 ")
         tdSql.checkRows(nonempty_tb_num * self.row_nums)

+    def test_groupby_sub_table(self):
+        for i in range(self.tb_nums):
+            tbname = f"{self.dbname}.sub_{self.stable}_{i}"
+            ts = self.ts + i*10000
+            tdSql.query(f"select t1, t2, t3,count(*) from {tbname}")
+            tdSql.checkRows(1)
+            tdSql.checkData(0, 1, i)
+            tdSql.checkData(0, 2, i*10)
+
+            tdSql.query(f"select cast(t2 as binary(12)),count(*) from {tbname}")
+            tdSql.checkRows(1)
+            tdSql.checkData(0, 0, i)
+
+            tdSql.query(f"select t2 + 1, count(*) from {tbname}")
+            tdSql.checkRows(1)
+            tdSql.checkData(0, 0, i + 1)
+
+            tdSql.query(f"select t1, t2, t3, count(*) from {tbname} group by tbname")
+            tdSql.checkRows(1)
+            tdSql.checkData(0, 1, i)
+            tdSql.checkData(0, 2, i*10)
+
+            tdSql.query(f"select t1, t2, t3, count(*) from {tbname} group by tbname, c1, t4")
+            tdSql.checkData(0, 1, i)
+            tdSql.checkData(0, 2, i*10)
+
+            tdSql.query(f"select t1, t2, t3, count(*) from {tbname} partition by tbname")
+            tdSql.checkRows(1)
+            tdSql.checkData(0, 1, i)
+            tdSql.checkData(0, 2, i*10)
+
+            tdSql.query(f"select t1, t2, t3, count(*) from {tbname} partition by c1, tbname")
+            tdSql.checkData(0, 1, i)
+            tdSql.checkData(0, 2, i*10)
+
+        tdSql.query(f"select t1, t2, t3, count(*) from {self.dbname}.{self.stable} partition by c1, tbname order by tbname desc")
+        tdSql.checkRows(50)
+        tdSql.checkData(0, 1, 4)
+        tdSql.checkData(0, 2, 40)
+
     def test_multi_group_key(self, check_num, nonempty_tb_num):
         # multi tag/tbname
         tdSql.query(f"select t2, t3, tbname, count(*) from {self.dbname}.{self.stable} group by t2, t3, tbname")
         tdSql.checkRows(check_num)
+
+        tdSql.query(f"select cast(t2 as binary(12)), count(*) from {self.dbname}.{self.stable} group by t2, t3, tbname")
+        tdSql.checkRows(check_num)
+
         tdSql.query(f"select t2, t3, tbname, count(*) from {self.dbname}.{self.stable} partition by t2, t3, tbname")
         tdSql.checkRows(check_num)
+
+        tdSql.query(f"select t2, t3, tbname, count(*) from {self.dbname}.{self.stable} group by tbname order by tbname asc")
+        tdSql.checkRows(check_num)
+        tdSql.checkData(0, 0, 0)
+        tdSql.checkData(1, 0, 1)
+        tdSql.checkData(2, 1, 20)
+        tdSql.checkData(3, 1, 30)
+
+        tdSql.query(f"select t2, t3, tbname, count(*) from {self.dbname}.{self.stable} partition by tbname order by tbname asc")
+        tdSql.checkRows(check_num)
+        tdSql.checkData(0, 0, 0)
+        tdSql.checkData(2, 1, 20)
+        tdSql.checkData(3, 1, 30)
+
         # multi tag + col
         tdSql.query(f"select t1, t2, c1, count(*) from {self.dbname}.{self.stable} partition by t1, t2, c1 ")
         tdSql.checkRows(nonempty_tb_num * self.row_nums)
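The expected values in the subtable checks above follow from the test's tag layout. A small sketch of that arithmetic, under the assumption (reconstructed from the checks, not stated in the diff) that subtable `i` carries tags `t2 = i` and `t3 = i * 10` and each of 5 non-empty subtables holds 10 rows:

```python
# Hypothetical reconstruction of the test's tag layout.
tb_nums, row_nums = 5, 10
tags = {f"sub_stb_{i}": {"t2": i, "t3": i * 10} for i in range(tb_nums)}

# Assuming c1 is distinct per row, `partition by c1, tbname` yields one
# group per (row, table) pair, i.e. tb_nums * row_nums groups.
total_groups = tb_nums * row_nums
print(total_groups)  # the test expects checkRows(50)

# Ordered by tbname descending, the first group comes from the last table.
first = tags[f"sub_stb_{tb_nums - 1}"]
print(first["t2"], first["t3"])  # the test expects t2 = 4, t3 = 40
```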
@@ -222,12 +278,14 @@ class TDTestCase:

         self.test_groupby('group', self.tb_nums, nonempty_tb_num)
         self.test_groupby('partition', self.tb_nums, nonempty_tb_num)
+        self.test_groupby_sub_table()
         self.test_innerSelect(self.tb_nums)
         self.test_multi_group_key(self.tb_nums, nonempty_tb_num)
         self.test_multi_agg(self.tb_nums, nonempty_tb_num)
         self.test_window(nonempty_tb_num)
         self.test_event_window(nonempty_tb_num)


         ## test old version before changed
         # self.test_groupby('group', 0, 0)
         # self.insert_db(5, self.row_nums)
@@ -0,0 +1,130 @@
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+import random
+import string
+import sys
+import taos
+from util.common import *
+from util.log import *
+from util.cases import *
+from util.sql import *
+import numpy as np
+
+
+class TDTestCase:
+    def init(self, conn, logSql, replicaVar=1):
+        self.replicaVar = int(replicaVar)
+        tdLog.debug("start to execute %s" % __file__)
+        tdSql.init(conn.cursor())
+
+        self.rowNum = 10
+        self.tbnum = 20
+        self.ts = 1537146000000
+        self.binary_str = 'taosdata'
+        self.nchar_str = '涛思数据'
+
+    def first_check_base(self):
+        dbname = "db"
+        tdSql.prepare(dbname)
+        column_dict = {
+            'col1': 'tinyint',
+            'col2': 'smallint',
+            'col3': 'int',
+            'col4': 'bigint',
+            'col5': 'tinyint unsigned',
+            'col6': 'smallint unsigned',
+            'col7': 'int unsigned',
+            'col8': 'bigint unsigned',
+            'col9': 'float',
+            'col10': 'double',
+            'col11': 'bool',
+            'col12': 'binary(20)',
+            'col13': 'nchar(20)'
+        }
+        tdSql.execute(f"alter local \'keepColumnName\' \'1\'")
+        tdSql.execute(f'''create table {dbname}.stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 tinyint unsigned, col6 smallint unsigned,
+            col7 int unsigned, col8 bigint unsigned, col9 float, col10 double, col11 bool, col12 binary(20), col13 nchar(20)) tags(loc nchar(20))''')
+        tdSql.execute(f"create table {dbname}.stb_1 using {dbname}.stb tags('beijing')")
+        tdSql.execute(f"create table {dbname}.stb_2 using {dbname}.stb tags('beijing')")
+
+        column_list = ['col1','col2','col3','col4','col5','col6','col7','col8','col9','col10','col11','col12','col13']
+        for i in column_list:
+            for j in ['stb_1']:
+                tdSql.query(f"select first({i}) from {dbname}.{j}")
+                tdSql.checkRows(0)
+        for n in range(self.rowNum):
+            i = n
+            tdSql.execute(f"insert into {dbname}.stb_1 values(%d, %d, %d, %d, %d, %d, %d, %d, %d, %f, %f, %d, '{self.binary_str}%d', '{self.nchar_str}%d')"
+                          % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1))
+
+        for n in range(self.rowNum):
+            i = n + 10
+            tdSql.execute(f"insert into {dbname}.stb_1 values(%d, %d, %d, %d, %d, %d, %d, %d, %d, %f, %f, %d, '{self.binary_str}%d', '{self.nchar_str}%d')"
+                          % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1))
+
+        for n in range(self.rowNum):
+            i = n + 100
+            tdSql.execute(f"insert into {dbname}.stb_2 values(%d, %d, %d, %d, %d, %d, %d, %d, %d, %f, %f, %d, '{self.binary_str}%d', '{self.nchar_str}%d')"
+                          % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1))
+
+        for k, v in column_dict.items():
+
+            if v == 'tinyint' or v == 'smallint' or v == 'int' or v == 'bigint' or v == 'tinyint unsigned' or v == 'smallint unsigned'\
+                    or v == 'int unsigned' or v == 'bigint unsigned':
+                tdSql.query(f"select last({k})-first({k}) from {dbname}.stb")
+                tdSql.checkData(0, 0, 109)
+                tdSql.query(f"select first({k})+last({k}) from {dbname}.stb")
+                tdSql.checkData(0, 0, 111)
+                tdSql.query(f"select max({k})-first({k}) from {dbname}.stb")
+                tdSql.checkData(0, 0, 109)
+                tdSql.query(f"select max({k})-min({k}) from {dbname}.stb")
+                tdSql.checkData(0, 0, 109)
+
+                tdSql.query(f"select last({k})-first({k}) from {dbname}.stb_1")
+                tdSql.checkData(0, 0, 19)
+                tdSql.query(f"select first({k})+last({k}) from {dbname}.stb_1")
+                tdSql.checkData(0, 0, 21)
+                tdSql.query(f"select max({k})-first({k}) from {dbname}.stb_1")
+                tdSql.checkData(0, 0, 19)
+                tdSql.query(f"select max({k})-min({k}) from {dbname}.stb_1")
+                tdSql.checkData(0, 0, 19)
+
+            # float, double
+            elif v == 'float' or v == 'double':
+                tdSql.query(f"select first({k})+last({k}) from {dbname}.stb")
+                tdSql.checkData(0, 0, 109.2)
+                tdSql.query(f"select first({k})+last({k}) from {dbname}.stb_1")
+                tdSql.checkData(0, 0, 19.2)
+            # bool
+            elif v == 'bool':
+                continue
+            # binary
+            elif 'binary' in v:
+                continue
+            # nchar
+            elif 'nchar' in v:
+                continue
+
+        #tdSql.execute(f'drop database {dbname}')
+
+    def run(self):
+        self.first_check_base()
+
+    def stop(self):
+        tdSql.close()
+        tdLog.success("%s successfully executed" % __file__)
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
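The constants checked in the new file above (109, 111, 19, 21, 109.2, 19.2) fall out of the inserted series: `stb_1` holds values `i + 1` for `i` in 0..19, `stb_2` for `i` in 100..109, and the float columns hold `i + 0.1`. A sketch that recomputes them from the same insert loops:

```python
# Recompute the expected first/last aggregates from the insert pattern.
stb_1 = [i + 1 for i in range(0, 20)]       # 1..20, ordered by timestamp
stb_2 = [i + 1 for i in range(100, 110)]    # 101..110
stb = stb_1 + stb_2                          # the super table sees both

print(stb[-1] - stb[0])        # last - first over stb   -> 109
print(stb[0] + stb[-1])        # first + last over stb   -> 111
print(stb_1[-1] - stb_1[0])    # last - first over stb_1 -> 19
print(stb_1[0] + stb_1[-1])    # first + last over stb_1 -> 21

floats = [i + 0.1 for i in range(0, 20)] + [i + 0.1 for i in range(100, 110)]
print(round(floats[0] + floats[-1], 1))  # first + last float -> 109.2
```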
@@ -29,9 +29,11 @@ class TDTestCase:
         tdSql.execute(f'use db_stmt')

         tdSql.query("select ts,k from st")
-        tdSql.checkRows(2)
+        tdSql.checkRows(self.expected_affected_rows)

         tdSql.execute(f'create topic t_unorder_data as select ts,k from st')
+        tdSql.execute(f'create topic t_unorder_data_none as select i,k from st')

         consumer_dict = {
             "group.id": "g1",
             "td.connect.user": "root",
@@ -41,7 +43,7 @@ class TDTestCase:
         consumer = Consumer(consumer_dict)

         try:
-            consumer.subscribe(["t_unorder_data"])
+            consumer.subscribe(["t_unorder_data", "t_unorder_data_none"])
         except TmqError:
             tdLog.exit(f"subscribe error")

@@ -51,18 +53,15 @@ class TDTestCase:
             res = consumer.poll(1)
             print(res)
             if not res:
-                if cnt == 0:
+                if cnt == 0 or cnt != 2*self.expected_affected_rows:
                     tdLog.exit("consume error")
                 break
             val = res.value()
             if val is None:
                 continue
             for block in val:
+                print(block.fetchall(),len(block.fetchall()))
                 cnt += len(block.fetchall())

-            if cnt != 2:
-                tdLog.exit("consume error")
-
         finally:
             consumer.close()

@@ -110,20 +109,32 @@ class TDTestCase:
         params = new_multi_binds(2)
         params[0].timestamp((1626861392589, 1626861392590))
         params[1].int([3, None])

         # print(type(stmt))
         tdLog.debug("bind_param_batch start")
         stmt.bind_param_batch(params)

         tdLog.debug("bind_param_batch end")
         stmt.execute()
         tdLog.debug("execute end")
+        conn.execute("flush database %s" % dbname)
+
+        params1 = new_multi_binds(2)
+        params1[0].timestamp((1626861392587,1626861392586))
+        params1[1].int([None,3])
+        stmt.bind_param_batch(params1)
+        stmt.execute()
+
         end = datetime.now()
         print("elapsed time: ", end - start)
-        assert stmt.affected_rows == 2
+        print(stmt.affected_rows)
+        self.expected_affected_rows = 4
+        if stmt.affected_rows != self.expected_affected_rows:
+            tdLog.exit("affected_rows error")
         tdLog.debug("close start")

         stmt.close()

         # conn.execute("drop database if exists %s" % dbname)
         conn.close()
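The replaced assertion above tracks why `affected_rows` becomes 4: the test now executes two bind batches of two rows each, and the consume loop expects twice that total because it subscribes two topics over the same table. A sketch of that accounting (timestamps copied from the diff; the structure is otherwise illustrative):

```python
# Two parameter batches of (timestamp, int) rows, as bound in the diff above.
batches = [
    [(1626861392589, 3), (1626861392590, None)],
    [(1626861392587, None), (1626861392586, 3)],
]

# Each stmt.execute() reports len(batch) affected rows.
expected_affected_rows = sum(len(b) for b in batches)
print(expected_affected_rows)   # -> 4, checked against stmt.affected_rows

# Two topics (t_unorder_data, t_unorder_data_none) replay the same table,
# so the consumer should see 2 * expected_affected_rows rows in total.
print(2 * expected_affected_rows)  # -> 8, the consume-loop threshold
```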
@@ -96,9 +96,28 @@ class TDTestCase:
         time.sleep(2)
         tdSql.query("select * from sta")
         tdSql.checkRows(3)
+        tdSql.query("select tbname from sta order by tbname")
+        if not tdSql.getData(0, 0).startswith('new-t1_1.d1.sta_'):
+            tdLog.exit("error1")
+
+        if not tdSql.getData(1, 0).startswith('new-t2_1.d1.sta_'):
+            tdLog.exit("error2")
+
+        if not tdSql.getData(2, 0).startswith('new-t3_1.d1.sta_'):
+            tdLog.exit("error3")
+
         tdSql.query("select * from stb")
         tdSql.checkRows(3)
+        tdSql.query("select tbname from stb order by tbname")
+        if not tdSql.getData(0, 0).startswith('new-t1_1.d1.stb_'):
+            tdLog.exit("error4")
+
+        if not tdSql.getData(1, 0).startswith('new-t2_1.d1.stb_'):
+            tdLog.exit("error5")
+
+        if not tdSql.getData(2, 0).startswith('new-t3_1.d1.stb_'):
+            tdLog.exit("error6")
+
     # run
     def run(self):
         self.case1()
@ -1,243 +0,0 @@
|
||||||
# 写一段python代码,生成一个JSON串,json 串为数组,数组长度为10000,每个元素为包含4000个key-value对的JSON字符串,json 数组里每个元素里的4000个key不相同,元素之间使用相同的key,key值为英文单词,value 为int值,且value 的范围是[0, 256]。把json串紧凑形式写入文件,把json串存入parquet文件中,把json串写入avro文件中,把json串写入到postgre sql表中,表有两列第一列主int类型主键,第二列为json类型,数组的每个元素写入json类型里
|
|
||||||
import csv
|
|
||||||
import json
|
|
||||||
import os
|
|
||||||
import random
|
|
||||||
import string
|
|
||||||
import time
|
|
||||||
|
|
||||||
from faker import Faker
|
|
||||||
import pandas as pd
|
|
||||||
import pyarrow as pa
|
|
||||||
import pyarrow.parquet as pq
|
|
||||||
import fastavro
|
|
||||||
import psycopg2
|
|
||||||
from psycopg2.extras import Json
|
|
||||||
|
|
||||||
|
|
||||||
def get_dir_size(start_path='.'):
|
|
||||||
total = 0
|
|
||||||
for dirpath, dirs, files in os.walk(start_path):
|
|
||||||
for f in files:
|
|
||||||
fp = os.path.join(dirpath, f)
|
|
||||||
# 获取文件大小并累加到total上
|
|
||||||
total += os.path.getsize(fp)
|
|
||||||
return total
|
|
||||||
|
|
||||||
|
|
||||||
def to_avro_record(obj):
|
|
||||||
return {key: value for key, value in obj.items()}
|
|
||||||
|
|
||||||
|
|
||||||
def generate_random_string(length):
|
|
||||||
return ''.join(random.choices(string.ascii_letters + string.digits, k=length))
|
|
||||||
|
|
||||||
|
|
||||||
def generate_random_values(t):
|
|
||||||
if t == 0:
|
|
||||||
return random.randint(-255, 256)
|
|
||||||
elif t == 1:
|
|
||||||
return random.randint(-2100000000, 2100000000)
|
|
||||||
elif t == 2:
|
|
||||||
return random.uniform(-10000.0, 10000.0)
|
|
||||||
elif t == 3:
|
|
||||||
return generate_random_string(10)
|
|
||||||
elif t == 4:
|
|
||||||
return random.choice([True, False])
|
|
||||||
|
|
||||||
|
|
||||||
def generate_json_object(key_set, value_set):
|
|
||||||
values = [generate_random_values(t) for t in value_set]
|
|
||||||
return dict(zip(key_set, values))
|
|
||||||
|
|
||||||
|
|
||||||
def generate_json_array(keys, values, array_length):
|
|
||||||
return [generate_json_object(keys, values) for _ in range(array_length)]
|
|
||||||
|
|
||||||
|
|
||||||
def write_parquet_file(parquet_file, json_array):
|
|
||||||
df = pd.DataFrame(json_array)
|
|
||||||
table = pa.Table.from_pandas(df)
|
|
||||||
pq.write_table(table, parquet_file + ".parquet")
|
|
||||||
|
|
||||||
|
|
||||||
def write_json_file(json_file, json_array):
|
|
||||||
with open(json_file + ".json", 'w') as f:
|
|
||||||
json.dump(json_array, f, separators=(',', ':'))
|
|
||||||
|
|
||||||
|
|
||||||
def generate_avro_schema(k, t):
|
|
||||||
if t == 0:
|
|
||||||
return {"name": k, "type": "int", "logicalType": "int"}
|
|
||||||
elif t == 1:
|
|
||||||
return {"name": k, "type": "int", "logicalType": "int"}
|
|
||||||
elif t == 2:
|
|
||||||
return {"name": k, "type": "float"}
|
|
||||||
elif t == 3:
|
|
||||||
return {"name": k, "type": "string"}
|
|
||||||
elif t == 4:
|
|
||||||
return {"name": k, "type": "boolean"}
|
|
||||||
|
|
||||||
|
|
||||||
def write_avro_file(avro_file, json_array, keys, values):
    k = list(json_array[0].keys())

    if keys != k:
        raise ValueError("keys must match the keys of the JSON records")

    avro_schema = {
        "type": "record",
        "name": "MyRecord",
        "fields": [generate_avro_schema(k, v) for k, v in dict(zip(keys, values)).items()]
    }

    # Each JSON object is already a dict, which fastavro accepts as a record.
    with open(avro_file + ".avro", 'wb') as f:
        fastavro.writer(f, fastavro.parse_schema(avro_schema), json_array)

def write_pg_file(json_array):
    conn_str = "dbname=mydatabase user=myuser host=localhost"
    conn = psycopg2.connect(conn_str)
    cur = conn.cursor()

    cur.execute("drop table if exists my_table")
    conn.commit()

    # Create the table if it does not exist
    cur.execute("""
        CREATE TABLE IF NOT EXISTS my_table (
            id SERIAL PRIMARY KEY,
            json_data JSONB
        );
    """)
    conn.commit()

    # Count and print the rows before inserting
    cur.execute("SELECT count(*) FROM my_table")
    rows = cur.fetchall()
    for row in rows:
        print("rows before:", row[0])

    # Insert the data, one JSON object per row
    for idx, json_obj in enumerate(json_array):
        cur.execute("INSERT INTO my_table (json_data) VALUES (%s)", (json.dumps(json_obj),))

    conn.commit()  # commit the transaction

    # Count and print the rows after inserting
    cur.execute("SELECT count(*) FROM my_table")
    rows = cur.fetchall()
    for row in rows:
        print("rows after:", row[0])

    # # Query the on-disk size of the table
    # cur.execute("SELECT pg_relation_size('my_table')")
    # rows = cur.fetchall()
    # size = 0
    # for row in rows:
    #     size = row[0]
    #     print("table size:", row[0])

    # Close the cursor and connection
    cur.close()
    conn.close()

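The PostgreSQL path above follows a simple row-per-JSON-object pattern: recreate the table, insert each object through a parameterized statement, and count rows before and after. The same pattern can be sketched against stdlib `sqlite3` (a hypothetical stand-in using a TEXT column instead of JSONB, so it runs without a server):

```python
import json
import sqlite3

# Row-per-JSON-object storage, as in write_pg_file, but on sqlite3.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, json_data TEXT)")

json_array = [{"k": i, "v": i * i} for i in range(10)]
for obj in json_array:
    # Parameterized insert: the driver handles quoting/escaping.
    cur.execute("INSERT INTO my_table (json_data) VALUES (?)", (json.dumps(obj),))
conn.commit()

cur.execute("SELECT count(*) FROM my_table")
count = cur.fetchone()[0]
print("rows after:", count)
conn.close()
```

Note the placeholder syntax differs by driver: `%s` for psycopg2, `?` for sqlite3; in both cases the values tuple keeps the JSON text out of the SQL string itself.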
def read_parquet_file(parquet_file):
    table = pq.read_table(parquet_file + ".parquet")
    df = table.to_pandas()
    print(df)

def read_avro_file(avro_file):
    with open(avro_file + ".avro", 'rb') as f:
        reader = fastavro.reader(f)
        # Iterate inside the with-block: the reader streams from the open file.
        for record in reader:
            print(record)

def read_json_file(json_file):
    with open(json_file + ".json", 'r') as f:
        data = json.load(f)
    print(data)

def main():
    key_length = 7
    key_sizes = 4000
    row_sizes = 10000
    file_name = "output"

    # cases = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (0, 4)]
    cases = [(2, 2), (3, 3), (0, 4)]

    for data in cases:
        begin, end = data
        print(f"value type range: {begin}-{end}")

        N = 2
        for _ in range(N):
            t0 = time.time()

            keys = [generate_random_string(key_length) for _ in range(key_sizes)]
            values = [random.randint(begin, end) for _ in range(key_sizes)]
            # Generate the JSON array
            json_array = generate_json_array(keys, values, row_sizes)

            t1 = time.time()

            write_json_file(file_name, json_array)

            t2 = time.time()

            write_parquet_file(file_name, json_array)

            t3 = time.time()

            write_avro_file(file_name, json_array, keys, values)

            t4 = time.time()

            write_pg_file(json_array)

            t5 = time.time()

            print("json generate+write time:", t2 - t0, "file size:", os.path.getsize(file_name + ".json"))
            print("parquet write time:", t3 - t2, "file size:", os.path.getsize(file_name + ".parquet"))
            print("avro write time:", t4 - t3, "file size:", os.path.getsize(file_name + ".avro"))
            print("pg json write time:", t5 - t4, "data size:", get_dir_size("/opt/homebrew/var/postgresql@14/base/16385") - 8 * 1024 * 1024)

            # read_json_file(file_name)
            # read_parquet_file(file_name)
            # read_avro_file(file_name)
            print("\n---------------\n")


if __name__ == "__main__":
    main()

# Compress the output files with LZ4
# import os
#
# import lz4.frame
#
#
# files = ["output.json", "output.parquet", "output.avro"]
# def compress_file(input_path, output_path):
#     with open(input_path, 'rb') as f_in:
#         compressed_data = lz4.frame.compress(f_in.read())
#
#     with open(output_path, 'wb') as f_out:
#         f_out.write(compressed_data)
#
# for file in files:
#     compress_file(file, file + ".lz4")
#     print(file, "origin size:", os.path.getsize(file), " after lz4 size:", os.path.getsize(file + ".lz4"))
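The commented-out LZ4 pass compares raw and compressed sizes of each output file. The same comparison can be sketched with stdlib `zlib` (no third-party dependency), which also illustrates why this benchmark's data compresses so well: every row repeats the same key strings.

```python
import json
import zlib

# Repetitive JSON: identical keys (and near-identical values) on every row.
rows = [{"sensor_id": i % 4, "reading": 23.5} for i in range(1000)]
raw = json.dumps(rows, separators=(',', ':')).encode()
packed = zlib.compress(raw)

print("origin size:", len(raw), "compressed size:", len(packed))
assert len(packed) < len(raw)
```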
@@ -18,7 +18,7 @@ IF (TD_WEBSOCKET)
         COMMAND git clean -f -d
     BUILD_COMMAND
         COMMAND cargo update
-        COMMAND RUSTFLAGS=-Ctarget-feature=-crt-static cargo build --release -p taos-ws-sys --features native-tls
+        COMMAND RUSTFLAGS=-Ctarget-feature=-crt-static cargo build --release -p taos-ws-sys --features rustls
     INSTALL_COMMAND
         COMMAND cp target/release/${websocket_lib_file} ${CMAKE_BINARY_DIR}/build/lib
         COMMAND cmake -E make_directory ${CMAKE_BINARY_DIR}/build/include

@@ -37,7 +37,7 @@ IF (TD_WEBSOCKET)
         COMMAND git clean -f -d
     BUILD_COMMAND
         COMMAND cargo update
-        COMMAND cargo build --release -p taos-ws-sys --features native-tls-vendored
+        COMMAND cargo build --release -p taos-ws-sys --features rustls
     INSTALL_COMMAND
         COMMAND cp target/release/taosws.dll ${CMAKE_BINARY_DIR}/build/lib
         COMMAND cp target/release/taosws.dll.lib ${CMAKE_BINARY_DIR}/build/lib/taosws.lib

@@ -57,7 +57,7 @@ IF (TD_WEBSOCKET)
         COMMAND git clean -f -d
     BUILD_COMMAND
         COMMAND cargo update
-        COMMAND cargo build --release -p taos-ws-sys --features native-tls-vendored
+        COMMAND cargo build --release -p taos-ws-sys --features rustls
     INSTALL_COMMAND
         COMMAND cp target/release/${websocket_lib_file} ${CMAKE_BINARY_DIR}/build/lib
         COMMAND cmake -E make_directory ${CMAKE_BINARY_DIR}/build/include