public abstract static class ClickHouseIO.Write<T>
extends org.apache.beam.sdk.transforms.PTransform<org.apache.beam.sdk.values.PCollection<T>,org.apache.beam.sdk.values.PDone>
A PTransform to write to ClickHouse.

| Constructor and Description |
|---|
| Write() |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.beam.sdk.values.PDone | expand(org.apache.beam.sdk.values.PCollection<T> input) |
| abstract org.joda.time.Duration | initialBackoff() |
| abstract @Nullable java.lang.Boolean | insertDeduplicate() |
| abstract @Nullable java.lang.Boolean | insertDistributedSync() |
| abstract @Nullable java.lang.Long | insertQuorum() |
| abstract java.lang.String | jdbcUrl() |
| abstract org.joda.time.Duration | maxCumulativeBackoff() |
| abstract long | maxInsertBlockSize() |
| abstract int | maxRetries() |
| abstract java.util.Properties | properties() |
| abstract java.lang.String | table() |
| abstract @Nullable TableSchema | tableSchema() |
| ClickHouseIO.Write<T> | withInitialBackoff(org.joda.time.Duration value): Set the initial backoff duration. |
| ClickHouseIO.Write<T> | withInsertDeduplicate(java.lang.Boolean value): For INSERT queries into a replicated table, specifies that deduplication of inserted blocks should be performed. |
| ClickHouseIO.Write<T> | withInsertDistributedSync(@Nullable java.lang.Boolean value): If enabled, an insert query into a distributed table waits until the data has been sent to all nodes in the cluster. |
| ClickHouseIO.Write<T> | withInsertQuorum(@Nullable java.lang.Long value): For INSERT queries into a replicated table, wait for the write to complete on the specified number of replicas and linearize the addition of the data. |
| ClickHouseIO.Write<T> | withMaxCumulativeBackoff(org.joda.time.Duration value): Limit the total time spent in backoff. |
| ClickHouseIO.Write<T> | withMaxInsertBlockSize(long value): The maximum block size for insertion, if we control the creation of blocks for insertion. |
| ClickHouseIO.Write<T> | withMaxRetries(int value): Maximum number of retries per insert. |
| ClickHouseIO.Write<T> | withTableSchema(@Nullable TableSchema tableSchema): Set the TableSchema. |
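The configuration methods above are chained off ClickHouseIO.write(jdbcUrl, table), which constructs a Write. A minimal usage sketch, assuming an existing Beam pipeline with a PCollection of Rows; the URL, table name, and chosen values are illustrative placeholders, not API defaults:

```java
// Assumes: PCollection<Row> rows already exists in the pipeline, and
// "jdbc:clickhouse://localhost:8123/default" / "my_table" are placeholder
// connection values for this sketch.
rows.apply(
    ClickHouseIO.<Row>write("jdbc:clickhouse://localhost:8123/default", "my_table")
        .withMaxInsertBlockSize(1_000_000L)                      // rows per insert block
        .withMaxRetries(5)                                       // retries per insert
        .withInitialBackoff(Duration.standardSeconds(5))         // org.joda.time.Duration
        .withMaxCumulativeBackoff(Duration.standardMinutes(5))   // cap on total backoff
        .withInsertDeduplicate(true));                           // dedupe replicated inserts
```

The nullable settings (insertQuorum, insertDistributedSync, insertDeduplicate) default to null, meaning the server-side setting is left untouched unless explicitly configured.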
public abstract java.lang.String jdbcUrl()
public abstract java.lang.String table()
public abstract java.util.Properties properties()
public abstract long maxInsertBlockSize()
public abstract int maxRetries()
public abstract org.joda.time.Duration maxCumulativeBackoff()
public abstract org.joda.time.Duration initialBackoff()
public abstract @Nullable TableSchema tableSchema()
public abstract @Nullable java.lang.Boolean insertDistributedSync()
public abstract @Nullable java.lang.Long insertQuorum()
public abstract @Nullable java.lang.Boolean insertDeduplicate()
public org.apache.beam.sdk.values.PDone expand(org.apache.beam.sdk.values.PCollection<T> input)
Overrides:
expand in class org.apache.beam.sdk.transforms.PTransform<org.apache.beam.sdk.values.PCollection<T>,org.apache.beam.sdk.values.PDone>

public ClickHouseIO.Write<T> withMaxInsertBlockSize(long value)
Parameters:
value - number of rows
Returns:
PTransform writing data to ClickHouse

public ClickHouseIO.Write<T> withInsertDistributedSync(@Nullable java.lang.Boolean value)
Parameters:
value - true to enable, null for server default
Returns:
PTransform writing data to ClickHouse

public ClickHouseIO.Write<T> withInsertQuorum(@Nullable java.lang.Long value)
This setting is disabled in default server settings.
Parameters:
value - number of replicas, 0 for disabling, null for server default
Returns:
PTransform writing data to ClickHouse

public ClickHouseIO.Write<T> withInsertDeduplicate(java.lang.Boolean value)
Enabled by default. Shouldn't be disabled unless your input has duplicate blocks and you don't want to deduplicate them.
Parameters:
value - true to enable, null for server default
Returns:
PTransform writing data to ClickHouse

public ClickHouseIO.Write<T> withMaxRetries(int value)
See FluentBackoff.withMaxRetries(int).
Parameters:
value - maximum number of retries
Returns:
PTransform writing data to ClickHouse

public ClickHouseIO.Write<T> withMaxCumulativeBackoff(org.joda.time.Duration value)
See FluentBackoff.withMaxCumulativeBackoff(org.joda.time.Duration).
Parameters:
value - maximum duration
Returns:
PTransform writing data to ClickHouse

public ClickHouseIO.Write<T> withInitialBackoff(org.joda.time.Duration value)
See FluentBackoff.withInitialBackoff(org.joda.time.Duration).
Parameters:
value - initial duration
Returns:
PTransform writing data to ClickHouse

public ClickHouseIO.Write<T> withTableSchema(@Nullable TableSchema tableSchema)
Parameters:
tableSchema - schema of the table into which rows are going to be inserted
Returns:
PTransform writing data to ClickHouse
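The three backoff-related settings (withInitialBackoff, withMaxRetries, withMaxCumulativeBackoff) are delegated to Beam's FluentBackoff. The following self-contained sketch illustrates the shape of such a retry schedule: waits double from the initial backoff, stop after the retry limit, and are clamped so their sum never exceeds the cumulative cap. This is an illustration only, not FluentBackoff's implementation (which, among other things, randomizes each wait); BackoffSketch and schedule are names invented for this example.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

public class BackoffSketch {
    // Illustrative schedule: exponential waits starting at `initial`,
    // at most `maxRetries` waits, totalling at most `maxCumulative`.
    static List<Duration> schedule(Duration initial, int maxRetries, Duration maxCumulative) {
        List<Duration> waits = new ArrayList<>();
        Duration next = initial;
        Duration total = Duration.ZERO;
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (total.plus(next).compareTo(maxCumulative) > 0) {
                next = maxCumulative.minus(total); // clamp to the cumulative cap
            }
            if (next.isZero() || next.isNegative()) {
                break; // cumulative budget exhausted
            }
            waits.add(next);
            total = total.plus(next);
            next = next.multipliedBy(2); // exponential growth between retries
        }
        return waits;
    }

    public static void main(String[] args) {
        // 5 retries, 1s initial backoff, 10s cumulative cap:
        // waits are 1s, 2s, 4s, then 3s (clamped), then the budget is exhausted.
        System.out.println(schedule(Duration.ofSeconds(1), 5, Duration.ofSeconds(10)));
    }
}
```

With these semantics, withMaxCumulativeBackoff bounds total insert latency even when withMaxRetries is large, which is why both knobs exist.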