public class PulsarDynamicTableSink extends Object implements org.apache.flink.table.connector.sink.DynamicTableSink, SupportsWritingMetadata
Nested classes/interfaces inherited from interface org.apache.flink.table.connector.sink.DynamicTableSink: org.apache.flink.table.connector.sink.DynamicTableSink.Context, org.apache.flink.table.connector.sink.DynamicTableSink.DataStructureConverter, org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider

| Modifier and Type | Field and Description |
|---|---|
| `protected String` | `adminUrl` |
| `protected org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>>` | `encodingFormat` Sink format for encoding records to Pulsar. |
| `protected List<String>` | `metadataKeys` Metadata that is appended at the end of a physical sink row. |
| `protected org.apache.flink.table.types.DataType` | `physicalDataType` Data type to configure the format. |
| `protected Properties` | `properties` Properties for the Pulsar producer. |
| `protected String` | `serviceUrl` |
| `protected String` | `topic` The Pulsar topic to write to. |
| `protected boolean` | `useExtendField` |
| Modifier | Constructor and Description |
|---|---|
| `protected` | `PulsarDynamicTableSink(String serviceUrl, String adminUrl, String topic, org.apache.flink.table.types.DataType physicalDataType, Properties properties, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> encodingFormat)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `applyWritableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType consumedDataType)` Provides a list of metadata keys that the consumed RowData will contain as appended metadata columns which must be persisted. |
| `String` | `asSummaryString()` |
| `org.apache.flink.table.connector.sink.DynamicTableSink` | `copy()` |
| `boolean` | `equals(Object o)` |
| `org.apache.flink.table.connector.ChangelogMode` | `getChangelogMode(org.apache.flink.table.connector.ChangelogMode requestedMode)` |
| `org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider` | `getSinkRuntimeProvider(org.apache.flink.table.connector.sink.DynamicTableSink.Context context)` |
| `int` | `hashCode()` |
| `Map<String,org.apache.flink.table.types.DataType>` | `listWritableMetadata()` Returns the map of metadata keys and their corresponding data types that can be consumed by this table sink for writing. |
protected List<String> metadataKeys
protected org.apache.flink.table.types.DataType physicalDataType
protected final String topic
protected final String serviceUrl
protected final String adminUrl
protected final Properties properties
protected final boolean useExtendField
protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> encodingFormat
protected PulsarDynamicTableSink(String serviceUrl, String adminUrl, String topic, org.apache.flink.table.types.DataType physicalDataType, Properties properties, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> encodingFormat)
public org.apache.flink.table.connector.ChangelogMode getChangelogMode(org.apache.flink.table.connector.ChangelogMode requestedMode)
Specified by: getChangelogMode in interface org.apache.flink.table.connector.sink.DynamicTableSink

public org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider getSinkRuntimeProvider(org.apache.flink.table.connector.sink.DynamicTableSink.Context context)
Specified by: getSinkRuntimeProvider in interface org.apache.flink.table.connector.sink.DynamicTableSink

public org.apache.flink.table.connector.sink.DynamicTableSink copy()
Specified by: copy in interface org.apache.flink.table.connector.sink.DynamicTableSink

public String asSummaryString()
Specified by: asSummaryString in interface org.apache.flink.table.connector.sink.DynamicTableSink

public Map<String,org.apache.flink.table.types.DataType> listWritableMetadata()
Description copied from interface: SupportsWritingMetadata
The returned map will be used by the planner for validation and insertion of explicit casts
(see LogicalTypeCasts.supportsExplicitCast(LogicalType, LogicalType)) if necessary.
The iteration order of the returned map determines the order of metadata keys in the list
passed in SupportsWritingMetadata.applyWritableMetadata(List, DataType). Therefore, it might be beneficial to return a
LinkedHashMap if a strict metadata column order is required.
If a sink forwards metadata to one or more formats, we recommend the following column order for consistency:
KEY FORMAT METADATA COLUMNS + VALUE FORMAT METADATA COLUMNS + SINK METADATA COLUMNS
Metadata key names follow the same pattern as mentioned in Factory. In case of duplicate
names in format and sink keys, format keys shall have higher precedence.
Regardless of the returned DataTypes, a metadata column is always represented using
internal data structures (see RowData).
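The ordering contract described above (the iteration order of the returned map fixes the order of the key list later passed to applyWritableMetadata) can be sketched in plain java.util code. This is a hedged illustration, not Flink code: the metadata key names are hypothetical, and plain strings stand in for Flink's DataType so the sketch runs without the flink-table dependency.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MetadataOrderSketch {

    // Stand-in for listWritableMetadata(): hypothetical keys mapped to type
    // names instead of Flink DataType instances. LinkedHashMap preserves
    // insertion order, which pins the order of the appended metadata columns.
    static Map<String, String> listWritableMetadata() {
        Map<String, String> metadata = new LinkedHashMap<>();
        // Recommended order: key format, value format, then sink metadata.
        metadata.put("key.format-meta", "BYTES");
        metadata.put("value.format-meta", "BYTES");
        metadata.put("eventTime", "TIMESTAMP(3)");
        return metadata;
    }

    static List<String> appliedKeys = new ArrayList<>();

    // Stand-in for applyWritableMetadata(): the planner hands back a subset of
    // the keys above, ordered by the iteration order of the returned map.
    static void applyWritableMetadata(List<String> metadataKeys) {
        appliedKeys = metadataKeys;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>(listWritableMetadata().keySet());
        applyWritableMetadata(keys);
        // Insertion order is preserved end to end.
        System.out.println(appliedKeys);
        // Prints: [key.format-meta, value.format-meta, eventTime]
    }
}
```

Had a HashMap been returned instead, the key order seen by applyWritableMetadata would be unspecified, which is why the documentation above suggests a LinkedHashMap when a strict metadata column order is required.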
Specified by: listWritableMetadata in interface SupportsWritingMetadata

public void applyWritableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType consumedDataType)
Description copied from interface: SupportsWritingMetadata
Provides a list of metadata keys that the consumed RowData will contain as appended metadata columns which must be persisted.
Specified by: applyWritableMetadata in interface SupportsWritingMetadata
Parameters:
metadataKeys - a subset of the keys returned by SupportsWritingMetadata.listWritableMetadata(), ordered by the iteration order of the returned map
consumedDataType - the final input type of the sink

Copyright © 2019–2021 The Apache Software Foundation. All rights reserved.