public class PulsarTableSink extends Object implements org.apache.flink.table.sinks.AppendStreamTableSink<org.apache.flink.types.Row>, SupportsWritingMetadata
| Modifier and Type | Field and Description |
|---|---|
| `protected List<String>` | `metadataKeys`: Metadata that is appended at the end of a physical sink row. |
| `protected org.apache.flink.table.types.DataType` | `physicalDataType`: Data type to configure the format. |
| `protected boolean` | `useExtendField` |
| Constructor and Description |
|---|
| `PulsarTableSink(String serviceUrl, String adminUrl, org.apache.flink.table.api.TableSchema schema, String defaultTopicName, Properties properties, org.apache.flink.api.common.serialization.SerializationSchema serializationSchema)` |
| `PulsarTableSink(String adminUrl, org.apache.flink.table.api.TableSchema schema, String defaultTopicName, org.apache.pulsar.client.impl.conf.ClientConfigurationData clientConf, Properties properties, org.apache.flink.api.common.serialization.SerializationSchema serializationSchema)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `applyWritableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType consumedDataType)`: Provides a list of metadata keys that the consumed RowData will contain as appended metadata columns which must be persisted. |
| `org.apache.flink.table.sinks.TableSink<org.apache.flink.types.Row>` | `configure(String[] fieldNames, org.apache.flink.api.common.typeinfo.TypeInformation<?>[] fieldTypes)` |
| `org.apache.flink.streaming.api.datastream.DataStreamSink<?>` | `consumeDataStream(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> dataStream)` |
| `void` | `emitDataStream(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> dataStream)` |
| `String[]` | `getFieldNames()` |
| `org.apache.flink.api.common.typeinfo.TypeInformation<?>[]` | `getFieldTypes()` |
| `org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.types.Row>` | `getOutputType()` |
| `org.apache.flink.table.api.TableSchema` | `getTableSchema()` |
| `Map<String,org.apache.flink.table.types.DataType>` | `listWritableMetadata()`: Returns the map of metadata keys and their corresponding data types that can be consumed by this table sink for writing. |
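To make the `SupportsWritingMetadata` handshake in the table above concrete, the following is a minimal, self-contained sketch of the protocol: the sink advertises writable metadata via `listWritableMetadata()`, the planner picks the subset of keys the query writes, and hands them back via `applyWritableMetadata(List, DataType)`. The class, type strings, and metadata key names here are simplified placeholders for illustration only, not the real Flink interfaces or the actual Pulsar metadata keys.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the listWritableMetadata/applyWritableMetadata
// handshake between the planner and a sink.
public class MetadataHandshakeSketch {

    // Sink side: advertise writable metadata keys with their type names.
    // LinkedHashMap so the iteration order of the advertised keys is fixed.
    static Map<String, String> listWritableMetadata() {
        Map<String, String> metadata = new LinkedHashMap<>();
        metadata.put("key", "BYTES");                       // placeholder key
        metadata.put("properties", "MAP<STRING, STRING>");  // placeholder key
        metadata.put("eventTime", "TIMESTAMP(3)");          // placeholder key
        return metadata;
    }

    // Planner side: keep only the keys the query actually writes, ordered by
    // the iteration order of the advertised map (not by the request order).
    static List<String> chooseKeys(Map<String, String> writable, List<String> requested) {
        List<String> chosen = new ArrayList<>();
        for (String key : writable.keySet()) {
            if (requested.contains(key)) {
                chosen.add(key);
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        Map<String, String> writable = listWritableMetadata();
        // The query writes eventTime and key; the resulting list is ordered
        // by the advertised map, so "key" comes before "eventTime".
        List<String> chosen = chooseKeys(writable, List.of("eventTime", "key"));
        System.out.println(chosen);
    }
}
```

Note the asymmetry: the request order is irrelevant; only the advertised map's iteration order determines the final key order, which is why the interface documentation below recommends a `LinkedHashMap`.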
protected List<String> metadataKeys
protected org.apache.flink.table.types.DataType physicalDataType
protected final boolean useExtendField
public PulsarTableSink(String adminUrl, org.apache.flink.table.api.TableSchema schema, String defaultTopicName, org.apache.pulsar.client.impl.conf.ClientConfigurationData clientConf, Properties properties, org.apache.flink.api.common.serialization.SerializationSchema serializationSchema)
public PulsarTableSink(String serviceUrl, String adminUrl, org.apache.flink.table.api.TableSchema schema, String defaultTopicName, Properties properties, org.apache.flink.api.common.serialization.SerializationSchema serializationSchema)
public void emitDataStream(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> dataStream)
public org.apache.flink.streaming.api.datastream.DataStreamSink<?> consumeDataStream(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> dataStream)
consumeDataStream in interface org.apache.flink.table.sinks.StreamTableSink<org.apache.flink.types.Row>

public org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.types.Row> getOutputType()
getOutputType in interface org.apache.flink.table.sinks.TableSink<org.apache.flink.types.Row>

public org.apache.flink.table.api.TableSchema getTableSchema()
getTableSchema in interface org.apache.flink.table.sinks.TableSink<org.apache.flink.types.Row>

public String[] getFieldNames()
getFieldNames in interface org.apache.flink.table.sinks.TableSink<org.apache.flink.types.Row>

public org.apache.flink.api.common.typeinfo.TypeInformation<?>[] getFieldTypes()
getFieldTypes in interface org.apache.flink.table.sinks.TableSink<org.apache.flink.types.Row>

public org.apache.flink.table.sinks.TableSink<org.apache.flink.types.Row> configure(String[] fieldNames, org.apache.flink.api.common.typeinfo.TypeInformation<?>[] fieldTypes)
configure in interface org.apache.flink.table.sinks.TableSink<org.apache.flink.types.Row>

public Map<String,org.apache.flink.table.types.DataType> listWritableMetadata()
Description copied from interface: SupportsWritingMetadata
The returned map will be used by the planner for validation and insertion of explicit casts (see LogicalTypeCasts.supportsExplicitCast(LogicalType, LogicalType)) if necessary.
The iteration order of the returned map determines the order of metadata keys in the list passed in SupportsWritingMetadata.applyWritableMetadata(List, DataType). Therefore, it might be beneficial to return a LinkedHashMap if a strict metadata column order is required.
If a sink forwards metadata to one or more formats, we recommend the following column order for consistency: KEY FORMAT METADATA COLUMNS + VALUE FORMAT METADATA COLUMNS + SINK METADATA COLUMNS.
Metadata key names follow the same pattern as mentioned in Factory. In case of duplicate names in format and sink keys, format keys shall have higher precedence.
Regardless of the returned DataTypes, a metadata column is always represented using internal data structures (see RowData).
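The ordering and duplicate-precedence rules above can be sketched with plain collections: a `LinkedHashMap` fixes the column order, and on duplicate keys the format's entry wins over the sink's. The key names and type strings below are illustrative placeholders, not the actual Pulsar or format metadata keys.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of merging format metadata keys with sink metadata keys, keeping
// format columns first and giving format keys precedence on duplicates.
public class MetadataOrderSketch {

    static Map<String, String> mergeMetadata(Map<String, String> formatKeys,
                                             Map<String, String> sinkKeys) {
        // LinkedHashMap preserves insertion order, which later determines the
        // order of keys passed to applyWritableMetadata(List, DataType).
        Map<String, String> merged = new LinkedHashMap<>(formatKeys);
        // putIfAbsent gives format keys precedence over duplicate sink keys.
        sinkKeys.forEach(merged::putIfAbsent);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> formatKeys = new LinkedHashMap<>();
        formatKeys.put("timestamp", "TIMESTAMP(3) from format");

        Map<String, String> sinkKeys = new LinkedHashMap<>();
        sinkKeys.put("timestamp", "TIMESTAMP(3) from sink"); // duplicate: ignored
        sinkKeys.put("properties", "MAP<STRING, STRING>");

        Map<String, String> merged = mergeMetadata(formatKeys, sinkKeys);
        // Format's entry survives the collision; format columns come first.
        System.out.println(merged.get("timestamp"));
        System.out.println(merged.keySet());
    }
}
```

This mirrors the recommended layout (format columns before sink columns) while resolving name collisions in the format's favor, as the contract requires.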
listWritableMetadata in interface SupportsWritingMetadata

public void applyWritableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType consumedDataType)
Description copied from interface: SupportsWritingMetadata
Provides a list of metadata keys that the consumed RowData will contain as appended metadata columns which must be persisted.
applyWritableMetadata in interface SupportsWritingMetadata
Parameters:
metadataKeys - a subset of the keys returned by SupportsWritingMetadata.listWritableMetadata(), ordered by the iteration order of the returned map
consumedDataType - the final input type of the sink

Copyright © 2019–2021 The Apache Software Foundation. All rights reserved.