@PublicEvolving
public class FlinkKafkaProducer010<T>
extends org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase<T>
| Modifier and Type | Class and Description |
|---|---|
| static class | FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> Deprecated. This class is deprecated since the factory methods writeToKafkaWithTimestamps for the producer are also deprecated. |
| Constructor and Description |
|---|
| FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig) Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic. |
| FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String brokerList, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema) Creates a FlinkKafkaProducer for a given topic. |
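A minimal usage sketch of the key-less constructor (the broker address, topic name, and the use of SimpleStringSchema are illustrative placeholders, not mandated by this API):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;

public class ProducerQuickstart {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("a", "b", "c");

        // Producer configuration; bootstrap.servers is the only required setting.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        // Key-less SerializationSchema variant: records carry no key, and the
        // default FlinkFixedPartitioner maps each sink subtask to one partition.
        stream.addSink(new FlinkKafkaProducer010<>("my-topic", new SimpleStringSchema(), props));

        env.execute("FlinkKafkaProducer010 quickstart");
    }
}
```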
| Modifier and Type | Method and Description |
|---|---|
| protected void | flush() |
| void | invoke(T value, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) |
| void | setWriteTimestampToKafka(boolean writeTimestampToKafka) If set to true, Flink will write the (event time) timestamp attached to each record into Kafka. |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig) |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner) |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig) |
Methods inherited from class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase: checkErroneous, close, getKafkaProducer, getPartitionsByTopic, getPropertiesFromBrokerList, initializeState, numPendingRecords, open, setFlushOnCheckpoint, setLogFailuresOnly, snapshotState

Methods inherited from class org.apache.flink.api.common.functions.AbstractRichFunction: getIterationRuntimeContext, getRuntimeContext, setRuntimeContext

public FlinkKafkaProducer010(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.

public FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig, @Nullable org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
Creates a FlinkKafkaProducer for a given topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.
Since a key-less SerializationSchema is used, all records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.
Parameters:
topicId - The topic to write data to
serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If set to null, records will be distributed to Kafka partitions in a round-robin fashion.
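Where a custom partitioner is wanted, a sketch like the following could be passed as the customPartitioner argument (PayloadHashPartitioner and its hash-by-payload rule are illustrative assumptions, not part of the connector):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

// Illustrative partitioner: routes each record by the hash of its payload
// rather than the default fixed subtask-to-partition mapping.
class PayloadHashPartitioner extends FlinkKafkaPartitioner<String> {
    @Override
    public int partition(String record, byte[] key, byte[] value,
                         String targetTopic, int[] partitions) {
        // 'partitions' holds the partition ids of the target topic.
        return partitions[Math.abs(record.hashCode() % partitions.length)];
    }
}

class CustomPartitionerExample {
    static void attachSink(DataStream<String> stream) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        stream.addSink(new FlinkKafkaProducer010<>(
                "my-topic", new SimpleStringSchema(), props, new PayloadHashPartitioner()));
    }
}
```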

public FlinkKafkaProducer010(String brokerList, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages

public FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, @Nullable org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
Creates a FlinkKafkaProducer for a given topic. It accepts a KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.
If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.
Parameters:
topicId - The topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If set to null, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
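A sketch of a KeyedSerializationSchema implementation that attaches keys, so the key-based partitioning described above applies (ColonSplitSchema and its split-on-':' convention are illustrative assumptions):

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

// Illustrative schema: "user42:clicked" -> key "user42", value "clicked".
class ColonSplitSchema implements KeyedSerializationSchema<String> {
    @Override
    public byte[] serializeKey(String element) {
        int i = element.indexOf(':');
        // A null key means the record is written without a key; with a null
        // partitioner such records are distributed round-robin (see above).
        return i < 0 ? null : element.substring(0, i).getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] serializeValue(String element) {
        int i = element.indexOf(':');
        return (i < 0 ? element : element.substring(i + 1)).getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public String getTargetTopic(String element) {
        return null; // null falls back to the topic given to the producer
    }
}
```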

public void setWriteTimestampToKafka(boolean writeTimestampToKafka)
If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
Parameters:
writeTimestampToKafka - Flag indicating if Flink's internal timestamps are written to Kafka.
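A sketch of enabling timestamp writing (assumes the records in the stream already carry event-time timestamps; names are placeholders):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;

class TimestampSinkExample {
    static void attachSink(DataStream<String> stream) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        FlinkKafkaProducer010<String> producer =
                new FlinkKafkaProducer010<>("my-topic", new SimpleStringSchema(), props);

        // Copy each record's attached (event time) timestamp into the
        // Kafka 0.10+ message timestamp field.
        producer.setWriteTimestampToKafka(true);

        stream.addSink(producer);
    }
}
```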

@Deprecated
public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)
Deprecated. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties) and call setWriteTimestampToKafka(boolean) instead.
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
Parameters:
inStream - The stream to write to Kafka
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.

@Deprecated
public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig)
Deprecated. Use FlinkKafkaProducer010(String, SerializationSchema, Properties) and call setWriteTimestampToKafka(boolean) instead.
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
Parameters:
inStream - The stream to write to Kafka
topicId - ID of the Kafka topic.
serializationSchema - User defined (keyless) serialization schema.
producerConfig - Properties with the producer configuration.

@Deprecated
public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
Deprecated. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) and call setWriteTimestampToKafka(boolean) instead.
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
Parameters:
inStream - The stream to write to Kafka
topicId - The name of the target topic
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions.
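A sketch of the migration these deprecation notes point to, replacing the factory method with a constructor call plus setWriteTimestampToKafka (method and variable names are illustrative):

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

class MigrationExample {
    static <T> void writeWithTimestamps(DataStream<T> stream, String topic,
                                        KeyedSerializationSchema<T> schema, Properties props) {
        // Deprecated form:
        // FlinkKafkaProducer010.writeToKafkaWithTimestamps(stream, topic, schema, props);

        // Replacement recommended by the deprecation notes:
        FlinkKafkaProducer010<T> producer = new FlinkKafkaProducer010<>(topic, schema, props);
        producer.setWriteTimestampToKafka(true);
        stream.addSink(producer);
    }
}
```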

public void invoke(T value, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) throws Exception

protected void flush()
flush in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase<T>

Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.