@PublicEvolving
public class FlinkKafkaProducer010<T>
extends org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer09<T>
| Modifier and Type | Class and Description |
|---|---|
| static class | FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> Deprecated. This class is deprecated since the factory methods writeToKafkaWithTimestamps for the producer are also deprecated. |
| Constructor and Description |
|---|
| FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner) Deprecated. This is a deprecated constructor that does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
| FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig) Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic. |
| FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner) Deprecated. This is deprecated since it does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
| FlinkKafkaProducer010(String brokerList, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema) Creates a FlinkKafkaProducer for a given topic. |
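As a sketch of how these constructors are typically used (assuming a Flink 1.x job with the flink-connector-kafka-0.10 dependency on the classpath; the topic name and broker address are placeholders):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;

public class ProducerExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties producerConfig = new Properties();
        // 'bootstrap.servers' is the only required producer property.
        producerConfig.setProperty("bootstrap.servers", "localhost:9092");

        DataStream<String> stream = env.fromElements("a", "b", "c");

        // Key-less SerializationSchema variant; with no custom partitioner,
        // the default FlinkFixedPartitioner maps each sink subtask to a
        // single Kafka partition.
        stream.addSink(new FlinkKafkaProducer010<>(
                "my-topic", new SimpleStringSchema(), producerConfig));

        env.execute("kafka-producer-example");
    }
}
```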
| Modifier and Type | Method and Description |
|---|---|
| void | invoke(T value, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) |
| void | setWriteTimestampToKafka(boolean writeTimestampToKafka) If set to true, Flink will write the (event time) timestamp attached to each record into Kafka. |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig) |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner) |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner) Deprecated. This is deprecated since it does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig) |
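The recommended replacement for the deprecated writeToKafkaWithTimestamps factory methods is to add the producer as a regular sink and enable timestamp writing via setWriteTimestampToKafka. A minimal sketch (topic name and broker address are placeholders):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;

public class TimestampSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<String> stream = env.fromElements("a", "b", "c");

        Properties producerConfig = new Properties();
        producerConfig.setProperty("bootstrap.servers", "localhost:9092");

        FlinkKafkaProducer010<String> producer = new FlinkKafkaProducer010<>(
                "my-topic", new SimpleStringSchema(), producerConfig);

        // Write each record's attached (event time) timestamp into Kafka;
        // together with addSink, this replaces the deprecated
        // writeToKafkaWithTimestamps(...) factory methods.
        producer.setWriteTimestampToKafka(true);

        stream.addSink(producer);
        env.execute("timestamp-sink-example");
    }
}
```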
Methods inherited from class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase:
checkErroneous, close, getKafkaProducer, getPartitionsByTopic, getPropertiesFromBrokerList, initializeState, numPendingRecords, open, setFlushOnCheckpoint, setLogFailuresOnly, snapshotState

Methods inherited from class org.apache.flink.api.common.functions.AbstractRichFunction:
getIterationRuntimeContext, getRuntimeContext, setRuntimeContext

public FlinkKafkaProducer010(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema)
Using this constructor, the default FlinkFixedPartitioner will be used as
the partitioner. This default partitioner maps each sink subtask to a single Kafka
partition (i.e. all records received by a sink subtask will end up in the same
Kafka partition).
To use a custom partitioner, please use
FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.

public FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig)
Using this constructor, the default FlinkFixedPartitioner will be used as
the partitioner. This default partitioner maps each sink subtask to a single Kafka
partition (i.e. all records received by a sink subtask will end up in the same
Kafka partition).
To use a custom partitioner, please use
FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig, @Nullable org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
Creates a FlinkKafkaProducer for a given topic, accepting a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.
Since a key-less SerializationSchema is used, all records sent to Kafka will not have an
attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka
partitions in a round-robin fashion.
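A custom partitioner can replace the round-robin behavior. A minimal sketch, assuming the FlinkKafkaPartitioner.partition contract shown below (the class name HashPartitioner is illustrative only):

```java
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

// Illustrative partitioner: routes each record to a partition derived
// from the record's hash code instead of round-robin.
public class HashPartitioner<T> extends FlinkKafkaPartitioner<T> {
    @Override
    public int partition(T record, byte[] key, byte[] value,
                         String targetTopic, int[] partitions) {
        // `partitions` holds the partition IDs of the target topic;
        // floorMod avoids a negative index for negative hash codes.
        return partitions[Math.floorMod(record.hashCode(), partitions.length)];
    }
}
```

An instance would then be passed as the customPartitioner argument of this constructor.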
topicId - The topic to write data to
serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If set to null, records will be distributed to Kafka partitions in a round-robin fashion.

public FlinkKafkaProducer010(String brokerList, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema)
Using this constructor, the default FlinkFixedPartitioner will be used as
the partitioner. This default partitioner maps each sink subtask to a single Kafka
partition (i.e. all records received by a sink subtask will end up in the same
Kafka partition).
To use a custom partitioner, please use
FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead.
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages

public FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)
Using this constructor, the default FlinkFixedPartitioner will be used as
the partitioner. This default partitioner maps each sink subtask to a single Kafka
partition (i.e. all records received by a sink subtask will end up in the same
Kafka partition).
To use a custom partitioner, please use
FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) instead.
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, @Nullable org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
Creates a FlinkKafkaProducer for a given topic, accepting a KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.
If a partitioner is not provided, written records will be partitioned by the attached key of each
record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not
have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they
will be distributed to Kafka partitions in a round-robin fashion.
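The key-based partitioning described above depends on what serializeKey returns. A hedged sketch of a KeyedSerializationSchema implementation (the class name and the "key:value" input format are illustrative assumptions, not part of the API):

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

// Illustrative schema for strings of the form "key:value".
public class ColonSeparatedSchema implements KeyedSerializationSchema<String> {
    @Override
    public byte[] serializeKey(String element) {
        int idx = element.indexOf(':');
        // Returning null means the record has no key; such records fall
        // back to round-robin distribution when no partitioner is set.
        return idx < 0
                ? null
                : element.substring(0, idx).getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] serializeValue(String element) {
        int idx = element.indexOf(':');
        return (idx < 0 ? element : element.substring(idx + 1))
                .getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public String getTargetTopic(String element) {
        return null; // null: use the topic passed to the producer
    }
}
```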
topicId - The topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If set to null, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.

@Deprecated public FlinkKafkaProducer010(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner)
Deprecated. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
topicId - The topic to write data to
serializationSchema - A (keyless) serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions (when passing null, we'll use Kafka's partitioner)

@Deprecated public FlinkKafkaProducer010(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner)
Deprecated. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
This constructor does not allow writing timestamps to Kafka; it follows approach (a) (see above).
public void setWriteTimestampToKafka(boolean writeTimestampToKafka)
writeTimestampToKafka - Flag indicating if Flink's internal timestamps are written to Kafka.

@Deprecated public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)
Deprecated. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties) and call setWriteTimestampToKafka(boolean).
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
inStream - The stream to write to Kafka
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.

@Deprecated public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig)
Deprecated. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties) and call setWriteTimestampToKafka(boolean).
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
inStream - The stream to write to Kafka
topicId - ID of the Kafka topic.
serializationSchema - User defined (keyless) serialization schema.
producerConfig - Properties with the producer configuration.

@Deprecated public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
Deprecated. Use FlinkKafkaProducer010(String, KeyedSerializationSchema, Properties, FlinkKafkaPartitioner) and call setWriteTimestampToKafka(boolean).
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
inStream - The stream to write to Kafka
topicId - The name of the target topic
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions.

@Deprecated public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner)
Deprecated. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
This method allows writing timestamps to Kafka; it follows approach (b) (see above).
inStream - The stream to write to Kafka
topicId - The name of the target topic
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions.

Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.