@PublicEvolving
public class FlinkKafkaProducer011<IN>
extends org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>

Flink Sink to produce data into a Kafka 0.11.x topic. By default, the producer uses the
FlinkKafkaProducer011.Semantic.AT_LEAST_ONCE semantic. Before using FlinkKafkaProducer011.Semantic.EXACTLY_ONCE, please refer to Flink's
Kafka connector documentation.

| Modifier and Type | Class and Description |
|---|---|
| static class | FlinkKafkaProducer011.ContextStateSerializer TypeSerializer for FlinkKafkaProducer011.KafkaTransactionContext. |
| static class | FlinkKafkaProducer011.KafkaTransactionContext Context associated to this instance of the FlinkKafkaProducer011. |
| static class | FlinkKafkaProducer011.NextTransactionalIdHint Keeps the information required to deduce the next safe-to-use transactional id. |
| static class | FlinkKafkaProducer011.NextTransactionalIdHintSerializer TypeSerializer for FlinkKafkaProducer011.NextTransactionalIdHint. |
| static class | FlinkKafkaProducer011.Semantic Semantics that can be chosen. |
| static class | FlinkKafkaProducer011.TransactionStateSerializer TypeSerializer for KafkaTransactionState. |
Nested classes/interfaces inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction:
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.State<TXN,CONTEXT>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.StateSerializer<TXN,CONTEXT>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.StateSerializerConfigSnapshot<TXN,CONTEXT>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.StateSerializerSnapshot<TXN,CONTEXT>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.TransactionHolder<TXN>

| Modifier and Type | Field and Description |
|---|---|
| static int | DEFAULT_KAFKA_PRODUCERS_POOL_SIZE Default number of KafkaProducers in the pool. |
| static org.apache.flink.api.common.time.Time | DEFAULT_KAFKA_TRANSACTION_TIMEOUT Default value for the Kafka transaction timeout. |
| static String | KEY_DISABLE_METRICS Configuration key for disabling metrics reporting. |
| static int | SAFE_SCALE_DOWN_FACTOR Coefficient that determines the safe scale-down factor. |
| Constructor and Description |
|---|
| FlinkKafkaProducer011(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer011(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer011.Semantic semantic) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer011(String defaultTopicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<IN>> customPartitioner) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer011(String defaultTopicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer011.Semantic semantic, int kafkaProducersPoolSize) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer011(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer011(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<IN>> customPartitioner) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer011(String brokerList, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema) Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer011(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema) Creates a FlinkKafkaProducer for a given topic. |
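All of these constructors take a java.util.Properties object for the producer configuration, where 'bootstrap.servers' is the only required key. As a minimal sketch of assembling such a configuration (the broker addresses and timeout value below are hypothetical examples, not defaults):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static void main(String[] args) {
        // 'bootstrap.servers' is the only required producer property.
        Properties producerConfig = new Properties();
        producerConfig.setProperty("bootstrap.servers", "kafka-1:9092,kafka-2:9092");
        // For EXACTLY_ONCE, the transaction timeout should not exceed the broker's
        // transaction.max.timeout.ms; 15 minutes is used here only as an example.
        producerConfig.setProperty("transaction.timeout.ms", "900000");

        // The properties would then be passed to a constructor such as
        // FlinkKafkaProducer011(String topicId, SerializationSchema, Properties),
        // e.g. new FlinkKafkaProducer011<>("my-topic", schema, producerConfig).
        System.out.println(producerConfig.getProperty("bootstrap.servers"));
    }
}
```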
| Modifier and Type | Method and Description |
|---|---|
| protected void | abort(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction) |
| protected org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState | beginTransaction() |
| void | close() |
| protected void | commit(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction) |
| protected void | finishRecoveringContext(Collection<org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState> handledTransactions) |
| FlinkKafkaProducer011<IN> | ignoreFailuresAfterTransactionTimeout() Disables the propagation of exceptions thrown when committing presumably timed out Kafka transactions during recovery of the job. |
| void | initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context) |
| protected Optional<FlinkKafkaProducer011.KafkaTransactionContext> | initializeUserContext() |
| void | invoke(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction, IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) |
| void | open(org.apache.flink.configuration.Configuration configuration) Initializes the connection to Kafka. |
| protected void | preCommit(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction) |
| protected void | recoverAndAbort(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction) |
| protected void | recoverAndCommit(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction) |
| void | setLogFailuresOnly(boolean logFailuresOnly) Defines whether the producer should fail on errors, or only log them. |
| void | setWriteTimestampToKafka(boolean writeTimestampToKafka) If set to true, Flink will write the (event time) timestamp attached to each record into Kafka. |
| void | snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context) |
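The protected methods above follow the TwoPhaseCommitSinkFunction lifecycle: a transaction is begun, records are written via invoke, preCommit flushes pending data when a checkpoint barrier arrives, and commit (or abort) finalizes the transaction once the checkpoint completes (or fails). A toy, self-contained sketch of that call order follows; it is not Flink's implementation, and the Txn class and buffering logic are invented purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the two-phase-commit call order used by the sink.
public class TwoPhaseCommitSketch {
    static class Txn {
        final List<String> buffered = new ArrayList<>();
        boolean flushed;
        boolean committed;
    }

    Txn beginTransaction() { return new Txn(); }

    // Each incoming record is written inside the current transaction.
    void invoke(Txn txn, String record) { txn.buffered.add(record); }

    // Called on checkpoint: flush pending records so they survive failure.
    void preCommit(Txn txn) { txn.flushed = true; }

    // Called once the checkpoint completes: make the records visible.
    void commit(Txn txn) { txn.committed = true; }

    // Called on failure before the checkpoint completed.
    void abort(Txn txn) { txn.buffered.clear(); }

    public static void main(String[] args) {
        TwoPhaseCommitSketch sink = new TwoPhaseCommitSketch();
        Txn txn = sink.beginTransaction();
        sink.invoke(txn, "record-1");
        sink.invoke(txn, "record-2");
        sink.preCommit(txn);   // checkpoint barrier reached
        sink.commit(txn);      // checkpoint completed
        System.out.println(txn.flushed && txn.committed);  // prints "true"
    }
}
```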
Methods inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction:
currentTransaction, enableTransactionTimeoutWarnings, getUserContext, invoke, invoke, notifyCheckpointComplete, pendingTransactions, setTransactionTimeout

public static final int SAFE_SCALE_DOWN_FACTOR
If the Flink application previously failed before the first checkpoint completed, or if a new batch of
FlinkKafkaProducer011 instances is started from scratch without a clean shutdown of the previous one,
FlinkKafkaProducer011 does not know which Kafka transactionalIds were previously in use. In
that case, it plays it safe and aborts all possible transactionalIds in the range:
[0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize * SAFE_SCALE_DOWN_FACTOR)
The range of transactional ids available for use is:
[0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize)
This means that if getNumberOfParallelSubtasks() is decreased by a factor larger than
SAFE_SCALE_DOWN_FACTOR, some lingering transactions may be left behind.
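To make the range arithmetic concrete, here is a small sketch; the method names are invented for illustration, and SAFE_SCALE_DOWN_FACTOR is assumed to be 5 (its value in the 0.11 connector):

```java
public class TransactionalIdRangeSketch {
    // Assumed value of SAFE_SCALE_DOWN_FACTOR in the 0.11 connector.
    static final int SAFE_SCALE_DOWN_FACTOR = 5;

    // Exclusive upper bound of transactional ids the producer may use.
    static long usableRangeEnd(int parallelSubtasks, int kafkaProducersPoolSize) {
        return (long) parallelSubtasks * kafkaProducersPoolSize;
    }

    // Exclusive upper bound of ids aborted after an unclean restart.
    static long abortedRangeEnd(int parallelSubtasks, int kafkaProducersPoolSize) {
        return (long) parallelSubtasks * kafkaProducersPoolSize * SAFE_SCALE_DOWN_FACTOR;
    }

    public static void main(String[] args) {
        // Scaling from 100 subtasks down to 10 shrinks parallelism by a factor
        // of 10 > SAFE_SCALE_DOWN_FACTOR, so ids [250, 500) of the old usable
        // range are never aborted: lingering transactions may remain.
        long oldUsable = usableRangeEnd(100, 5);   // 500
        long newAborted = abortedRangeEnd(10, 5);  // 250
        System.out.println(oldUsable > newAborted);  // prints "true"
    }
}
```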
public static final int DEFAULT_KAFKA_PRODUCERS_POOL_SIZE
Default number of KafkaProducers in the pool. See FlinkKafkaProducer011.Semantic.EXACTLY_ONCE.

public static final org.apache.flink.api.common.time.Time DEFAULT_KAFKA_TRANSACTION_TIMEOUT
Default value for the Kafka transaction timeout.
public FlinkKafkaProducer011(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema)
Creates a FlinkKafkaProducer for a given topic.
Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined (keyless) serialization schema.

public FlinkKafkaProducer011(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig)
Creates a FlinkKafkaProducer for a given topic.
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer011(String, SerializationSchema, Properties, Optional) instead.

Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer011(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<IN>> customPartitioner)
Accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

Since a key-less SerializationSchema is used, all records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.

Parameters:
topicId - The topic to write data to
serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.

public FlinkKafkaProducer011(String brokerList, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer011(String, KeyedSerializationSchema, Properties, Optional) instead.

Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages

public FlinkKafkaProducer011(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer011(String, KeyedSerializationSchema, Properties, Optional) instead.

Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer011(String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer011.Semantic semantic)
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer011(String, KeyedSerializationSchema, Properties, Optional, Semantic, int) instead.

Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.
semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer011.Semantic).

public FlinkKafkaProducer011(String defaultTopicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<IN>> customPartitioner)
Accepts a KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.

If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.

Parameters:
defaultTopicId - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.

public FlinkKafkaProducer011(String defaultTopicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer011.Semantic semantic, int kafkaProducersPoolSize)
Accepts a KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.

If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.

Parameters:
defaultTopicId - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers.' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer011.Semantic).
kafkaProducersPoolSize - Overwrite default KafkaProducers pool size (see FlinkKafkaProducer011.Semantic.EXACTLY_ONCE).

public void setWriteTimestampToKafka(boolean writeTimestampToKafka)
If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
Parameters:
writeTimestampToKafka - Flag indicating if Flink's internal timestamps are written to Kafka.

public void setLogFailuresOnly(boolean logFailuresOnly)
Defines whether the producer should fail on errors, or only log them.
Parameters:
logFailuresOnly - The flag to indicate logging-only on exceptions.

public FlinkKafkaProducer011<IN> ignoreFailuresAfterTransactionTimeout()
Disables the propagation of exceptions thrown when committing presumably timed out Kafka transactions during recovery of the job.
Note that we use System.currentTimeMillis() to track the age of a transaction.
Moreover, only exceptions thrown during the recovery are caught, i.e., the producer will
attempt at least one commit of the transaction before giving up.
ignoreFailuresAfterTransactionTimeout in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>

public void open(org.apache.flink.configuration.Configuration configuration) throws Exception
Initializes the connection to Kafka.
open in interface org.apache.flink.api.common.functions.RichFunction
open in class org.apache.flink.api.common.functions.AbstractRichFunction
Throws:
Exception

public void invoke(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction, IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) throws FlinkKafka011Exception
invoke in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>
Throws:
FlinkKafka011Exception

public void close() throws FlinkKafka011Exception
close in interface org.apache.flink.api.common.functions.RichFunction
close in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>
Throws:
FlinkKafka011Exception

protected org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState beginTransaction() throws FlinkKafka011Exception
beginTransaction in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>
Throws:
FlinkKafka011Exception

protected void preCommit(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction) throws FlinkKafka011Exception
preCommit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>
Throws:
FlinkKafka011Exception

protected void commit(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction)
commit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>

protected void recoverAndCommit(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction)
recoverAndCommit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>

protected void abort(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction)
abort in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>

protected void recoverAndAbort(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState transaction)
recoverAndAbort in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>

public void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context) throws Exception
snapshotState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
snapshotState in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>
Throws:
Exception

public void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context) throws Exception
initializeState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
initializeState in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>
Throws:
Exception

protected Optional<FlinkKafkaProducer011.KafkaTransactionContext> initializeUserContext()
initializeUserContext in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>

protected void finishRecoveringContext(Collection<org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState> handledTransactions)
finishRecoveringContext in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.KafkaTransactionState,FlinkKafkaProducer011.KafkaTransactionContext>

Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.