public class FlinkKafkaProducer010<T> extends StreamSink<T> implements SinkFunction<T>, org.apache.flink.api.common.functions.RichFunction
| Modifier and Type | Class and Description |
|---|---|
| static class | FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T>: Configuration object returned by the writeToKafkaWithTimestamps() call. |

Nested classes inherited from class AbstractStreamOperator: AbstractStreamOperator.CountingOutput, AbstractStreamOperator.LatencyGauge

Fields inherited from class AbstractUdfStreamOperator: userFunction

Fields inherited from class AbstractStreamOperator: chainingStrategy, config, latencyGauge, LOG, metrics, output

| Constructor and Description |
|---|
| FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig): Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner): Creates a Kafka producer. This constructor does not allow writing timestamps to Kafka; it follows approach (a) (see above). |
| FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner): Deprecated. This constructor does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
| FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig): Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner): Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner): Deprecated. This constructor does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
| FlinkKafkaProducer010(String brokerList, String topicId, KeyedSerializationSchema<T> serializationSchema): Creates a FlinkKafkaProducer for a given topic. |
| FlinkKafkaProducer010(String brokerList, String topicId, SerializationSchema<T> serializationSchema): Creates a FlinkKafkaProducer for a given topic. |
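A minimal usage sketch for the constructors above: the simplest path is the keyless three-argument constructor, passed to DataStream.addSink(). The topic name, broker address, and the choice of SimpleStringSchema are illustrative assumptions, not part of this API reference; the example assumes the Flink 1.2/1.3-era streaming and Kafka 0.10 connector dependencies are on the classpath.

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class ProducerExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("a", "b", "c");

        // 'bootstrap.servers' is the only strictly required producer property.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address

        // FlinkKafkaProducer010(String topicId, SerializationSchema, Properties):
        // attaches the producer as a regular sink (approach (a), no timestamp writing).
        stream.addSink(new FlinkKafkaProducer010<>("my-topic", new SimpleStringSchema(), props));

        env.execute("kafka-producer-example");
    }
}
```

For key/value messages, swap SimpleStringSchema for a KeyedSerializationSchema implementation and use the corresponding constructor overload.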
| Modifier and Type | Method and Description |
|---|---|
| org.apache.flink.api.common.functions.IterationRuntimeContext | getIterationRuntimeContext(): This method is used for approach (a) (see above). |
| void | invoke(T value): Invoke method for using the sink as a DataStream.addSink() sink. |
| void | open(org.apache.flink.configuration.Configuration parameters): This method is used for approach (a) (see above). |
| void | processElement(StreamRecord<T> element): Process method for using the sink with timestamp support. |
| void | setFlushOnCheckpoint(boolean flush): If set to true, the Flink producer waits on a checkpoint until all outstanding messages in the Kafka buffers have been acknowledged by the Kafka producer. |
| void | setLogFailuresOnly(boolean logFailuresOnly): Defines whether the producer should fail on errors or only log them. |
| void | setRuntimeContext(org.apache.flink.api.common.functions.RuntimeContext t): This method is used for approach (a) (see above). |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig): Creates a FlinkKafkaProducer for a given topic. |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner): Creates a FlinkKafkaProducer for a given topic. |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner): Deprecated. This method does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead. |
| static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> | writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig): Creates a FlinkKafkaProducer for a given topic. |
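The static writeToKafkaWithTimestamps() helpers are the timestamp-writing alternative to the plain constructors. A sketch of their use, assuming the returned FlinkKafkaProducer010Configuration exposes the setFlushOnCheckpoint/setLogFailuresOnly setters listed above; topic name and broker address are illustrative assumptions.

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class TimestampedProducerExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("a", "b", "c");

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address

        // Attach via the static helper so the sink can write record timestamps to Kafka;
        // the returned Configuration object is used to tune the sink's failure behavior.
        FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<String> config =
            FlinkKafkaProducer010.writeToKafkaWithTimestamps(
                stream, "my-topic", new SimpleStringSchema(), props);

        // Fail the job on send errors rather than only logging them.
        config.setLogFailuresOnly(false);
        // Block checkpoints until Kafka has acknowledged all buffered records.
        config.setFlushOnCheckpoint(true);

        env.execute("kafka-timestamped-producer-example");
    }
}
```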
Methods inherited from class StreamSink: reportOrForwardLatencyMarker

Methods inherited from class AbstractUdfStreamOperator: close, dispose, getUserFunction, getUserFunctionParameters, initializeState, notifyOfCompletedCheckpoint, open, restoreState, setOutputType, setup, snapshotState, snapshotState

Methods inherited from class AbstractStreamOperator: getChainingStrategy, getContainingTask, getCurrentKey, getExecutionConfig, getInternalTimerService, getKeyedStateBackend, getKeyedStateStore, getMetricGroup, getOperatorConfig, getOperatorName, getOperatorStateBackend, getOrCreateKeyedState, getPartitionedState, getPartitionedState, getProcessingTimeService, getRuntimeContext, getUserCodeClassloader, initializeState, numEventTimeTimers, numProcessingTimeTimers, processLatencyMarker, processLatencyMarker1, processLatencyMarker2, processWatermark, processWatermark1, processWatermark2, setChainingStrategy, setCurrentKey, setKeyContextElement1, setKeyContextElement2, snapshotLegacyOperatorState, snapshotState

Methods inherited from class Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface RichFunction: close, getRuntimeContext

Methods inherited from interface OneInputStreamOperator: processLatencyMarker, processWatermark

Methods inherited from interface StreamOperator: close, dispose, getChainingStrategy, getMetricGroup, initializeState, notifyOfCompletedCheckpoint, open, setChainingStrategy, setKeyContextElement1, setKeyContextElement2, setup, snapshotLegacyOperatorState, snapshotState

public FlinkKafkaProducer010(String brokerList, String topicId, SerializationSchema<T> serializationSchema)
Parameters:
brokerList - Comma-separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User-defined (keyless) serialization schema.

public FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig)
Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User-defined (keyless) serialization schema.
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
Parameters:
topicId - The topic to write data to
serializationSchema - A (keyless) serializable serialization schema for turning user objects into a Kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions (when passing null, Kafka's partitioner is used)

public FlinkKafkaProducer010(String brokerList, String topicId, KeyedSerializationSchema<T> serializationSchema)
Parameters:
brokerList - Comma-separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User-defined serialization schema supporting key/value messages

public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)
Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User-defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
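The overloads that take a FlinkKafkaPartitioner accept a custom, serializable partitioner. A minimal sketch, assuming FlinkKafkaPartitioner is an abstract class whose contract is a single partition(record, key, value, targetTopic, partitions) method; the class name and routing logic here are illustrative, not from this reference.

```java
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

// Illustrative sketch: route every record to the first available partition of the
// target topic. A real partitioner would derive the partition from the record or key.
public class FirstPartitionPartitioner<T> extends FlinkKafkaPartitioner<T> {
    @Override
    public int partition(T record, byte[] key, byte[] value, String targetTopic, int[] partitions) {
        // 'partitions' lists the partition IDs of 'targetTopic'; pick the first one.
        return partitions[0];
    }
}
```

An instance would be passed as the customPartitioner argument; passing null instead falls back to Kafka's own partitioner, per the parameter documentation above.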
@Deprecated
public FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner)
Deprecated. This constructor does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
topicId - The topic to write data to
serializationSchema - A (keyless) serializable serialization schema for turning user objects into a Kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions (when passing null, Kafka's partitioner is used)

@Deprecated
public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner)
Deprecated. This constructor does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.

public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)
Parameters:
inStream - The stream to write to Kafka
topicId - ID of the Kafka topic.
serializationSchema - User-defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.

public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig)
Parameters:
inStream - The stream to write to Kafka
topicId - ID of the Kafka topic.
serializationSchema - User-defined (keyless) serialization schema.
producerConfig - Properties with the producer configuration.

public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)
Parameters:
inStream - The stream to write to Kafka
topicId - The name of the target topic
serializationSchema - A serializable serialization schema for turning user objects into a Kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions.

@Deprecated
public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner)
Deprecated. This method does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
Parameters:
inStream - The stream to write to Kafka
topicId - The name of the target topic
serializationSchema - A serializable serialization schema for turning user objects into a Kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions.

public void setLogFailuresOnly(boolean logFailuresOnly)
Parameters:
logFailuresOnly - The flag to indicate logging-only on exceptions.

public void setFlushOnCheckpoint(boolean flush)
Parameters:
flush - Flag indicating the flushing mode (true = flush on checkpoint)

public void open(org.apache.flink.configuration.Configuration parameters) throws Exception
Specified by:
open in interface org.apache.flink.api.common.functions.RichFunction
Throws:
Exception

public org.apache.flink.api.common.functions.IterationRuntimeContext getIterationRuntimeContext()
Specified by:
getIterationRuntimeContext in interface org.apache.flink.api.common.functions.RichFunction

public void setRuntimeContext(org.apache.flink.api.common.functions.RuntimeContext t)
Specified by:
setRuntimeContext in interface org.apache.flink.api.common.functions.RichFunction

public void invoke(T value) throws Exception
Specified by:
invoke in interface SinkFunction<T>
Parameters:
value - The input record.
Throws:
Exception

public void processElement(StreamRecord<T> element) throws Exception
Specified by:
processElement in interface OneInputStreamOperator<T,Object>
Overrides:
processElement in class StreamSink<T>
Throws:
Exception

Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.