@PublicEvolving
public class FlinkKafkaConsumer09<T>
extends org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase<T>
The Flink Kafka Consumer participates in checkpointing and guarantees that no data is lost during a failure, and that the computation processes elements "exactly once". (Note: These guarantees naturally assume that Kafka itself does not lose any data.)
Please note that Flink snapshots the offsets internally as part of its distributed checkpoints. The offsets committed to Kafka / ZooKeeper only serve to bring the outside view of progress in sync with Flink's view of progress. That way, monitoring and other jobs can see how far the Flink Kafka consumer has consumed a topic.
Please refer to Kafka's documentation for the available configuration properties: http://kafka.apache.org/documentation.html#newconsumerconfigs
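As a minimal sketch of putting the pieces above together (the broker address, group id, and topic name are placeholders, not part of this API), a job typically enables checkpointing and adds the consumer as a source:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;

public class KafkaSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing must be enabled for the exactly-once guarantee described above.
        env.enableCheckpointing(5_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker list
        props.setProperty("group.id", "flink-example-group");     // placeholder group id

        FlinkKafkaConsumer09<String> consumer =
                new FlinkKafkaConsumer09<>("example-topic", new SimpleStringSchema(), props);

        DataStream<String> stream = env.addSource(consumer);
        stream.print();
        env.execute("Kafka 0.9 source example");
    }
}
```

This sketch requires a running Kafka 0.9 broker and the Flink runtime on the classpath; it is a usage outline, not a standalone program.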
| Modifier and Type | Field and Description |
|---|---|
| `static long` | `DEFAULT_POLL_TIMEOUT` From Kafka's Javadoc: the time, in milliseconds, spent waiting in poll if data is not available. |
| `static String` | `KEY_POLL_TIMEOUT` Configuration key to change the polling timeout. |
| `protected long` | `pollTimeout` From Kafka's Javadoc: the time, in milliseconds, spent waiting in poll if data is not available. |
| `protected Properties` | `properties` User-supplied properties for Kafka. |
| Constructor and Description |
|---|
| `FlinkKafkaConsumer09(List<String> topics, org.apache.flink.api.common.serialization.DeserializationSchema<T> deserializer, Properties props)` Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing multiple topics to the consumer. |
| `FlinkKafkaConsumer09(List<String> topics, org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema<T> deserializer, Properties props)` Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing multiple topics and a key/value deserialization schema. |
| `FlinkKafkaConsumer09(Pattern subscriptionPattern, org.apache.flink.api.common.serialization.DeserializationSchema<T> valueDeserializer, Properties props)` Creates a new Kafka streaming source consumer for Kafka 0.9.x. |
| `FlinkKafkaConsumer09(Pattern subscriptionPattern, org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema<T> deserializer, Properties props)` Creates a new Kafka streaming source consumer for Kafka 0.9.x. |
| `FlinkKafkaConsumer09(String topic, org.apache.flink.api.common.serialization.DeserializationSchema<T> valueDeserializer, Properties props)` Creates a new Kafka streaming source consumer for Kafka 0.9.x. |
| `FlinkKafkaConsumer09(String topic, org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema<T> deserializer, Properties props)` Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing a KafkaDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka. |
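Of the constructors above, the Pattern-based variants can subscribe to topics that do not yet exist when the job starts. A sketch (broker address, group id, and the topic-name pattern are placeholders) that also enables partition discovery:

```java
import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

public class PatternSubscriptionExample {
    public static FlinkKafkaConsumer09<String> buildConsumer() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker list
        props.setProperty("group.id", "pattern-example-group");   // placeholder group id
        // A non-negative interval (in milliseconds) enables discovery of new
        // topics/partitions matching the pattern as they are created on the fly.
        props.setProperty(
                FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "10000");

        return new FlinkKafkaConsumer09<>(
                Pattern.compile("metrics-.*"), // placeholder pattern for topic names
                new SimpleStringSchema(),
                props);
    }
}
```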
| Modifier and Type | Method and Description |
|---|---|
| `protected org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher<T,?>` | `createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks<T>> watermarksPeriodic, org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics)` |
| `protected org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer` | `createPartitionDiscoverer(org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor topicsDescriptor, int indexOfThisSubtask, int numParallelSubtasks)` |
| `protected Map<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition,Long>` | `fetchOffsetsWithTimestamp(Collection<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition> partitions, long timestamp)` |
| `protected boolean` | `getIsAutoCommitEnabled()` |
| `org.apache.flink.api.common.io.ratelimiting.FlinkConnectorRateLimiter` | `getRateLimiter()` |
| `void` | `setRateLimiter(org.apache.flink.api.common.io.ratelimiting.FlinkConnectorRateLimiter kafkaRateLimiter)` Set a rate limiter to rate-limit bytes read from Kafka. |
Methods inherited from class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase:
assignTimestampsAndWatermarks, assignTimestampsAndWatermarks, cancel, close, disableFilterRestoredPartitionsWithSubscribedTopics, getProducedType, initializeState, notifyCheckpointComplete, open, run, setCommitOffsetsOnCheckpoints, setStartFromEarliest, setStartFromGroupOffsets, setStartFromLatest, setStartFromSpecificOffsets, setStartFromTimestamp, snapshotState

public static final String KEY_POLL_TIMEOUT
Configuration key to change the polling timeout.

public static final long DEFAULT_POLL_TIMEOUT
From Kafka's Javadoc: the time, in milliseconds, spent waiting in poll if data is not available.

protected final Properties properties
User-supplied properties for Kafka.

protected final long pollTimeout
From Kafka's Javadoc: the time, in milliseconds, spent waiting in poll if data is not available.
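The inherited setStartFrom* and setCommitOffsetsOnCheckpoints methods listed above control where consumption begins and when offsets are committed back to Kafka. A minimal sketch (the helper method and the String element type are illustrative choices):

```java
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;

public class ConsumerStartupConfig {
    // Applies common startup settings to an already-constructed consumer.
    public static void configure(FlinkKafkaConsumer09<String> consumer) {
        // Ignore committed group offsets and read every partition from the
        // beginning (takes effect only when not restoring from a checkpoint).
        consumer.setStartFromEarliest();
        // Commit offsets back to Kafka when a checkpoint completes, so the
        // externally visible progress stays in sync with Flink's checkpoints.
        consumer.setCommitOffsetsOnCheckpoints(true);
    }
}
```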
public FlinkKafkaConsumer09(String topic, org.apache.flink.api.common.serialization.DeserializationSchema<T> valueDeserializer, Properties props)

Creates a new Kafka streaming source consumer for Kafka 0.9.x.

Parameters:
topic - The name of the topic that should be consumed.
valueDeserializer - The de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.

public FlinkKafkaConsumer09(String topic, org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema<T> deserializer, Properties props)

Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing a KafkaDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka.

Parameters:
topic - The name of the topic that should be consumed.
deserializer - The keyed de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.

public FlinkKafkaConsumer09(List<String> topics, org.apache.flink.api.common.serialization.DeserializationSchema<T> deserializer, Properties props)

Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing multiple topics to the consumer.

Parameters:
topics - The Kafka topics to read from.
deserializer - The de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties that are used to configure both the fetcher and the offset handler.

public FlinkKafkaConsumer09(List<String> topics, org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema<T> deserializer, Properties props)

Creates a new Kafka streaming source consumer for Kafka 0.9.x. This constructor allows passing multiple topics and a key/value deserialization schema.

Parameters:
topics - The Kafka topics to read from.
deserializer - The keyed de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties that are used to configure both the fetcher and the offset handler.

@PublicEvolving
public FlinkKafkaConsumer09(Pattern subscriptionPattern, org.apache.flink.api.common.serialization.DeserializationSchema<T> valueDeserializer, Properties props)

Creates a new Kafka streaming source consumer for Kafka 0.9.x. If partition discovery is enabled (by setting a non-negative value for FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS in the properties), topics with names matching the pattern will also be subscribed to as they are created on the fly.

Parameters:
subscriptionPattern - The regular expression for a pattern of topic names to subscribe to.
valueDeserializer - The de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.

@PublicEvolving
public FlinkKafkaConsumer09(Pattern subscriptionPattern, org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema<T> deserializer, Properties props)

Creates a new Kafka streaming source consumer for Kafka 0.9.x. If partition discovery is enabled (by setting a non-negative value for FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS in the properties), topics with names matching the pattern will also be subscribed to as they are created on the fly. This constructor allows passing a KafkaDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka.

Parameters:
subscriptionPattern - The regular expression for a pattern of topic names to subscribe to.
deserializer - The keyed de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.

protected org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher<T,?> createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks<T>> watermarksPeriodic, org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics) throws Exception

protected org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer createPartitionDiscoverer(org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor topicsDescriptor, int indexOfThisSubtask, int numParallelSubtasks)

Specified by: createPartitionDiscoverer in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase<T>

protected boolean getIsAutoCommitEnabled()

Specified by: getIsAutoCommitEnabled in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase<T>

protected Map<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition,Long> fetchOffsetsWithTimestamp(Collection<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition> partitions, long timestamp)

Specified by: fetchOffsetsWithTimestamp in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase<T>

public void setRateLimiter(org.apache.flink.api.common.io.ratelimiting.FlinkConnectorRateLimiter kafkaRateLimiter)

Set a rate limiter to rate-limit bytes read from Kafka.

Parameters:
kafkaRateLimiter -

public org.apache.flink.api.common.io.ratelimiting.FlinkConnectorRateLimiter getRateLimiter()
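The constructors that take a KafkaDeserializationSchema expose more than the record value. A sketch of such a schema (the class name and the Tuple3 output shape are illustrative choices, not part of this API) that surfaces topic, offset, and value for each record:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Illustrative schema: emits (topic, offset, value) for every Kafka record.
public class TopicOffsetValueSchema
        implements KafkaDeserializationSchema<Tuple3<String, Long, String>> {

    @Override
    public boolean isEndOfStream(Tuple3<String, Long, String> nextElement) {
        return false; // Kafka topics are unbounded; never end the stream.
    }

    @Override
    public Tuple3<String, Long, String> deserialize(ConsumerRecord<byte[], byte[]> record) {
        // Kafka may deliver records with a null value (e.g. tombstones).
        String value = record.value() == null
                ? null
                : new String(record.value(), StandardCharsets.UTF_8);
        return Tuple3.of(record.topic(), record.offset(), value);
    }

    @Override
    public TypeInformation<Tuple3<String, Long, String>> getProducedType() {
        return TypeInformation.of(new TypeHint<Tuple3<String, Long, String>>() {});
    }
}
```

An instance can then be passed to any of the KafkaDeserializationSchema constructors above, e.g. `new FlinkKafkaConsumer09<>("example-topic", new TopicOffsetValueSchema(), props)`.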
Copyright © 2014–2019 The Apache Software Foundation. All rights reserved.