A C D E F G I K N O S U W 

A

assignPartitions(KafkaConsumer<?, ?>, List<TopicPartition>) - Method in class org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerCallBridge010
 

C

createCallBridge() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher
This method needs to be overridden because Kafka broke binary compatibility between 0.9 and 0.10, changing binary signatures.
createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<AssignerWithPeriodicWatermarks<T>>, SerializedValue<AssignerWithPunctuatedWatermarks<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
 
createKafkaConsumer(String, Properties, DeserializationSchema<Row>) - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSource
 
createKafkaProducer(String, Properties, SerializationSchema<Row>, Optional<FlinkKafkaPartitioner<Row>>) - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSink
 
createKafkaTableSink(TableSchema, String, Properties, Optional<FlinkKafkaPartitioner<Row>>, SerializationSchema<Row>) - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
 
createKafkaTableSource(TableSchema, Optional<String>, List<RowtimeAttributeDescriptor>, Map<String, String>, String, Properties, DeserializationSchema<Row>, StartupMode, Map<KafkaTopicPartition, Long>) - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
 
createPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
 
 

D

disableChaining() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 

E

emitRecord(T, KafkaTopicPartitionState<TopicPartition>, long, ConsumerRecord<?, ?>) - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher
 

F

fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition>, long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
 
FlinkKafkaConsumer010<T> - Class in org.apache.flink.streaming.connectors.kafka
The Flink Kafka Consumer is a streaming data source that pulls a parallel data stream from Apache Kafka 0.10.x.
FlinkKafkaConsumer010(String, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x.
FlinkKafkaConsumer010(String, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x. This constructor allows passing a KafkaDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka.
FlinkKafkaConsumer010(List<String>, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x. This constructor allows passing multiple topics to the consumer.
FlinkKafkaConsumer010(List<String>, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x. This constructor allows passing multiple topics and a key/value deserialization schema.
FlinkKafkaConsumer010(Pattern, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x.
FlinkKafkaConsumer010(Pattern, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x.
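A minimal usage sketch for the single-topic constructor above, assuming a broker at localhost:9092 and a topic named "my-topic" (both hypothetical), and a Flink runtime with the Kafka 0.10 connector on the classpath:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class ConsumerExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.setProperty("group.id", "example-group");           // assumed consumer group

        // Single-topic constructor: (String topic, DeserializationSchema<T>, Properties)
        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props);

        DataStream<String> stream = env.addSource(consumer);
        stream.print();
        env.execute("Kafka 0.10 consumer example");
    }
}
```

The List<String> and Pattern overloads are used the same way, substituting a topic list or a topic-name pattern for the single topic string.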
FlinkKafkaProducer010<T> - Class in org.apache.flink.streaming.connectors.kafka
Flink Sink to produce data into a Kafka topic.
FlinkKafkaProducer010(String, String, SerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, SerializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.
FlinkKafkaProducer010(String, SerializationSchema<T>, Properties, FlinkKafkaPartitioner<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, String, KeyedSerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, KeyedSerializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, KeyedSerializationSchema<T>, Properties, FlinkKafkaPartitioner<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, SerializationSchema<T>, Properties, KafkaPartitioner<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Deprecated.
This constructor is deprecated since it does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010.FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
FlinkKafkaProducer010(String, KeyedSerializationSchema<T>, Properties, KafkaPartitioner<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Deprecated.
This is a deprecated constructor that does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010.FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.
FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
This class is deprecated since the factory methods writeToKafkaWithTimestamps for the producer are also deprecated.
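A minimal sink sketch for the non-deprecated (String, SerializationSchema, Properties) constructor above, assuming a broker at localhost:9092 and a topic "my-topic" (both hypothetical):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;

public class ProducerExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("a", "b", "c");

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address

        // (String topicId, SerializationSchema<T>, Properties) constructor
        FlinkKafkaProducer010<String> producer =
                new FlinkKafkaProducer010<>("my-topic", new SimpleStringSchema(), props);
        producer.setWriteTimestampToKafka(true); // write each record's event-time timestamp to Kafka

        stream.addSink(producer);
        env.execute("Kafka 0.10 producer example");
    }
}
```

Attaching the producer via addSink, rather than the static writeToKafkaWithTimestamps factory methods, avoids the deprecated FlinkKafkaProducer010Configuration wrapper.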

G

getFetcherName() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher
 
getTransformation() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 

I

invoke(T, SinkFunction.Context) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
 

K

Kafka010Fetcher<T> - Class in org.apache.flink.streaming.connectors.kafka.internal
A fetcher that fetches data from Kafka brokers via the Kafka 0.10 consumer API.
Kafka010Fetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<AssignerWithPeriodicWatermarks<T>>, SerializedValue<AssignerWithPunctuatedWatermarks<T>>, ProcessingTimeService, long, ClassLoader, String, KafkaDeserializationSchema<T>, Properties, long, MetricGroup, MetricGroup, boolean, FlinkConnectorRateLimiter) - Constructor for class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher
 
Kafka010PartitionDiscoverer - Class in org.apache.flink.streaming.connectors.kafka.internal
A partition discoverer that can be used to discover topics and partitions metadata from Kafka brokers via the Kafka 0.10 high-level consumer API.
Kafka010PartitionDiscoverer(KafkaTopicsDescriptor, int, int, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.internal.Kafka010PartitionDiscoverer
 
Kafka010TableSink - Class in org.apache.flink.streaming.connectors.kafka
Kafka 0.10 table sink for writing data into Kafka.
Kafka010TableSink(TableSchema, String, Properties, Optional<FlinkKafkaPartitioner<Row>>, SerializationSchema<Row>) - Constructor for class org.apache.flink.streaming.connectors.kafka.Kafka010TableSink
 
Kafka010TableSource - Class in org.apache.flink.streaming.connectors.kafka
Kafka StreamTableSource for Kafka 0.10.
Kafka010TableSource(TableSchema, Optional<String>, List<RowtimeAttributeDescriptor>, Optional<Map<String, String>>, String, Properties, DeserializationSchema<Row>, StartupMode, Map<KafkaTopicPartition, Long>) - Constructor for class org.apache.flink.streaming.connectors.kafka.Kafka010TableSource
Creates a Kafka 0.10 StreamTableSource.
Kafka010TableSource(TableSchema, String, Properties, DeserializationSchema<Row>) - Constructor for class org.apache.flink.streaming.connectors.kafka.Kafka010TableSource
Creates a Kafka 0.10 StreamTableSource.
Kafka010TableSourceSinkFactory - Class in org.apache.flink.streaming.connectors.kafka
Factory for creating configured instances of Kafka010TableSource.
Kafka010TableSourceSinkFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
 
KafkaConsumerCallBridge010 - Class in org.apache.flink.streaming.connectors.kafka.internal
The ConsumerCallBridge simply calls the KafkaConsumer.assign(java.util.Collection) method.
KafkaConsumerCallBridge010() - Constructor for class org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerCallBridge010
 
kafkaVersion() - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
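To illustrate the assign() call the bridge above makes, here is a sketch using the plain Kafka 0.10 consumer API directly; the broker address and topic are hypothetical. assign() pins exact partitions rather than joining a consumer group via subscribe(), which matches how Flink manages partition assignment itself:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Directly assign partition 0 of "my-topic"; no group rebalancing is involved.
            consumer.assign(Arrays.asList(new TopicPartition("my-topic", 0)));
        }
    }
}
```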
 

N

name(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 

O

org.apache.flink.streaming.connectors.kafka - Package org.apache.flink.streaming.connectors.kafka
 
org.apache.flink.streaming.connectors.kafka.internal - Package org.apache.flink.streaming.connectors.kafka.internal
 
 

S

seekPartitionToBeginning(KafkaConsumer<?, ?>, TopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerCallBridge010
 
seekPartitionToEnd(KafkaConsumer<?, ?>, TopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerCallBridge010
 
setFlushOnCheckpoint(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
If set to true, the Flink producer will wait for all outstanding messages in the Kafka buffers to be acknowledged by the Kafka producer on a checkpoint.
setLogFailuresOnly(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
Defines whether the producer should fail on errors, or only log them.
setParallelism(int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 
setStartFromTimestamp(long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
 
setUidHash(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 
setWriteTimestampToKafka(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
setWriteTimestampToKafka(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
slotSharingGroup(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 
supportsKafkaTimestamps() - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
 

U

uid(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 

W

writeToKafkaWithTimestamps(DataStream<T>, String, KeyedSerializationSchema<T>, Properties) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
writeToKafkaWithTimestamps(DataStream<T>, String, SerializationSchema<T>, Properties) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
writeToKafkaWithTimestamps(DataStream<T>, String, KeyedSerializationSchema<T>, Properties, FlinkKafkaPartitioner<T>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
writeToKafkaWithTimestamps(DataStream<T>, String, KeyedSerializationSchema<T>, Properties, KafkaPartitioner<T>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Deprecated.
This method is deprecated since it does not correctly handle partitioning when producing to multiple topics. Use FlinkKafkaProducer010.FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner) instead.

Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.