A C D F G H I K N O P R S U W 

A

asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicSink

asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicSource


C

cancel() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher

close() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Handover
Closes the handover.
closeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010PartitionDiscoverer

ClosedException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internal.Handover.ClosedException

copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicSink

copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicSource

createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

createKafkaConsumer(String, Properties, DeserializationSchema<Row>) - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSource

createKafkaConsumer(String, Properties, DeserializationSchema<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicSource

createKafkaPartitionHandle(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher

createKafkaProducer(String, Properties, SerializationSchema<Row>, Optional<FlinkKafkaPartitioner<Row>>) - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSink

createKafkaProducer(String, Properties, SerializationSchema<RowData>, Optional<FlinkKafkaPartitioner<RowData>>) - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicSink

createKafkaTableSink(TableSchema, String, Properties, Optional<FlinkKafkaPartitioner<Row>>, SerializationSchema<Row>) - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory

createKafkaTableSink(DataType, String, Properties, Optional<FlinkKafkaPartitioner<RowData>>, EncodingFormat<SerializationSchema<RowData>>) - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicTableFactory

createKafkaTableSource(TableSchema, Optional<String>, List<RowtimeAttributeDescriptor>, Map<String, String>, String, Properties, DeserializationSchema<Row>, StartupMode, Map<KafkaTopicPartition, Long>, long) - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory

createKafkaTableSource(DataType, String, Properties, DecodingFormat<DeserializationSchema<RowData>>, StartupMode, Map<KafkaTopicPartition, Long>, long) - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicTableFactory

createPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

 

D

DEFAULT_POLL_TIMEOUT - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
From Kafka's Javadoc: The time, in milliseconds, spent waiting in poll if data is not available.
disableChaining() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.

doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long>, KafkaCommitCallback) - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher

 

F

factoryIdentifier() - Method in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicTableFactory

fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition>, long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

FlinkKafkaConsumer010<T> - Class in org.apache.flink.streaming.connectors.kafka
The Flink Kafka Consumer is a streaming data source that pulls a parallel data stream from Apache Kafka 0.10.x.
FlinkKafkaConsumer010(String, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x.
FlinkKafkaConsumer010(String, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x. This constructor allows passing a KafkaDeserializationSchema for reading key/value pairs, offsets, and topic names from Kafka.
FlinkKafkaConsumer010(List<String>, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x. This constructor allows passing multiple topics to the consumer.
FlinkKafkaConsumer010(List<String>, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x. This constructor allows passing multiple topics and a key/value deserialization schema.
FlinkKafkaConsumer010(Pattern, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x.
FlinkKafkaConsumer010(Pattern, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Creates a new Kafka streaming source consumer for Kafka 0.10.x.
FlinkKafkaProducer010<T> - Class in org.apache.flink.streaming.connectors.kafka
Flink Sink to produce data into a Kafka topic.
FlinkKafkaProducer010(String, String, SerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, SerializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.
FlinkKafkaProducer010(String, SerializationSchema<T>, Properties, FlinkKafkaPartitioner<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, String, KeyedSerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, KeyedSerializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010(String, KeyedSerializationSchema<T>, Properties, FlinkKafkaPartitioner<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
Creates a FlinkKafkaProducer for a given topic.
FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> - Class in org.apache.flink.streaming.connectors.kafka
Deprecated.
This class is deprecated since the factory methods writeToKafkaWithTimestamps for the producer are also deprecated.
flush() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
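The FlinkKafkaConsumer010 constructors listed above all take a java.util.Properties object with the Kafka consumer configuration. A minimal, JDK-only sketch of assembling such a properties object is shown below; the broker address and group id are illustrative, and the literal "flink.poll-timeout" is an assumed value of the KEY_POLL_TIMEOUT constant (verify against the field's Javadoc before relying on it).

```java
import java.util.Properties;

public class ConsumerPropsSketch {
    public static Properties buildKafkaProperties() {
        Properties props = new Properties();
        // Standard Kafka consumer settings required by the 0.10 consumer API.
        props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative address
        props.setProperty("group.id", "example-group");           // illustrative group id
        // Assumed literal for FlinkKafkaConsumer010.KEY_POLL_TIMEOUT; overrides
        // DEFAULT_POLL_TIMEOUT (milliseconds spent waiting in poll when no data is available).
        props.setProperty("flink.poll-timeout", "100");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildKafkaProperties();
        // These properties would then be passed to a constructor such as
        // new FlinkKafkaConsumer010<>(topic, schema, props) -- omitted here
        // because it requires the Flink connector on the classpath.
        System.out.println(props.getProperty("flink.poll-timeout"));
    }
}
```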
 

G

getAllPartitionsForTopics(List<String>) - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010PartitionDiscoverer

getAllTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010PartitionDiscoverer

getIsAutoCommitEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

getRateLimiter() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

getRecordsFromKafka() - Method in class org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread
Gets records from Kafka.
getTransformation() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 

H

Handover - Class in org.apache.flink.streaming.connectors.kafka.internal
The Handover is a utility to hand over data (a buffer of records) and exceptions from a producer thread to a consumer thread.
Handover() - Constructor for class org.apache.flink.streaming.connectors.kafka.internal.Handover

Handover.ClosedException - Exception in org.apache.flink.streaming.connectors.kafka.internal
An exception thrown by the Handover in the Handover.pollNext() or Handover.produce(ConsumerRecords) method, after the Handover was closed via Handover.close().
Handover.WakeupException - Exception in org.apache.flink.streaming.connectors.kafka.internal
A special exception thrown by the Handover in the Handover.produce(ConsumerRecords) method when the producer is woken up from a blocking call via Handover.wakeupProducer().
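The Handover pattern described above can be illustrated with a simplified, JDK-only sketch: a single-slot buffer where produce() blocks while the previous element is unconsumed, pollNext() blocks until an element arrives, and close() wakes all blocked threads. This is a conceptual illustration of the pattern, not Flink's actual implementation (which hands over ConsumerRecords batches and forwards exceptions as well).

```java
// Simplified single-slot handover between a producer and a consumer thread.
public class SimpleHandover<T> {
    private final Object lock = new Object();
    private T next;            // the single-element "buffer"
    private boolean closed;

    // Hands an element over, blocking while the previous one is unconsumed.
    public void produce(T element) throws InterruptedException {
        synchronized (lock) {
            while (next != null && !closed) lock.wait();
            if (closed) throw new IllegalStateException("handover closed");
            next = element;
            lock.notifyAll();  // wake a consumer blocked in pollNext()
        }
    }

    // Polls the next element, blocking until one is available.
    public T pollNext() throws InterruptedException {
        synchronized (lock) {
            while (next == null && !closed) lock.wait();
            if (next == null) throw new IllegalStateException("handover closed");
            T result = next;
            next = null;
            lock.notifyAll();  // wake a producer blocked in produce()
            return result;
        }
    }

    // Closes the handover and wakes up all blocked threads.
    public void close() {
        synchronized (lock) {
            closed = true;
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws Exception {
        SimpleHandover<String> handover = new SimpleHandover<>();
        Thread producer = new Thread(() -> {
            try {
                handover.produce("batch-1");
                handover.produce("batch-2");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        System.out.println(handover.pollNext());
        System.out.println(handover.pollNext());
        producer.join();
        handover.close();
    }
}
```

The guarded-wait loops are the essential part: each side re-checks its condition after waking, which is the standard defense against spurious wakeups with Object.wait().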

I

IDENTIFIER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicTableFactory

initializeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010PartitionDiscoverer

invoke(T, SinkFunction.Context) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010

 

K

Kafka010DynamicSink - Class in org.apache.flink.streaming.connectors.kafka.table
Kafka 0.10 table sink for writing data into Kafka.
Kafka010DynamicSink(DataType, String, Properties, Optional<FlinkKafkaPartitioner<RowData>>, EncodingFormat<SerializationSchema<RowData>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicSink

Kafka010DynamicSource - Class in org.apache.flink.streaming.connectors.kafka.table
Kafka StreamTableSource for Kafka 0.10.
Kafka010DynamicSource(DataType, String, Properties, DecodingFormat<DeserializationSchema<RowData>>, StartupMode, Map<KafkaTopicPartition, Long>, long) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicSource
Creates a Kafka 0.10 StreamTableSource.
Kafka010DynamicTableFactory - Class in org.apache.flink.streaming.connectors.kafka.table
Factory for creating configured instances of Kafka010DynamicSource.
Kafka010DynamicTableFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.table.Kafka010DynamicTableFactory

Kafka010Fetcher<T> - Class in org.apache.flink.streaming.connectors.kafka.internal
A fetcher that fetches data from Kafka brokers via the Kafka 0.10 consumer API.
Kafka010Fetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, String, KafkaDeserializationSchema<T>, Properties, long, MetricGroup, MetricGroup, boolean, FlinkConnectorRateLimiter) - Constructor for class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher

Kafka010PartitionDiscoverer - Class in org.apache.flink.streaming.connectors.kafka.internal
A partition discoverer that can be used to discover topics and partitions metadata from Kafka brokers via the Kafka 0.10 high-level consumer API.
Kafka010PartitionDiscoverer(KafkaTopicsDescriptor, int, int, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.internal.Kafka010PartitionDiscoverer

Kafka010TableSink - Class in org.apache.flink.streaming.connectors.kafka
Kafka 0.10 table sink for writing data into Kafka.
Kafka010TableSink(TableSchema, String, Properties, Optional<FlinkKafkaPartitioner<Row>>, SerializationSchema<Row>) - Constructor for class org.apache.flink.streaming.connectors.kafka.Kafka010TableSink

Kafka010TableSource - Class in org.apache.flink.streaming.connectors.kafka
Kafka StreamTableSource for Kafka 0.10.
Kafka010TableSource(TableSchema, Optional<String>, List<RowtimeAttributeDescriptor>, Optional<Map<String, String>>, String, Properties, DeserializationSchema<Row>, StartupMode, Map<KafkaTopicPartition, Long>, long) - Constructor for class org.apache.flink.streaming.connectors.kafka.Kafka010TableSource
Creates a Kafka 0.10 StreamTableSource.
Kafka010TableSource(TableSchema, String, Properties, DeserializationSchema<Row>) - Constructor for class org.apache.flink.streaming.connectors.kafka.Kafka010TableSource
Creates a Kafka 0.10 StreamTableSource.
Kafka010TableSourceSinkFactory - Class in org.apache.flink.streaming.connectors.kafka
Factory for creating configured instances of Kafka010TableSource.
Kafka010TableSourceSinkFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory

KafkaConsumerThread<T> - Class in org.apache.flink.streaming.connectors.kafka.internal
The thread that runs the KafkaConsumer, connecting to the brokers and polling records.
KafkaConsumerThread(Logger, Handover, Properties, ClosableBlockingQueue<KafkaTopicPartitionState<T, TopicPartition>>, String, long, boolean, MetricGroup, MetricGroup, FlinkConnectorRateLimiter) - Constructor for class org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread

kafkaVersion() - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory

KEY_POLL_TIMEOUT - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Configuration key to change the polling timeout.

N

name(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 

O

org.apache.flink.streaming.connectors.kafka - package org.apache.flink.streaming.connectors.kafka

org.apache.flink.streaming.connectors.kafka.internal - package org.apache.flink.streaming.connectors.kafka.internal

org.apache.flink.streaming.connectors.kafka.table - package org.apache.flink.streaming.connectors.kafka.table
 

P

pollNext() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Handover
Polls the next element from the Handover, possibly blocking until the next element is available.
pollTimeout - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
From Kafka's Javadoc: The time, in milliseconds, spent waiting in poll if data is not available.
produce(ConsumerRecords<byte[], byte[]>) - Method in class org.apache.flink.streaming.connectors.kafka.internal.Handover
Hands over an element from the producer.
properties - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
User-supplied properties for Kafka.

R

reportError(Throwable) - Method in class org.apache.flink.streaming.connectors.kafka.internal.Handover
Reports an exception.
run() - Method in class org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread

runFetchLoop() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher

 

S

setFlushOnCheckpoint(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
If set to true, the Flink producer will wait for all outstanding messages in the Kafka buffers to be acknowledged by the Kafka producer on a checkpoint.
setLogFailuresOnly(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
Defines whether the producer should fail on errors, or only log them.
setParallelism(int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.

setRateLimiter(FlinkConnectorRateLimiter) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
Sets a rate limiter to rate-limit the bytes read from Kafka.
setUidHash(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.

setWriteTimestampToKafka(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
setWriteTimestampToKafka(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
shutdown() - Method in class org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread
Shuts this thread down, waking up the thread gracefully if blocked (without Thread.interrupt() calls).
slotSharingGroup(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.

supportsKafkaTimestamps() - Method in class org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
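The setRateLimiter(FlinkConnectorRateLimiter) entry above caps the byte throughput of the consumer. The underlying idea can be sketched with a JDK-only token bucket where acquire(bytes) blocks until enough "byte tokens" have accrued at the configured rate; this is a conceptual sketch, not the connector's actual rate limiter implementation.

```java
// A minimal token-bucket limiter: acquire(bytes) blocks until enough
// byte tokens have accumulated at the configured rate.
public class ByteRateLimiter {
    private final double bytesPerSecond;
    private double available;      // currently available byte tokens
    private long lastRefillNanos;

    public ByteRateLimiter(double bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
        this.available = bytesPerSecond;   // start with one second's budget
        this.lastRefillNanos = System.nanoTime();
    }

    public synchronized void acquire(int bytes) throws InterruptedException {
        refill();
        while (available < bytes) {
            // Wait roughly long enough for the missing tokens to accrue.
            double missing = bytes - available;
            long waitMillis = (long) Math.ceil(missing / bytesPerSecond * 1000);
            wait(Math.max(1, waitMillis));
            refill();
        }
        available -= bytes;
    }

    private void refill() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1e9;
        lastRefillNanos = now;
        // Cap the bucket at one second's worth of tokens.
        available = Math.min(bytesPerSecond, available + elapsedSeconds * bytesPerSecond);
    }

    public static void main(String[] args) throws Exception {
        ByteRateLimiter limiter = new ByteRateLimiter(1_000_000); // ~1 MB/s
        limiter.acquire(500_000);  // consumes half the initial budget
        limiter.acquire(500_000);  // still within the initial budget
        System.out.println("acquired 1 MB");
    }
}
```

In the connector, the fetcher would call something like acquire(recordBatchSizeInBytes) before handing a polled batch downstream, so a slow consumer never reads faster than the configured budget.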
 

U

uid(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration
Deprecated.
 

W

wakeupConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Kafka010PartitionDiscoverer

WakeupException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internal.Handover.WakeupException

wakeupProducer() - Method in class org.apache.flink.streaming.connectors.kafka.internal.Handover
Wakes the producer thread up.
writeToKafkaWithTimestamps(DataStream<T>, String, KeyedSerializationSchema<T>, Properties) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
writeToKafkaWithTimestamps(DataStream<T>, String, SerializationSchema<T>, Properties) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010
writeToKafkaWithTimestamps(DataStream<T>, String, KeyedSerializationSchema<T>, Properties, FlinkKafkaPartitioner<T>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010

Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.