| Package | Description |
|---|---|
| org.apache.flink.streaming.connectors.kafka | |
| Modifier and Type | Method and Description |
|---|---|
| `static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T>` | `FlinkKafkaProducer010.writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)` |
| `static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T>` | `FlinkKafkaProducer010.writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner<T> customPartitioner)` |
| `static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T>` | `FlinkKafkaProducer010.writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.streaming.util.serialization.KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner<T> customPartitioner)` **Deprecated.** This method is deprecated since it does not correctly handle partitioning when producing to multiple topics. Use `FlinkKafkaProducer010.FlinkKafkaProducer010(String, SerializationSchema, Properties, FlinkKafkaPartitioner)` instead. |
| `static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T>` | `FlinkKafkaProducer010.writeToKafkaWithTimestamps(org.apache.flink.streaming.api.datastream.DataStream<T> inStream, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<T> serializationSchema, Properties producerConfig)` |
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.