public interface KafkaProducer<K,V> extends WriteStream<KafkaProducerRecord<K,V>>
The WriteStream.write(Object) method provides global control over writing a record.
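For orientation, the sketch below creates a producer from a plain configuration map and writes a single record; the broker address, serializer classes, topic name and key are illustrative assumptions, not values required by this interface.

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.producer.KafkaProducer;
import io.vertx.kafka.client.producer.KafkaProducerRecord;

import java.util.HashMap;
import java.util.Map;

public class ProducerWriteExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Example configuration: broker address and serializers are assumptions.
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("acks", "1");

    KafkaProducer<String, String> producer = KafkaProducer.create(vertx, config);

    // Build a record and hand it to the write stream; delivery happens asynchronously.
    KafkaProducerRecord<String, String> record =
        KafkaProducerRecord.create("my-topic", "key-0", "hello from Vert.x");
    producer.write(record);
  }
}
```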
| Modifier and Type | Method and Description |
|---|---|
| KafkaWriteStream<K,V> | asStream() |
| void | close() - Close the producer |
| void | close(Handler<AsyncResult<Void>> completionHandler) - Close the producer |
| void | close(long timeout, Handler<AsyncResult<Void>> completionHandler) - Close the producer |
| static <K,V> KafkaProducer<K,V> | create(Vertx vertx, Map<String,String> config) - Create a new KafkaProducer instance |
| static <K,V> KafkaProducer<K,V> | create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType) - Create a new KafkaProducer instance |
| static <K,V> KafkaProducer<K,V> | create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer) - Create a new KafkaProducer instance from a native Producer. |
| static <K,V> KafkaProducer<K,V> | create(Vertx vertx, Properties config) - Create a new KafkaProducer instance |
| static <K,V> KafkaProducer<K,V> | create(Vertx vertx, Properties config, Class<K> keyType, Class<V> valueType) - Create a new KafkaProducer instance |
| static <K,V> KafkaProducer<K,V> | createShared(Vertx vertx, String name, Map<String,String> config) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name |
| static <K,V> KafkaProducer<K,V> | createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name |
| static <K,V> KafkaProducer<K,V> | createShared(Vertx vertx, String name, Properties config) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name |
| static <K,V> KafkaProducer<K,V> | createShared(Vertx vertx, String name, Properties config, Class<K> keyType, Class<V> valueType) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name |
| KafkaProducer<K,V> | drainHandler(Handler<Void> handler) - Set a drain handler on the stream. |
| void | end() - Ends the stream. |
| void | end(KafkaProducerRecord<K,V> kafkaProducerRecord) - Same as WriteStream.end() but writes some data to the stream before ending. |
| KafkaProducer<K,V> | exceptionHandler(Handler<Throwable> handler) - Set an exception handler on the write stream. |
| KafkaProducer<K,V> | flush(Handler<Void> completionHandler) - Invoking this method makes all buffered records immediately available to write |
| KafkaProducer<K,V> | partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler) - Get the partition metadata for the given topic. |
| KafkaProducer<K,V> | setWriteQueueMaxSize(int i) - Set the maximum size of the write queue to maxSize. |
| org.apache.kafka.clients.producer.Producer<K,V> | unwrap() |
| KafkaProducer<K,V> | write(KafkaProducerRecord<K,V> kafkaProducerRecord) - Write some data to the stream. |
| KafkaProducer<K,V> | write(KafkaProducerRecord<K,V> record, Handler<AsyncResult<RecordMetadata>> handler) - Asynchronously write a record to a topic |
| boolean | writeQueueFull() - This will return true if there are more bytes in the write queue than the value set using WriteStream.setWriteQueueMaxSize(int) |
static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Properties config)

Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.

Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
config - Kafka producer configuration

static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config)

Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.

Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
config - Kafka producer configuration

static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType)

Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.

Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization

static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Properties config, Class<K> keyType, Class<V> valueType)

Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.

Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization

static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer)

Create a new KafkaProducer instance from a native Producer.

Parameters:
vertx - Vert.x instance to use
producer - the Kafka producer to wrap

static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config)

Create a new KafkaProducer instance.

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration

static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)

Create a new KafkaProducer instance.

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization

static <K,V> KafkaProducer<K,V> create(Vertx vertx, Properties config)

Create a new KafkaProducer instance.

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration

static <K,V> KafkaProducer<K,V> create(Vertx vertx, Properties config, Class<K> keyType, Class<V> valueType)

Create a new KafkaProducer instance.

Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization
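Where several verticles publish to the same cluster, createShared avoids opening one native Kafka producer per verticle. A minimal sketch, assuming a running Vert.x instance; the producer name "the-producer" and the configuration values are arbitrary examples:

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.producer.KafkaProducer;

import java.util.Properties;

public class SharedProducerExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Example configuration; broker address and serializers are assumptions.
    Properties config = new Properties();
    config.setProperty("bootstrap.servers", "localhost:9092");
    config.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    // Every call with the same name ("the-producer", an arbitrary example)
    // shares its stream with the other KafkaProducer instances of that name.
    KafkaProducer<String, String> producer =
        KafkaProducer.createShared(vertx, "the-producer", config);
  }
}
```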
KafkaProducer<K,V> exceptionHandler(Handler<Throwable> handler)

Description copied from interface: WriteStream
Set an exception handler on the write stream.

Specified by:
exceptionHandler in interface StreamBase
exceptionHandler in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
handler - the exception handler

KafkaProducer<K,V> write(KafkaProducerRecord<K,V> kafkaProducerRecord)

Description copied from interface: WriteStream
Write some data to the stream. To avoid running out of memory by putting too much on the write queue, check the WriteStream.writeQueueFull() method before writing. This is done automatically if using a Pump.

Specified by:
write in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
kafkaProducerRecord - the data to write
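A brief sketch combining the two methods above, reusing the producer from the earlier creation example; the topic name is again an assumption:

```java
// The exception handler applies to the whole write stream,
// not to an individual record.
producer.exceptionHandler(err ->
    System.err.println("Producer error: " + err.getMessage()));

// Fire-and-forget write: the record is queued and sent asynchronously.
producer.write(KafkaProducerRecord.create("my-topic", "hello"));
```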
void end()

Description copied from interface: WriteStream
Ends the stream. Once the stream has ended, it cannot be used any more.

Specified by:
end in interface WriteStream<KafkaProducerRecord<K,V>>

void end(KafkaProducerRecord<K,V> kafkaProducerRecord)

Description copied from interface: WriteStream
Same as WriteStream.end() but writes some data to the stream before ending.

Specified by:
end in interface WriteStream<KafkaProducerRecord<K,V>>

KafkaProducer<K,V> setWriteQueueMaxSize(int i)

Description copied from interface: WriteStream
Set the maximum size of the write queue to maxSize. You will still be able to write to the stream even if there are more than maxSize items in the write queue. This is used as an indicator by classes such as Pump to provide flow control.
The value is defined by the implementation of the stream, e.g. in bytes for a NetSocket, the number of Message for a MessageProducer, etc.

Specified by:
setWriteQueueMaxSize in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
i - the max size of the write stream

boolean writeQueueFull()

Description copied from interface: WriteStream
This will return true if there are more bytes in the write queue than the value set using WriteStream.setWriteQueueMaxSize(int).

Specified by:
writeQueueFull in interface WriteStream<KafkaProducerRecord<K,V>>

KafkaProducer<K,V> drainHandler(Handler<Void> handler)

Description copied from interface: WriteStream
Set a drain handler on the stream. See Pump for an example of this being used.
The stream implementation defines when the drain handler is called, for example it could be when the queue size has been reduced to maxSize / 2.

Specified by:
drainHandler in interface WriteStream<KafkaProducerRecord<K,V>>
Parameters:
handler - the handler
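The three flow-control methods above are typically used together. A sketch under the assumption of the producer created earlier; the queue bound of 100 is an illustrative value:

```java
// Cap the internal write queue at 100 records (illustrative value).
producer.setWriteQueueMaxSize(100);

KafkaProducerRecord<String, String> record =
    KafkaProducerRecord.create("my-topic", "a-value");

if (!producer.writeQueueFull()) {
  producer.write(record);
} else {
  // Back-pressure: resume writing once the queue has drained,
  // e.g. when it has dropped to maxSize / 2, depending on the implementation.
  producer.drainHandler(done -> producer.write(record));
}
```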
KafkaProducer<K,V> write(KafkaProducerRecord<K,V> record, Handler<AsyncResult<RecordMetadata>> handler)

Asynchronously write a record to a topic.

Parameters:
record - record to write
handler - handler called on operation completed
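A sketch of writing with a completion handler in order to inspect the acknowledged RecordMetadata; topic, key and value are example data:

```java
KafkaProducerRecord<String, String> record =
    KafkaProducerRecord.create("my-topic", "a-key", "a-value");

producer.write(record, done -> {
  if (done.succeeded()) {
    RecordMetadata meta = done.result();
    // Partition and offset assigned by the broker for this record.
    System.out.println("Written to " + meta.getTopic()
        + ", partition " + meta.getPartition()
        + ", offset " + meta.getOffset());
  } else {
    System.err.println("Write failed: " + done.cause());
  }
});
```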
KafkaProducer<K,V> partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler)

Get the partition metadata for the given topic.

Parameters:
topic - the topic for which to get partitions info
handler - handler called on operation completed
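For example, the partition list can be queried before deciding how to key records; "my-topic" below is an assumed topic name:

```java
producer.partitionsFor("my-topic", done -> {
  if (done.succeeded()) {
    // Print each known partition of the topic.
    done.result().forEach(p ->
        System.out.println(p.getTopic() + " has partition " + p.getPartition()));
  } else {
    System.err.println("Could not fetch partitions: " + done.cause());
  }
});
```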
KafkaProducer<K,V> flush(Handler<Void> completionHandler)

Invoking this method makes all buffered records immediately available to write.

Parameters:
completionHandler - handler called on operation completed

void close()

Close the producer.

void close(Handler<AsyncResult<Void>> completionHandler)

Close the producer.

Parameters:
completionHandler - handler called on operation completed

void close(long timeout, Handler<AsyncResult<Void>> completionHandler)

Close the producer.

Parameters:
timeout - timeout to wait for closing
completionHandler - handler called on operation completed
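A typical shutdown sequence flushes buffered records and then closes with a bounded wait; the 10 second timeout is an example value:

```java
// Make buffered records available to write, then close the producer,
// waiting at most 10 seconds (example timeout) for the close to complete.
producer.flush(v ->
    producer.close(10000, done -> {
      if (done.succeeded()) {
        System.out.println("Producer closed");
      } else {
        System.err.println("Close failed: " + done.cause());
      }
    }));
```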
KafkaWriteStream<K,V> asStream()

Returns:
a KafkaWriteStream instance

Copyright © 2018 Eclipse. All rights reserved.