Type Parameters:
K - The type of the first tuple field.
V - The type of the second tuple field.

public class SequenceFileWriter<K extends org.apache.hadoop.io.Writable,V extends org.apache.hadoop.io.Writable>
extends StreamWriterBase<org.apache.flink.api.java.tuple.Tuple2<K,V>>
implements org.apache.flink.api.java.typeutils.InputTypeConfigurable
Writer that writes the bucket files as Hadoop SequenceFiles. The input to the RollingSink must be a Tuple2 of two Hadoop Writables.

| Constructor and Description |
|---|
| SequenceFileWriter() Creates a new SequenceFileWriter that writes sequence files without compression. |
| SequenceFileWriter(String compressionCodecName, org.apache.hadoop.io.SequenceFile.CompressionType compressionType) Creates a new SequenceFileWriter that writes sequence files with the given compression codec and compression type. |
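As a sketch, both constructors might be used as follows. This assumes the Flink flink-connector-filesystem and Hadoop dependencies are on the classpath; the codec name "Default" is resolved by Hadoop's codec factory and is used here purely for illustration.

```java
import org.apache.flink.streaming.connectors.fs.SequenceFileWriter;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class WriterConstruction {
    public static void main(String[] args) {
        // Writes uncompressed sequence files.
        SequenceFileWriter<IntWritable, Text> plain = new SequenceFileWriter<>();

        // Writes block-compressed sequence files using a named Hadoop codec
        // ("Default" -> DefaultCodec; other common names are "Gzip", "Snappy").
        SequenceFileWriter<IntWritable, Text> compressed =
                new SequenceFileWriter<>("Default", SequenceFile.CompressionType.BLOCK);
    }
}
```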
| Modifier and Type | Method and Description |
|---|---|
| void | close() Closes the Writer. |
| Writer<org.apache.flink.api.java.tuple.Tuple2<K,V>> | duplicate() Duplicates the Writer. |
| void | open(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) Initializes the Writer for a newly opened bucket file. |
| void | setInputType(org.apache.flink.api.common.typeinfo.TypeInformation<?> type, org.apache.flink.api.common.ExecutionConfig executionConfig) |
| void | write(org.apache.flink.api.java.tuple.Tuple2<K,V> element) Writes one element to the bucket file. |
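In practice this writer is not called directly; the RollingSink manages the open/write/close lifecycle. A minimal sketch of wiring a SequenceFileWriter into a RollingSink (the HDFS path is a hypothetical placeholder, and the package names assume Flink's flink-connector-filesystem module):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.fs.RollingSink;
import org.apache.flink.streaming.connectors.fs.SequenceFileWriter;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class SequenceFileSinkExample {
    public static void attachSink(DataStream<Tuple2<IntWritable, Text>> input) {
        // The sink's input must be a Tuple2 of two Hadoop Writables.
        RollingSink<Tuple2<IntWritable, Text>> sink =
                new RollingSink<>("hdfs://namenode:9000/flink/sequence-files");
        sink.setWriter(new SequenceFileWriter<IntWritable, Text>());
        input.addSink(sink);
    }
}
```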
Methods inherited from class StreamWriterBase:
flush, getPos, getStream, hflushOrSync

public SequenceFileWriter()
Creates a new SequenceFileWriter that writes sequence files without compression.

public SequenceFileWriter(String compressionCodecName, org.apache.hadoop.io.SequenceFile.CompressionType compressionType)
Creates a new SequenceFileWriter that writes sequence files with the given compression codec and compression type.
Parameters:
compressionCodecName - Name of a Hadoop Compression Codec.
compressionType - The compression type to use.

public void open(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) throws IOException
Initializes the Writer for a newly opened bucket file. Any internal per-bucket initialization should be performed here.
Specified by: open in interface Writer<org.apache.flink.api.java.tuple.Tuple2<K,V>>
Overrides: open in class StreamWriterBase<org.apache.flink.api.java.tuple.Tuple2<K,V>>
Parameters:
fs - The FileSystem containing the newly opened file.
path - The Path of the newly opened file.
Throws: IOException

public void close() throws IOException
Closes the Writer. If the writer is already closed, no action will be taken. The call should close all state related to the current output file, including the output stream opened in open.
Specified by: close in interface Writer<org.apache.flink.api.java.tuple.Tuple2<K,V>>
Overrides: close in class StreamWriterBase<org.apache.flink.api.java.tuple.Tuple2<K,V>>
Throws: IOException

public void write(org.apache.flink.api.java.tuple.Tuple2<K,V> element) throws IOException
Writes one element to the bucket file.
Specified by: write in interface Writer<org.apache.flink.api.java.tuple.Tuple2<K,V>>
Throws: IOException

public void setInputType(org.apache.flink.api.common.typeinfo.TypeInformation<?> type, org.apache.flink.api.common.ExecutionConfig executionConfig)
Specified by: setInputType in interface org.apache.flink.api.java.typeutils.InputTypeConfigurable

Copyright © 2014–2016 The Apache Software Foundation. All rights reserved.