public class FileVirtualSplit
extends org.apache.hadoop.mapreduce.InputSplit
implements org.apache.hadoop.io.Writable

Like FileSplit, but uses BGZF virtual offsets to fit with BlockCompressedInputStream.

| Constructor and Description |
|---|
| FileVirtualSplit() |
| FileVirtualSplit(org.apache.hadoop.fs.Path f, long vs, long ve, String[] locs) |
| FileVirtualSplit(org.apache.hadoop.fs.Path f, long vs, long ve, String[] locs, long[] intervalFilePointers) |
| Modifier and Type | Method and Description |
|---|---|
| long | getEndVirtualOffset() Exclusive. |
| long[] | getIntervalFilePointers() |
| long | getLength() Inexact due to the nature of virtual offsets. |
| String[] | getLocations() |
| org.apache.hadoop.fs.Path | getPath() |
| long | getStartVirtualOffset() Inclusive. |
| void | readFields(DataInput in) |
| void | setEndVirtualOffset(long vo) |
| void | setStartVirtualOffset(long vo) |
| String | toString() |
| void | write(DataOutput out) |
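The `vs` and `ve` constructor arguments, and the virtual-offset getters and setters, use BGZF virtual file offsets: the upper 48 bits hold the file offset of a compressed BGZF block, and the lower 16 bits hold an offset into that block's uncompressed data (the scheme implemented by htsjdk's `BlockCompressedFilePointerUtil`). A minimal sketch of this encoding, using hypothetical helper names rather than the Hadoop-BAM or htsjdk API:

```java
// Sketch of BGZF virtual-offset packing as used by FileVirtualSplit's
// vs/ve arguments. The helper names below are illustrative, not part of
// the Hadoop-BAM API.
public class VirtualOffsetSketch {
    // Pack a compressed block address (48 bits) and an offset into the
    // block's uncompressed data (16 bits) into one long.
    static long makeVirtualOffset(long blockAddress, int withinBlock) {
        return (blockAddress << 16) | (withinBlock & 0xFFFF);
    }

    // Recover the compressed block's file offset.
    static long blockAddress(long virtualOffset) {
        return virtualOffset >>> 16;
    }

    // Recover the offset within the uncompressed block data.
    static int withinBlockOffset(long virtualOffset) {
        return (int) (virtualOffset & 0xFFFF);
    }

    public static void main(String[] args) {
        long vs = makeVirtualOffset(65536L, 100); // split start (inclusive)
        long ve = makeVirtualOffset(131072L, 0);  // split end (exclusive)
        System.out.println(blockAddress(vs));      // 65536
        System.out.println(withinBlockOffset(vs)); // 100
        System.out.println(ve > vs);               // true: end follows start
    }
}
```

Because the within-block offset occupies only the low 16 bits, subtracting two virtual offsets does not give a byte count, which is why `getLength()` is documented as inexact.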
public FileVirtualSplit()
public FileVirtualSplit(org.apache.hadoop.fs.Path f,
long vs,
long ve,
String[] locs)
public FileVirtualSplit(org.apache.hadoop.fs.Path f,
long vs,
long ve,
String[] locs,
long[] intervalFilePointers)
public String[] getLocations()
Specified by: getLocations in class org.apache.hadoop.mapreduce.InputSplit

public long getLength()
Inexact due to the nature of virtual offsets.
Specified by: getLength in class org.apache.hadoop.mapreduce.InputSplit

public org.apache.hadoop.fs.Path getPath()
public long getStartVirtualOffset()
public long getEndVirtualOffset()
public void setStartVirtualOffset(long vo)
public void setEndVirtualOffset(long vo)
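The `write` and `readFields` methods serialize the split's state through a `DataOutput`/`DataInput` pair, as the Writable contract requires. A rough sketch of that kind of round-trip using plain `java.io` streams, reduced to just the two virtual offsets for brevity (the field layout here is illustrative, not FileVirtualSplit's actual wire format):

```java
import java.io.*;

// Hedged sketch: serialize and restore a (start, end) virtual-offset pair
// the way a Writable's write/readFields would. The field layout is
// illustrative only, not Hadoop-BAM's actual serialization format.
public class SplitRoundTrip {
    long startVirtualOffset;
    long endVirtualOffset;

    void write(DataOutput out) throws IOException {
        out.writeLong(startVirtualOffset);
        out.writeLong(endVirtualOffset);
    }

    void readFields(DataInput in) throws IOException {
        startVirtualOffset = in.readLong();
        endVirtualOffset = in.readLong();
    }

    public static void main(String[] args) throws IOException {
        SplitRoundTrip original = new SplitRoundTrip();
        original.startVirtualOffset = (65536L << 16) | 100;
        original.endVirtualOffset = 131072L << 16;

        // Serialize to an in-memory buffer, then deserialize a fresh object.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        SplitRoundTrip restored = new SplitRoundTrip();
        restored.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(restored.startVirtualOffset == original.startVirtualOffset); // true
        System.out.println(restored.endVirtualOffset == original.endVirtualOffset);     // true
    }
}
```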
public long[] getIntervalFilePointers()
Returns null if there are none. These correspond to BAMFileSpan chunk start/stop pointers in htsjdk.

public void write(DataOutput out) throws IOException
Specified by: write in interface org.apache.hadoop.io.Writable
Throws: IOException

public void readFields(DataInput in) throws IOException
Specified by: readFields in interface org.apache.hadoop.io.Writable
Throws: IOException

Copyright © 2017. All rights reserved.