@InterfaceAudience.Private
public class CompoundBloomFilterWriter
extends CompoundBloomFilterBase
implements BloomFilterWriter, InlineBlockWriter

Adds methods required for writing a compound Bloom filter to the data section of an HFile to the CompoundBloomFilter class.

Field Summary

| Modifier and Type | Field and Description |
|---|---|
| protected static org.apache.commons.logging.Log | LOG |

Fields inherited from class CompoundBloomFilterBase: comparator, errorRate, hashType, numChunks, totalByteSize, totalKeyCount, totalMaxKeys, VERSION

Constructor Summary

| Constructor and Description |
|---|
| CompoundBloomFilterWriter(int chunkByteSizeHint, float errorRate, int hashType, int maxFold, boolean cacheOnWrite, KeyValue.KVComparator comparator) |
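The errorRate and maxFold arguments of this constructor control how large each chunk's bit array ends up. As a rough illustration, the standard Bloom filter sizing math (an assumption for illustration, not HBase's exact implementation) relates a target false-positive rate to the bits and hash functions needed per key:

```java
// Standard Bloom filter sizing math (an illustration, not HBase's exact code):
// for a target false-positive rate p and n keys, the optimal bit count is
// m = -n * ln(p) / (ln 2)^2 and the optimal hash count is k = (m / n) * ln 2.
public class BloomSizing {

    public static long optimalBitCount(long keyCount, double errorRate) {
        return (long) Math.ceil(-keyCount * Math.log(errorRate)
                / (Math.log(2) * Math.log(2)));
    }

    public static int optimalHashCount(long bitCount, long keyCount) {
        return Math.max(1, (int) Math.round((double) bitCount / keyCount * Math.log(2)));
    }

    public static void main(String[] args) {
        long keys = 100_000;
        double err = 0.01; // plays the role of the errorRate constructor argument
        long bits = optimalBitCount(keys, err);
        System.out.println(bits + " bits, " + optimalHashCount(bits, keys) + " hash functions");
    }
}
```

The maxFold argument then bounds how many times a chunk's bit array may be halved ("folded") if it turns out to be oversized for the keys actually added.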
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| void | add(byte[] bloomKey, int keyOffset, int keyLength) Adds a Bloom filter key. |
| void | allocBloom() Allocate memory for the bloom filter data. |
| void | blockWritten(long offset, int onDiskSize, int uncompressedSize) Called after a block has been written, and its offset, raw size, and compressed size have been determined. |
| void | compactBloom() Compact the Bloom filter before writing metadata & data to disk. |
| boolean | getCacheOnWrite() |
| org.apache.hadoop.io.Writable | getDataWriter() Get a writable interface into bloom filter data (the actual Bloom bits). |
| BlockType | getInlineBlockType() The type of blocks this block writer produces. |
| org.apache.hadoop.io.Writable | getMetaWriter() Get a writable interface into bloom filter meta data. |
| boolean | shouldWriteBlock(boolean closing) Determines whether there is a new block to be written out. |
| void | writeInlineBlock(DataOutput out) Writes the block to the provided stream. |
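The InlineBlockWriter methods above form a small protocol driven by the enclosing HFile writer. The following self-contained sketch (ChunkedWriter is a hypothetical stand-in for CompoundBloomFilterWriter, not the real HBase class) shows the calling order: shouldWriteBlock(false) while data is appended, shouldWriteBlock(true) at close, then writeInlineBlock followed by blockWritten:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the inline-block protocol summarized above. ChunkedWriter is a
// hypothetical stand-in for CompoundBloomFilterWriter: it buffers keys and
// emits a "chunk" whenever the buffer fills, or at close.
public class InlineBlockProtocolDemo {

    interface InlineWriter {
        boolean shouldWriteBlock(boolean closing);
        void writeInlineBlock(DataOutput out) throws IOException;
        void blockWritten(long offset, int onDiskSize, int uncompressedSize);
    }

    static class ChunkedWriter implements InlineWriter {
        private final int chunkSize; // keys per chunk (plays the role of chunkByteSizeHint)
        private int buffered;        // keys accumulated since the last chunk
        int chunksWritten;

        ChunkedWriter(int chunkSize) { this.chunkSize = chunkSize; }

        void add(byte[] key) { buffered++; }

        @Override public boolean shouldWriteBlock(boolean closing) {
            // At close, flush whatever is buffered; otherwise wait for a full chunk.
            return closing ? buffered > 0 : buffered >= chunkSize;
        }

        @Override public void writeInlineBlock(DataOutput out) throws IOException {
            out.writeInt(buffered); // stand-in for serializing the chunk's Bloom bits
            buffered = 0;
        }

        @Override public void blockWritten(long offset, int onDiskSize, int uncompressedSize) {
            chunksWritten++; // the real writer records (offset, size) in a block index here
        }
    }

    // Driver: the enclosing file writer calls the three methods in this order.
    static int run(int keys, int chunkSize) {
        try {
            ChunkedWriter w = new ChunkedWriter(chunkSize);
            ByteArrayOutputStream file = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(file);
            for (int i = 0; i < keys; i++) {
                w.add(("key" + i).getBytes());
                if (w.shouldWriteBlock(false)) {
                    long offset = file.size();
                    w.writeInlineBlock(out);
                    w.blockWritten(offset, (int) (file.size() - offset), 4);
                }
            }
            if (w.shouldWriteBlock(true)) { // closing: flush any partial chunk
                long offset = file.size();
                w.writeInlineBlock(out);
                w.blockWritten(offset, (int) (file.size() - offset), 4);
            }
            return w.chunksWritten;
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen with an in-memory stream
        }
    }

    public static void main(String[] args) {
        System.out.println(run(10, 4) + " chunks written"); // 3: two full + one flushed at close
    }
}
```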
Methods inherited from class CompoundBloomFilterBase: createBloomKey, getByteSize, getComparator, getKeyCount, getMaxKeys

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface BloomFilterBase: createBloomKey, getByteSize, getComparator, getKeyCount, getMaxKeys

Constructor Detail

public CompoundBloomFilterWriter(int chunkByteSizeHint,
                                 float errorRate,
                                 int hashType,
                                 int maxFold,
                                 boolean cacheOnWrite,
                                 KeyValue.KVComparator comparator)

Parameters:
chunkByteSizeHint - each chunk's size in bytes. The real chunk size might be different as required by the fold factor.
errorRate - target false positive rate
hashType - hash function type to use
maxFold - maximum degree of folding allowed

Method Detail

public boolean shouldWriteBlock(boolean closing)

Determines whether there is a new block to be written out.

Specified by: shouldWriteBlock in interface InlineBlockWriter
Parameters:
closing - whether the file is being closed, in which case we need to write out all available data and not wait to accumulate another block

public void add(byte[] bloomKey,
                int keyOffset,
                int keyLength)

Adds a Bloom filter key. See StoreFile.Writer.append(org.apache.hadoop.hbase.KeyValue) for the details of deduplication.

Specified by: add in interface BloomFilterWriter
Parameters:
bloomKey - data to be added to the bloom
keyOffset - offset into the data to be added
keyLength - length of the data to be added

public void writeInlineBlock(DataOutput out) throws IOException
Writes the block to the provided stream. Called only if InlineBlockWriter.shouldWriteBlock(boolean) returned true.

Specified by: writeInlineBlock in interface InlineBlockWriter
Parameters:
out - a stream (usually a compressing stream) to write the block to
Throws:
IOException

public void blockWritten(long offset,
int onDiskSize,
int uncompressedSize)
Called after a block has been written, and its offset, raw size, and compressed size have been determined.

Specified by: blockWritten in interface InlineBlockWriter
Parameters:
offset - the offset of the block in the stream
onDiskSize - the on-disk size of the block
uncompressedSize - the uncompressed size of the block

public BlockType getInlineBlockType()
The type of blocks this block writer produces.

Specified by: getInlineBlockType in interface InlineBlockWriter

public org.apache.hadoop.io.Writable getMetaWriter()

Get a writable interface into bloom filter meta data.

Specified by: getMetaWriter in interface BloomFilterWriter

public void compactBloom()

Compact the Bloom filter before writing metadata & data to disk.

Specified by: compactBloom in interface BloomFilterWriter

public void allocBloom()

Allocate memory for the bloom filter data.

Specified by: allocBloom in interface BloomFilterWriter

public org.apache.hadoop.io.Writable getDataWriter()

Get a writable interface into bloom filter data (the actual Bloom bits).

Specified by: getDataWriter in interface BloomFilterWriter

public boolean getCacheOnWrite()

Specified by: getCacheOnWrite in interface InlineBlockWriter

Copyright © 2014 The Apache Software Foundation. All Rights Reserved.