@InterfaceAudience.LimitedPrivate(value="Coprocesssor") @InterfaceStability.Evolving public interface Store extends HeapSize, StoreConfigInformation, PropagatingConfigurationObserver
| Modifier and Type | Field and Description |
|---|---|
| static int | `NO_PRIORITY` |
| static int | `PRIORITY_USER`<br>The default priority for user-specified compaction requests. |
| Modifier and Type | Method and Description |
|---|---|
| void | `addChangedReaderObserver(ChangedReadersObserver o)` |
| boolean | `areWritesEnabled()` |
| void | `cancelRequestedCompaction(CompactionContext compaction)` |
| boolean | `canSplit()` |
| Collection<StoreFile> | `close()`<br>Close all the readers. We don't need to worry about subsequent requests because the Region holds a write lock that will prevent any more reads or writes. |
| void | `closeAndArchiveCompactedFiles()`<br>Closes and archives the compacted files under this store. |
| List<StoreFile> | `compact(CompactionContext compaction, ThroughputController throughputController)`<br>Deprecated. See `compact(CompactionContext, ThroughputController, User)`. |
| List<StoreFile> | `compact(CompactionContext compaction, ThroughputController throughputController, User user)` |
| org.apache.hadoop.hbase.regionserver.StoreFlushContext | `createFlushContext(long cacheFlushId)` |
| StoreFileWriter | `createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTags)` |
| StoreFileWriter | `createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTags, boolean shouldDropBehind)` |
| StoreFileWriter | `createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTags, boolean shouldDropBehind, TimeRangeTracker trt)` |
| void | `deleteChangedReaderObserver(ChangedReadersObserver o)` |
| long | `getAvgStoreFileAge()` |
| CacheConfig | `getCacheConfig()`<br>Used for tests. |
| ColumnFamilyDescriptor | `getColumnFamilyDescriptor()` |
| String | `getColumnFamilyName()` |
| long | `getCompactedCellsCount()` |
| long | `getCompactedCellsSize()` |
| Collection<StoreFile> | `getCompactedFiles()` |
| int | `getCompactedFilesCount()` |
| double | `getCompactionPressure()`<br>This value can represent the degree of emergency of compaction for this store. |
| CompactionProgress | `getCompactionProgress()`<br>Getter for the CompactionProgress object. |
| int | `getCompactPriority()` |
| CellComparator | `getComparator()` |
| RegionCoprocessorHost | `getCoprocessorHost()` |
| HFileDataBlockEncoder | `getDataBlockEncoder()` |
| org.apache.hadoop.fs.FileSystem | `getFileSystem()` |
| long | `getFlushableSize()`<br>Deprecated. Since 2.0 and will be removed in 3.0. Use `getSizeToFlush()` instead. Note: when using the off-heap MSLAB feature, this will not account for the cell data bytes that are in the off-heap MSLAB area. |
| long | `getFlushedCellsCount()` |
| long | `getFlushedCellsSize()` |
| long | `getFlushedOutputFileSize()` |
| long | `getHFilesSize()` |
| long | `getLastCompactSize()` |
| long | `getMajorCompactedCellsCount()` |
| long | `getMajorCompactedCellsSize()` |
| long | `getMaxMemstoreTS()` |
| long | `getMaxSequenceId()` |
| long | `getMaxStoreFileAge()` |
| long | `getMemStoreSize()`<br>Deprecated. Since 2.0 and will be removed in 3.0. Use `getSizeOfMemStore()` instead. Note: when using the off-heap MSLAB feature, this will not account for the cell data bytes that are in the off-heap MSLAB area. |
| long | `getMinStoreFileAge()` |
| long | `getNumHFiles()` |
| long | `getNumReferenceFiles()` |
| HRegionInfo | `getRegionInfo()` |
| ScanInfo | `getScanInfo()` |
| KeyValueScanner | `getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt)`<br>Return a scanner for both the memstore and the HStore files. |
| default List<KeyValueScanner> | `getScanners(boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt)`<br>Get all scanners with no filtering based on TTL (that happens further down the line). |
| List<KeyValueScanner> | `getScanners(boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt)`<br>Get all scanners with no filtering based on TTL (that happens further down the line). |
| default List<KeyValueScanner> | `getScanners(List<StoreFile> files, boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt, boolean includeMemstoreScanner)`<br>Create scanners on the given files and, if needed, on the memstore, with no filtering based on TTL (that happens further down the line). |
| List<KeyValueScanner> | `getScanners(List<StoreFile> files, boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt, boolean includeMemstoreScanner)`<br>Create scanners on the given files and, if needed, on the memstore, with no filtering based on TTL (that happens further down the line). |
| long | `getSize()` |
| MemstoreSize | `getSizeOfMemStore()` |
| MemstoreSize | `getSizeOfSnapshot()` |
| MemstoreSize | `getSizeToFlush()` |
| long | `getSmallestReadPoint()` |
| long | `getSnapshotSize()`<br>Deprecated. Since 2.0 and will be removed in 3.0. Use `getSizeOfSnapshot()` instead. Note: when using the off-heap MSLAB feature, this will not account for the cell data bytes that are in the off-heap MSLAB area. |
| byte[] | `getSplitPoint()`<br>Determines if the Store should be split. |
| Collection<StoreFile> | `getStorefiles()` |
| int | `getStorefilesCount()` |
| long | `getStorefilesIndexSize()` |
| long | `getStorefilesSize()` |
| long | `getStoreSizeUncompressed()` |
| TableName | `getTableName()` |
| long | `getTotalStaticBloomSize()`<br>Returns the total byte size of all Bloom filter bit arrays. |
| long | `getTotalStaticIndexSize()`<br>Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes. |
| boolean | `hasReferences()` |
| boolean | `hasTooManyStoreFiles()` |
| boolean | `isMajorCompaction()` |
| boolean | `isPrimaryReplicaStore()` |
| boolean | `isSloppyMemstore()` |
| boolean | `needsCompaction()`<br>Checks whether there are too many store files in this store. |
| List<KeyValueScanner> | `recreateScanners(List<KeyValueScanner> currentFileScanners, boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt, boolean includeMemstoreScanner)`<br>Recreates the scanners on the current list of active store file scanners. |
| void | `refreshStoreFiles()`<br>Checks the underlying store files, opens any that have not been opened, and removes the store file readers for store files that are no longer available. |
| void | `refreshStoreFiles(Collection<String> newFiles)`<br>Replaces the store files that the store has with the given files. |
| default Optional<CompactionContext> | `requestCompaction()` |
| Optional<CompactionContext> | `requestCompaction(int priority, CompactionLifeCycleTracker tracker, User user)` |
| boolean | `throttleCompaction(long compactionSize)` |
| long | `timeOfOldestEdit()`<br>When was the last edit done in the memstore. |
| void | `triggerMajorCompaction()` |
Methods inherited from interface StoreConfigInformation: getBlockingFileCount, getCompactionCheckMultiplier, getMemstoreFlushSize, getStoreFileTtl

Methods inherited from interface PropagatingConfigurationObserver: deregisterChildren, registerChildren

Methods inherited from interface ConfigurationObserver: onConfigurationChange

static final int PRIORITY_USER
The default priority for user-specified compaction requests.

static final int NO_PRIORITY
CellComparator getComparator()
Collection<StoreFile> getStorefiles()
Collection<StoreFile> getCompactedFiles()
Collection<StoreFile> close() throws IOException
Returns: StoreFiles that were previously being used.
Throws: IOException - on failure

KeyValueScanner getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt) throws IOException
Return a scanner for both the memstore and the HStore files.
Parameters:
scan - Scan to apply when scanning the stores
targetCols - columns to scan
Throws: IOException - on failure

default List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt) throws IOException
Get all scanners with no filtering based on TTL (that happens further down the line).
Parameters:
cacheBlocks - cache the blocks or not
usePread - true to use pread, false if not
isCompaction - true if the scanner is created for compaction
matcher - the scan query matcher
startRow - the start row
stopRow - the stop row
readPt - the read point of the current scan
Throws: IOException

List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt) throws IOException
Get all scanners with no filtering based on TTL (that happens further down the line).
Parameters:
cacheBlocks - cache the blocks or not
usePread - true to use pread, false if not
isCompaction - true if the scanner is created for compaction
matcher - the scan query matcher
startRow - the start row
includeStartRow - true to include start row, false if not
stopRow - the stop row
includeStopRow - true to include stop row, false if not
readPt - the read point of the current scan
Throws: IOException

List<KeyValueScanner> recreateScanners(List<KeyValueScanner> currentFileScanners, boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt, boolean includeMemstoreScanner) throws IOException
Recreates the scanners on the current list of active store file scanners.
Parameters:
currentFileScanners - the current set of active store file scanners
cacheBlocks - cache the blocks or not
usePread - use pread or not
isCompaction - is the scanner for compaction
matcher - the scan query matcher
startRow - the scan's start row
includeStartRow - should the scan include the start row
stopRow - the scan's stop row
includeStopRow - should the scan include the stop row
readPt - the read point of the current scan
includeMemstoreScanner - whether the current scanner should include the memstore scanner
Throws: IOException

default List<KeyValueScanner> getScanners(List<StoreFile> files, boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt, boolean includeMemstoreScanner) throws IOException
Create scanners on the given files and, if needed, on the memstore, with no filtering based on TTL (that happens further down the line).
Parameters:
files - the list of files on which the scanners have to be created
cacheBlocks - cache the blocks or not
usePread - true to use pread, false if not
isCompaction - true if the scanner is created for compaction
matcher - the scan query matcher
startRow - the start row
stopRow - the stop row
readPt - the read point of the current scan
includeMemstoreScanner - true if the memstore has to be included
Throws: IOException

List<KeyValueScanner> getScanners(List<StoreFile> files, boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt, boolean includeMemstoreScanner) throws IOException
Create scanners on the given files and, if needed, on the memstore, with no filtering based on TTL (that happens further down the line).
Parameters:
files - the list of files on which the scanners have to be created
cacheBlocks - cache the blocks or not
usePread - true to use pread, false if not
isCompaction - true if the scanner is created for compaction
matcher - the scan query matcher
startRow - the start row
includeStartRow - true to include start row, false if not
stopRow - the stop row
includeStopRow - true to include stop row, false if not
readPt - the read point of the current scan
includeMemstoreScanner - true if the memstore has to be included
Throws: IOException

ScanInfo getScanInfo()
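The includeStartRow/includeStopRow flags above control whether the scan's row-range boundaries are inclusive or exclusive. A minimal, self-contained sketch of that boundary logic, using a hypothetical helper class (not HBase code) and the unsigned lexicographic byte ordering HBase uses for row keys:

```java
import java.util.Arrays;

// Hypothetical helper illustrating how includeStartRow/includeStopRow
// translate into inclusive or exclusive row-range checks. Not HBase code.
public class RowRange {
    // HBase orders row keys as unsigned byte arrays, lexicographically.
    static int compare(byte[] a, byte[] b) {
        return Arrays.compareUnsigned(a, b);
    }

    // Returns true if `row` falls within [startRow, stopRow], honoring the
    // inclusive/exclusive flags at each boundary.
    static boolean inRange(byte[] row, byte[] startRow, boolean includeStartRow,
                           byte[] stopRow, boolean includeStopRow) {
        int cs = compare(row, startRow);
        if (cs < 0 || (cs == 0 && !includeStartRow)) {
            return false; // before the start, or exactly at an excluded start
        }
        int ce = compare(row, stopRow);
        if (ce > 0 || (ce == 0 && !includeStopRow)) {
            return false; // past the stop, or exactly at an excluded stop
        }
        return true;
    }
}
```

The two-flag form subsumes the older start-row/stop-row overload, whose behavior corresponds to includeStartRow=true, includeStopRow=false.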
long timeOfOldestEdit()
org.apache.hadoop.fs.FileSystem getFileSystem()
StoreFileWriter createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTags) throws IOException
Parameters:
maxKeyCount -
compression - Compression algorithm to use
isCompaction - whether we are creating a new file in a compaction
includeMVCCReadpoint - whether we should write out the MVCC readpoint
Throws: IOException

StoreFileWriter createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTags, boolean shouldDropBehind) throws IOException
Parameters:
maxKeyCount -
compression - Compression algorithm to use
isCompaction - whether we are creating a new file in a compaction
includeMVCCReadpoint - whether we should write out the MVCC readpoint
shouldDropBehind - should the writer drop caches behind writes
Throws: IOException

StoreFileWriter createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTags, boolean shouldDropBehind, TimeRangeTracker trt) throws IOException
Parameters:
maxKeyCount -
compression - Compression algorithm to use
isCompaction - whether we are creating a new file in a compaction
includeMVCCReadpoint - whether we should write out the MVCC readpoint
shouldDropBehind - should the writer drop caches behind writes
trt - Ready-made time tracker to use.
Throws: IOException

boolean throttleCompaction(long compactionSize)
CompactionProgress getCompactionProgress()
default Optional<CompactionContext> requestCompaction() throws IOException
Throws: IOException

Optional<CompactionContext> requestCompaction(int priority, CompactionLifeCycleTracker tracker, User user) throws IOException
Throws: IOException

void cancelRequestedCompaction(CompactionContext compaction)
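requestCompaction() returns Optional<CompactionContext>: an empty Optional means the store declined the request (nothing worth compacting). A minimal sketch of consuming that pattern, using hypothetical stand-in types rather than the real HBase classes:

```java
import java.util.Optional;

// Sketch of the Optional-returning request pattern. Ctx and StoreLike are
// hypothetical stand-ins for CompactionContext and Store, not HBase classes.
public class CompactionRequestDemo {
    static class Ctx {
        final int priority;
        Ctx(int priority) { this.priority = priority; }
    }

    interface StoreLike {
        Optional<Ctx> requestCompaction();
    }

    // Returns the priority of the granted compaction context, or -1 when the
    // store returned Optional.empty() (no compaction needed).
    static int schedule(StoreLike store) {
        return store.requestCompaction()
                    .map(ctx -> ctx.priority)
                    .orElse(-1);
    }
}
```

The Optional forces callers to handle the "no compaction needed" case explicitly instead of checking for null, which is presumably why the API was shaped this way.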
@Deprecated List<StoreFile> compact(CompactionContext compaction, ThroughputController throughputController) throws IOException
Throws: IOException

List<StoreFile> compact(CompactionContext compaction, ThroughputController throughputController, User user) throws IOException
Throws: IOException

boolean isMajorCompaction() throws IOException
Throws: IOException

void triggerMajorCompaction()
boolean needsCompaction()
int getCompactPriority()
org.apache.hadoop.hbase.regionserver.StoreFlushContext createFlushContext(long cacheFlushId)
boolean canSplit()
byte[] getSplitPoint()
boolean hasReferences()
@Deprecated long getMemStoreSize()
Deprecated. Since 2.0 and will be removed in 3.0. Use getSizeOfMemStore() instead.
Note: When using the off-heap MSLAB feature, this will not account for the cell data bytes that are in the off-heap MSLAB area.

MemstoreSize getSizeOfMemStore()

@Deprecated long getFlushableSize()
Deprecated. Since 2.0 and will be removed in 3.0. Use getSizeToFlush() instead.
Note: When using the off-heap MSLAB feature, this will not account for the cell data bytes that are in the off-heap MSLAB area.
Returns: usually the same as getMemStoreSize(), unless we are carrying snapshots, and then it will be the size of outstanding snapshots.

MemstoreSize getSizeToFlush()
Returns: usually the same as getSizeOfMemStore(), unless we are carrying snapshots, and then it will be the size of outstanding snapshots.

@Deprecated long getSnapshotSize()
Deprecated. Since 2.0 and will be removed in 3.0. Use getSizeOfSnapshot() instead.
Note: When using the off-heap MSLAB feature, this will not account for the cell data bytes that are in the off-heap MSLAB area.

MemstoreSize getSizeOfSnapshot()
ColumnFamilyDescriptor getColumnFamilyDescriptor()
long getMaxSequenceId()
long getMaxMemstoreTS()
HFileDataBlockEncoder getDataBlockEncoder()
long getLastCompactSize()
long getSize()
int getStorefilesCount()
int getCompactedFilesCount()
long getMaxStoreFileAge()
long getMinStoreFileAge()
long getAvgStoreFileAge()
long getNumReferenceFiles()
long getNumHFiles()
long getStoreSizeUncompressed()
long getStorefilesSize()
long getHFilesSize()
long getStorefilesIndexSize()
long getTotalStaticIndexSize()
long getTotalStaticBloomSize()
CacheConfig getCacheConfig()
HRegionInfo getRegionInfo()
RegionCoprocessorHost getCoprocessorHost()
boolean areWritesEnabled()
long getSmallestReadPoint()
String getColumnFamilyName()
TableName getTableName()
long getFlushedCellsCount()
long getFlushedCellsSize()
long getFlushedOutputFileSize()
long getCompactedCellsCount()
long getCompactedCellsSize()
long getMajorCompactedCellsCount()
long getMajorCompactedCellsSize()
void addChangedReaderObserver(ChangedReadersObserver o)
void deleteChangedReaderObserver(ChangedReadersObserver o)
boolean hasTooManyStoreFiles()
void refreshStoreFiles() throws IOException
Checks the underlying store files, opens any that have not been opened, and removes the store file readers for store files that are no longer available.
Throws: IOException

double getCompactionPressure()
This value can represent the degree of emergency of compaction for this store. For striped stores, this value should be calculated from the files in each stripe separately, returning the maximum value. It is similar to getCompactPriority() except that it is more suitable for use in a linear formula.
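A linear pressure value of this shape is commonly derived from the store file count relative to compaction thresholds. The sketch below is an illustrative formula under that assumption; minFilesToCompact and blockingFileCount are hypothetical configuration values here, not methods of this interface:

```java
// Illustrative linear compaction-pressure formula in the spirit of the
// description above; the thresholds are assumed configuration values
// (e.g. minimum files to trigger a compaction, blocking file count),
// not part of the Store interface itself.
public class CompactionPressure {
    static double pressure(int storefileCount, int minFilesToCompact, int blockingFileCount) {
        if (storefileCount <= minFilesToCompact) {
            return 0.0; // not enough files to compact yet
        }
        // Rises linearly with the file count: reaches 1.0 at the blocking
        // threshold and exceeds 1.0 when the store has too many files.
        return (double) (storefileCount - minFilesToCompact)
                / (blockingFileCount - minFilesToCompact);
    }
}
```

The useful property for schedulers is the linearity: a store at pressure 2.0 is twice as far past its compaction threshold as one at 1.0, which a priority number like getCompactPriority() does not express.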
void refreshStoreFiles(Collection<String> newFiles) throws IOException
Replaces the store files that the store has with the given files.
Throws: IOException

boolean isPrimaryReplicaStore()
void closeAndArchiveCompactedFiles() throws IOException
Closes and archives the compacted files under this store.
Throws: IOException

boolean isSloppyMemstore()
Copyright © 2007–2017 The Apache Software Foundation. All rights reserved.