| Package | Description |
|---|---|
| org.apache.hadoop.hbase.coprocessor | |
| org.apache.hadoop.hbase.mob | |
| org.apache.hadoop.hbase.regionserver | |
| org.apache.hadoop.hbase.regionserver.compactions | |
| org.apache.hadoop.hbase.regionserver.throttle | |
| org.apache.hadoop.hbase.security.access | |
| Modifier and Type | Method and Description |
|---|---|
| default void | RegionObserver.postCompact(ObserverContext<RegionCoprocessorEnvironment> c, Store store, StoreFile resultFile, CompactionLifeCycleTracker tracker) Called after compaction has completed and the new store file has been moved into place. |
| default void | RegionObserver.postCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c, Store store, org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList<StoreFile> selected, CompactionLifeCycleTracker tracker) Called after the StoreFiles to compact have been selected from the available candidates. |
| default void | RegionObserver.postFlush(ObserverContext<RegionCoprocessorEnvironment> c, Store store, StoreFile resultFile) Called after a Store's memstore is flushed to disk. |
| default InternalScanner | RegionObserver.preCompact(ObserverContext<RegionCoprocessorEnvironment> c, Store store, InternalScanner scanner, ScanType scanType, CompactionLifeCycleTracker tracker) Called prior to writing the StoreFiles selected for compaction into a new StoreFile. |
| default InternalScanner | RegionObserver.preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs, InternalScanner s, CompactionLifeCycleTracker tracker, long readPoint) Called prior to writing the StoreFiles selected for compaction into a new StoreFile, and prior to creating the scanner used to read the input files. |
| default void | RegionObserver.preCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c, Store store, List<StoreFile> candidates, CompactionLifeCycleTracker tracker) Called prior to selecting the StoreFiles to compact from the list of available candidates. |
| default InternalScanner | RegionObserver.preFlush(ObserverContext<RegionCoprocessorEnvironment> c, Store store, InternalScanner scanner) Called before a Store's memstore is flushed to disk. |
| default InternalScanner | RegionObserver.preFlushScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, List<KeyValueScanner> scanners, InternalScanner s, long readPoint) Called before a memstore is flushed to disk and prior to creating the scanner to read from the memstore. |
| default KeyValueScanner | RegionObserver.preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s, long readPt) Called before a store opens a new scanner. |
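For orientation, here is a minimal sketch of a coprocessor using two of the hooks above. The class name and the logging inside the hooks are illustrative, not part of the API; the signatures follow the table, which documents a pre-2.0-final API surface that later changed (e.g. Store/StoreFile were narrowed in the released 2.0 coprocessor API), so treat this as a sketch against this specific version:

```java
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;

// Illustrative observer: overrides only the two hooks it cares about;
// every other RegionObserver method keeps its default (no-op) body.
public class CompactionAuditObserver implements RegionObserver {

  @Override
  public void postCompact(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, StoreFile resultFile, CompactionLifeCycleTracker tracker) {
    // Runs once the new store file has been moved into place.
    System.out.println("Compacted store " + store.getColumnFamilyName());
  }

  @Override
  public void postFlush(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, StoreFile resultFile) {
    // Runs after a memstore flush has produced a new store file.
    System.out.println("Flushed store " + store.getColumnFamilyName());
  }
}
```

Such an observer would be attached per-table or cluster-wide through the usual coprocessor loading mechanisms; it compiles only against the HBase version this page documents.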
| Modifier and Type | Method and Description |
|---|---|
| protected void | MobStoreEngine.createCompactor(org.apache.hadoop.conf.Configuration conf, Store store) Creates the DefaultMobCompactor. |
| protected void | MobStoreEngine.createStoreFlusher(org.apache.hadoop.conf.Configuration conf, Store store) |
| Constructor and Description |
|---|
| DefaultMobStoreCompactor(org.apache.hadoop.conf.Configuration conf, Store store) |
| DefaultMobStoreFlusher(org.apache.hadoop.conf.Configuration conf, Store store) |
| Modifier and Type | Class and Description |
|---|---|
| class | HMobStore The store implementation that saves MOBs (medium objects); it extends HStore. |
| class | HStore A Store holds a column family in a Region. |
| Modifier and Type | Field and Description |
|---|---|
| protected Optional<Store> | StoreScanner.store |
| Modifier and Type | Method and Description |
|---|---|
| Store | CompactingMemStore.getStore() |
| Store | Region.getStore(byte[] family) Return the Store for the given family. |
| Modifier and Type | Method and Description |
|---|---|
| List<? extends Store> | Region.getStores() Return the list of Stores managed by this region. |
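As a hedged illustration of the two accessors above, a helper could walk a region's stores to aggregate a per-region statistic. The class and method names here are hypothetical; Region.getStores() is from the table, and Store.getStorefilesCount() is assumed to exist on this version's Store interface:

```java
import org.apache.hadoop.hbase.regionserver.Region;
import org.apache.hadoop.hbase.regionserver.Store;

// Hypothetical helper: sums the store file count across all of a
// region's stores (one Store per column family).
public final class StoreStats {

  static long totalStorefiles(Region region) {
    long total = 0;
    for (Store store : region.getStores()) {
      total += store.getStorefilesCount();
    }
    return total;
  }

  private StoreStats() {
  }
}
```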
| Modifier and Type | Method and Description |
|---|---|
| static StoreEngine<?,?,?,?> | StoreEngine.create(Store store, org.apache.hadoop.conf.Configuration conf, CellComparator kvComparator) Create the StoreEngine configured for the given Store. |
| protected void | DefaultStoreEngine.createCompactionPolicy(org.apache.hadoop.conf.Configuration conf, Store store) |
| protected void | DefaultStoreEngine.createCompactor(org.apache.hadoop.conf.Configuration conf, Store store) |
| protected void | StripeStoreEngine.createComponents(org.apache.hadoop.conf.Configuration conf, Store store, CellComparator comparator) |
| protected abstract void | StoreEngine.createComponents(org.apache.hadoop.conf.Configuration conf, Store store, CellComparator kvComparator) Create the StoreEngine's components. |
| protected void | DefaultStoreEngine.createComponents(org.apache.hadoop.conf.Configuration conf, Store store, CellComparator kvComparator) |
| protected void | DateTieredStoreEngine.createComponents(org.apache.hadoop.conf.Configuration conf, Store store, CellComparator kvComparator) |
| protected StoreEngine<?,?,?,?> | HStore.createStoreEngine(Store store, org.apache.hadoop.conf.Configuration conf, CellComparator kvComparator) Creates the store engine configured for the given Store. |
| protected StoreEngine<?,?,?,?> | HMobStore.createStoreEngine(Store store, org.apache.hadoop.conf.Configuration conf, CellComparator cellComparator) Creates the mob store engine. |
| protected void | DefaultStoreEngine.createStoreFlusher(org.apache.hadoop.conf.Configuration conf, Store store) |
| void | RegionCoprocessorHost.postCompact(Store store, StoreFile resultFile, CompactionLifeCycleTracker tracker, User user) Called after the store compaction has completed. |
| void | RegionCoprocessorHost.postCompactSelection(Store store, org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList<StoreFile> selected, CompactionLifeCycleTracker tracker, User user) Called after the StoreFiles to be compacted have been selected from the available candidates. |
| void | RegionCoprocessorHost.postFlush(Store store, StoreFile storeFile) Invoked after a memstore flush. |
| InternalScanner | RegionCoprocessorHost.preCompact(Store store, InternalScanner scanner, ScanType scanType, CompactionLifeCycleTracker tracker, User user) Called prior to rewriting the store files selected for compaction. |
| InternalScanner | RegionCoprocessorHost.preCompactScannerOpen(Store store, List<StoreFileScanner> scanners, ScanType scanType, long earliestPutTs, CompactionLifeCycleTracker tracker, User user, long readPoint) |
| boolean | RegionCoprocessorHost.preCompactSelection(Store store, List<StoreFile> candidates, CompactionLifeCycleTracker tracker, User user) Called prior to selecting the StoreFiles for compaction from the list of currently available candidates. |
| InternalScanner | RegionCoprocessorHost.preFlush(Store store, InternalScanner scanner) Invoked before a memstore flush. |
| InternalScanner | RegionCoprocessorHost.preFlushScannerOpen(Store store, List<KeyValueScanner> scanners, long readPoint) |
| KeyValueScanner | RegionCoprocessorHost.preStoreScannerOpen(Store store, Scan scan, NavigableSet<byte[]> targetCols, long readPt) |
| protected List<KeyValueScanner> | StoreScanner.selectScannersFrom(Store store, List<? extends KeyValueScanner> allScanners) Filters the given list of scanners using Bloom filter, time range, and TTL. |
| boolean | StoreFileScanner.shouldUseScanner(Scan scan, Store store, long oldestUnexpiredTS) |
| boolean | SegmentScanner.shouldUseScanner(Scan scan, Store store, long oldestUnexpiredTS) This functionality should be resolved at the higher level (MemStoreScanner); currently returns true by default. |
| boolean | NonLazyKeyValueScanner.shouldUseScanner(Scan scan, Store store, long oldestUnexpiredTS) |
| boolean | KeyValueScanner.shouldUseScanner(Scan scan, Store store, long oldestUnexpiredTS) Allows filtering out scanners (both StoreFile and memstore) that we don't want to use based on criteria such as Bloom filters and timestamp ranges. |
| Constructor and Description |
|---|
| DefaultStoreFlusher(org.apache.hadoop.conf.Configuration conf, Store store) |
| MemStoreCompactorSegmentsIterator(List<ImmutableSegment> segments, CellComparator comparator, int compactionKVMax, Store store) |
| MobStoreScanner(Store store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt) |
| StoreScanner(Store store, ScanInfo scanInfo, OptionalInt maxVersions, List<? extends KeyValueScanner> scanners, long smallestReadPoint, long earliestPutTs, byte[] dropDeletesFromRow, byte[] dropDeletesToRow) Used for compactions that drop deletes from a limited range of rows. |
| StoreScanner(Store store, ScanInfo scanInfo, OptionalInt maxVersions, List<? extends KeyValueScanner> scanners, ScanType scanType, long smallestReadPoint, long earliestPutTs) Used for compactions. |
| StoreScanner(Store store, ScanInfo scanInfo, Scan scan, NavigableSet<byte[]> columns, long readPt) Opens a scanner across memstore, snapshot, and all StoreFiles. |
| StripeStoreFlusher(org.apache.hadoop.conf.Configuration conf, Store store, StripeCompactionPolicy policy, StripeStoreFileManager stripes) |
| Modifier and Type | Field and Description |
|---|---|
| protected Store | Compactor.store |
| Modifier and Type | Method and Description |
|---|---|
| default void | CompactionLifeCycleTracker.afterExecute(Store store) Called after compaction is executed by CompactSplitThread. |
| default void | CompactionLifeCycleTracker.beforeExecute(Store store) Called before compaction is executed by CompactSplitThread. |
| protected InternalScanner | Compactor.createScanner(Store store, List<StoreFileScanner> scanners, long smallestReadPoint, long earliestPutTs, byte[] dropDeletesFromRow, byte[] dropDeletesToRow) |
| protected InternalScanner | Compactor.createScanner(Store store, List<StoreFileScanner> scanners, ScanType scanType, long smallestReadPoint, long earliestPutTs) |
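The beforeExecute/afterExecute pair above brackets a compaction run, so a tracker can carry state between the two calls. As a hedged sketch (the class name and the timing logic are hypothetical; only the two default methods come from the table):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;

// Hypothetical tracker that measures wall-clock time per compaction.
// A concurrent map is used because CompactSplitThread may run
// compactions for different stores in parallel.
public class TimingTracker implements CompactionLifeCycleTracker {

  private final ConcurrentMap<Store, Long> startTimes = new ConcurrentHashMap<>();

  @Override
  public void beforeExecute(Store store) {
    startTimes.put(store, System.nanoTime());
  }

  @Override
  public void afterExecute(Store store) {
    Long start = startTimes.remove(store);
    if (start != null) {
      System.out.println("Compaction of " + store.getColumnFamilyName()
          + " took " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
  }
}
```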
| Constructor and Description |
|---|
| AbstractMultiOutputCompactor(org.apache.hadoop.conf.Configuration conf, Store store) |
| DateTieredCompactor(org.apache.hadoop.conf.Configuration conf, Store store) |
| DefaultCompactor(org.apache.hadoop.conf.Configuration conf, Store store) |
| StripeCompactor(org.apache.hadoop.conf.Configuration conf, Store store) |
| Modifier and Type | Method and Description |
|---|---|
| static String | ThroughputControlUtil.getNameForThrottling(Store store, String opName) Generates a name for throttling, to prevent name conflicts when multiple I/O operations run in parallel on the same store. |
| Modifier and Type | Method and Description |
|---|---|
| InternalScanner | AccessController.preCompact(ObserverContext<RegionCoprocessorEnvironment> c, Store store, InternalScanner scanner, ScanType scanType, CompactionLifeCycleTracker tracker) |
Copyright © 2007–2017 The Apache Software Foundation. All rights reserved.