| Modifier and Type | Method and Description |
|---|---|
| static HoodieRecord | HoodieAvroUtils.createHoodieRecordFromAvro(org.apache.avro.generic.IndexedRecord data, String payloadClass, String preCombineField, Option<Pair<String,String>> simpleKeyGenFieldsOpt, Boolean withOperation, Option<String> partitionNameOp, Boolean populateMetaFields, Option<org.apache.avro.Schema> schemaWithoutMetaFields) |
| Modifier and Type | Method and Description |
|---|---|
| List<Pair<K,V>> | HoodieListPairData.collectAsList() |
| List<Pair<K,V>> | HoodiePairData.collectAsList()<br>Collects results of the underlying collection into a List. This is a terminal operation. |
| List<Pair<K,V>> | HoodieListPairData.get() |
| <W> HoodiePairData<K,Pair<V,Option<W>>> | HoodieListPairData.leftOuterJoin(HoodiePairData<K,W> other) |
| <W> HoodiePairData<K,Pair<V,Option<W>>> | HoodiePairData.leftOuterJoin(HoodiePairData<K,W> other)<br>Performs a left outer join of this dataset against other. |
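The `leftOuterJoin` semantics described above can be sketched with only the JDK. This is not Hudi's implementation: `java.util.Map.Entry` stands in for Hudi's `Pair`, `Optional` for Hudi's `Option`, and plain lists for `HoodiePairData`. The class and method names here are hypothetical.

```java
import java.util.*;

// Stdlib sketch of leftOuterJoin semantics: every (k, v) on the left is kept;
// it pairs with each right-side value for k, or with Optional.empty() when k
// has no match on the right.
public class LeftOuterJoinSketch {
    static <K, V, W> List<Map.Entry<K, Map.Entry<V, Optional<W>>>> leftOuterJoin(
            List<Map.Entry<K, V>> left, List<Map.Entry<K, W>> right) {
        // Index the right side by key so each left entry probes in O(1).
        Map<K, List<W>> rightByKey = new HashMap<>();
        for (Map.Entry<K, W> e : right) {
            rightByKey.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        List<Map.Entry<K, Map.Entry<V, Optional<W>>>> out = new ArrayList<>();
        for (Map.Entry<K, V> e : left) {
            List<W> matches = rightByKey.get(e.getKey());
            if (matches == null) {
                // Unmatched left key survives with an empty right side.
                out.add(Map.entry(e.getKey(),
                        new AbstractMap.SimpleEntry<>(e.getValue(), Optional.<W>empty())));
            } else {
                for (W w : matches) {
                    out.add(Map.entry(e.getKey(),
                            new AbstractMap.SimpleEntry<>(e.getValue(), Optional.of(w))));
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        var joined = leftOuterJoin(
                List.of(Map.entry("a", 1), Map.entry("b", 2)),
                List.of(Map.entry("a", "x")));
        System.out.println(joined); // "b" is retained with an empty right side
    }
}
```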
| Modifier and Type | Method and Description |
|---|---|
| static <K,V> HoodieListPairData<K,V> | HoodieListPairData.eager(List<Pair<K,V>> data) |
| <K,V> HoodiePairData<K,V> | HoodieData.flatMapToPair(SerializableFunction<T,Iterator<? extends Pair<K,V>>> func)<br>Maps every element in the collection into a collection of Pairs of new elements using the provided mapping func, subsequently flattening the result (by concatenating) into a single collection. NOTE: this operation converts the container from HoodieData to HoodiePairData. This is an intermediate operation. |
| <K,V> HoodiePairData<K,V> | HoodieListData.flatMapToPair(SerializableFunction<T,Iterator<? extends Pair<K,V>>> func) |
| static <K,V> HoodieListPairData<K,V> | HoodieListPairData.lazy(List<Pair<K,V>> data) |
| <O> HoodieData<O> | HoodieListPairData.map(SerializableFunction<Pair<K,V>,O> func) |
| <O> HoodieData<O> | HoodiePairData.map(SerializableFunction<Pair<K,V>,O> func)<br>Maps key-value pairs of this HoodiePairData container leveraging the provided mapper. NOTE: this returns HoodieData, not HoodiePairData. |
| <L,W> HoodiePairData<L,W> | HoodieListPairData.mapToPair(SerializablePairFunction<Pair<K,V>,L,W> mapToPairFunc) |
| <L,W> HoodiePairData<L,W> | HoodiePairData.mapToPair(SerializablePairFunction<Pair<K,V>,L,W> mapToPairFunc) |
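The `flatMapToPair` behavior above (expand each element into pairs, then concatenate) can be sketched over plain lists with JDK streams. This is a hedged illustration of the documented semantics, not Hudi's code; `Map.Entry` stands in for `Pair`, and the word-count input is a hypothetical example.

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

// Stdlib sketch of flatMapToPair semantics: each element expands to zero or
// more key-value pairs, and the per-element results are flattened into one list.
public class FlatMapToPairSketch {
    static <T, K, V> List<Map.Entry<K, V>> flatMapToPair(
            List<T> data, Function<T, List<Map.Entry<K, V>>> func) {
        return data.stream()
                .flatMap(t -> func.apply(t).stream()) // concatenate per-element results
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical input: split each line into (word, 1) pairs.
        List<Map.Entry<String, Integer>> pairs = flatMapToPair(
                List.of("a b", "c"),
                line -> Arrays.stream(line.split(" "))
                        .map(w -> Map.entry(w, 1))
                        .collect(Collectors.toList()));
        System.out.println(pairs.size()); // prints 3
    }
}
```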
| Modifier and Type | Method and Description |
|---|---|
| abstract <I,K,V> List<V> | HoodieEngineContext.reduceByKey(List<Pair<K,V>> data, SerializableBiFunction<V,V,V> reduceFunc, int parallelism) |
| <I,K,V> List<V> | HoodieLocalEngineContext.reduceByKey(List<Pair<K,V>> data, SerializableBiFunction<V,V,V> reduceFunc, int parallelism) |
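A single-threaded sketch of the `reduceByKey` contract above, under the assumption that values sharing a key are folded together with `reduceFunc` and one reduced value per key is returned. This ignores the `parallelism` argument and uses `Map.Entry` in place of Hudi's `Pair`; it is an illustration, not the engine-context implementation.

```java
import java.util.*;
import java.util.function.BinaryOperator;

// Stdlib sketch of reduceByKey semantics: fold all values that share a key
// with reduceFunc; emit one value per key (in first-seen key order here).
public class ReduceByKeySketch {
    static <K, V> List<V> reduceByKey(List<Map.Entry<K, V>> data, BinaryOperator<V> reduceFunc) {
        Map<K, V> reduced = new LinkedHashMap<>(); // keep first-seen key order
        for (Map.Entry<K, V> e : data) {
            reduced.merge(e.getKey(), e.getValue(), reduceFunc);
        }
        return new ArrayList<>(reduced.values());
    }

    public static void main(String[] args) {
        List<Integer> out = reduceByKey(
                List.of(Map.entry("a", 1), Map.entry("a", 2), Map.entry("b", 5)),
                Integer::sum);
        System.out.println(out); // [3, 5]
    }
}
```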
| Modifier and Type | Method and Description |
|---|---|
| static Option<Pair<Integer,String>> | FSUtils.getLatestLogVersion(HoodieStorage storage, StoragePath partitionPath, String fileId, String logFileExtension, String deltaCommitTime)<br>Get the latest log version for the fileId in the partition path. |
| Modifier and Type | Method and Description |
|---|---|
| static <T> Map<String,T> | FSUtils.parallelizeFilesProcess(HoodieEngineContext hoodieEngineContext, HoodieStorage storage, int parallelism, FSUtils.SerializableFunction<Pair<String,StorageConfiguration<?>>,T> pairFunction, List<String> subPaths) |
| static <T> Map<String,T> | FSUtils.parallelizeSubPathProcess(HoodieEngineContext hoodieEngineContext, HoodieStorage storage, StoragePath dirPath, int parallelism, Predicate<StoragePathInfo> subPathPredicate, FSUtils.SerializableFunction<Pair<String,StorageConfiguration<?>>,T> pairFunction)<br>Processes sub-paths in parallel. |
| Modifier and Type | Method and Description |
|---|---|
| Pair<K,V> | SerializablePairFunction.call(I t) |
| Modifier and Type | Method and Description |
|---|---|
| Stream<Pair<K,V>> | SerializablePairFlatMapFunction.call(I t) |
| static <I,K,V> Function<I,Stream<Pair<K,V>>> | FunctionWrapper.throwingFlatMapToPairWrapper(SerializablePairFlatMapFunction<I,K,V> throwingPairFlatMapFunction) |
| static <I,K,V> Function<I,Pair<K,V>> | FunctionWrapper.throwingMapToPairWrapper(SerializablePairFunction<I,K,V> throwingPairFunction) |
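The `FunctionWrapper.throwing...Wrapper` methods follow a common pattern: adapt a function that declares a checked exception into a `java.util.function.Function` by rethrowing unchecked, so it can run inside stream pipelines. Here is a minimal stdlib sketch of that pattern (the `ThrowingFunction` interface and `wrap` method are hypothetical names, not Hudi's).

```java
import java.util.function.Function;

// Stdlib sketch of the throwing-wrapper pattern: a checked-exception function
// is adapted to java.util.function.Function by rethrowing as unchecked.
public class ThrowingWrapperSketch {
    @FunctionalInterface
    interface ThrowingFunction<I, O> {
        O apply(I in) throws Exception;
    }

    static <I, O> Function<I, O> wrap(ThrowingFunction<I, O> throwing) {
        return in -> {
            try {
                return throwing.apply(in);
            } catch (Exception e) {
                // Surface the checked exception as an unchecked one.
                throw new RuntimeException(e);
            }
        };
    }

    public static void main(String[] args) {
        // parseInt can throw; the wrapper makes it usable as a plain Function.
        Function<String, Integer> parse = wrap(Integer::parseInt);
        System.out.println(parse.apply("42")); // prints 42
    }
}
```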
| Modifier and Type | Method and Description |
|---|---|
| Pair<Option<Long>,Option<Long>> | HoodieCommitMetadata.getMinAndMaxEventTime() |
| Modifier and Type | Method and Description |
|---|---|
| default List<Pair<HoodieRecord,org.apache.avro.Schema>> | HoodieRecordMerger.fullOuterMerge(HoodieRecord older, org.apache.avro.Schema oldSchema, HoodieRecord newer, org.apache.avro.Schema newSchema, TypedProperties props)<br>Merges two records with the same key in a full outer merge fashion. |
| List<Pair<HoodieRecord,org.apache.avro.Schema>> | HoodieMetadataRecordMerger.fullOuterMerge(HoodieRecord older, org.apache.avro.Schema oldSchema, HoodieRecord newer, org.apache.avro.Schema newSchema, TypedProperties props) |
| static Option<Pair<String,List<String>>> | HoodieCommitMetadata.getFileSliceForFileGroupFromDeltaCommit(byte[] bytes, HoodieFileGroupId fileGroupId)<br>Parses the bytes of a delta commit and gets the base file and the log files belonging to the provided file group. |
| Option<Pair<HoodieRecord,org.apache.avro.Schema>> | HoodiePreCombineAvroRecordMerger.merge(HoodieRecord older, org.apache.avro.Schema oldSchema, HoodieRecord newer, org.apache.avro.Schema newSchema, TypedProperties props) |
| Option<Pair<HoodieRecord,org.apache.avro.Schema>> | HoodieAvroRecordMerger.merge(HoodieRecord older, org.apache.avro.Schema oldSchema, HoodieRecord newer, org.apache.avro.Schema newSchema, TypedProperties props) |
| Option<Pair<HoodieRecord,org.apache.avro.Schema>> | HoodieRecordMerger.merge(HoodieRecord older, org.apache.avro.Schema oldSchema, HoodieRecord newer, org.apache.avro.Schema newSchema, TypedProperties props)<br>This method converges combineAndGetUpdateValue and precombine from HoodiePayload. |
| Option<Pair<HoodieRecord,org.apache.avro.Schema>> | OverwriteWithLatestMerger.merge(HoodieRecord older, org.apache.avro.Schema oldSchema, HoodieRecord newer, org.apache.avro.Schema newSchema, TypedProperties props) |
| default Option<Pair<HoodieRecord,org.apache.avro.Schema>> | HoodieRecordMerger.partialMerge(HoodieRecord older, org.apache.avro.Schema oldSchema, HoodieRecord newer, org.apache.avro.Schema newSchema, org.apache.avro.Schema readerSchema, TypedProperties props)<br>Merges records which can contain partial updates, i.e., only a subset of fields and values are present in the record representing the update, and absent fields are not updated. |
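The partial-merge rule described for `partialMerge` (fields present in the update win; absent fields keep their older values) can be illustrated on map-shaped records. Hudi merges Avro records against schemas; plain maps stand in here, so this is a sketch of the rule, not of the merger API.

```java
import java.util.*;

// Stdlib sketch of partial-merge semantics on map-shaped records: fields the
// update actually carries overwrite the older record; absent fields survive.
public class PartialMergeSketch {
    static Map<String, Object> partialMerge(Map<String, Object> older, Map<String, Object> newer) {
        Map<String, Object> merged = new LinkedHashMap<>(older); // start from the older record
        merged.putAll(newer); // overlay only the updated fields
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> older = Map.of("id", 1, "name", "old", "city", "NYC");
        Map<String, Object> update = Map.of("name", "new"); // partial update: only "name"
        System.out.println(partialMerge(older, update)); // "id" and "city" are untouched
    }
}
```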
| Modifier and Type | Method and Description |
|---|---|
| HoodieRecord | HoodieEmptyRecord.wrapIntoHoodieRecordPayloadWithParams(org.apache.avro.Schema recordSchema, Properties props, Option<Pair<String,String>> simpleKeyGenFieldsOpt, Boolean withOperation, Option<String> partitionNameOp, Boolean populateMetaFieldsOp, Option<org.apache.avro.Schema> schemaWithoutMetaFields) |
| HoodieRecord | HoodieAvroIndexedRecord.wrapIntoHoodieRecordPayloadWithParams(org.apache.avro.Schema recordSchema, Properties props, Option<Pair<String,String>> simpleKeyGenFieldsOpt, Boolean withOperation, Option<String> partitionNameOp, Boolean populateMetaFields, Option<org.apache.avro.Schema> schemaWithoutMetaFields) |
| HoodieRecord | HoodieAvroRecord.wrapIntoHoodieRecordPayloadWithParams(org.apache.avro.Schema recordSchema, Properties props, Option<Pair<String,String>> simpleKeyGenFieldsOpt, Boolean withOperation, Option<String> partitionNameOp, Boolean populateMetaFields, Option<org.apache.avro.Schema> schemaWithoutMetaFields) |
| HoodieRecord | HoodieRecordCompatibilityInterface.wrapIntoHoodieRecordPayloadWithParams(org.apache.avro.Schema recordSchema, Properties props, Option<Pair<String,String>> simpleKeyGenFieldsOpt, Boolean withOperation, Option<String> partitionNameOp, Boolean populateMetaFieldsOp, Option<org.apache.avro.Schema> schemaWithoutMetaFields)<br>This method is used to extract the HoodieKey without going through the keyGenerator. |
| Modifier and Type | Method and Description |
|---|---|
| Map<Serializable,Pair<Option<T>,Map<String,Object>>> | HoodieMergedLogRecordReader.getRecords() |
| Iterator<Pair<Option<T>,Map<String,Object>>> | HoodieMergedLogRecordReader.iterator() |
| Constructor and Description |
|---|
| HoodieFileSliceReader(Option<HoodieFileReader> baseFileReader, HoodieMergedLogRecordScanner scanner, org.apache.avro.Schema schema, String preCombineField, HoodieRecordMerger merger, Properties props, Option<Pair<String,String>> simpleKeyGenFieldsOpt) |
| Constructor and Description |
|---|
| HoodieDeleteBlock(List<Pair<DeleteRecord,Long>> recordsToDelete, boolean shouldWriteRecordPositions, Map<HoodieLogBlock.HeaderMetadataType,String> header) |
| Modifier and Type | Field and Description |
|---|---|
| protected Iterator<Pair<Option<T>,Map<String,Object>>> | HoodieBaseFileGroupRecordBuffer.logRecordIterator |
| protected ExternalSpillableMap<Serializable,Pair<Option<T>,Map<String,Object>>> | HoodieBaseFileGroupRecordBuffer.records |
| Modifier and Type | Method and Description |
|---|---|
| Pair<List<org.apache.avro.Schema.Field>,List<org.apache.avro.Schema.Field>> | HoodieFileGroupReaderSchemaHandler.getBootstrapDataFields() |
| Pair<List<org.apache.avro.Schema.Field>,List<org.apache.avro.Schema.Field>> | HoodieFileGroupReaderSchemaHandler.getBootstrapRequiredFields() |
| Pair<List<org.apache.avro.Schema.Field>,List<org.apache.avro.Schema.Field>> | HoodiePositionBasedSchemaHandler.getBootstrapRequiredFields() |
| protected Pair<ClosableIterator<T>,org.apache.avro.Schema> | HoodieBaseFileGroupRecordBuffer.getRecordsIterator(HoodieDataBlock dataBlock, Option<KeySpec> keySpecOpt)<br>Create a record iterator for a data block. |
| protected Pair<Function<T,T>,org.apache.avro.Schema> | HoodieBaseFileGroupRecordBuffer.getSchemaTransformerWithEvolvedSchema(HoodieDataBlock dataBlock) |
| Modifier and Type | Method and Description |
|---|---|
| protected Option<Pair<Function<T,T>,org.apache.avro.Schema>> | HoodieBaseFileGroupRecordBuffer.composeEvolvedSchemaTransformer(HoodieDataBlock dataBlock)<br>Get the final read schema to support schema evolution. |
| protected Option<Pair<T,Map<String,Object>>> | HoodieBaseFileGroupRecordBuffer.doProcessNextDataRecord(T record, Map<String,Object> metadata, Pair<Option<T>,Map<String,Object>> existingRecordMetadataPair)<br>Merge two log data records if needed. |
| Iterator<Pair<Option<T>,Map<String,Object>>> | HoodieBaseFileGroupRecordBuffer.getLogRecordIterator() |
| Iterator<Pair<Option<T>,Map<String,Object>>> | HoodieUnmergedFileGroupRecordBuffer.getLogRecordIterator() |
| Iterator<Pair<Option<T>,Map<String,Object>>> | HoodieFileGroupRecordBuffer.getLogRecordIterator() |
| Map<Serializable,Pair<Option<T>,Map<String,Object>>> | HoodieBaseFileGroupRecordBuffer.getLogRecords() |
| Map<Serializable,Pair<Option<T>,Map<String,Object>>> | HoodieFileGroupRecordBuffer.getLogRecords() |
| Modifier and Type | Method and Description |
|---|---|
| protected Option<Pair<T,Map<String,Object>>> | HoodieBaseFileGroupRecordBuffer.doProcessNextDataRecord(T record, Map<String,Object> metadata, Pair<Option<T>,Map<String,Object>> existingRecordMetadataPair)<br>Merge two log data records if needed. |
| protected Option<DeleteRecord> | HoodieBaseFileGroupRecordBuffer.doProcessNextDeletedRecord(DeleteRecord deleteRecord, Pair<Option<T>,Map<String,Object>> existingRecordMetadataPair)<br>Merge a delete record with another record (data or delete). |
| protected boolean | HoodieBaseFileGroupRecordBuffer.hasNextBaseRecord(T baseRecord, Pair<Option<T>,Map<String,Object>> logRecordInfo) |
| Modifier and Type | Method and Description |
|---|---|
| Option<Pair<HoodieInstant,HoodieCommitMetadata>> | HoodieActiveTimeline.getLastCommitMetadataWithValidData()<br>Get the last instant with valid data and convert it to HoodieCommitMetadata. |
| Option<Pair<HoodieInstant,HoodieCommitMetadata>> | HoodieActiveTimeline.getLastCommitMetadataWithValidSchema()<br>Returns the most recent instant having a valid schema in its HoodieCommitMetadata. |
| Modifier and Type | Method and Description |
|---|---|
| static Pair<HoodieFileGroupId,HoodieInstant> | ClusteringOpDTO.toClusteringOperation(ClusteringOpDTO dto) |
| static Pair<String,CompactionOperation> | CompactionOpDTO.toCompactionOperation(CompactionOpDTO dto) |
| Modifier and Type | Field and Description |
|---|---|
| protected Map<HoodieFileGroupId,Pair<String,CompactionOperation>> | HoodieTableFileSystemView.fgIdToPendingCompaction<br>PartitionPath + File-Id to pending compaction instant time. |
| protected Map<HoodieFileGroupId,Pair<String,CompactionOperation>> | HoodieTableFileSystemView.fgIdToPendingLogCompaction<br>PartitionPath + File-Id to pending log compaction instant time. |
| Modifier and Type | Method and Description |
|---|---|
| static Pair<Option<String>,Option<String>> | InternalSchemaCache.getInternalSchemaAndAvroSchemaForClusteringAndCompaction(HoodieTableMetaClient metaClient, String compactionAndClusteringInstant)<br>Get the internalSchema and avroSchema for a compaction/clustering operation. |
| Modifier and Type | Method and Description |
|---|---|
| abstract List<Pair<HoodieKey,Long>> | FileFormatUtils.fetchRecordKeysWithPositions(HoodieStorage storage, StoragePath filePath)<br>Fetch HoodieKeys with positions from the given data file. |
| abstract List<Pair<HoodieKey,Long>> | FileFormatUtils.fetchRecordKeysWithPositions(HoodieStorage storage, StoragePath filePath, Option<BaseKeyGenerator> keyGeneratorOpt)<br>Fetch HoodieKeys with positions from the given data file. |
| abstract Set<Pair<String,Long>> | FileFormatUtils.filterRowKeys(HoodieStorage storage, StoragePath filePath, Set<String> filter)<br>Read the rowKey list matching the given filter from the given data file. |
| static Set<Pair<String,String>> | CommitUtils.flattenPartitionToReplaceFileIds(Map<String,List<String>> partitionToReplaceFileIds) |
| static Stream<Pair<HoodieInstant,HoodieClusteringPlan>> | ClusteringUtils.getAllPendingClusteringPlans(HoodieTableMetaClient metaClient)<br>Get all pending clustering plans along with their instants. |
| static Map<HoodieFileGroupId,Pair<String,HoodieCompactionOperation>> | CompactionUtils.getAllPendingCompactionOperations(HoodieTableMetaClient metaClient)<br>Get all PartitionPath + file-ids with pending compaction operations and their target compaction instant time. |
| static Map<HoodieFileGroupId,Pair<String,HoodieCompactionOperation>> | CompactionUtils.getAllPendingCompactionOperationsInPendingCompactionPlans(List<Pair<HoodieInstant,HoodieCompactionPlan>> pendingLogCompactionPlanWithInstants)<br>Get all partition + file ids with pending log compaction operations and their target log compaction instant time. |
| static List<Pair<HoodieInstant,HoodieCompactionPlan>> | CompactionUtils.getAllPendingCompactionPlans(HoodieTableMetaClient metaClient)<br>Get all pending compaction plans along with their instants. |
| static Map<HoodieFileGroupId,Pair<String,HoodieCompactionOperation>> | CompactionUtils.getAllPendingLogCompactionOperations(HoodieTableMetaClient metaClient)<br>Get all partition + file ids with pending log compaction operations and their target log compaction instant time. |
| static List<Pair<HoodieInstant,HoodieCompactionPlan>> | CompactionUtils.getAllPendingLogCompactionPlans(HoodieTableMetaClient metaClient)<br>Get all pending log compaction plans along with their instants. |
| static Option<Pair<HoodieInstant,HoodieClusteringPlan>> | ClusteringUtils.getClusteringPlan(HoodieTableMetaClient metaClient, HoodieInstant pendingReplaceInstant)<br>Get the clustering plan from the timeline. |
| static Option<Pair<HoodieInstant,HoodieClusteringPlan>> | ClusteringUtils.getClusteringPlan(HoodieTimeline timeline, HoodieInstant pendingReplaceInstant)<br>Get the clustering plan from the timeline. |
| static Option<Pair<HoodieTimeline,HoodieInstant>> | CompactionUtils.getCompletedDeltaCommitsSinceLatestCompaction(HoodieActiveTimeline activeTimeline)<br>Returns a pair of (timeline containing the completed delta commits after the latest completed compaction commit, the completed compaction commit instant), if the latest completed compaction commit is present; a pair of (timeline containing all the completed delta commits, the first delta commit instant), if there is no completed compaction commit. |
| static Option<Pair<HoodieTimeline,HoodieInstant>> | CompactionUtils.getDeltaCommitsSinceLatestCompaction(HoodieActiveTimeline activeTimeline)<br>Returns a pair of (timeline containing the delta commits after the latest completed compaction commit, the completed compaction commit instant), if the latest completed compaction commit is present; a pair of (timeline containing all the delta commits, the first delta commit instant), if there is no completed compaction commit. |
| static Option<Pair<HoodieTimeline,HoodieInstant>> | CompactionUtils.getDeltaCommitsSinceLatestCompactionRequest(HoodieActiveTimeline activeTimeline) |
| static Stream<Pair<HoodieFileGroupId,HoodieInstant>> | ClusteringUtils.getFileGroupsInPendingClusteringInstant(HoodieInstant instant, HoodieClusteringPlan clusteringPlan) |
| static Set<Pair<String,String>> | CommitUtils.getPartitionAndFileIdWithoutSuffix(Map<String,List<HoodieWriteStat>> partitionToWriteStats) |
| static Set<Pair<String,String>> | CommitUtils.getPartitionAndFileIdWithoutSuffixFromSpecificRecord(Map<String,List<HoodieWriteStat>> partitionToWriteStats) |
| static Stream<Pair<HoodieFileGroupId,Pair<String,HoodieCompactionOperation>>> | CompactionUtils.getPendingCompactionOperations(HoodieInstant instant, HoodieCompactionPlan compactionPlan)<br>Get pending compaction operations for both major and minor compaction. |
| Modifier and Type | Method and Description |
|---|---|
| static <R> HoodieRecord<R> | SpillableMapUtils.convertToHoodieRecordPayload(org.apache.avro.generic.GenericRecord record, String payloadClazz, String preCombineField, Pair<String,String> recordKeyPartitionPathFieldPair, boolean withOperationField, Option<String> partitionName, Option<org.apache.avro.Schema> schemaWithoutMetaFields)<br>Utility method to convert bytes to HoodieRecord using the schema and payload class. |
| static <K,V> Map<K,V> | CollectionUtils.createImmutableMap(Pair<K,V>... elements) |
| Modifier and Type | Method and Description |
|---|---|
| static HoodieCompactionOperation | CompactionUtils.buildFromFileSlice(String partitionPath, FileSlice fileSlice, Option<Function<Pair<String,FileSlice>,Map<String,Double>>> metricsCaptureFunction)<br>Generate a compaction operation from a file slice. |
| static HoodieCompactionPlan | CompactionUtils.buildFromFileSlices(List<Pair<String,FileSlice>> partitionFileSlicePairs, Option<Map<String,String>> extraMetadata, Option<Function<Pair<String,FileSlice>,Map<String,Double>>> metricsCaptureFunction)<br>Generate a compaction plan from file slices. |
| static Map<HoodieFileGroupId,Pair<String,HoodieCompactionOperation>> | CompactionUtils.getAllPendingCompactionOperationsInPendingCompactionPlans(List<Pair<HoodieInstant,HoodieCompactionPlan>> pendingLogCompactionPlanWithInstants)<br>Get all partition + file ids with pending log compaction operations and their target log compaction instant time. |
| Modifier and Type | Class and Description |
|---|---|
| class | ImmutablePair<L,R><br>(NOTE: Adapted from Apache commons-lang3) |
| Modifier and Type | Method and Description |
|---|---|
| static <L,R> Pair<L,R> | Pair.of(L left, R right)<br>Obtains an immutable pair from two objects, inferring the generic types. |
| Modifier and Type | Method and Description |
|---|---|
| <T extends Serializable> | RocksDBDAO.prefixSearch(String columnFamilyName, String prefix)<br>Perform a prefix search and return a stream of the key-value pairs retrieved. |
| Modifier and Type | Method and Description |
|---|---|
| int | Pair.compareTo(Pair<L,R> other)<br>Compares the pair based on the left element followed by the right element. |
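The `Pair.compareTo` ordering documented above (left element first, right element on ties) can be sketched with the JDK's `AbstractMap.SimpleImmutableEntry` standing in for Hudi's `ImmutablePair`; the helper below is a hypothetical illustration, not Hudi's code.

```java
import java.util.*;

// Stdlib sketch of Pair ordering: compare by the left element, and only fall
// back to the right element when the left elements are equal.
public class PairOrderingSketch {
    static <L extends Comparable<L>, R extends Comparable<R>> int compare(
            Map.Entry<L, R> a, Map.Entry<L, R> b) {
        int byLeft = a.getKey().compareTo(b.getKey());
        return byLeft != 0 ? byLeft : a.getValue().compareTo(b.getValue());
    }

    public static void main(String[] args) {
        var p1 = new AbstractMap.SimpleImmutableEntry<>("a", 2);
        var p2 = new AbstractMap.SimpleImmutableEntry<>("a", 1);
        System.out.println(compare(p1, p2)); // positive: same left, 2 > 1 on the right
    }
}
```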
| Modifier and Type | Method and Description |
|---|---|
| Pair<InternalSchema,Map<String,String>> | InternalSchemaMerger.mergeSchemaGetRenamed()<br>Create the final read schema for reading an avro/parquet file. |
| Modifier and Type | Method and Description |
|---|---|
| static Map<Integer,Pair<Type,Type>> | InternalSchemaUtils.collectTypeChangedCols(InternalSchema schema, InternalSchema oldSchema)<br>Collect all type-changed cols to build a colPosition -> (newColType, oldColType) map. |
| Modifier and Type | Method and Description |
|---|---|
| Set<Pair<String,Long>> | HoodieBootstrapFileReader.filterRowKeys(Set<String> candidateRowKeys) |
| Set<Pair<String,Long>> | HoodieFileReader.filterRowKeys(Set<String> candidateRowKeys) |
| Set<Pair<String,Long>> | HoodieNativeAvroHFileReader.filterRowKeys(Set<String> candidateRowKeys) |
| Modifier and Type | Method and Description |
|---|---|
| Pair<HoodieMetadataLogRecordReader,Long> | HoodieBackedTableMetadata.getLogRecordScanner(List<HoodieLogFile> logFiles, String partitionName, Option<Boolean> allowFullScanOverride) |
| Modifier and Type | Method and Description |
|---|---|
| Map<Pair<String,String>,BloomFilter> | FileSystemBackedTableMetadata.getBloomFilters(List<Pair<String,String>> partitionNameFileNameList) |
| Map<Pair<String,String>,BloomFilter> | BaseTableMetadata.getBloomFilters(List<Pair<String,String>> partitionNameFileNameList) |
| Map<Pair<String,String>,BloomFilter> | HoodieTableMetadata.getBloomFilters(List<Pair<String,String>> partitionNameFileNameList)<br>Get bloom filters for files from the metadata table index. |
| Map<Pair<String,String>,HoodieMetadataColumnStats> | FileSystemBackedTableMetadata.getColumnStats(List<Pair<String,String>> partitionNameFileNameList, String columnName) |
| Map<Pair<String,String>,HoodieMetadataColumnStats> | BaseTableMetadata.getColumnStats(List<Pair<String,String>> partitionNameFileNameList, String columnName) |
| Map<Pair<String,String>,HoodieMetadataColumnStats> | HoodieTableMetadata.getColumnStats(List<Pair<String,String>> partitionNameFileNameList, String columnName)<br>Get column stats for files from the metadata table index. |
| protected Map<Pair<String,StoragePath>,List<StoragePathInfo>> | HoodieMetadataFileSystemView.listPartitions(List<Pair<String,StoragePath>> partitionPathList) |
| Modifier and Type | Method and Description |
|---|---|
| Map<Pair<String,String>,BloomFilter> | FileSystemBackedTableMetadata.getBloomFilters(List<Pair<String,String>> partitionNameFileNameList) |
| Map<Pair<String,String>,BloomFilter> | BaseTableMetadata.getBloomFilters(List<Pair<String,String>> partitionNameFileNameList) |
| Map<Pair<String,String>,BloomFilter> | HoodieTableMetadata.getBloomFilters(List<Pair<String,String>> partitionNameFileNameList)<br>Get bloom filters for files from the metadata table index. |
| Map<Pair<String,String>,HoodieMetadataColumnStats> | FileSystemBackedTableMetadata.getColumnStats(List<Pair<String,String>> partitionNameFileNameList, String columnName) |
| Map<Pair<String,String>,HoodieMetadataColumnStats> | BaseTableMetadata.getColumnStats(List<Pair<String,String>> partitionNameFileNameList, String columnName) |
| Map<Pair<String,String>,HoodieMetadataColumnStats> | HoodieTableMetadata.getColumnStats(List<Pair<String,String>> partitionNameFileNameList, String columnName)<br>Get column stats for files from the metadata table index. |
| protected Map<Pair<String,StoragePath>,List<StoragePathInfo>> | HoodieMetadataFileSystemView.listPartitions(List<Pair<String,StoragePath>> partitionPathList) |
| static HoodieData<HoodieRecord> | HoodieTableMetadataUtil.readRecordKeysFromBaseFiles(HoodieEngineContext engineContext, HoodieConfig config, List<Pair<String,HoodieBaseFile>> partitionBaseFilePairs, boolean forDelete, int recordIndexMaxParallelism, StoragePath basePath, StorageConfiguration<?> configuration, String activeModule)<br>Deprecated. |
| static HoodieData<HoodieRecord> | HoodieTableMetadataUtil.readRecordKeysFromFileSlices(HoodieEngineContext engineContext, List<Pair<String,FileSlice>> partitionFileSlicePairs, boolean forDelete, int recordIndexMaxParallelism, String activeModule, HoodieTableMetaClient metaClient, EngineType engineType)<br>Reads the record keys from the given file slices and returns a HoodieData of HoodieRecord to be updated in the metadata table. |
| static HoodieData<HoodieRecord> | HoodieTableMetadataUtil.readSecondaryKeysFromBaseFiles(HoodieEngineContext engineContext, List<Pair<String,Pair<String,List<String>>>> partitionFiles, int secondaryIndexMaxParallelism, String activeModule, HoodieTableMetaClient metaClient, EngineType engineType, HoodieIndexDefinition indexDefinition) |
| static HoodieData<HoodieRecord> | HoodieTableMetadataUtil.readSecondaryKeysFromFileSlices(HoodieEngineContext engineContext, List<Pair<String,FileSlice>> partitionFileSlicePairs, int secondaryIndexMaxParallelism, String activeModule, HoodieTableMetaClient metaClient, EngineType engineType, HoodieIndexDefinition indexDefinition) |
| Modifier and Type | Method and Description |
|---|---|
| static Pair<String,List<String>> | MetricUtils.getLabelsAndMetricList(String metric) |
| static Pair<String,Map<String,String>> | MetricUtils.getLabelsAndMetricMap(String metric) |
| static Pair<String,String> | MetricUtils.getMetricAndLabels(String metric) |
Copyright © 2024 The Apache Software Foundation. All rights reserved.