| Package | Description |
|---|---|
| org.apache.hudi.execution | |
| org.apache.hudi.io | |
| org.apache.hudi.table.action.commit | |
| Modifier and Type | Field and Description |
|---|---|
| protected WriteHandleFactory | HoodieLazyInsertIterable.writeHandleFactory |
| Constructor and Description |
|---|
| CopyOnWriteInsertHandler(HoodieWriteConfig config, String instantTime, boolean areRecordsSorted, HoodieTable hoodieTable, String idPrefix, TaskContextSupplier taskContextSupplier, WriteHandleFactory writeHandleFactory) |
| HoodieLazyInsertIterable(Iterator<HoodieRecord<T>> recordItr, boolean areRecordsSorted, HoodieWriteConfig config, String instantTime, HoodieTable hoodieTable, String idPrefix, TaskContextSupplier taskContextSupplier, WriteHandleFactory writeHandleFactory) |
| Modifier and Type | Class and Description |
|---|---|
| class | AppendHandleFactory<T extends HoodieRecordPayload,I,K,O> |
| class | CreateHandleFactory<T extends HoodieRecordPayload,I,K,O> |
| class | SingleFileHandleCreateFactory<T extends HoodieRecordPayload,I,K,O><br>A SingleFileHandleCreateFactory is used to write all data in the Spark partition into a single data file. |
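The subclasses above differ in which kind of write handle they hand out: CreateHandleFactory produces a fresh handle (new data file) per request, AppendHandleFactory produces appending handles, and SingleFileHandleCreateFactory funnels everything into one file. The sketch below illustrates that factory pattern with simplified, hypothetical stand-in types; it is not the actual org.apache.hudi API, whose handles take the configuration, table, and instant-time parameters shown in the tables on this page.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a Hudi write handle: something records are written through.
abstract class WriteHandle {
    final List<String> records = new ArrayList<>();
    void write(String record) { records.add(record); }
}

class CreateHandle extends WriteHandle {}  // models writing a brand-new data file
class AppendHandle extends WriteHandle {}  // models appending to an existing log file

// The factory decides which kind of handle each writer task receives.
abstract class WriteHandleFactory {
    abstract WriteHandle create(String fileIdPrefix);
}

// One new handle (new file) per request.
class CreateHandleFactory extends WriteHandleFactory {
    @Override WriteHandle create(String fileIdPrefix) { return new CreateHandle(); }
}

// Appending handles instead of create handles.
class AppendHandleFactory extends WriteHandleFactory {
    @Override WriteHandle create(String fileIdPrefix) { return new AppendHandle(); }
}

// SingleFileHandleCreateFactory-style behavior: always return the same shared
// handle, so all records in the partition land in a single data file.
class SingleFileHandleCreateFactory extends WriteHandleFactory {
    private final WriteHandle shared = new CreateHandle();
    @Override WriteHandle create(String fileIdPrefix) { return shared; }
}
```

Code such as HoodieLazyInsertIterable accepts the factory rather than a handle directly, so the same insert path can produce new files, appended log files, or a single file depending on which factory is injected.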
| Modifier and Type | Method and Description |
|---|---|
| abstract O | BaseBulkInsertHelper.bulkInsert(I inputRecords, String instantTime, HoodieTable<T,I,K,O> table, HoodieWriteConfig config, boolean performDedupe, Option<BulkInsertPartitioner> userDefinedBulkInsertPartitioner, boolean addMetadataFields, int parallelism, WriteHandleFactory writeHandleFactory)<br>Only write input records. |
Copyright © 2022 The Apache Software Foundation. All rights reserved.