Class BatchAppenderator
- All Implemented Interfaces:
QuerySegmentWalker, Appenderator
AppenderatorImpl was split. For historical
reasons, the code for creating segments was all handled by the same code path in that class. The code
was correct but inefficient for batch ingestion from a memory perspective: if the input file being processed
by batch ingestion produced enough sinks and hydrants, the task could run out of memory either in the
hydrant creation (append) phase of this class or in the hydrant merge phase. Therefore a new class,
BatchAppenderator (this class), was created to specialize in batch ingestion, and the old class
for stream ingestion was renamed to StreamAppenderator.
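The batch lifecycle documented on this page follows a fixed order: startJob, repeated add calls, push, then close. The sketch below is a hypothetical stand-in, not real Druid code: FakeAppenderator and its simplified signatures are invented purely to illustrate the call order; a real task would obtain an Appenderator instance from the task runtime and use the full signatures shown in the method details below.

```java
// Hypothetical stand-in illustrating the documented Appenderator lifecycle order.
// FakeAppenderator is invented for this sketch; it only records which lifecycle
// methods were called and in what order.
import java.util.ArrayList;
import java.util.List;

public class LifecycleSketch {
    static class FakeAppenderator {
        final List<String> calls = new ArrayList<>();
        Object startJob()                    { calls.add("startJob"); return null; } // real method returns persisted commit metadata
        void add(String segment, String row) { calls.add("add"); }                   // real add takes SegmentIdWithShardSpec, InputRow, ...
        void push(List<String> identifiers)  { calls.add("push"); }                  // merge hydrants and push segments to deep storage
        void close()                         { calls.add("close"); }                 // finish persists/pushes, drop unpersisted data
    }

    public static List<String> run() {
        FakeAppenderator app = new FakeAppenderator();
        app.startJob();                        // 1. initial setup, recover any commit metadata
        app.add("segment-0", "row-0");         // 2. append rows; may trigger incremental persists
        app.add("segment-0", "row-1");
        app.push(List.of("segment-0"));        // 3. merge and push completed segments
        app.close();                           // 4. clean up
        return app.calls;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```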
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.druid.segment.realtime.appenderator.Appenderator:
Appenderator.AppenderatorAddResult
Field Summary
static final int ROUGH_OVERHEAD_PER_SINK
static final int ROUGH_OVERHEAD_PER_HYDRANT
Method Summary
Appenderator.AppenderatorAddResult add(SegmentIdWithShardSpec identifier, InputRow row, com.google.common.base.Supplier<Committer> committerSupplier, boolean allowIncrementalPersists)
    Add a row.
void clear()
    Drop all in-memory and on-disk data, and forget any previously-remembered commit metadata.
void close()
    Stop any currently-running processing and clean up after ourselves.
void closeNow()
    Stop all processing, abandoning current pushes; currently running persists may be allowed to finish if they persist critical metadata, otherwise shut down immediately.
com.google.common.util.concurrent.ListenableFuture<?> drop(SegmentIdWithShardSpec identifier)
    Schedule dropping all data associated with a particular pending segment.
long getBytesCurrentlyInMemory()
long getBytesInMemory(SegmentIdWithShardSpec identifier)
String getDataSource()
    Return the name of the dataSource associated with this Appenderator.
String getId()
    Return the identifier of this Appenderator; useful for log messages and such.
<T> QueryRunner<T> getQueryRunnerForIntervals(Query<T> query, Iterable<org.joda.time.Interval> intervals)
<T> QueryRunner<T> getQueryRunnerForSegments(Query<T> query, Iterable<SegmentDescriptor> specs)
int getRowCount(SegmentIdWithShardSpec identifier)
    Returns the number of rows in a particular pending segment.
int getRowsInMemory()
List<SegmentIdWithShardSpec> getSegments()
    Returns all active segments regardless of whether they are in memory or persisted.
int getTotalRowCount()
    Returns the number of total rows in this appenderator of all segments pending push.
com.google.common.util.concurrent.ListenableFuture<Object> persistAll(Committer committer)
    Persist any in-memory indexed data to durable storage.
com.google.common.util.concurrent.ListenableFuture<SegmentsAndCommitMetadata> push(Collection<SegmentIdWithShardSpec> identifiers, Committer committer, boolean useUniquePath)
    Merge and push particular segments to deep storage.
Object startJob()
    Perform any initial setup.
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.druid.segment.realtime.appenderator.Appenderator:
add, setTaskThreadContext
-
Field Details
-
ROUGH_OVERHEAD_PER_SINK
public static final int ROUGH_OVERHEAD_PER_SINK
See Also:
-
ROUGH_OVERHEAD_PER_HYDRANT
public static final int ROUGH_OVERHEAD_PER_HYDRANT
See Also:
-
-
Method Details
-
getId
Description copied from interface: Appenderator
Return the identifier of this Appenderator; useful for log messages and such.
Specified by:
getId in interface Appenderator
-
getDataSource
Description copied from interface: Appenderator
Return the name of the dataSource associated with this Appenderator.
Specified by:
getDataSource in interface Appenderator
-
startJob
Description copied from interface: Appenderator
Perform any initial setup. Should be called before using any other methods.
Specified by:
startJob in interface Appenderator
Returns:
currently persisted commit metadata
-
add
public Appenderator.AppenderatorAddResult add(SegmentIdWithShardSpec identifier, InputRow row, @Nullable com.google.common.base.Supplier<Committer> committerSupplier, boolean allowIncrementalPersists) throws SegmentNotWritableException
Description copied from interface: Appenderator
Add a row. Must not be called concurrently from multiple threads. If no pending segment exists for the provided identifier, a new one will be created.
This method may trigger an Appenderator.persistAll(Committer) using the supplied Committer. If it does this, the Committer is guaranteed to be *created* synchronously with the call to add, but will actually be used asynchronously. If the committer is not provided, no metadata is persisted.
Specified by:
add in interface Appenderator
Parameters:
identifier - the segment into which this row should be added
row - the row to add
committerSupplier - supplier of a committer associated with all data that has been added, including this row; if allowIncrementalPersists is set to false, this will not be used, as no persist will be done automatically
allowIncrementalPersists - indicates whether an automatic persist should be performed if required. If this flag is set to false, the return value will have Appenderator.AppenderatorAddResult.isPersistRequired set to true if a persist was skipped because of this flag, and it is assumed that the responsibility of calling Appenderator.persistAll(Committer) is on the caller.
Returns:
Appenderator.AppenderatorAddResult
Throws:
SegmentNotWritableException - if the requested segment is known, but has been closed
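When allowIncrementalPersists is false, the contract above shifts the persist responsibility to the caller: it must watch isPersistRequired and invoke persistAll itself. The sketch below models that control flow with invented stand-ins (AddResult stands in for Appenderator.AppenderatorAddResult, and the row threshold of 3 is an arbitrary illustration, not a Druid default):

```java
// Hypothetical model of the caller-managed persist pattern described above.
// AddResult stands in for Appenderator.AppenderatorAddResult; the threshold
// of 3 rows is invented for illustration only.
public class ManualPersistSketch {
    record AddResult(int rowCount, boolean isPersistRequired) {}

    // Pretend appenderator: with allowIncrementalPersists == false it skips the
    // automatic persist and instead reports that the caller must persist.
    static AddResult add(int rowsSoFar, boolean allowIncrementalPersists) {
        boolean overLimit = rowsSoFar + 1 >= 3;   // invented in-memory row limit
        return new AddResult(rowsSoFar + 1, overLimit && !allowIncrementalPersists);
    }

    public static int persistsTriggered(int rowsToAdd) {
        int rows = 0, persists = 0;
        for (int i = 0; i < rowsToAdd; i++) {
            AddResult result = add(rows, false);  // caller opted out of automatic persists
            rows = result.rowCount();
            if (result.isPersistRequired()) {
                persists++;                       // caller's job: invoke persistAll(committer) here
                rows = 0;                         // persisted rows leave memory
            }
        }
        return persists;
    }

    public static void main(String[] args) {
        System.out.println(persistsTriggered(7)); // prints 2: persists after rows 3 and 6
    }
}
```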
-
getSegments
Returns all active segments regardless of whether they are in memory or persisted.
Specified by:
getSegments in interface Appenderator
-
getInMemorySegments
-
getRowCount
Description copied from interface: Appenderator
Returns the number of rows in a particular pending segment.
Specified by:
getRowCount in interface Appenderator
Parameters:
identifier - segment to examine
Returns:
row count
-
getTotalRowCount
public int getTotalRowCount()
Description copied from interface: Appenderator
Returns the number of total rows in this appenderator of all segments pending push.
Specified by:
getTotalRowCount in interface Appenderator
Returns:
total number of rows
-
getRowsInMemory
public int getRowsInMemory()
-
getBytesCurrentlyInMemory
public long getBytesCurrentlyInMemory()
-
getBytesInMemory
-
getQueryRunnerForIntervals
public <T> QueryRunner<T> getQueryRunnerForIntervals(Query<T> query, Iterable<org.joda.time.Interval> intervals)
Specified by:
getQueryRunnerForIntervals in interface QuerySegmentWalker
-
getQueryRunnerForSegments
public <T> QueryRunner<T> getQueryRunnerForSegments(Query<T> query, Iterable<SegmentDescriptor> specs)
Specified by:
getQueryRunnerForSegments in interface QuerySegmentWalker
-
clear
public void clear()
Description copied from interface: Appenderator
Drop all in-memory and on-disk data, and forget any previously-remembered commit metadata. This could be useful if, for some reason, rows have been added that we do not actually want to hand off. Blocks until all data has been cleared. This may take some time, since all pending persists must finish first.
Specified by:
clear in interface Appenderator
-
drop
public com.google.common.util.concurrent.ListenableFuture<?> drop(SegmentIdWithShardSpec identifier)
Description copied from interface: Appenderator
Schedule dropping all data associated with a particular pending segment. Unlike Appenderator.clear(), any on-disk commit metadata will remain unchanged. If there is no pending segment with this identifier, then this method will do nothing.
You should not write to the dropped segment after calling drop(). If you need to drop all your data and re-write it, consider Appenderator.clear() instead.
This method might be called concurrently from a thread different from the "main data appending / indexing thread", from which all other methods in this class (except those inherited from QuerySegmentWalker) are called. This typically happens when drop() is called in an async future callback. drop() itself is cheap and relays the heavy dropping work to an internal executor of this Appenderator.
Specified by:
drop in interface Appenderator
Parameters:
identifier - the pending segment to drop
Returns:
future that resolves when data is dropped
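The asynchronous shape described above can be sketched with standard-library types: CompletableFuture below is a stand-in for Guava's ListenableFuture, and the single-threaded executor is an invented model of the Appenderator's internal executor; none of this is the real Druid implementation.

```java
// Sketch of the asynchronous drop contract described above, using
// java.util.concurrent.CompletableFuture as a stand-in for Guava's
// ListenableFuture. drop() returns immediately; the heavy removal work
// runs on an internal executor, and the future resolves once it is done.
import java.util.Set;
import java.util.concurrent.*;

public class DropSketch {
    static final ExecutorService internalExec = Executors.newSingleThreadExecutor();
    static final Set<String> pendingSegments = ConcurrentHashMap.newKeySet();

    // Cheap to call from any thread: schedules the actual removal and returns
    // a future that resolves once the segment's data is gone.
    static CompletableFuture<Void> drop(String identifier) {
        return CompletableFuture.runAsync(() -> pendingSegments.remove(identifier), internalExec);
    }

    public static boolean run() throws Exception {
        pendingSegments.add("segment-0");
        drop("segment-0").get(5, TimeUnit.SECONDS);  // wait until data is dropped
        internalExec.shutdown();
        return pendingSegments.isEmpty();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```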
-
persistAll
public com.google.common.util.concurrent.ListenableFuture<Object> persistAll(@Nullable Committer committer)
Description copied from interface: Appenderator
Persist any in-memory indexed data to durable storage. This may be only somewhat durable, e.g. the machine's local disk. The Committer will be made synchronously with the call to persistAll, but will actually be used asynchronously. Any metadata returned by the committer will be associated with the data persisted to disk. If the committer is not provided, no metadata is persisted.
Specified by:
persistAll in interface Appenderator
Parameters:
committer - a committer associated with all data that has been added so far
Returns:
future that resolves when all pending data has been persisted, contains commit metadata for this persist
-
push
public com.google.common.util.concurrent.ListenableFuture<SegmentsAndCommitMetadata> push(Collection<SegmentIdWithShardSpec> identifiers, @Nullable Committer committer, boolean useUniquePath)
Description copied from interface: Appenderator
Merge and push particular segments to deep storage. This will trigger an implicit Appenderator.persistAll(Committer) using the provided Committer. After this method is called, you cannot add new data to any segments that were previously under construction.
If the committer is not provided, no metadata is persisted.
Specified by:
push in interface Appenderator
Parameters:
identifiers - list of segments to push
committer - a committer associated with all data that has been added so far
useUniquePath - true if the segment should be written to a path with a unique identifier
Returns:
future that resolves when all segments have been pushed. The segment list will be the list of segments that have been pushed and the commit metadata from the Committer.
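A caller typically blocks on (or chains off) the returned future to obtain the pushed segment list and commit metadata. The sketch below models that consumption pattern with invented stand-ins: CompletableFuture replaces Guava's ListenableFuture, and the SegmentsAndCommitMetadata record is a simplified model of the real result type.

```java
// Hypothetical sketch of consuming push()'s result as described above.
// CompletableFuture stands in for Guava's ListenableFuture, and the
// SegmentsAndCommitMetadata record is a simplified model holding the pushed
// segment ids plus the committer's metadata.
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class PushSketch {
    record SegmentsAndCommitMetadata(List<String> segments, Object commitMetadata) {}

    // Pretend push: merges and uploads asynchronously, then resolves with
    // what was pushed and the metadata from the Committer.
    static CompletableFuture<SegmentsAndCommitMetadata> push(List<String> identifiers, Object commitMetadata) {
        return CompletableFuture.supplyAsync(
                () -> new SegmentsAndCommitMetadata(List.copyOf(identifiers), commitMetadata));
    }

    public static String run() throws Exception {
        SegmentsAndCommitMetadata result =
                push(List.of("segment-0", "segment-1"), "offset=42").get();
        // Once the future resolves, the pushed segments are immutable; the caller
        // typically publishes result.segments() together with result.commitMetadata().
        return result.segments().size() + ":" + result.commitMetadata();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```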
-
close
public void close()
Description copied from interface: Appenderator
Stop any currently-running processing and clean up after ourselves. This allows currently running persists and pushes to finish. This will not remove any on-disk persisted data, but it will drop any data that has not yet been persisted.
Specified by:
close in interface Appenderator
-
closeNow
public void closeNow()
Description copied from interface: Appenderator
Stop all processing, abandoning current pushes. Currently running persists may be allowed to finish if they persist critical metadata; otherwise, shut down immediately. This will not remove any on-disk persisted data, but it will drop any data that has not yet been persisted. Since this does not wait for pushes to finish, implementations have to make sure that if any push is still happening in a background thread, it does not cause any problems.
Specified by:
closeNow in interface Appenderator
-
getPersistedidentifierPaths
-