All Classes and Interfaces
Class
Description
This is a simplified handler for announcement listeners.
An abstract class that listens for segment change events and caches segment metadata.
ColumnTypeMergePolicy defines the rules for which type to use when faced with the possibility of different types
for the same column from segment to segment.
Classic logic: we use the first type we encounter.
Resolves types using
ColumnType.leastRestrictiveType(ColumnType, ColumnType) to find the ColumnType that
can best represent all data contained across all segments.
Should only be used in conjunction with AllowAllAuthorizer.
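The contrast between the two ColumnTypeMergePolicy behaviors described above can be sketched with a toy example. SimpleType, its widening order, and the method names below are illustrative stand-ins, not Druid's actual ColumnType API:

```java
// Simplified sketch of the two merge policies: "first type wins" vs
// least-restrictive widening. The SimpleType enum and its ordinal-based
// widening order are illustrative assumptions, not Druid's real types.
import java.util.List;

public class MergePolicySketch {
    enum SimpleType { LONG, DOUBLE, STRING }

    // "Classic" policy: keep the first type encountered across segments.
    static SimpleType firstType(List<SimpleType> perSegmentTypes) {
        return perSegmentTypes.get(0);
    }

    // Least-restrictive policy: widen until one type can represent all data.
    static SimpleType leastRestrictive(List<SimpleType> perSegmentTypes) {
        SimpleType result = perSegmentTypes.get(0);
        for (SimpleType t : perSegmentTypes) {
            if (t.ordinal() > result.ordinal()) {
                result = t;  // STRING > DOUBLE > LONG in this toy ordering
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<SimpleType> types = List.of(SimpleType.LONG, SimpleType.DOUBLE);
        System.out.println(firstType(types));        // LONG
        System.out.println(leastRestrictive(types)); // DOUBLE
    }
}
```

The least-restrictive policy trades a stable schema for one that can represent mixed per-segment types without data loss.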
Filters requests based on the HTTP method.
Authenticates all requests and directs them to an authorizer.
An Appenderator indexes data.
Result of
Appenderator.add(org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec, org.apache.druid.data.input.InputRow, com.google.common.base.Supplier<org.apache.druid.data.input.Committer>) containing the following information:
- SegmentIdWithShardSpec - identifier of segment to which rows are being added
- int - positive number indicating how many summarized rows exist in this segment so far
- boolean - true if is set to false and persist is required; false otherwise
Result of BaseAppenderatorDriver.append().
Lock helper for StreamAppenderatorDriver.
This interface defines entities that create and manage potentially multiple Appenderator instances.
Audit utility class that can be used by different implementations of AuditManager to serialize/deserialize audit payloads based on the values configured in AuditManagerConfig.
AuthConfig object is created via Jackson in production.
Sets necessary request attributes for requests sent to endpoints that need authentication but no authorization.
An AuthenticationResult contains information about a successfully authenticated request.
Used to wrap Filters created by Authenticators, this wrapper filter skips itself if a request already
has an authentication check (so that Authenticator implementations don't have to perform this check themselves)
Represents the outcome of performing authorization checks on required resource accesses for a query or HTTP request.
Static utility functions for performing authorization checks.
An Authorizer is responsible for performing authorization checks for resource accesses.
Utility functions to validate an authorizer.
Immutable representation of RowSignature and other segment attributes.
An AvaticaConnectionBalancer balances Avatica connections across a collection of servers.
CachePopulator implementation that uses an ExecutorService thread pool to populate a cache in the background.
This class is for any exceptions that should return a bad request status code (400).
Represents a segment picked for moving by a balancer strategy.
Segment balancing strategy, used in every coordinator run by StrategicSegmentAssigner
to choose optimal servers to load, move or drop a segment.
Coordinator duty to balance segments across Historicals.
A BaseAppenderatorDriver drives an Appenderator to index a finite stream of data.
Allocated segments for a sequence
Base implementation of CompactionCandidateSearchPolicy that can have a priorityDatasource.
Base class for input source definitions.
Common code used by various implementations of DruidNodeDiscovery.
Base implementation for a table function definition.
This class exists so that MetadataStorageConnectorConfig is asked for the password every time a brand new connection is established with the DB.
This is a new class produced when the old AppenderatorImpl was split. This class is specialized for batch ingestion.
IOConfig for all batch tasks except compactionTask.
Deprecated.
This class defines the spec for loading of broadcast datasources for a given task.
High-level Broker client.
Resource for fetching and updating the CoordinatorDynamicConfig on the Broker.
This module is used to fulfill dependency injection of query processing and caching resources: buffer pools and thread pools on the Broker.
Immutable class which represents the status of a dynamic configuration sync with a specific broker.
Broker view of the coordinator dynamic configuration, and its derived values such as target and source clone servers.
An async BytesAccumulatingResponseHandler which returns an unfinished response.
Abstract implementation of a BlockingQueue bounded by the size of its elements;
works similarly to LinkedBlockingQueue except that capacity is bounded by the size in bytes of the elements in the queue.
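The size-in-bytes bounding described for the BlockingQueue above can be sketched as follows. BytesBoundedQueue and its sizeOf function are simplified illustrations, not Druid's implementation:

```java
// Sketch of a queue bounded by the total byte size of its elements rather
// than the element count. Names and the sizeOf measure are illustrative.
import java.util.ArrayDeque;
import java.nio.charset.StandardCharsets;

public class BytesBoundedQueue {
    private final ArrayDeque<String> queue = new ArrayDeque<>();
    private final long capacityBytes;
    private long currentBytes = 0;

    public BytesBoundedQueue(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    private static long sizeOf(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    // Blocks while adding the element would exceed the byte capacity.
    public synchronized void put(String element) throws InterruptedException {
        long size = sizeOf(element);
        while (currentBytes + size > capacityBytes) {
            wait();
        }
        queue.add(element);
        currentBytes += size;
        notifyAll();  // wake consumers waiting on an empty queue
    }

    public synchronized String take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();
        }
        String element = queue.poll();
        currentBytes -= sizeOf(element);
        notifyAll();  // waiting producers may now have room
        return element;
    }

    public synchronized long currentBytes() {
        return currentBytes;
    }
}
```

Bounding by bytes rather than count keeps memory use predictable when element sizes vary widely.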
Thread-safe collector of CachePopulator statistics, utilized by CacheMonitor to emit cache metrics.
Summary of current contents of the cache.
This is the class on the Broker that is responsible for making native Druid queries to a cluster of data servers.
Helper class to build a new timeline of filtered segments.
Deprecated.
This is currently being used only in tests for benchmarking purposes
and will be removed in future releases.
Deprecated.
This is currently being used only in tests for benchmarking purposes
and will be removed in future releases.
Config for caching and managing datasource schema on the Coordinator.
This class keeps a bounded list of segment updates made on the server such as adding/dropping segments.
This class facilitates the usage of long-polling HTTP endpoints powered by ChangeRequestHistory.
Concurrency guarantees: all calls to ChangeRequestHttpSyncer.Listener.fullSync(java.util.List<T>) and ChangeRequestHttpSyncer.Listener.deltaSync(java.util.List<T>) (that is done within the ChangeRequestHttpSyncer.executor) are linearizable.
Objects that can be registered with a ServiceAnnouncingChatHandlerProvider and provide http endpoints for indexing-related objects.
InputSpec for ClientCompactionIOConfig.
IOConfig for ClientCompactionTaskQuery.
This class is just used to pass the strategy type via the "type" parameter for deserialization to the appropriate org.apache.druid.indexing.common.task.CompactionRunner subtype at the Overlord.
Spec containing dimension configs for Compaction Task.
Spec containing Granularity configs for Compaction Task.
Client representation of org.apache.druid.indexing.common.task.CompactionTask.
Client representation of org.apache.druid.indexing.common.task.KillUnusedSegmentsTask.
This class copies over MSQ context parameters from the MSQ extension.
Query handler for the Broker processes (see CliBroker).
Utilities for ClientQuerySegmentWalker.
Guardrail type on the subquery's results.
org.apache.druid.indexing.common.task.Task representations for clients.
Utils class for shared client methods
Handles cloning of historicals.
Immutable class which contains the current set of Brokers which have been synced with the latest CoordinatorDynamicConfig.
Manager to store and update the status of ongoing cloning operations.
Cluster-level compaction configs.
Description of one clustering key (column) for a datasource.
Specification of table columns.
Class representing the combined DataSchema of a set of segments, currently used only by Compaction.
Non-empty list of segments of a datasource being considered for compaction.
Policy used by CompactSegments duty to pick segments for compaction.
Response of /compaction/progress API exposed by Coordinator and Overlord (when compaction supervisors are enabled).
Simulates runs of auto-compaction duty to obtain the expected list of compaction tasks that would be submitted by the actual compaction duty.
Iterator over compactible segments.
Used to track statistics for segments in different states of compaction.
Represents the status of compaction for a given CompactionCandidate.
Response of /compaction/status API exposed by Coordinator and Overlord (when compaction supervisors are enabled).
Tracks status of recently submitted compaction tasks.
Use this ResourceFilter at endpoints where Druid cluster configuration is read or written.
Here are some example paths where this filter is used:
- druid/worker/v1
- druid/indexer/v1
- druid/coordinator/v1/config
Note: currently the resource name for all endpoints is set to "CONFIG"; however, if more fine-grained access control
is required, the resource name can be set to specific config properties.
Immutable class which contains the current set of Brokers which have been synced with the latest CoordinatorDynamicConfig.
Distributes objects across a set of node keys using consistent hashing.
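Consistent hashing, as used above to distribute objects across node keys, can be sketched as follows. The class name, virtual-node count, and hash function are illustrative assumptions, not the actual implementation:

```java
// Sketch of consistent hashing: nodes are placed on a hash ring (with
// virtual nodes for smoother distribution), and each object maps to the
// first node clockwise from its hash. Hash function is a stand-in.
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashSketch {
    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private static final int REPLICAS = 16;  // virtual nodes per server

    private static int hash(String key) {
        return key.hashCode() & 0x7fffffff;  // force non-negative
    }

    public void addNode(String node) {
        for (int i = 0; i < REPLICAS; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    public void removeNode(String node) {
        for (int i = 0; i < REPLICAS; i++) {
            ring.remove(hash(node + "#" + i));
        }
    }

    // Walk clockwise from the object's hash to the next node on the ring.
    public String nodeFor(String objectKey) {
        if (ring.isEmpty()) {
            return null;
        }
        SortedMap<Integer, String> tail = ring.tailMap(hash(objectKey));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }
}
```

The appeal of the ring structure is that adding or removing one node only remaps the keys in that node's arc, not the whole keyspace.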
Manager to fetch and update dynamic configs CoordinatorDynamicConfig and DruidCompactionConfig.
This ExtensionPoint allows for coordinator duty to be pluggable so that users can register their own duties without modifying core Druid classes.
A group of CoordinatorDuty.
Utility methods that are useful for coordinator duties.
This class is for users to change their configurations while their Druid cluster is running.
Updates all brokers with the latest coordinator dynamic config.
Contains statistics typically tracked during a single coordinator run or the
runtime of a single coordinator duty.
Coordinator-side cache of segment metadata that combines segments to build
datasources.
ServerView of coordinator for the state of segments being loaded in the cluster.
A coordinator statistic, which may or may not be emitted as a metric.
Level of coordinator stat, typically used for logging.
Builds the core (common) set of modules used by all Druid services and
commands.
Deprecated.
Deprecated.
Concurrency guarantees: all calls to
CuratorInventoryManagerStrategy.newContainer(ContainerClass), CuratorInventoryManagerStrategy.deadContainer(ContainerClass), CuratorInventoryManagerStrategy.updateContainer(ContainerClass, ContainerClass) and
CuratorInventoryManagerStrategy.inventoryInitialized() (all done within CuratorInventoryManager.pathChildrenCacheExecutor) are linearizable.
Deprecated.
Deprecated.
Metadata announced by any node that serves segments.
Response of a DataSegmentChangeRequest.
Interns the DataSegment object in order to share the reference for the same DataSegment.
Encapsulates a DataSegment and additional metadata about it:
- DataSegmentPlus.used - Boolean flag representing if the segment is used.
- DataSegmentPlus.createdDate - The time when the segment was created.
- DataSegmentPlus.usedStatusLastUpdatedDate - The time when the segment's used status was last updated.
- DataSegmentPlus.upgradedFromSegmentId - The segment id to which the same load spec originally belonged.
Deprecated.
Client to query data servers given a query.
Response handler for the DataServerClient.
Iterator over compactible segments of a datasource in order of specified priority.
A DTO containing audit information for compaction config for a datasource.
A utility class to build the config history for a datasource from audit entries for DruidCompactionConfig.
Convenience wrapper on top of a resolved table (a table spec and its corresponding definition), to be used by consumers of catalog objects that work with specific datasource properties rather than layers that work with specs generically.
Encapsulates information about a datasource, such as its schema.
Commit metadata for a dataSource.
Catalog model wrapper for projection spec
Use this resource filter for API endpoints that contain DatasourceResourceFilter.DATASOURCES_PATH_SEGMENT in their request path.
Cache containing segment metadata of a single datasource.
Performs read operations on the segment metadata for a single datasource.
Performs write operations on the segment metadata of a single datasource.
An immutable snapshot of metadata information about used segments and overshadowed segments, coming from SegmentsMetadataManager.
The default implementation of RequestLogEvent.
This RequestLogEventBuilderFactory creates builders that return DefaultRequestLogEvents.
Dimensions used while collecting or reporting coordinator run stats.
Factory for building DirectDruidClient.
Representation of all information related to discovery of a node and all the other metadata associated with the node per nodeRole, such as broker, historical, etc.
The DiscoveryModule allows for the registration of Keys of DruidNode objects, which it intends to be
automatically announced at the end of the lifecycle start.
A ServiceLocator that uses DruidNodeDiscovery.
A BalancerStrategy which can be used when historicals in a tier have varying disk capacities.
DropRules indicate when segments should be completely removed from the cluster.
Contains a representation of the current state of the cluster by tier.
Curator ConnectionStateListener that uses a ServiceEmitter to send alerts on ZK connection loss, and emit metrics about ZK connection status.
Used by CoordinatorDutyGroup to check leadership and emit stats.
Contains all static configs for the Coordinator.
A mutable collection of metadata of segments (DataSegment objects), belonging to a particular data source.
Druid-enabled injector builder which supports DruidModules, module classes created from the base injector, and filtering based on properties and LoadScope annotations.
This class facilitates interaction with Coordinator/Overlord leader nodes.
Interface for supporting Overlord and Coordinator Leader Elections in TaskMaster and DruidCoordinator
which expect appropriate implementation available in guice annotated with @IndexingService and @Coordinator
respectively.
DiscoveryDruidNode announcer for internal discovery.
Interface for discovering Druid nodes announced by DruidNodeAnnouncer.
Listener for watching nodes in a DruidNodeDiscovery instance obtained via DruidNodeDiscoveryProvider.getXXX().
Provider of DruidNodeDiscovery instances.
A mutable collection of metadata of segments (DataSegment objects), stored on a particular Druid server (typically historical).
Metadata of a service announced by node.
A custom serializer to handle the bug of duplicate "type" keys in DataNodeService.
A modifier to use a custom serializer for DruidService.
This implementation is needed because Overlords and MiddleManagers operate on Task objects which can require an AppenderatorsManager to be injected.
This interface provides methods needed for escalating internal system requests with privileged authentication credentials.
An annotation to exclude specific node types that a Module can be loaded on.
Injector builder which overrides service modules with extension modules.
Definition of an external table, primarily for ingestion.
Catalog form of an external table specification used to pass along the three
components needed for an external table in MSQ ingest.
Annotation to inject extra dimensions, added to all events, emitted via EmitterModule.getServiceEmitter(com.google.common.base.Supplier<org.apache.druid.server.DruidNode>, org.apache.druid.java.util.emitter.core.Emitter, java.util.Map<java.lang.String, java.lang.String>).
Request logger implementation that logs query requests and writes them to a file.
A SegmentCallback that is called only when the given filter satisfies.
Utility to generate schema fingerprint which is used to ensure schema uniqueness in the metadata database.
Implementation of CompactionCandidateSearchPolicy that specifies the datasources and intervals eligible for compaction and their order.
Specifies a datasource-interval eligible for compaction.
Locator for a fixed set of ServiceLocations.
Throw this when a request is unauthorized and we want to send a 403 response back; the Jersey exception mapper will take care of sending the response.
CachePopulator implementation that populates a cache on the same thread that is processing the Sequence.
Base class for input sources that require an input format (which is most of them).
By default, an input source supports all formats defined in the table registry, but specific input sources can be more restrictive.
In-memory implementation of SegmentMetadataCache.
Query laning strategy which associates all Query with priority lower than 0 into a 'low' lane.
Interface that denotes some sort of filtering on the historicals, based on CloneQueryMode.
This is kept separate from HttpEmitterConfig because PasswordProvider is currently located in druid-api.
Able to monitor HttpPostEmitter or ParametrizedUriEmitter, which is based on the former.
Definition of an HTTP input source.
Returned by ServiceClient.asyncRequest(org.apache.druid.rpc.RequestBuilder, org.apache.druid.java.util.http.client.response.HttpResponseHandler<IntermediateType, FinalType>) when a request has failed due to an HTTP response.
This class uses internal-discovery i.e.
Collection of http endpoints to introspect state of HttpServerInventoryView instance for debugging.
An HTTP response handler that discards the response and returns nothing.
An immutable collection of metadata of segments (DataSegment objects), belonging to a particular data source.
This class should not be subclassed; it isn't declared final only to make it possible to mock the class with EasyMock in tests.
Handles metadata transactions performed by the Overlord.
Provides task count metrics for the indexers
These metrics are reported by indexers
Should be synchronized with org.apache.druid.indexing.overlord.http.TotalWorkerCapacityResponse
Should be synchronized with org.apache.druid.indexing.worker.Worker
Should be synchronized with org.apache.druid.indexing.overlord.ImmutableWorkerInfo
Initialize Guice for a server.
Describes an inline input source: one where the data is provided in the table spec as a series of text lines.
A SegmentWrangler for InlineDataSource.
Metadata about a Druid InputFormat.
Catalog definitions for the Druid input formats.
Base class for input format definitions.
Definition for the CSV input format.
Definition of a flat text (CSV and delimited text) input format.
JSON format definition.
Metadata definition for one Druid input source.
Module that installs InputSource implementations.
This class contains configuration that internally generated Druid queries should add to their query payload.
A config class that applies to all JDBC connections to other databases.
Module that installs JoinableFactory for the appropriate DataSource.
CoordinatorDuty for automatic deletion of compaction configurations from the config table in metadata storage.
CoordinatorDuty for automatic deletion of datasource metadata from the datasource table in metadata storage.
Duty to kill stale pending segments which are not needed anymore.
Cleans up terminated supervisors from the supervisors table in metadata storage.
Example CoordinatorCustomDuty for automatic deletion of terminated supervisors from the metadata storage.
Coordinator duty to clean up segment schemas which are not referenced by any used segment.
Completely removes information about unused segments whose interval end comes before
now - KillUnusedSegments.durationToRetain from the metadata store.
A StorageLocation selector strategy that selects a segment cache location that is least filled each time among the available storage locations.
A handler for events related to the listening-announcer.
This is a simple announcement resource that handles simple items that have a POST to an announcement endpoint, a
GET of something in that endpoint with an ID, and a DELETE to that endpoint with an ID.
A deserialization aid used by SegmentChangeRequestLoad.
Tracks the current segment loading rate for a single server.
Callback executed when the load or drop of a segment completes on a server
either with success or failure.
Supports load queue management.
Provides LoadQueuePeons
LoadRules indicate the number of replicants a segment should have in a given tier.
An annotation to specify node types that a Module can be loaded on.
Deprecated.
Definition for a LocalInputSource.
Processor that computes Druid queries, single-threaded.
public, evolving
Specifies a policy to filter active locks held by a datasource
Audit manager that logs audited events at the level specified in LoggingAuditManagerConfig.
Manages LookupExtractorFactoryContainer specifications, distributing them to LookupReferencesManager around the cluster by monitoring the lookup announce path for servers and utilizing their LookupListeningResource API to load, drop, and update lookups around the cluster.
Contains information about lookups exposed through the coordinator.
This is the same as LookupExtractorFactoryContainer except it uses Map<String, Object> instead of LookupExtractorFactory for referencing the lookup spec, so that lookup extensions are not required to be loaded at the Coordinator.
A JoinableFactory for LookupDataSource.
This class defines the spec for loading of lookups for a given task.
A helper class that uses DruidNodeDiscovery to discover lookup nodes and tiers.
Metadata announced by any node that serves queries and hence applies lookups.
This class provides a basic LookupExtractorFactory references manager.
A JoinableFactory for LookupDataSource.
Variant of LookupModule that only supports serde of Query objects, to allow a service to examine queries that might contain, for example, a RegisteredLookupExtractionFn or a LookupExprMacro, but without requiring the service to load the actual lookups.
Utility class for lookup-related things.
Does nothing; the user must set QueryContexts.PRIORITY_KEY on the query context to get a priority.
An implementation of SegmentWrangler that allows registration of DataSource-specific handlers via Guice.
Mark eternity tombstones not overshadowed by currently served segments as unused.
Marks a segment as unused if it is overshadowed by:
- a segment served by a historical or broker
- a segment that has zero required replicas and thus will never be loaded on a server
A batch of messages collected by MessageRelay from a remote Outbox through MessageRelayResource.httpGetMessagesFromOutbox(java.lang.String, java.lang.Long, java.lang.Long, javax.servlet.http.HttpServletRequest).
Listener for messages received by clients.
Relays run on clients, and receive messages from a server.
Client for MessageRelayResource.
Production implementation of MessageRelayClient.
Factory for creating new message relays.
Code that runs on message servers, to monitor their clients.
Server-side resource for message relaying.
Manages a fleet of MessageRelay, one for each server discovered by a DruidNodeDiscoveryProvider.
Contains functional interfaces that are used by a CoordinatorDuty to perform a single read or write operation on the metadata store.
Client view of the metadata catalog.
Performs cleanup of stale metadata entries created before a configured retain duration.
Binds the following metadata configs for all services:
- MetadataStorageTablesConfig
- MetadataStorageConnectorConfig
- CentralizedDatasourceSchemaConfig
Ideally, the storage configs should be bound only on Coordinator and Overlord, but they are needed for other services too since metadata storage extensions are currently loaded on all services.
Contains all metadata managers used by the Coordinator.
Segment metadata cache metric names.
Metrics related to SegmentSchemaCache and SegmentSchemaManager.
Sets up the MonitorScheduler to monitor things on a regular schedule.
Definition of a top-level property in a catalog object.
A StorageLocation selector strategy that selects a segment cache location that has the most free space among the available storage locations.
Implementation of CompactionCandidateSearchPolicy that prioritizes intervals which have the latest data.
The NodeAnnouncer class is responsible for announcing a single node in a ZooKeeper ensemble.
Defines the 'role' of a Druid service, utilized to strongly type announcement and service discovery.
Mostly used for test purposes.
Implementation of OverlordClient that throws UnsupportedOperationException for every method.
An empty implementation of QuerySegmentWalker.
No-op implementation of SegmentSchemaCache that always returns false for NoopSegmentSchemaCache.isEnabled() and NoopSegmentSchemaCache.isInitialized().
Deprecated.
Used as a tombstone marker in the supervisors metadata table to indicate that the supervisor has been removed.
Query laning strategy that does nothing and provides the default, unlimited behavior
Metadata definition of the metadata objects stored in the catalog.
Utility class to simplify typed access to catalog object properties.
An outbox for messages sent from servers to clients.
Production implementation of Outbox.
Outgoing queue for a specific client.
High-level Overlord client.
Production implementation of OverlordClient.
A proxy servlet that proxies requests to the Overlord.
Defines a parameter for a catalog entry.
The PathChildrenAnnouncer class manages the announcement of a node, and watches all child and sibling nodes under the specified path in a ZooKeeper ensemble.
Representation of a record in the pending segments table.
Manages Appenderators for tasks running within a CliPeon process.
Module for configuring the policy enforcer.
This duty does the following:
- Creates an immutable DruidCluster consisting of ServerHolders which represent the current state of the servers in the cluster.
- Starts and stops load peons for new and disappeared servers respectively.
- Cancels in-progress loads on all decommissioning servers.
Filter that verifies that authorization checks were applied to an HTTP request, before sending a response.
Implementation of CompactionSegmentIterator that returns candidate segments in order of their priority.
Class that helps a Druid server (broker, historical, etc) manage the lifecycle of a query that it is handling.
Factory for creating instances of QueryResourceQueryResultPusherFactory.QueryResourceQueryResultPusher.
Handles query results for QueryResource, pushing the results to the client.
QueryScheduler (potentially) assigns any Query that is to be executed to a 'query lane' using the QueryLaningStrategy that is defined in QuerySchedulerConfig.
Should be synchronized with org.apache.druid.indexing.common.TaskStatus.
A QueryRunner which validates that a *specific* query is passed in, and then swaps it with another one.
A simple BalancerStrategy that:
- assigns segments randomly amongst eligible servers
- performs no balancing
A StorageLocation selector strategy that selects a segment cache location randomly each time among the available storage locations.
Cache with standard read/write locking.
Distributes objects across a set of node keys using rendezvous hashing.
See https://en.wikipedia.org/wiki/Rendezvous_hashing
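Rendezvous (highest-random-weight) hashing, referenced above, can be sketched as follows. The class name and mixing function are illustrative assumptions, not the actual implementation:

```java
// Sketch of rendezvous hashing: each object goes to the node whose combined
// hash(node, key) is highest, so removing one node only remaps the objects
// that node owned. The weight function is an illustrative stand-in.
import java.util.List;

public class RendezvousHashSketch {
    private static long weight(String node, String key) {
        // Combine node and key; any well-mixed hash works here.
        long h = (node + "|" + key).hashCode();
        h *= 0x9E3779B97F4A7C15L;  // extra mixing
        return h ^ (h >>> 32);
    }

    // Pick the node with the highest weight for this key.
    public static String nodeFor(List<String> nodes, String key) {
        String best = null;
        long bestWeight = Long.MIN_VALUE;
        for (String node : nodes) {
            long w = weight(node, key);
            if (best == null || w > bestWeight) {
                best = node;
                bestWeight = w;
            }
        }
        return best;
    }
}
```

Unlike a hash ring, rendezvous hashing needs no stored state per node; the cost is an O(nodes) scan per lookup.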
Details of a REPLACE lock held by a batch supervisor task.
The ReplicationThrottler is used to throttle the number of segment replicas
that are assigned to a load queue in a single run.
Marker subtype of events emitted from EmittingRequestLogger.
This factory allows customizing RequestLogEvents, emitted in EmittingRequestLogger, e.
A marker interface for things that can provide a RequestLogger.
Internal class to hold the intermediate form of an external table: the
input source and input format properties converted to Java maps, and the
types of each resolved to the corresponding definitions.
Handle to a table specification along with its definition
and the object mapper used to serialize/deserialize its data.
Populates QueryContexts.QUERY_RESOURCE_ID in the query context.
Factory for creating instances of ResourceIOReaderWriterFactory.ResourceIOReaderWriter.
Encapsulates the mapper for the request and the ResourceIOReaderWriterFactory.ResourceIOWriter for the response.
Handles writing query response to the client in different formats.
Set of built-in and 'registered' Resource types for use by Authorizer.
Exception thrown by SQL connector code when it wants a transaction to be retried.
Provides iterators over historicals for a given tier that can load a
specified segment.
A StorageLocation selector strategy that selects a segment cache location in a round-robin fashion each time among the available storage locations.
This module is used to fulfill dependency injection of query processing and caching resources: buffer pools and thread pools on the Router Druid node type.
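The round-robin storage location selection described above can be sketched as follows. RoundRobinSelectorSketch is a hypothetical simplified class, not the actual selector strategy:

```java
// Sketch of round-robin selection over storage locations: each call returns
// the next location in order, cycling back to the start. Names are
// illustrative, not Druid's StorageLocation API.
import java.util.List;

public class RoundRobinSelectorSketch {
    private final List<String> locations;
    private int next = 0;

    public RoundRobinSelectorSketch(List<String> locations) {
        this.locations = locations;
    }

    // Return the next location and advance the cursor, wrapping around.
    public synchronized String select() {
        String location = locations.get(next);
        next = (next + 1) % locations.size();
        return location;
    }
}
```

Round-robin spreads new segments evenly by count; the least-filled and most-free-space strategies listed elsewhere in this index instead weigh current disk usage.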
Represents a row key against which stats are reported.
Retention rule that governs retention and distribution of segments in a cluster.
Use this ResourceFilter when the datasource information is present after the "rules" segment in the request path.
Here are some example paths where this filter is used:
- druid/coordinator/v1/rules/
Duty to run retention rules for all used non-overshadowed segments.
Defines the set of schemas available in Druid and their properties.
Hard-coded schema registry that knows about the well-known, and
a few obscure, Druid schemas.
Represents actions that can be performed on a server for a single segment.
Performs various actions on a given segment.
Responsible for bootstrapping segments already cached on disk and bootstrap segments fetched from the coordinator.
A class to fetch segment files to local disk and manage the local cache.
Contains SegmentChangeStatus.State of a DataSegmentChangeRequest and failure message, if any.
Maintains a count of segments for each datasource and interval.
Contains information used by IndexerMetadataStorageCoordinator for creating a new segment.
Latch held by SegmentLoadDropHandler.segmentDropLatches when a drop is scheduled or actively happening.
Metrics for segment generation.
MessageGapStats tracks message gap statistics and is thread-safe.
Represents a segment queued for a load or drop operation in a LoadQueuePeon.
Endpoints exposed here are to be used only for druid internal management of segments by Coordinators, Brokers etc.
Responsible for loading and dropping of segments by a process that can serve segments.
Contains information related to the capability of a server to load segments, for example the number of threads
available.
Contains recomputed configs from CoordinatorDynamicConfig based on whether CoordinatorDynamicConfig.isSmartSegmentLoading() is enabled or not.
Determines the threadpool used by the historical to load segments.
Manager for addition/removal of segments to server load queues and the
corresponding success/failure callbacks.
This class is responsible for managing data sources and their states like timeline, total segment size, and number of
segments.
Represent the state of a data source including the timeline, total segment size, and number of segments.
Cache for metadata of pending segments and used segments maintained by
the Overlord to improve performance of segment allocation and other task actions.
Represents a thread-safe read or write action performed on the cache within
required locks.
Cache usage modes.
Coordinator-side configuration class for customizing properties related to the SegmentMetadata cache.
This QuerySegmentWalker implementation is specific to SegmentMetadata queries executed by CoordinatorSegmentMetadataCache and is in parity with CachingClusteredClient.
Represents a single transaction involving read of segment metadata from the metadata store.
Represents a single transaction involving read/write of segment metadata into the metadata store.
Factory for SegmentMetadataTransactions.
Result of a segment publish operation.
Counts the number of replicas of a segment in different states (loading, loaded, etc)
in a tier or the whole cluster.
Contains a mapping from tier to SegmentReplicaCounts.
An immutable object that contains information about the under-replicated or unavailable status of all used segments.
Class that creates a count of segments that have row counts in certain buckets.
This enum is used as a parameter for several methods in IndexerMetadataStorageCoordinator, specifying whether only visible segments, or visible as well as overshadowed segments, should be included in results.
This class publishes the segment schema for segments obtained via segment metadata query.
In-memory cache of segment schema used by CoordinatorSegmentMetadataCache.
Handles segment schema persistence and cleanup on the Coordinator.
Wrapper over the SchemaPayloadPlus class to include segmentId and fingerprint information.
Represents a single record in the druid_segmentSchemas table.
Encapsulates schema information for multiple segments.
Encapsulates either the absolute schema or schema change for a segment.
Implementation of DataSegmentChangeRequest, which encapsulates segment schema changes.
SegmentsCostCache provides a faster way to calculate the cost function proposed in CostBalancerStrategy.
Polls the metadata store periodically and builds a timeline of used segments (and schemas, if schema caching on the Coordinator is enabled).
Config that dictates polling and caching of segment metadata on leader
Coordinator or Overlord services.
An experimental monitor used to keep track of segment stats.
Maintains a map containing the state of a segment on all servers of a tier.
Filter to identify segments that need to be updated via REST APIs.
Result of syncing a datasource cache with segments polled from metadata store.
Calculates the maximum, minimum and required number of segments to move in a
Coordinator run for balancing.
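The min/max/required calculation above amounts to clamping; a minimal sketch, assuming hypothetical names and that the result is simply the required count bounded by configured limits:

```java
// Hypothetical sketch of bounding the number of segments to move in one
// balancing run; the class and method names are illustrative, not Druid's API.
class SegmentsToMoveCalculator {
    /** Clamps the computed number of moves between the configured min and max. */
    static int segmentsToMove(int required, int min, int max) {
        return Math.max(min, Math.min(required, max));
    }
}
```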
Segment state transitions differ between BatchAppenderatorDriver and StreamAppenderatorDriver.
Module that installs DataSource-class-specific SegmentWrangler implementations.
This class is annotated Singleton rather than LazySingleton because it adds
a lifecycle handler in the constructor.
Immutable class which represents the current status of a single clone server.
Enum determining the status of the cloning process.
Deprecated.
Deprecated.
Encapsulates the state of a DruidServer during a single coordinator run.
Initialize Guice for a server.
Marker interface for making batch/single/http server inventory view configurable.
Query handler for Historical processes (see CliHistorical).
Wrapper for a SegmentDescriptor and Optional<Segment>, the latter created by
applying a SegmentMapFunction to a ReferenceCountedSegmentProvider.
This enum represents the types of Druid services that hold segments.
Deprecated.
Provides a way for the outside world to talk to objects in the indexing service.
Mid-level client that provides an API similar to the low-level HttpClient,
but accepts RequestBuilder instead of Request, and internally handles
service location and retries.
Factory for creating ServiceClient instances.
Production implementation of ServiceClientFactory.
Production implementation of ServiceClient.
Returned by ServiceClient.asyncRequest(org.apache.druid.rpc.RequestBuilder, org.apache.druid.java.util.http.client.response.HttpResponseHandler<IntermediateType, FinalType>) when a request has failed because the service is closed.
Injector builder for a service within a server.
Represents a service location at a particular point in time.
Returned by ServiceLocator.locate().
Used by ServiceClient to locate services.
Returned by ServiceClient.asyncRequest(org.apache.druid.rpc.RequestBuilder, org.apache.druid.java.util.http.client.response.HttpResponseHandler<IntermediateType, FinalType>) when a request has failed because the service is not available.
Used by ServiceClient to decide whether to retry requests.
Reports a heartbeat for the service.
This class is for any exceptions that should return a Service Unavailable status code (503).
A ServletFilterHolder is a class that holds all of the information required to attach a Filter to a Servlet.
Use this QueryRunner to set and verify Query contexts.
Query handler for indexing tasks.
Segment reference returned by Sink#acquireSegmentReferences(Function, boolean).
Specifies the sort order when doing metadata store queries.
Retry policy for tasks.
Service locator for a specific task.
Represents an RDBMS-based input resource and knows how to read query results from the resource using SQL queries.
Reader exclusively for SqlEntity.
Factory for read-only SegmentMetadataTransactions that always read
directly from the metadata store and never from the SegmentMetadataCache.
Factory for SegmentMetadataTransactions.
Implementation of SegmentsMetadataManager that periodically polls
used segments from the metadata store to build a DataSourcesSnapshot.
Implementation V2 of SegmentsMetadataManager that can use the
segments cached in SegmentMetadataCache to build a DataSourcesSnapshot.
An object that is used to query the segments table in the metadata store.
Adds response headers that we want to have on all responses.
Retry policy configurable with a maximum number of attempts and min/max wait time.
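A minimal sketch of such a retry policy, assuming exponential backoff between the min and max wait times; the class and method names are hypothetical, not the actual Druid API.

```java
// Hypothetical sketch of a retry policy with a maximum attempt count and
// exponential backoff bounded by min/max wait times. Names are illustrative.
class BoundedRetryPolicy {
    private final int maxAttempts;
    private final long minWaitMillis;
    private final long maxWaitMillis;

    BoundedRetryPolicy(int maxAttempts, long minWaitMillis, long maxWaitMillis) {
        this.maxAttempts = maxAttempts;
        this.minWaitMillis = minWaitMillis;
        this.maxWaitMillis = maxWaitMillis;
    }

    /** Whether another attempt is allowed after {@code attempt} failed attempts. */
    boolean shouldRetry(int attempt) {
        return attempt < maxAttempts;
    }

    /** Wait before the (1-based) attempt: minWait * 2^(attempt - 1), capped at maxWait. */
    long waitMillis(int attempt) {
        long wait = minWaitMillis << Math.min(attempt - 1, 30); // guard shift overflow
        return Math.min(wait, maxWaitMillis);
    }
}
```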
Use this ResourceFilter at endpoints where Druid Cluster State is read or written.
Here are some example paths where this filter is used:
- druid/broker/v1
- druid/coordinator/v1
- druid/historical/v1
- druid/indexer/v1
- druid/coordinator/v1/rules
- druid/coordinator/v1/tiers
- druid/worker/v1
- druid/coordinator/v1/servers
- status
Note: currently the resource name for all endpoints is set to "STATE"; however, if more fine-grained access
control is required, the resource name can be set to specific state properties.
List of Coordinator stats.
This class is a very simple logical representation of a local path.
This interface describes the storage location selection strategy, which is responsible
for ordering the multiple available StorageLocations for segment distribution.
Used by the coordinator in each run for segment loading, dropping, balancing
and broadcasting.
This class is specialized for streaming ingestion.
An interface for managing supervisors that handle stream-based ingestion tasks.
When DiscoveryDruidNode is deserialized from JSON, the JSON is first converted
to this class, and then to a Map.
Monitors and emits the metrics corresponding to the subqueries and their materialization.
Collects the metrics corresponding to the subqueries and their materialization.
Aids ClientQuerySegmentWalker in computing the available heap size per query for
materializing the inline results from subqueries.
An interface representing a general supervisor for managing ingestion tasks.
This class contains the attributes of a supervisor which are returned by the APIs in
org.apache.druid.indexing.overlord.supervisor.SupervisorResource
and used by org.apache.druid.sql.calcite.schema.SystemSchema.SupervisorsTable.
A simple table POJO with any number of rows and specified column names.
Informal table spec builder for tests.
Definition for all tables in the catalog.
Registry of the table types supported in the catalog.
Convenience wrapper on top of a resolved table (a table spec
and its corresponding definition).
Representation of a SQL table function.
SQL-like compound table ID with schema and table name.
REST API level description of a table.
State of the metadata table entry (not necessarily of the underlying
datasource). A table entry will normally be Active.
Definition of a table "hint" in the metastore, between client and
Druid, and between Druid nodes.
Contains the paths for various directories used by a task for logs, reports,
and persisting intermediate data.
Utility function for creating ServiceClient instances that communicate with indexing service tasks.
Deprecated.
Should be synced with org.apache.druid.indexing.overlord.http.TaskStatusResponse.
Lowers query priority when any of the configured thresholds is exceeded.
Balances segments within the servers of a tier using the balancer strategy.
This extension point allows developers to replace the standard TLS certificate checks with custom checks.
When DiscoveryDruidNode is deserialized from JSON, the JSON is first converted
to StringObjectPairList, and then to a Map.
Authenticates requests coming from a specific domain and directs them to an authorizer.
Manages Appenderator instances for the CliIndexer task execution service, which
runs all tasks in a single process.
This wrapper around IndexMerger limits concurrent calls to the merge/persist
methods used by StreamAppenderator with a shared executor service.
Unloads segments that are no longer marked as used from servers.
Sets necessary request attributes for requests sent to endpoints that don't need authentication or
authorization checks.
Config for UnusedSegmentKiller.
Simple response of an update API call containing the success status of the
update operation.
Spec containing dimension configs for Compaction Task.
Spec containing Granularity configs for Auto Compaction.
Spec containing IO configs for Auto Compaction.
Worker metadata announced by Middle Manager.
Provides task / task count status at the level of individual worker nodes.
Deprecated, as Druid has already migrated to HTTP-based segment loading and
will soon migrate to HTTP-based inventory view using SegmentListerResource.