Package inference
Class GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub
java.lang.Object
io.grpc.stub.AbstractStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub>
io.grpc.stub.AbstractFutureStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub>
inference.GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub
- Enclosing class:
GRPCInferenceServiceGrpc
public static final class GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub
extends io.grpc.stub.AbstractFutureStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub>
A stub to allow clients to do ListenableFuture-style rpc calls to service GRPCInferenceService.
Inference Server GRPC endpoints.
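A future stub is obtained from the generated class's static factory rather than constructed directly. A minimal sketch, assuming a plaintext channel to a local server (the host and port are assumptions; 8001 is Triton Inference Server's default gRPC port):

```java
import inference.GRPCInferenceServiceGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class FutureStubExample {
    public static void main(String[] args) {
        // Host/port are assumptions for illustration.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8001)
                .usePlaintext()
                .build();

        // Factory method generated on the outer service class.
        GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub stub =
                GRPCInferenceServiceGrpc.newFutureStub(channel);

        // ... issue ListenableFuture-based calls via the stub ...

        channel.shutdown();
    }
}
```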
Nested Class Summary
Nested classes/interfaces inherited from class io.grpc.stub.AbstractStub
io.grpc.stub.AbstractStub.StubFactory<T extends io.grpc.stub.AbstractStub<T>>
Method Summary
protected GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub
    build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)

com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ModelInferResponse>
    modelInfer(GrpcPredictV2.ModelInferRequest request)
    The ModelInfer API performs inference using the specified model.

com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ModelMetadataResponse>
    modelMetadata(GrpcPredictV2.ModelMetadataRequest request)
    The per-model metadata API provides information about a model.

com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ModelReadyResponse>
    modelReady(GrpcPredictV2.ModelReadyRequest request)
    The ModelReady API indicates if a specific model is ready for inferencing.

com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ServerLiveResponse>
    serverLive(GrpcPredictV2.ServerLiveRequest request)
    The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.

com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ServerMetadataResponse>
    serverMetadata(GrpcPredictV2.ServerMetadataRequest request)
    The ServerMetadata API provides information about the server.

com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ServerReadyResponse>
    serverReady(GrpcPredictV2.ServerReadyRequest request)
    The ServerReady API indicates if the server is ready for inferencing.

Methods inherited from class io.grpc.stub.AbstractFutureStub
newStub, newStub

Methods inherited from class io.grpc.stub.AbstractStub
getCallOptions, getChannel, withCallCredentials, withChannel, withCompression, withDeadline, withDeadlineAfter, withDeadlineAfter, withExecutor, withInterceptors, withMaxInboundMessageSize, withMaxOutboundMessageSize, withOnReadyThreshold, withOption, withWaitForReady
Method Details
build
protected GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)
Specified by:
build in class io.grpc.stub.AbstractStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub>
serverLive
public com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ServerLiveResponse> serverLive(GrpcPredictV2.ServerLiveRequest request)
The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.
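Because the stub returns a Guava ListenableFuture, the response can be consumed asynchronously with a callback rather than blocking. A sketch, assuming a `stub` already created from the channel (Futures, FutureCallback, and MoreExecutors are from Guava):

```java
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import inference.GrpcPredictV2;

public class ServerLiveExample {
    // 'stub' is assumed to be a GRPCInferenceServiceFutureStub created elsewhere.
    static void checkLive(
            inference.GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub stub) {
        GrpcPredictV2.ServerLiveRequest request =
                GrpcPredictV2.ServerLiveRequest.newBuilder().build();

        ListenableFuture<GrpcPredictV2.ServerLiveResponse> future =
                stub.serverLive(request);

        Futures.addCallback(future, new FutureCallback<GrpcPredictV2.ServerLiveResponse>() {
            @Override
            public void onSuccess(GrpcPredictV2.ServerLiveResponse resp) {
                // 'live' is the boolean field of ServerLiveResponse in the v2 protocol.
                System.out.println("Server live: " + resp.getLive());
            }

            @Override
            public void onFailure(Throwable t) {
                System.err.println("Liveness check failed: " + t);
            }
        }, MoreExecutors.directExecutor());
    }
}
```

A production caller would typically pass an application executor instead of `directExecutor()` so the callback does not run on the gRPC transport thread.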
serverReady
public com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ServerReadyResponse> serverReady(GrpcPredictV2.ServerReadyRequest request)
The ServerReady API indicates if the server is ready for inferencing.
modelReady
public com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ModelReadyResponse> modelReady(GrpcPredictV2.ModelReadyRequest request)
The ModelReady API indicates if a specific model is ready for inferencing.
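Where blocking is acceptable, the future can simply be resolved with `get()`. A sketch, assuming an existing `stub`; the model name and version are hypothetical placeholders:

```java
import inference.GrpcPredictV2;

public class ModelReadyExample {
    static boolean isReady(
            inference.GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub stub)
            throws Exception {
        GrpcPredictV2.ModelReadyRequest request = GrpcPredictV2.ModelReadyRequest.newBuilder()
                .setName("my_model")   // hypothetical model name
                .setVersion("1")       // optional; server picks a version if omitted
                .build();

        // Blocks the calling thread until the RPC completes.
        return stub.modelReady(request).get().getReady();
    }
}
```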
serverMetadata
public com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ServerMetadataResponse> serverMetadata(GrpcPredictV2.ServerMetadataRequest request)
The ServerMetadata API provides information about the server. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
modelMetadata
public com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ModelMetadataResponse> modelMetadata(GrpcPredictV2.ModelMetadataRequest request)
The per-model metadata API provides information about a model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
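The metadata response describes the model's platform and its input/output tensors, which is useful for constructing inference requests. A sketch, assuming an existing `stub` and a hypothetical model name:

```java
import inference.GrpcPredictV2;

public class ModelMetadataExample {
    static void printMetadata(
            inference.GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub stub)
            throws Exception {
        GrpcPredictV2.ModelMetadataRequest request =
                GrpcPredictV2.ModelMetadataRequest.newBuilder()
                        .setName("my_model")  // hypothetical model name
                        .build();

        GrpcPredictV2.ModelMetadataResponse meta = stub.modelMetadata(request).get();

        System.out.println("Platform: " + meta.getPlatform());
        // Each input tensor carries a name, datatype string, and shape.
        for (GrpcPredictV2.ModelMetadataResponse.TensorMetadata in : meta.getInputsList()) {
            System.out.println(in.getName() + " " + in.getDatatype() + " " + in.getShapeList());
        }
    }
}
```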
modelInfer
public com.google.common.util.concurrent.ListenableFuture<GrpcPredictV2.ModelInferResponse> modelInfer(GrpcPredictV2.ModelInferRequest request)
The ModelInfer API performs inference using the specified model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
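A ModelInferRequest is built from one InferInputTensor per model input, each carrying a name, datatype, shape, and contents. A sketch, assuming an existing `stub` and the standard v2 predict protocol messages; the model name, tensor name, and shape are hypothetical:

```java
import com.google.common.util.concurrent.ListenableFuture;
import inference.GrpcPredictV2;
import java.util.Arrays;

public class ModelInferExample {
    static ListenableFuture<GrpcPredictV2.ModelInferResponse> infer(
            inference.GRPCInferenceServiceGrpc.GRPCInferenceServiceFutureStub stub) {
        // Tensor data is carried in a typed contents field matching the datatype.
        GrpcPredictV2.InferTensorContents contents =
                GrpcPredictV2.InferTensorContents.newBuilder()
                        .addAllFp32Contents(Arrays.asList(1.0f, 2.0f, 3.0f, 4.0f))
                        .build();

        GrpcPredictV2.ModelInferRequest.InferInputTensor input =
                GrpcPredictV2.ModelInferRequest.InferInputTensor.newBuilder()
                        .setName("input__0")   // hypothetical input tensor name
                        .setDatatype("FP32")
                        .addShape(1)
                        .addShape(4)           // hypothetical shape [1, 4]
                        .setContents(contents)
                        .build();

        GrpcPredictV2.ModelInferRequest request =
                GrpcPredictV2.ModelInferRequest.newBuilder()
                        .setModelName("my_model")  // hypothetical model name
                        .addInputs(input)
                        .build();

        // Returns immediately; resolve with get() or attach a callback.
        return stub.modelInfer(request);
    }
}
```

Output tensors can then be read from the resolved response via `getOutputsList()`, with the layout described by the model's metadata.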