Package inference
Class GRPCInferenceServiceGrpc.GRPCInferenceServiceStub
java.lang.Object
io.grpc.stub.AbstractStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceStub>
io.grpc.stub.AbstractAsyncStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceStub>
inference.GRPCInferenceServiceGrpc.GRPCInferenceServiceStub
- Enclosing class:
GRPCInferenceServiceGrpc
public static final class GRPCInferenceServiceGrpc.GRPCInferenceServiceStub
extends io.grpc.stub.AbstractAsyncStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceStub>
A stub that allows clients to make asynchronous RPC calls to the service GRPCInferenceService.
Inference Server GRPC endpoints.
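As with any generated gRPC service, an instance of this async stub is obtained from the enclosing class's newStub factory over a channel. A minimal sketch; the host, port, and plaintext transport are assumptions to be replaced with your server's actual endpoint configuration:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import inference.GRPCInferenceServiceGrpc;

public class StubExample {
    public static void main(String[] args) {
        // Assumption: the inference server listens on localhost:8001 without TLS.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8001)
                .usePlaintext()
                .build();

        // The async stub: each call returns immediately and delivers its
        // response through a StreamObserver callback.
        GRPCInferenceServiceGrpc.GRPCInferenceServiceStub stub =
                GRPCInferenceServiceGrpc.newStub(channel);

        // ... issue calls on `stub`, then release the channel.
        channel.shutdown();
    }
}
```

Note that this snippet requires the generated inference classes and the grpc-java runtime on the classpath; it is a sketch of the wiring, not a complete client.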
-
Nested Class Summary
Nested classes/interfaces inherited from class io.grpc.stub.AbstractStub
io.grpc.stub.AbstractStub.StubFactory<T extends io.grpc.stub.AbstractStub<T>>
-
Method Summary
protected GRPCInferenceServiceGrpc.GRPCInferenceServiceStub build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)
void modelInfer(GrpcPredictV2.ModelInferRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelInferResponse> responseObserver)
The ModelInfer API performs inference using the specified model.
void modelMetadata(GrpcPredictV2.ModelMetadataRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelMetadataResponse> responseObserver)
The per-model metadata API provides information about a model.
void modelReady(GrpcPredictV2.ModelReadyRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelReadyResponse> responseObserver)
The ModelReady API indicates if a specific model is ready for inferencing.
void serverLive(GrpcPredictV2.ServerLiveRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerLiveResponse> responseObserver)
The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.
void serverMetadata(GrpcPredictV2.ServerMetadataRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerMetadataResponse> responseObserver)
The ServerMetadata API provides information about the server.
void serverReady(GrpcPredictV2.ServerReadyRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerReadyResponse> responseObserver)
The ServerReady API indicates if the server is ready for inferencing.
Methods inherited from class io.grpc.stub.AbstractAsyncStub
newStub, newStub
Methods inherited from class io.grpc.stub.AbstractStub
getCallOptions, getChannel, withCallCredentials, withChannel, withCompression, withDeadline, withDeadlineAfter, withDeadlineAfter, withExecutor, withInterceptors, withMaxInboundMessageSize, withMaxOutboundMessageSize, withOnReadyThreshold, withOption, withWaitForReady
-
Method Details
-
build
protected GRPCInferenceServiceGrpc.GRPCInferenceServiceStub build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)
- Specified by:
build in class io.grpc.stub.AbstractStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceStub>
-
serverLive
public void serverLive(GrpcPredictV2.ServerLiveRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerLiveResponse> responseObserver)
The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.
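A sketch of calling serverLive asynchronously, assuming a GRPCInferenceServiceStub named `stub` has already been built over a ManagedChannel. Because the stub is asynchronous, the response arrives on a gRPC callback thread, so a latch is used here to wait for completion:

```java
import java.util.concurrent.CountDownLatch;
import io.grpc.stub.StreamObserver;
import inference.GrpcPredictV2;

// Fragment, not a complete program: `stub` is assumed to exist in scope.
CountDownLatch done = new CountDownLatch(1);
stub.serverLive(
        GrpcPredictV2.ServerLiveRequest.newBuilder().build(),
        new StreamObserver<GrpcPredictV2.ServerLiveResponse>() {
            @Override
            public void onNext(GrpcPredictV2.ServerLiveResponse response) {
                // Assumption: the response message carries a boolean `live` field.
                System.out.println("Server live: " + response.getLive());
            }

            @Override
            public void onError(Throwable t) {
                // Failures are delivered here rather than thrown to the caller.
                System.err.println("serverLive failed: " + t.getMessage());
                done.countDown();
            }

            @Override
            public void onCompleted() {
                done.countDown();
            }
        });
done.await(); // block until the callback has fired
```

The same StreamObserver pattern applies to serverReady, modelReady, serverMetadata, and modelMetadata; only the request and response message types differ.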
-
serverReady
public void serverReady(GrpcPredictV2.ServerReadyRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerReadyResponse> responseObserver)
The ServerReady API indicates if the server is ready for inferencing.
-
modelReady
public void modelReady(GrpcPredictV2.ModelReadyRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelReadyResponse> responseObserver)
The ModelReady API indicates if a specific model is ready for inferencing.
-
serverMetadata
public void serverMetadata(GrpcPredictV2.ServerMetadataRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerMetadataResponse> responseObserver)
The ServerMetadata API provides information about the server. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
-
modelMetadata
public void modelMetadata(GrpcPredictV2.ModelMetadataRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelMetadataResponse> responseObserver)
The per-model metadata API provides information about a model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
-
modelInfer
public void modelInfer(GrpcPredictV2.ModelInferRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelInferResponse> responseObserver)
The ModelInfer API performs inference using the specified model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
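A hedged sketch of building and issuing an inference request, assuming a GRPCInferenceServiceStub named `stub` is already in scope. The model name, input tensor name, datatype, and shape below are placeholders that must match the actual deployed model, and the builder methods shown assume the field layout of the generated GrpcPredictV2 messages:

```java
import io.grpc.stub.StreamObserver;
import java.util.List;
import inference.GrpcPredictV2;

// Fragment, not a complete program: `stub` is assumed to exist in scope.
// Assumptions: model "my_model" with one FP32 input tensor "input__0" of shape [1, 4].
GrpcPredictV2.ModelInferRequest request = GrpcPredictV2.ModelInferRequest.newBuilder()
        .setModelName("my_model")
        .addInputs(GrpcPredictV2.ModelInferRequest.InferInputTensor.newBuilder()
                .setName("input__0")
                .setDatatype("FP32")
                .addShape(1).addShape(4)
                .setContents(GrpcPredictV2.InferTensorContents.newBuilder()
                        .addAllFp32Contents(List.of(0.1f, 0.2f, 0.3f, 0.4f))))
        .build();

stub.modelInfer(request, new StreamObserver<GrpcPredictV2.ModelInferResponse>() {
    @Override
    public void onNext(GrpcPredictV2.ModelInferResponse response) {
        // Output tensors arrive here on success.
        System.out.println("Output tensors: " + response.getOutputsCount());
    }

    @Override
    public void onError(Throwable t) {
        // Non-OK google.rpc.Status codes surface as this Throwable.
        System.err.println("Inference failed: " + t.getMessage());
    }

    @Override
    public void onCompleted() {
        // Unary call: invoked once after onNext.
    }
});
```

Because this is the async stub, modelInfer returns before the result is available; code that needs the response must coordinate with the observer (for example via a CountDownLatch or CompletableFuture), or use the blocking stub instead.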
-