Package inference
Class GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub
java.lang.Object
io.grpc.stub.AbstractStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub>
io.grpc.stub.AbstractBlockingStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub>
inference.GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub
- Enclosing class:
GRPCInferenceServiceGrpc
public static final class GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub
extends io.grpc.stub.AbstractBlockingStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub>
A stub to allow clients to do synchronous rpc calls to service GRPCInferenceService.
Inference Server GRPC endpoints.
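A minimal sketch of obtaining this stub. The factory method name `newBlockingV2Stub` follows the grpc-java code-generation convention for blocking V2 stubs; the host, port, and plaintext setting are placeholders for illustration:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class InferenceClientSketch {
    public static void main(String[] args) {
        // Placeholder endpoint; point this at your inference server.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8001)
                .usePlaintext() // assumes a non-TLS endpoint; configure TLS in production
                .build();
        try {
            // newBlockingV2Stub is the generated factory for this stub class
            // (assumed here from the standard grpc-java naming convention).
            GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub stub =
                    GRPCInferenceServiceGrpc.newBlockingV2Stub(channel);
            // The stub is now ready for synchronous calls such as serverLive or modelInfer.
        } finally {
            channel.shutdown();
        }
    }
}
```

All calls on the resulting stub block the calling thread until the RPC completes or fails with an io.grpc.StatusException.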
Nested Class Summary
Nested classes/interfaces inherited from class io.grpc.stub.AbstractStub
io.grpc.stub.AbstractStub.StubFactory<T extends io.grpc.stub.AbstractStub<T>>
Method Summary
protected GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)
GrpcPredictV2.ModelInferResponse modelInfer(GrpcPredictV2.ModelInferRequest request)
The ModelInfer API performs inference using the specified model.
GrpcPredictV2.ModelMetadataResponse modelMetadata(GrpcPredictV2.ModelMetadataRequest request)
The per-model metadata API provides information about a model.
GrpcPredictV2.ModelReadyResponse modelReady(GrpcPredictV2.ModelReadyRequest request)
The ModelReady API indicates if a specific model is ready for inferencing.
GrpcPredictV2.ServerLiveResponse serverLive(GrpcPredictV2.ServerLiveRequest request)
The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.
GrpcPredictV2.ServerMetadataResponse serverMetadata(GrpcPredictV2.ServerMetadataRequest request)
The ServerMetadata API provides information about the server.
GrpcPredictV2.ServerReadyResponse serverReady(GrpcPredictV2.ServerReadyRequest request)
The ServerReady API indicates if the server is ready for inferencing.

Methods inherited from class io.grpc.stub.AbstractBlockingStub
newStub, newStub

Methods inherited from class io.grpc.stub.AbstractStub
getCallOptions, getChannel, withCallCredentials, withChannel, withCompression, withDeadline, withDeadlineAfter, withDeadlineAfter, withExecutor, withInterceptors, withMaxInboundMessageSize, withMaxOutboundMessageSize, withOnReadyThreshold, withOption, withWaitForReady
Method Details
build
protected GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)
- Specified by:
build in class io.grpc.stub.AbstractStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingV2Stub>
serverLive
public GrpcPredictV2.ServerLiveResponse serverLive(GrpcPredictV2.ServerLiveRequest request) throws io.grpc.StatusException
The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.
- Throws:
io.grpc.StatusException
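For illustration, a sketch of a liveness check. It assumes an already-created stub (see the class description) and that ServerLiveResponse exposes its `live` field via a generated `getLive()` accessor, per the KServe v2 inference protocol:

```java
// Assumes `stub` is an existing GRPCInferenceServiceBlockingV2Stub.
try {
    GrpcPredictV2.ServerLiveResponse live = stub.serverLive(
            GrpcPredictV2.ServerLiveRequest.newBuilder().build());
    // getLive() returns true when the server can accept requests.
    System.out.println("Server live: " + live.getLive());
} catch (io.grpc.StatusException e) {
    // A failed RPC (server unreachable, deadline exceeded, etc.) surfaces here.
    System.err.println("Liveness check failed: " + e.getStatus());
}
```

Because the stub is blocking, this call does not return until the server responds or the RPC fails.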
serverReady
public GrpcPredictV2.ServerReadyResponse serverReady(GrpcPredictV2.ServerReadyRequest request) throws io.grpc.StatusException
The ServerReady API indicates if the server is ready for inferencing.
- Throws:
io.grpc.StatusException
modelReady
public GrpcPredictV2.ModelReadyResponse modelReady(GrpcPredictV2.ModelReadyRequest request) throws io.grpc.StatusException
The ModelReady API indicates if a specific model is ready for inferencing.
- Throws:
io.grpc.StatusException
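A sketch of checking readiness for one model. The model name "my_model" is a placeholder, and the `setName`/`getReady` accessors are assumed from the KServe v2 protocol's ModelReadyRequest.name and ModelReadyResponse.ready fields:

```java
// Assumes `stub` is an existing GRPCInferenceServiceBlockingV2Stub.
try {
    GrpcPredictV2.ModelReadyRequest request = GrpcPredictV2.ModelReadyRequest.newBuilder()
            .setName("my_model") // placeholder model name
            .build();
    // getReady() returns true once the named model is loaded and serving.
    boolean ready = stub.modelReady(request).getReady();
    System.out.println("Model ready: " + ready);
} catch (io.grpc.StatusException e) {
    System.err.println("Readiness check failed: " + e.getStatus());
}
```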
serverMetadata
public GrpcPredictV2.ServerMetadataResponse serverMetadata(GrpcPredictV2.ServerMetadataRequest request) throws io.grpc.StatusException
The ServerMetadata API provides information about the server. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
- Throws:
io.grpc.StatusException
modelMetadata
public GrpcPredictV2.ModelMetadataResponse modelMetadata(GrpcPredictV2.ModelMetadataRequest request) throws io.grpc.StatusException
The per-model metadata API provides information about a model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
- Throws:
io.grpc.StatusException
modelInfer
public GrpcPredictV2.ModelInferResponse modelInfer(GrpcPredictV2.ModelInferRequest request) throws io.grpc.StatusException
The ModelInfer API performs inference using the specified model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
- Throws:
io.grpc.StatusException
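A sketch of a synchronous inference call. The model name, tensor name, shape, and values are placeholders; the builder methods shown (setModelName, addInputs, setDatatype, addShape, fp32 contents) are assumed from the generated classes for the KServe v2 inference protocol's ModelInferRequest message:

```java
// Assumes `stub` is an existing GRPCInferenceServiceBlockingV2Stub.
// Placeholder request: model "my_model", one FP32 input tensor of shape [1, 3].
GrpcPredictV2.ModelInferRequest request = GrpcPredictV2.ModelInferRequest.newBuilder()
        .setModelName("my_model")
        .addInputs(GrpcPredictV2.ModelInferRequest.InferInputTensor.newBuilder()
                .setName("input__0")   // placeholder tensor name
                .setDatatype("FP32")
                .addShape(1).addShape(3)
                .setContents(GrpcPredictV2.InferTensorContents.newBuilder()
                        .addFp32Contents(1.0f)
                        .addFp32Contents(2.0f)
                        .addFp32Contents(3.0f)))
        .build();
try {
    // Blocks until the server returns an inference result or the RPC fails.
    GrpcPredictV2.ModelInferResponse response = stub.modelInfer(request);
    System.out.println("Outputs returned: " + response.getOutputsCount());
} catch (io.grpc.StatusException e) {
    // Non-OK google.rpc.Status codes surface as a StatusException.
    System.err.println("Inference failed: " + e.getStatus());
}
```

The input tensor's name, datatype, and shape must match the model's metadata, which can be fetched beforehand with modelMetadata.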