Package inference
Class GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub
java.lang.Object
io.grpc.stub.AbstractStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub>
io.grpc.stub.AbstractBlockingStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub>
inference.GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub
- Enclosing class:
GRPCInferenceServiceGrpc
public static final class GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub
extends io.grpc.stub.AbstractBlockingStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub>
A stub to allow clients to do limited synchronous rpc calls to service GRPCInferenceService.
Inference Server GRPC endpoints.
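As a minimal sketch, a blocking stub is obtained from the generated `newBlockingStub` factory on an `io.grpc.ManagedChannel`. The target address `localhost:8001` and the plaintext transport are placeholder assumptions; your deployment's endpoint and TLS settings may differ.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import inference.GRPCInferenceServiceGrpc;

import java.util.concurrent.TimeUnit;

public class StubExample {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder target; substitute your server's gRPC endpoint.
        ManagedChannel channel = ManagedChannelBuilder
                .forTarget("localhost:8001")
                .usePlaintext() // assumes a non-TLS endpoint
                .build();
        try {
            // Blocking stubs issue synchronous calls. The inherited
            // withDeadlineAfter starts counting when invoked, so derive
            // the deadline-bearing stub close to the call site.
            GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub stub =
                    GRPCInferenceServiceGrpc.newBlockingStub(channel)
                            .withDeadlineAfter(5, TimeUnit.SECONDS);
            // ... issue calls such as stub.serverLive(...) here ...
        } finally {
            channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
        }
    }
}
```

Stubs are immutable; each `with*` method returns a new stub sharing the same channel, so deriving per-call configurations is cheap.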
-
Nested Class Summary
Nested classes/interfaces inherited from class io.grpc.stub.AbstractStub
io.grpc.stub.AbstractStub.StubFactory<T extends io.grpc.stub.AbstractStub<T>> -
Method Summary
Modifier and Type | Method | Description
protected GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub | build(io.grpc.Channel channel, io.grpc.CallOptions callOptions) |
GrpcPredictV2.ModelInferResponse | modelInfer(GrpcPredictV2.ModelInferRequest request) | The ModelInfer API performs inference using the specified model.
GrpcPredictV2.ModelMetadataResponse | modelMetadata(GrpcPredictV2.ModelMetadataRequest request) | The per-model metadata API provides information about a model.
GrpcPredictV2.ModelReadyResponse | modelReady(GrpcPredictV2.ModelReadyRequest request) | The ModelReady API indicates if a specific model is ready for inferencing.
GrpcPredictV2.ServerLiveResponse | serverLive(GrpcPredictV2.ServerLiveRequest request) | The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.
GrpcPredictV2.ServerMetadataResponse | serverMetadata(GrpcPredictV2.ServerMetadataRequest request) | The ServerMetadata API provides information about the server.
GrpcPredictV2.ServerReadyResponse | serverReady(GrpcPredictV2.ServerReadyRequest request) | The ServerReady API indicates if the server is ready for inferencing.

Methods inherited from class io.grpc.stub.AbstractBlockingStub
newStub, newStub

Methods inherited from class io.grpc.stub.AbstractStub
getCallOptions, getChannel, withCallCredentials, withChannel, withCompression, withDeadline, withDeadlineAfter, withDeadlineAfter, withExecutor, withInterceptors, withMaxInboundMessageSize, withMaxOutboundMessageSize, withOnReadyThreshold, withOption, withWaitForReady
-
Method Details
-
build
protected GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)
- Specified by:
build in class io.grpc.stub.AbstractStub<GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub>
-
serverLive
public GrpcPredictV2.ServerLiveResponse serverLive(GrpcPredictV2.ServerLiveRequest request)
The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.
-
serverReady
public GrpcPredictV2.ServerReadyResponse serverReady(GrpcPredictV2.ServerReadyRequest request)
The ServerReady API indicates if the server is ready for inferencing.
-
modelReady
public GrpcPredictV2.ModelReadyResponse modelReady(GrpcPredictV2.ModelReadyRequest request)
The ModelReady API indicates if a specific model is ready for inferencing.
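The three probes above can be combined into a pre-flight health check. This is a sketch, assuming an already connected blocking stub and a deployed model named "my_model" (a placeholder); the accessor names follow the KServe v2 prediction protocol messages in GrpcPredictV2.

```java
import inference.GRPCInferenceServiceGrpc;
import inference.GrpcPredictV2;

public class HealthCheckExample {
    // Returns true only when the server is live, ready, and the
    // specific model can accept inference requests.
    static boolean healthy(
            GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub stub) {
        boolean live = stub.serverLive(
                GrpcPredictV2.ServerLiveRequest.newBuilder().build()).getLive();
        boolean ready = stub.serverReady(
                GrpcPredictV2.ServerReadyRequest.newBuilder().build()).getReady();
        boolean modelReady = stub.modelReady(
                GrpcPredictV2.ModelReadyRequest.newBuilder()
                        .setName("my_model") // placeholder model name
                        .build()).getReady();
        return live && ready && modelReady;
    }
}
```

On a blocking stub, transport or server errors surface as `io.grpc.StatusRuntimeException`, so callers typically wrap these probes in a try/catch when the server may be unreachable.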
-
serverMetadata
public GrpcPredictV2.ServerMetadataResponse serverMetadata(GrpcPredictV2.ServerMetadataRequest request)
The ServerMetadata API provides information about the server. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
-
modelMetadata
public GrpcPredictV2.ModelMetadataResponse modelMetadata(GrpcPredictV2.ModelMetadataRequest request)
The per-model metadata API provides information about a model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
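Model metadata is the usual way to discover tensor names, datatypes, and shapes before building an inference request. A sketch, assuming a connected blocking stub and a placeholder model name "my_model"; the nested `TensorMetadata` type and its accessors follow the KServe v2 protocol definitions.

```java
import inference.GRPCInferenceServiceGrpc;
import inference.GrpcPredictV2;

public class MetadataExample {
    static void printModelIo(
            GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub stub) {
        GrpcPredictV2.ModelMetadataResponse meta = stub.modelMetadata(
                GrpcPredictV2.ModelMetadataRequest.newBuilder()
                        .setName("my_model") // placeholder model name
                        .build());
        // Each input/output entry describes one tensor: name, datatype, shape.
        for (GrpcPredictV2.ModelMetadataResponse.TensorMetadata in
                : meta.getInputsList()) {
            System.out.println("input:  " + in.getName()
                    + " " + in.getDatatype() + " " + in.getShapeList());
        }
        for (GrpcPredictV2.ModelMetadataResponse.TensorMetadata out
                : meta.getOutputsList()) {
            System.out.println("output: " + out.getName()
                    + " " + out.getDatatype() + " " + out.getShapeList());
        }
    }
}
```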
-
modelInfer
public GrpcPredictV2.ModelInferResponse modelInfer(GrpcPredictV2.ModelInferRequest request)
The ModelInfer API performs inference using the specified model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
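A hedged sketch of a single synchronous inference call. The model name "my_model", input tensor name "input0", FP32 datatype, and shape [1, 4] are placeholder assumptions for illustration; real values should come from the model's metadata. The builder and field names follow the KServe v2 protocol messages in GrpcPredictV2.

```java
import inference.GRPCInferenceServiceGrpc;
import inference.GrpcPredictV2;

import java.util.Arrays;

public class InferExample {
    static GrpcPredictV2.ModelInferResponse infer(
            GRPCInferenceServiceGrpc.GRPCInferenceServiceBlockingStub stub) {
        // Tensor payload: four FP32 values matching the shape [1, 4] below.
        GrpcPredictV2.InferTensorContents contents =
                GrpcPredictV2.InferTensorContents.newBuilder()
                        .addAllFp32Contents(Arrays.asList(1.0f, 2.0f, 3.0f, 4.0f))
                        .build();
        GrpcPredictV2.ModelInferRequest.InferInputTensor input =
                GrpcPredictV2.ModelInferRequest.InferInputTensor.newBuilder()
                        .setName("input0")        // placeholder tensor name
                        .setDatatype("FP32")
                        .addShape(1).addShape(4)  // shape [1, 4]
                        .setContents(contents)
                        .build();
        GrpcPredictV2.ModelInferRequest request =
                GrpcPredictV2.ModelInferRequest.newBuilder()
                        .setModelName("my_model") // placeholder model name
                        .addInputs(input)
                        .build();
        // On a blocking stub, non-OK google.rpc.Status codes are raised
        // as io.grpc.StatusRuntimeException rather than returned.
        return stub.modelInfer(request);
    }
}
```

Output tensors arrive in the response via `getOutputsList()`, with values either in each tensor's typed contents or in the raw output contents, depending on how the server encodes results.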
-