Package inference
Interface GRPCInferenceServiceGrpc.AsyncService
- All Known Implementing Classes:
GRPCInferenceServiceGrpc.GRPCInferenceServiceImplBase
- Enclosing class:
GRPCInferenceServiceGrpc
public static interface GRPCInferenceServiceGrpc.AsyncService
Inference Server GRPC endpoints.
Method Summary

default void modelInfer(GrpcPredictV2.ModelInferRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelInferResponse> responseObserver)
    The ModelInfer API performs inference using the specified model.

default void modelMetadata(GrpcPredictV2.ModelMetadataRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelMetadataResponse> responseObserver)
    The per-model metadata API provides information about a model.

default void modelReady(GrpcPredictV2.ModelReadyRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelReadyResponse> responseObserver)
    The ModelReady API indicates if a specific model is ready for inferencing.

default void serverLive(GrpcPredictV2.ServerLiveRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerLiveResponse> responseObserver)
    The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.

default void serverMetadata(GrpcPredictV2.ServerMetadataRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerMetadataResponse> responseObserver)
    The ServerMetadata API provides information about the server.

default void serverReady(GrpcPredictV2.ServerReadyRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerReadyResponse> responseObserver)
    The ServerReady API indicates if the server is ready for inferencing.
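A minimal sketch of hosting these endpoints with a standard grpc-java server. GRPCInferenceServiceImplBase (the known implementing class listed above) inherits AsyncService's default methods, so an implementation only overrides what it needs; the port and class name below are illustrative.

package inference;

import io.grpc.Server;
import io.grpc.ServerBuilder;

public final class InferenceServerMain {
    public static void main(String[] args) throws Exception {
        // An anonymous subclass works because the inherited defaults already
        // answer each RPC (in grpc-java, with the UNIMPLEMENTED status).
        Server server = ServerBuilder.forPort(8001) // illustrative port
                .addService(new GRPCInferenceServiceGrpc.GRPCInferenceServiceImplBase() {})
                .build()
                .start();
        server.awaitTermination();
    }
}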
Method Details
serverLive

default void serverLive(GrpcPredictV2.ServerLiveRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerLiveResponse> responseObserver)

    The ServerLive API indicates if the inference server is able to receive and respond to metadata and inference requests.
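A server-side sketch of this method. The StreamObserver onNext/onCompleted pattern is standard grpc-java; setLive is an assumption based on the v2 protocol's boolean live response field.

package inference;

import io.grpc.stub.StreamObserver;

class LiveService extends GRPCInferenceServiceGrpc.GRPCInferenceServiceImplBase {
    @Override
    public void serverLive(GrpcPredictV2.ServerLiveRequest request,
                           StreamObserver<GrpcPredictV2.ServerLiveResponse> responseObserver) {
        // Report liveness; setLive is assumed from the proto's `live` field.
        responseObserver.onNext(
                GrpcPredictV2.ServerLiveResponse.newBuilder().setLive(true).build());
        // Completing the observer finishes this unary call with the OK status.
        responseObserver.onCompleted();
    }
}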
serverReady

default void serverReady(GrpcPredictV2.ServerReadyRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerReadyResponse> responseObserver)

    The ServerReady API indicates if the server is ready for inferencing.
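From the client side, readiness can be polled through the generated blocking stub. The address and plaintext transport are illustrative, and getReady is an assumption based on the v2 protocol's boolean ready response field.

package inference;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public final class ReadinessProbe {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8001) // illustrative address
                .usePlaintext()
                .build();
        GrpcPredictV2.ServerReadyResponse response =
                GRPCInferenceServiceGrpc.newBlockingStub(channel)
                        .serverReady(GrpcPredictV2.ServerReadyRequest.getDefaultInstance());
        System.out.println("server ready: " + response.getReady()); // getReady assumed
        channel.shutdown();
    }
}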
modelReady

default void modelReady(GrpcPredictV2.ModelReadyRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelReadyResponse> responseObserver)

    The ModelReady API indicates if a specific model is ready for inferencing.
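A sketch of a modelReady override backed by a hypothetical in-memory registry. The loadedModels set and model name are illustrative; getName and setReady are assumptions based on the v2 protocol's request name and response ready fields.

package inference;

import java.util.Set;
import io.grpc.stub.StreamObserver;

class RegistryBackedService extends GRPCInferenceServiceGrpc.GRPCInferenceServiceImplBase {
    private final Set<String> loadedModels = Set.of("resnet50"); // hypothetical registry

    @Override
    public void modelReady(GrpcPredictV2.ModelReadyRequest request,
                           StreamObserver<GrpcPredictV2.ModelReadyResponse> responseObserver) {
        boolean ready = loadedModels.contains(request.getName()); // getName assumed
        responseObserver.onNext(
                GrpcPredictV2.ModelReadyResponse.newBuilder().setReady(ready).build());
        responseObserver.onCompleted();
    }
}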
serverMetadata

default void serverMetadata(GrpcPredictV2.ServerMetadataRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ServerMetadataResponse> responseObserver)

    The ServerMetadata API provides information about the server. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
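A sketch of a serverMetadata override. The values are illustrative, and setName and setVersion are assumptions based on the v2 protocol's ServerMetadataResponse fields.

package inference;

import io.grpc.stub.StreamObserver;

class MetadataService extends GRPCInferenceServiceGrpc.GRPCInferenceServiceImplBase {
    @Override
    public void serverMetadata(GrpcPredictV2.ServerMetadataRequest request,
                               StreamObserver<GrpcPredictV2.ServerMetadataResponse> responseObserver) {
        responseObserver.onNext(GrpcPredictV2.ServerMetadataResponse.newBuilder()
                .setName("example-inference-server") // illustrative; setters assumed
                .setVersion("0.1.0")
                .build());
        responseObserver.onCompleted();
    }
}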
modelMetadata

default void modelMetadata(GrpcPredictV2.ModelMetadataRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelMetadataResponse> responseObserver)

    The per-model metadata API provides information about a model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
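Because failures are reported through the status returned for the request, an unknown model can be rejected with onError; in grpc-java that status is conveyed via io.grpc.Status. The single-model check, getName, and setName below are assumptions.

package inference;

import io.grpc.Status;
import io.grpc.stub.StreamObserver;

class StrictMetadataService extends GRPCInferenceServiceGrpc.GRPCInferenceServiceImplBase {
    @Override
    public void modelMetadata(GrpcPredictV2.ModelMetadataRequest request,
                              StreamObserver<GrpcPredictV2.ModelMetadataResponse> responseObserver) {
        if (!"resnet50".equals(request.getName())) { // hypothetical single-model check
            // A non-OK code signals failure to the caller, per the doc comment above.
            responseObserver.onError(Status.NOT_FOUND
                    .withDescription("unknown model: " + request.getName())
                    .asRuntimeException());
            return;
        }
        responseObserver.onNext(GrpcPredictV2.ModelMetadataResponse.newBuilder()
                .setName("resnet50") // setName assumed from the proto's `name` field
                .build());
        responseObserver.onCompleted();
    }
}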
modelInfer

default void modelInfer(GrpcPredictV2.ModelInferRequest request, io.grpc.stub.StreamObserver<GrpcPredictV2.ModelInferResponse> responseObserver)

    The ModelInfer API performs inference using the specified model. Errors are indicated by the google.rpc.Status returned for the request. The OK code indicates success and other codes indicate failure.
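From the client side, a unary inference call might look like the following. The model name, tensor layout, and the InferInputTensor, InferTensorContents, and getOutputsCount builder and accessor methods are assumptions based on the v2 predict protocol.

package inference;

import java.util.List;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public final class InferClient {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8001) // illustrative address
                .usePlaintext()
                .build();
        // Build a request with one FP32 input tensor of shape [1, 4] (illustrative).
        GrpcPredictV2.ModelInferRequest request = GrpcPredictV2.ModelInferRequest.newBuilder()
                .setModelName("resnet50") // illustrative model name
                .addInputs(GrpcPredictV2.ModelInferRequest.InferInputTensor.newBuilder()
                        .setName("input0")
                        .setDatatype("FP32")
                        .addShape(1).addShape(4)
                        .setContents(GrpcPredictV2.InferTensorContents.newBuilder()
                                .addAllFp32Contents(List.of(0.1f, 0.2f, 0.3f, 0.4f))))
                .build();
        GrpcPredictV2.ModelInferResponse response =
                GRPCInferenceServiceGrpc.newBlockingStub(channel).modelInfer(request);
        System.out.println("output tensors: " + response.getOutputsCount()); // assumed accessor
        channel.shutdown();
    }
}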