Package inference

Interface GrpcPredictV2.ModelInferRequestOrBuilder

All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
All Known Implementing Classes:
GrpcPredictV2.ModelInferRequest, GrpcPredictV2.ModelInferRequest.Builder
Enclosing class:
GrpcPredictV2

public static interface GrpcPredictV2.ModelInferRequestOrBuilder extends com.google.protobuf.MessageOrBuilder
  • Method Details

    • getModelName

      String getModelName()
       The name of the model to use for inference.
       
      string model_name = 1;
      Returns:
      The modelName.
    • getModelNameBytes

      com.google.protobuf.ByteString getModelNameBytes()
       The name of the model to use for inference.
       
      string model_name = 1;
      Returns:
      The bytes for modelName.
    • getModelVersion

      String getModelVersion()
       The version of the model to use for inference. If not given, the
       server will choose a version based on the model and internal policy.
       
      string model_version = 2;
      Returns:
      The modelVersion.
    • getModelVersionBytes

      com.google.protobuf.ByteString getModelVersionBytes()
       The version of the model to use for inference. If not given, the
       server will choose a version based on the model and internal policy.
       
      string model_version = 2;
      Returns:
      The bytes for modelVersion.
    • getId

      String getId()
       Optional identifier for the request. If specified, it will be
       returned in the response.
       
      string id = 3;
      Returns:
      The id.
    • getIdBytes

      com.google.protobuf.ByteString getIdBytes()
       Optional identifier for the request. If specified, it will be
       returned in the response.
       
      string id = 3;
      Returns:
      The bytes for id.
    • getParametersCount

      int getParametersCount()
       Optional inference parameters.
       
      map<string, .inference.InferParameter> parameters = 4;
    • containsParameters

      boolean containsParameters(String key)
       Optional inference parameters.
       
      map<string, .inference.InferParameter> parameters = 4;
    • getParameters

      Map<String,GrpcPredictV2.InferParameter> getParameters()
      Deprecated.
      Use getParametersMap() instead.
    • getParametersMap

      Map<String,GrpcPredictV2.InferParameter> getParametersMap()
       Optional inference parameters.
       
      map<string, .inference.InferParameter> parameters = 4;
    • getParametersOrDefault

      GrpcPredictV2.InferParameter getParametersOrDefault(String key, GrpcPredictV2.InferParameter defaultValue)
       Optional inference parameters.
       
      map<string, .inference.InferParameter> parameters = 4;
    • getParametersOrThrow

      GrpcPredictV2.InferParameter getParametersOrThrow(String key)
       Optional inference parameters.
       
      map<string, .inference.InferParameter> parameters = 4;
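The map accessors above follow the standard protobuf map contract: containsParameters tests membership, getParametersOrDefault returns the supplied fallback for a missing key, and getParametersOrThrow raises IllegalArgumentException when the key is absent. A plain-Java sketch of the same contract, using String values as a stand-in for the generated GrpcPredictV2.InferParameter type (which is assumed to be on the classpath in real code):

```java
import java.util.HashMap;
import java.util.Map;

public class ParametersContractDemo {
    public static void main(String[] args) {
        // Stand-in for the request's parameters map; the real map holds
        // GrpcPredictV2.InferParameter values rather than Strings.
        Map<String, String> parameters = new HashMap<>();
        parameters.put("sequence_id", "42");

        // containsParameters(key) analogue
        System.out.println(parameters.containsKey("sequence_id")); // true

        // getParametersOrDefault(key, defaultValue) analogue
        System.out.println(parameters.getOrDefault("priority", "0")); // 0

        // getParametersOrThrow(key) analogue: a missing key is an error,
        // surfaced as IllegalArgumentException by the generated code.
        if (!parameters.containsKey("priority")) {
            System.out.println("missing key: priority");
        }
    }
}
```

The "priority" and "sequence_id" parameter names here are illustrative only; the protocol does not reserve them.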
    • getInputsList

      List<GrpcPredictV2.ModelInferRequest.InferInputTensor> getInputsList()
       The input tensors for the inference.
       
      repeated .inference.ModelInferRequest.InferInputTensor inputs = 5;
    • getInputs

      GrpcPredictV2.ModelInferRequest.InferInputTensor getInputs(int index)
       The input tensors for the inference.
       
      repeated .inference.ModelInferRequest.InferInputTensor inputs = 5;
    • getInputsCount

      int getInputsCount()
       The input tensors for the inference.
       
      repeated .inference.ModelInferRequest.InferInputTensor inputs = 5;
    • getInputsOrBuilderList

      List<? extends GrpcPredictV2.ModelInferRequest.InferInputTensorOrBuilder> getInputsOrBuilderList()
       The input tensors for the inference.
       
      repeated .inference.ModelInferRequest.InferInputTensor inputs = 5;
    • getInputsOrBuilder

      GrpcPredictV2.ModelInferRequest.InferInputTensorOrBuilder getInputsOrBuilder(int index)
       The input tensors for the inference.
       
      repeated .inference.ModelInferRequest.InferInputTensor inputs = 5;
    • getOutputsList

      List<GrpcPredictV2.ModelInferRequest.InferRequestedOutputTensor> getOutputsList()
       The requested output tensors for the inference. Optional; if not
       specified, all outputs produced by the model will be returned.
       
      repeated .inference.ModelInferRequest.InferRequestedOutputTensor outputs = 6;
    • getOutputs

      GrpcPredictV2.ModelInferRequest.InferRequestedOutputTensor getOutputs(int index)
       The requested output tensors for the inference. Optional; if not
       specified, all outputs produced by the model will be returned.
       
      repeated .inference.ModelInferRequest.InferRequestedOutputTensor outputs = 6;
    • getOutputsCount

      int getOutputsCount()
       The requested output tensors for the inference. Optional; if not
       specified, all outputs produced by the model will be returned.
       
      repeated .inference.ModelInferRequest.InferRequestedOutputTensor outputs = 6;
    • getOutputsOrBuilderList

      List<? extends GrpcPredictV2.ModelInferRequest.InferRequestedOutputTensorOrBuilder> getOutputsOrBuilderList()
       The requested output tensors for the inference. Optional; if not
       specified, all outputs produced by the model will be returned.
       
      repeated .inference.ModelInferRequest.InferRequestedOutputTensor outputs = 6;
    • getOutputsOrBuilder

      GrpcPredictV2.ModelInferRequest.InferRequestedOutputTensorOrBuilder getOutputsOrBuilder(int index)
       The requested output tensors for the inference. Optional; if not
       specified, all outputs produced by the model will be returned.
       
      repeated .inference.ModelInferRequest.InferRequestedOutputTensor outputs = 6;
    • getRawInputContentsList

      List<com.google.protobuf.ByteString> getRawInputContentsList()
       The data contained in an input tensor can be represented in "raw"
       bytes form or in the repeated type that matches the tensor's data
       type. To use the raw representation, 'raw_input_contents' must be
       initialized with data for each tensor in the same order as
       'inputs'. For each tensor, the size of this content must match
       what is expected by the tensor's shape and data type. The raw
       data must be the flattened, one-dimensional, row-major order of
       the tensor elements without any stride or padding between the
       elements. Note that the FP16 and BF16 data types must be represented as
       raw content as there is no specific data type for a 16-bit float type.
      
       If this field is specified, then InferInputTensor::contents must
       not be specified for any input tensor.
       
      repeated bytes raw_input_contents = 7;
      Returns:
      A list containing the rawInputContents.
    • getRawInputContentsCount

      int getRawInputContentsCount()
       The data contained in an input tensor can be represented in "raw"
       bytes form or in the repeated type that matches the tensor's data
       type. To use the raw representation, 'raw_input_contents' must be
       initialized with data for each tensor in the same order as
       'inputs'. For each tensor, the size of this content must match
       what is expected by the tensor's shape and data type. The raw
       data must be the flattened, one-dimensional, row-major order of
       the tensor elements without any stride or padding between the
       elements. Note that the FP16 and BF16 data types must be represented as
       raw content as there is no specific data type for a 16-bit float type.
      
       If this field is specified, then InferInputTensor::contents must
       not be specified for any input tensor.
       
      repeated bytes raw_input_contents = 7;
      Returns:
      The count of rawInputContents.
    • getRawInputContents

      com.google.protobuf.ByteString getRawInputContents(int index)
       The data contained in an input tensor can be represented in "raw"
       bytes form or in the repeated type that matches the tensor's data
       type. To use the raw representation, 'raw_input_contents' must be
       initialized with data for each tensor in the same order as
       'inputs'. For each tensor, the size of this content must match
       what is expected by the tensor's shape and data type. The raw
       data must be the flattened, one-dimensional, row-major order of
       the tensor elements without any stride or padding between the
       elements. Note that the FP16 and BF16 data types must be represented as
       raw content as there is no specific data type for a 16-bit float type.
      
       If this field is specified, then InferInputTensor::contents must
       not be specified for any input tensor.
       
      repeated bytes raw_input_contents = 7;
      Parameters:
      index - The index of the element to return.
      Returns:
      The rawInputContents at the given index.
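As the comment above describes, each entry of raw_input_contents must hold the flattened, row-major bytes of the corresponding input tensor, with the total size matching the tensor's shape and data type. A minimal sketch of packing a 2x3 FP32 tensor into such a buffer; little-endian byte order is assumed here, as that is what v2-protocol servers such as Triton typically expect:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RawContentsDemo {
    public static void main(String[] args) {
        // A 2x3 FP32 tensor, already flattened in row-major order:
        // row 0 = {1, 2, 3}, row 1 = {4, 5, 6}.
        float[] flat = {1f, 2f, 3f, 4f, 5f, 6f};

        // Size must match shape x element size: 2 * 3 * 4 bytes = 24.
        ByteBuffer buf = ByteBuffer.allocate(flat.length * Float.BYTES)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        for (float v : flat) {
            buf.putFloat(v); // elements packed contiguously, no padding
        }
        byte[] raw = buf.array();
        System.out.println(raw.length); // 24

        // In real client code this byte[] would be wrapped with
        // com.google.protobuf.ByteString.copyFrom(raw) and added as the
        // raw_input_contents entry at the same index as its input tensor.
    }
}
```

For FP16 or BF16 tensors the same packing applies, except each element occupies two bytes of half-precision data, since (as noted above) those types have no dedicated repeated field in the tensor contents.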