Class TorchServeConfiguration
java.lang.Object
org.apache.camel.component.torchserve.TorchServeConfiguration
All Implemented Interfaces:
Cloneable
Constructor Summary
TorchServeConfiguration()

Method Summary
TorchServeConfiguration copy()
String getInferenceAddress()
String getInferenceKey()
int getInferencePort()
int getListLimit()
String getListNextPageToken()
String getManagementAddress()
String getManagementKey()
int getManagementPort()
String getMetricsAddress()
String getMetricsName()
int getMetricsPort()
String getModelName()
String getModelVersion()
RegisterOptions getRegisterOptions()
ScaleWorkerOptions getScaleWorkerOptions()
UnregisterOptions getUnregisterOptions()
String getUrl()
void setInferenceAddress(String inferenceAddress) The address of the inference API endpoint.
void setInferenceKey(String inferenceKey) The token authorization key for accessing the inference API.
void setInferencePort(int inferencePort) The port of the inference API endpoint.
void setListLimit(int listLimit) The maximum number of items to return for the list operation.
void setListNextPageToken(String listNextPageToken) The token to retrieve the next set of results for the list operation.
void setManagementAddress(String managementAddress) The address of the management API endpoint.
void setManagementKey(String managementKey) The token authorization key for accessing the management API.
void setManagementPort(int managementPort) The port of the management API endpoint.
void setMetricsAddress(String metricsAddress) The address of the metrics API endpoint.
void setMetricsName(String metricsName) Names of metrics to filter.
void setMetricsPort(int metricsPort) The port of the metrics API endpoint.
void setModelName(String modelName) The name of the model.
void setModelVersion(String modelVersion) The version of the model.
void setRegisterOptions(RegisterOptions registerOptions) Additional options for the register operation.
void setScaleWorkerOptions(ScaleWorkerOptions scaleWorkerOptions) Additional options for the scale-worker operation.
void setUnregisterOptions(UnregisterOptions unregisterOptions) Additional options for the unregister operation.
void setUrl(String url) The model archive download URL; supports local files and the HTTP(S) protocols.
Constructor Details
TorchServeConfiguration
public TorchServeConfiguration()

Method Details
getInferenceKey
public String getInferenceKey()

setInferenceKey
public void setInferenceKey(String inferenceKey)
The token authorization key for accessing the inference API.
getInferenceAddress
public String getInferenceAddress()

setInferenceAddress
public void setInferenceAddress(String inferenceAddress)
The address of the inference API endpoint.
getInferencePort
public int getInferencePort()

setInferencePort
public void setInferencePort(int inferencePort)
The port of the inference API endpoint.
getManagementKey
public String getManagementKey()

setManagementKey
public void setManagementKey(String managementKey)
The token authorization key for accessing the management API.
getManagementAddress
public String getManagementAddress()

setManagementAddress
public void setManagementAddress(String managementAddress)
The address of the management API endpoint.
getManagementPort
public int getManagementPort()

setManagementPort
public void setManagementPort(int managementPort)
The port of the management API endpoint.
getMetricsAddress
public String getMetricsAddress()

setMetricsAddress
public void setMetricsAddress(String metricsAddress)
The address of the metrics API endpoint.
getMetricsPort
public int getMetricsPort()

setMetricsPort
public void setMetricsPort(int metricsPort)
The port of the metrics API endpoint.
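The endpoint setters above can be combined to point the component at a TorchServe instance. A minimal sketch, assuming a default local TorchServe install (8080, 8081, and 8082 are TorchServe's default listener ports, not values mandated by this class, and the token values are placeholders):

```java
import org.apache.camel.component.torchserve.TorchServeConfiguration;

TorchServeConfiguration config = new TorchServeConfiguration();

// TorchServe's default listener ports (assumed local default install)
config.setInferencePort(8080);
config.setManagementPort(8081);
config.setMetricsPort(8082);

// Token authorization keys, needed only if token auth is enabled on the server
config.setInferenceKey("my-inference-token");
config.setManagementKey("my-management-token");
```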
getModelName
public String getModelName()

setModelName
public void setModelName(String modelName)
The name of the model.
getModelVersion
public String getModelVersion()

setModelVersion
public void setModelVersion(String modelVersion)
The version of the model.
getUrl
public String getUrl()

setUrl
public void setUrl(String url)
The model archive download URL; supports local files and the HTTP(S) protocols. For S3, consider using a pre-signed URL.
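The model options above are typically set together when registering a model archive. A sketch using the public MNIST example archive from the TorchServe project (an illustrative URL, not something this class requires):

```java
TorchServeConfiguration config = new TorchServeConfiguration();
config.setModelName("mnist");
config.setModelVersion("1.0");
// Local file or HTTP(S) URL to the .mar archive; for S3, use a pre-signed URL
config.setUrl("https://torchserve.pytorch.org/mar_files/mnist.mar");
```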
getRegisterOptions
public RegisterOptions getRegisterOptions()

setRegisterOptions
public void setRegisterOptions(RegisterOptions registerOptions)
Additional options for the register operation.
getScaleWorkerOptions
public ScaleWorkerOptions getScaleWorkerOptions()

setScaleWorkerOptions
public void setScaleWorkerOptions(ScaleWorkerOptions scaleWorkerOptions)
Additional options for the scale-worker operation.
getUnregisterOptions
public UnregisterOptions getUnregisterOptions()

setUnregisterOptions
public void setUnregisterOptions(UnregisterOptions unregisterOptions)
Additional options for the unregister operation.
getListLimit
public int getListLimit()

setListLimit
public void setListLimit(int listLimit)
The maximum number of items to return for the list operation. When this value is present, TorchServe returns no more than the specified number of items, but it might return fewer. This value is optional; if included, it must be between 1 and 1000, inclusive. If omitted, it defaults to 100.
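The listLimit contract described above (optional, between 1 and 1000 inclusive, defaulting to 100) can be sketched as a small validation helper. `ListLimitResolver` and `resolveListLimit` are hypothetical names for illustration, not part of the Camel API:

```java
public class ListLimitResolver {
    static final int DEFAULT_LIST_LIMIT = 100;
    static final int MIN_LIST_LIMIT = 1;
    static final int MAX_LIST_LIMIT = 1000;

    /** Hypothetical helper mirroring the documented listLimit contract. */
    static int resolveListLimit(Integer requested) {
        if (requested == null) {
            return DEFAULT_LIST_LIMIT; // an omitted value defaults to 100
        }
        if (requested < MIN_LIST_LIMIT || requested > MAX_LIST_LIMIT) {
            throw new IllegalArgumentException(
                "listLimit must be between 1 and 1000, inclusive: " + requested);
        }
        return requested;
    }

    public static void main(String[] args) {
        System.out.println(resolveListLimit(null)); // 100 (default)
        System.out.println(resolveListLimit(250));  // 250 (passed through)
    }
}
```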
getListNextPageToken
public String getListNextPageToken()

setListNextPageToken
public void setListNextPageToken(String listNextPageToken)
The token to retrieve the next set of results for the list operation. TorchServe provides the token when the response from a previous call has more results than the maximum page size.
getMetricsName
public String getMetricsName()

setMetricsName
public void setMetricsName(String metricsName)
Names of metrics to filter.
copy
public TorchServeConfiguration copy()