# Inference

This API returns AI-powered tags for a given image URL.
## Create Inference Request

### Request

#### Request Type

POST

#### Request Headers

```
Content-Type: application/json
Authorization: Bearer <api_key>
```

> **Note:** The API key will be provided to you during the onboarding phase. Keep your API keys secure!

#### Endpoint

The base URL will be provided to you at the time of onboarding.

```
POST <base_url>/v1/infer
```
#### Request Parameters

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| project_deploy_key | string | yes | Identifies the taxonomy you want to use to extract tags. |
| data | JSON object | yes | Your input data used as the source for inferring tags. It must match the schema defined for the project via the tool. |
### Response

The response contains a unique packet ID, which you use to retrieve the tag information once processing is complete.

#### Response Parameters

| Param | Description |
| --- | --- |
| packet_id | Unique ID representing your request. Use this to get the tag information for the request you submitted once it is ready. |
#### Response Status Codes

Status codes indicate whether the request was successful. For the different response codes we return, refer to the table below:

| Status Code | Description |
| --- | --- |
| 200 | Successful. |
| 401 | Authorization has been denied for this request. |
| 400 | Validation failures. |
| 500 | Unhandled application errors. |
### Example

```
POST <base_url>/v1/infer
```

Headers:

```json
{
  "Content-Type": "application/json",
  "Authorization": "Bearer <api_key>"
}
```

Body (the `data` object must match the schema defined in the tool for the specific project):

```json
{
  "project_deploy_key": "<deploy_key>",
  "data": {
    "image_url": "<image_url>"
  }
}
```

Response `200 OK`:

```json
{
  "packet_id": "<packet_id>"
}
```
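As a sketch of what assembling this call looks like on the client side (the base URL, key values, and helper name below are illustrative, not part of the API), the request can be built with Python's standard library:

```python
import json
from urllib import request


def build_infer_request(base_url: str, api_key: str,
                        deploy_key: str, image_url: str) -> request.Request:
    """Build (but do not send) the POST <base_url>/v1/infer request."""
    body = {
        "project_deploy_key": deploy_key,
        # "data" must match the schema defined for the project in the tool;
        # an image-tagging project is assumed to expect an "image_url" field.
        "data": {"image_url": image_url},
    }
    return request.Request(
        url=f"{base_url}/v1/infer",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


req = build_infer_request("https://api.example.com", "<api_key>",
                          "<deploy_key>", "https://example.com/cat.jpg")
# Sending it would look like:
#   with request.urlopen(req) as resp:
#       packet_id = json.load(resp)["packet_id"]
```

Keep the returned `packet_id`; it is the only handle for retrieving results later.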
## Get Inference Data

### Request

#### Request Type

GET

#### Request Headers

```
Content-Type: application/json
Authorization: Bearer <api_key>
```

> **Note:** The API key will be provided to you during the onboarding phase. Keep your API keys secure!

#### Endpoint

The base URL will be provided to you at the time of onboarding.

```
GET <base_url>/v1/infer/<packet_id>
```
#### Request Parameters

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| packet_id | string | yes | Unique ID representing your request, returned when you submit data for inference using the POST request. Pass it as a path parameter. |
### Response

The response contains the status of your request and, once processing is complete, the inferred tags.

#### Response Parameters

| Param | Description |
| --- | --- |
| packet_id | Unique ID representing your request. |
| version | Version of the deployment (graph ID / deploy key) used to infer tags. |
| status | Status of the submitted inference job. Possible values: IN_PROGRESS, COMPLETED. |
| inferences | List of inference objects. This will be empty while the status is IN_PROGRESS. |
#### The Inference Object Parameters

| Param | Description |
| --- | --- |
| id | Unique ID representing a taxonomy bucket (attribute ID). |
| name | Name of the taxonomy bucket. |
| hierarchy | Full hierarchy of the bucket in the taxonomy. |
| result | JSON object containing the input data and the tags generated. |
#### The Result Object Parameters

| Param | Description |
| --- | --- |
| input | Your input data that was used for prediction. |
| output | List of prediction objects identified under the bucket. |
#### The Output Object Parameters

| Param | Description |
| --- | --- |
| prediction | The predicted value. |
| confidence | Confidence score of the predicted value. |
| prediction_box | Additional keys may be present depending on the type of model selected. For example, a localizer model returns a bounding box, and this field represents that value. |
#### Response Status Codes

Status codes indicate whether the request was successful. For the different response codes we return, refer to the table below:

| Status Code | Description |
| --- | --- |
| 200 | Successful. |
| 401 | Authorization has been denied for this request. |
| 400 | Validation failures. |
| 500 | Unhandled application errors. |
### Example

```
GET <base_url>/v1/infer/<packet_id>
```

Headers:

```json
{
  "Content-Type": "application/json",
  "Authorization": "Bearer <api_key>"
}
```

Response `200 OK` when inference is in progress:

```json
{
  "packet_id": "<packet_id>",
  "version": "1.0",
  "status": "IN_PROGRESS",
  "inferences": []
}
```

Response `200 OK` when inference is completed:

```json
{
  "packet_id": "<packet_id>",
  "version": "<version>",
  "status": "COMPLETED",
  "inferences": [
    {
      "id": "<bucket_id>",
      "name": "<bucket_name>",
      "hierarchy": "<bucket_hierarchy>",
      "result": {
        "input": {
          "image_url": "<image_url>"
        },
        "output": [
          {
            "confidence": "<prediction_score>",
            "prediction": "<predicted_value>",
            "prediction_box": "<cropped_image_used_for_prediction>"
          }
        ]
      }
    }
  ]
}
```
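A client will typically poll this endpoint until `status` flips to COMPLETED and then flatten the nested inference objects. A minimal parsing sketch, assuming the field names from the tables above (the helper name is illustrative):

```python
def extract_predictions(response: dict):
    """Parse a GET /v1/infer/<packet_id> response body.

    Returns None while the job is still IN_PROGRESS; otherwise a flat list
    of (bucket_name, prediction, confidence) tuples taken from each
    inference object's result.output list.
    """
    if response["status"] == "IN_PROGRESS":
        return None
    return [
        (inf["name"], out["prediction"], out["confidence"])
        for inf in response["inferences"]
        for out in inf["result"]["output"]
    ]
```

A `None` return signals the caller to wait and poll again; an empty list means the job finished but produced no tags.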
## Error Responses

We use standard HTTP responses to indicate the status of our APIs. Custom errors returned by our API follow the error JSON described below, returned with HTTP status code 400.
### Error Response Payload

| Param | Description |
| --- | --- |
| errors | List of error objects. |
### Error Object

| Param | Description |
| --- | --- |
| type | Type of error returned. |
| code | Error code that uniquely identifies the error. |
| message | Short description of the error. |
| detail | Additional details specific to the context in which the error occurred. |
| help | URL with a full explanation of the error. Returned if available. |
### Supported Error Types and Codes

The table below describes the error types and error codes for the inference APIs described in this document.

| Error Type | Error Code | Description |
| --- | --- | --- |
| auth_failure | ERR_DEPLOY_KEY | Returned when the deployment key is no longer valid or does not exist. |
| validation_failure | ERR_SCHEMA | Returned when the schema of the data does not match the schema defined in the catalog. |
| validation_failure | ERR_PACKET_ID | Returned when the packet ID does not exist or the request for the packet is no longer valid. |
| server_failure | ERR_SERV_UNAVAILABLE | Returned when there is an internal outage. |
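To surface these errors to users or logs, a client can flatten the error objects into readable messages. A minimal sketch assuming the payload shape described above (the helper name is illustrative):

```python
def summarize_errors(payload: dict) -> list:
    """Turn a 400 error payload into human-readable message strings.

    Expects the documented shape: {"errors": [{"type", "code", "message",
    "detail", "help"}, ...]} where detail and help are optional.
    """
    lines = []
    for err in payload.get("errors", []):
        line = f"[{err['type']}/{err['code']}] {err['message']}"
        if err.get("detail"):
            line += f" ({err['detail']})"
        if err.get("help"):
            line += f" - see {err['help']}"
        lines.append(line)
    return lines
```

Checking `type` (e.g. retrying on `server_failure` but not on `validation_failure`) is usually more robust than matching on message text.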