Inference

This API returns AI-powered tags for a given image URL.

Create Inference Request#

Request#

Request Type#

POST

Request Headers#

```
Content-Type: application/json
Authorization: Bearer <api_key>
```
Note:

The API key will be provided to you during the onboarding phase. Keep your API keys secure!

Endpoint#

The base URL will be provided to you at the time of onboarding.

POST <base_url>/v1/infer

Request Parameters#

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| project_deploy_key | string | yes | Identifies the taxonomy you want to use to extract tags. |
| data | JSON object | yes | Your input data, used as the source to infer tag(s). It must match the schema defined via the tool. |

Response#

The response contains a unique packet ID that you use to retrieve the tag information once processing is complete.

Response Parameters#

| Param | Description |
| --- | --- |
| packet_id | Unique ID representing your request. Use it to retrieve the tag information for the request you submitted once it is ready. |

Response Status Code#

Status codes indicate whether the response was successful. For the different response codes we return, refer to the table below:

| Status Code | Description |
| --- | --- |
| 200 | Successful. |
| 401 | Authorization has been denied for this request. |
| 400 | Validation failure. |
| 500 | Unhandled application error. |

Example#

POST <base_url>/v1/infer

Headers

```json
{
  "Content-Type": "application/json",
  "Authorization": "Bearer <api_key>"
}
```

Body (the `data` object must match the schema defined in the tool for the specific project):

```json
{
  "project_deploy_key": "<deploy_key>",
  "data": {
    "image_url": "<image_url>"
  }
}
```

Response 200 OK

```json
{
  "packet_id": "<packet_id>"
}
```
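For reference, here is the same request expressed as a short Python sketch using the requests library. The base URL, API key, deploy key, and image URL are placeholders that you would replace with the values provided at onboarding:

```python
import requests

BASE_URL = "https://<base_url>"  # placeholder, provided at onboarding
API_KEY = "<api_key>"            # placeholder, provided at onboarding

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

# The data object must match the schema defined in the tool
# for your specific project.
payload = {
    "project_deploy_key": "<deploy_key>",
    "data": {"image_url": "<image_url>"},
}

response = requests.post(f"{BASE_URL}/v1/infer", headers=headers, json=payload)
response.raise_for_status()

# A 200 response carries the packet_id used to fetch the tags later.
print(response.json()["packet_id"])
```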

Get Inference Data#

Request#

Request Type#

GET

Request Headers#

```
Content-Type: application/json
Authorization: Bearer <api_key>
```
Note:

The API key will be provided to you during the onboarding phase. Keep your API keys secure!

Endpoint#

The base URL will be provided to you at the time of onboarding.

GET <base_url>/v1/infer/<packet_id>

Request Parameters#

| Param | Type | Required | Description |
| --- | --- | --- | --- |
| packet_id | string | yes | Unique ID representing your request, returned when you submit data for inference via the POST request. Pass it as a path parameter. |

Response#

The response contains the status of the inference job and, once processing is complete, the tag information.

Response Parameters#

| Param | Description |
| --- | --- |
| packet_id | Unique ID representing your request. |
| version | Version of the deployment used to infer tags. |
| status | Indicates the status of the submitted infer job. Possible values: IN_PROGRESS, COMPLETED. |
| inferences | List of inference objects. This will be empty while the status is IN_PROGRESS. |

The Inference Object Parameters#

| Param | Description |
| --- | --- |
| id | Unique ID representing a taxonomy bucket. |
| name | Name of the taxonomy bucket. |
| hierarchy | Represents the full hierarchy of the bucket in the taxonomy. |
| result | JSON object representing the input data and the tags generated. |

The Result Object Parameters#

| Param | Description |
| --- | --- |
| input | Your input data that was used for prediction. |
| output | List of prediction objects identified under the bucket. |

The Output Object Parameters#

| Param | Description |
| --- | --- |
| prediction | The predicted value. |
| confidence | Confidence score of the predicted value. |
| prediction_box | Additional keys may be present depending on the type of model selected. For example, a localizer model returns a bounding box, and this field represents that value. |
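To make the nesting above concrete, here is a minimal Python sketch that walks a completed response body and prints each bucket's predictions. The function name and print format are illustrative, not part of the API:

```python
def print_predictions(body: dict) -> None:
    """Walk the parsed JSON body of a completed GET /v1/infer/<packet_id> response."""
    for inference in body["inferences"]:
        # Each inference object corresponds to one taxonomy bucket.
        print(inference["id"], inference["name"], inference["hierarchy"])
        # result holds the input data and the generated tags.
        for prediction in inference["result"]["output"]:
            print("  prediction:", prediction["prediction"])
            print("  confidence:", prediction["confidence"])
            # Extra keys depend on the model type; e.g. a localizer
            # model returns a bounding box under prediction_box.
            if "prediction_box" in prediction:
                print("  prediction_box:", prediction["prediction_box"])
```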

Response Status Code#

Status codes indicate whether the response was successful. For the different response codes we return, refer to the table below:

| Status Code | Description |
| --- | --- |
| 200 | Successful. |
| 401 | Authorization has been denied for this request. |
| 400 | Validation failure. |
| 500 | Unhandled application error. |

Example#

GET <base_url>/v1/infer/<packet_id>

Headers

```json
{
  "Content-Type": "application/json",
  "Authorization": "Bearer <api_key>"
}
```

Response 200 OK, when inference is in progress (`version` is the graph ID / deploy key):

```json
{
  "packet_id": "<packet_id>",
  "version": "1.0",
  "status": "IN_PROGRESS",
  "inferences": []
}
```

Response 200 OK, when inference is completed (`id` is the attribute ID; `prediction_box` is the cropped image used for prediction):

```json
{
  "packet_id": "<packet_id>",
  "version": "<version>",
  "status": "COMPLETED",
  "inferences": [
    {
      "id": "<bucket_id>",
      "name": "<bucket_name>",
      "hierarchy": "<bucket_hierarchy>",
      "result": {
        "input": {
          "image_url": "<image_url>"
        },
        "output": [
          {
            "confidence": "<prediction_score>",
            "prediction": "<predicted_value>",
            "prediction_box": "<cropped_image_used_for_prediction>"
          }
        ]
      }
    }
  ]
}
```
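Because inference runs asynchronously, a client typically polls this endpoint until the status changes from IN_PROGRESS to COMPLETED. Below is a minimal polling sketch in Python; the 5-second interval is an arbitrary choice, not a documented recommendation:

```python
import time

import requests

BASE_URL = "https://<base_url>"  # placeholder, provided at onboarding
API_KEY = "<api_key>"            # placeholder, provided at onboarding


def wait_for_tags(packet_id: str, poll_seconds: float = 5.0) -> dict:
    """Poll GET /v1/infer/<packet_id> until the job leaves IN_PROGRESS."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    while True:
        response = requests.get(
            f"{BASE_URL}/v1/infer/{packet_id}", headers=headers
        )
        response.raise_for_status()
        body = response.json()
        if body["status"] != "IN_PROGRESS":
            return body
        time.sleep(poll_seconds)


result = wait_for_tags("<packet_id>")
print(result["status"], result["inferences"])
```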

Error Responses#

We use standard HTTP response codes to indicate the status of our APIs. Custom errors returned by the API follow the error JSON structure below, which is returned with the HTTP status code 400.

Error Response Payload#

| Param | Description |
| --- | --- |
| errors | List of error objects. |

Error Object#

| Param | Description |
| --- | --- |
| type | Type of error returned. |
| code | Error code that uniquely identifies the error. |
| message | Short description of the error. |
| detail | Any additional details specific to the context in which the error occurred. |
| help | URL that explains the error in full detail. Returned if available. |

Supported Error Types and Codes#

The table below describes the error types and error codes for the inference APIs described in this document.

| Error Type | Error Code | Description |
| --- | --- | --- |
| auth_failure | ERR_DEPLOY_KEY | Returned when the deployment key is no longer valid or does not exist. |
| validation_failure | ERR_SCHEMA | Returned when the schema of the data does not match the schema defined in the catalog. |
| validation_failure | ERR_PACKET_ID | Returned when the packet ID does not exist or the request for the packet is no longer valid. |
| server_failure | ERR_SERV_UNAVAILABLE | Returned when there is an internal outage. |
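Putting the fields above together, a 400 validation failure might look like the following. This payload is illustrative, constructed from the documented fields, not a verbatim server response:

```json
{
  "errors": [
    {
      "type": "validation_failure",
      "code": "ERR_SCHEMA",
      "message": "Schema validation failed.",
      "detail": "The data object does not match the schema defined in the catalog.",
      "help": "<help_url>"
    }
  ]
}
```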