Inference

Description:

KeaML provides a simple way to invoke your deployed machine learning models, so you can integrate their predictions into your applications and workflows.

Steps for Model Inference:

  1. Locate the Model:

    • Find the deployed model you want to invoke.

  2. Copy the Inference URL:

    • Click "Inference URL" to copy the model's API endpoint to your clipboard. You will use this URL to invoke the model.
  3. Invoke the Model:

    • Make a request to the copied API endpoint.

    • Include your API Key in the header with the key as X-API-KEY.

    • Ensure the body of the request is in application/json format, containing the input data structured as follows:

      {
        "input": "<expected format of the input>"
      }

Example:

Here is a generic example of how you might structure your HTTP request:

curl -X POST https://api.keaml.com/models/2XVhCGiQksENRbkAIjqikqbgL83/predict \
-H 'Content-Type: application/json' \
-H 'X-API-KEY: b8970666-6458-4f47-b28f-c891366b520e' \
-d '{
  "input": [[0.0, 0.0, 0.0, 0.0], [1.2, 3.222, 1.239, 9.836]]
}'
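
The same request can be made programmatically. Below is a minimal Python sketch using only the standard library; the endpoint URL and API key are placeholders you would replace with the values copied from your own deployment:

```python
import json
import urllib.request

def build_inference_request(url, api_key, input_data):
    """Build a POST request with the headers and JSON body KeaML expects."""
    body = json.dumps({"input": input_data}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-API-KEY": api_key,
        },
        method="POST",
    )

# Placeholders: substitute your own Inference URL and API key.
req = build_inference_request(
    "https://api.keaml.com/models/<your-model-id>/predict",
    "<your-api-key>",
    [[0.0, 0.0, 0.0, 0.0], [1.2, 3.222, 1.239, 9.836]],
)

# Uncomment to send the request and read the model's response:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```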

Use this process to invoke your KeaML-deployed models and get real-time inferences in your projects and applications.