APIs

You’ve built a great model. In fact, your model is so good that you want to let other people access its predictions. Often, you will want to make your predictions available to another program (e.g. a dashboard or a web application). The de facto standard for programs to communicate with each other is the REST API.

For instance, let’s pretend you’ve built a model that determines whether someone is a cat or a dog person based on their address. Your organisation wants to automatically classify customers who sign up to a mailing list so that they can be sent animal-appropriate advertising material. By wrapping your model in an API, you expose a fixed URI that can be queried by a software engineer to determine whether a user is a cat or a dog person. Typically, as soon as a user signs up, the application supporting your website will query your model by sending an HTTP request with a JSON payload like:

{
  "firstName": "Enrico",
  "lastName": "Fermi",
  "address": "22 Clark Street",
  "city": "Chicago",
  "country": "United States"
}

Your API would read this payload and query your model to find out whether Enrico Fermi is more likely to be a cat or a dog person. It might then return:

{
  "probabilities": { "cat": 0.3, "dog": 0.7 },
  "mostLikely": "dog"
}

The application supporting your website can use this to send Fermi an email with pictures of chihuahuas.

Wrapping your model in an API allows other programs in your organisation to use it.
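For example, the application supporting your website could call the API from Python using the requests library. This is only a sketch: the URL and the /predict endpoint name are placeholders you would replace with your own, but the request and response formats match the example above.

import requests

# Placeholder URL for the deployed model API - substitute your own.
API_URL = "https://example.api.sherlockml.io/predict"

new_signup = {
    "firstName": "Enrico",
    "lastName": "Fermi",
    "address": "22 Clark Street",
    "city": "Chicago",
    "country": "United States",
}

response = requests.post(API_URL, json=new_signup)
response.raise_for_status()
prediction = response.json()

# Use the prediction to pick the animal-appropriate advertising material.
if prediction["mostLikely"] == "dog":
    print("Send pictures of chihuahuas.")
else:
    print("Send pictures of cats.")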

APIs on SherlockML

Creating an API from the ground up involves a lot of boilerplate. You need to:

  • set up an authentication and authorisation system to let the right people access your API.
  • set up DNS records so that the Internet knows how to find your API.
  • set up TLS so that your API can be accessed over https.

SherlockML aims to remove this boilerplate so you can concentrate on the work that adds value. It generates API keys for you and hides your API behind a program that filters out unauthenticated requests. You can then distribute those keys to the people who need to access your API. It also gives you a stable URI that clients of your API can target to get their predictions. The target URI is only accessible over https, so communications to your API are always encrypted.

Develop an API

How you develop an API on SherlockML depends on the language your model is written in. For models written in Python, SherlockML provides a method that takes a WSGI object, such as a Flask application object, and runs it. For models written in R, you can specify your API in an R file decorated with plumber annotations that define the endpoints of your API. For anything else (a model written in another language, or one using a different web server framework), SherlockML provides a method that simply requires you to create a bash script that runs your API.
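As an illustration, a minimal Flask version of the cats vs. dogs API might look like the sketch below. The model-loading and prediction logic are placeholders, and the pages referenced below describe how to hand the WSGI object over to SherlockML; the sketch only shows the general shape of such an API.

from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_cat_or_dog(payload):
    # Placeholder for your real model, e.g. model.predict_proba(featurise(payload)).
    return {"cat": 0.3, "dog": 0.7}

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    probabilities = predict_cat_or_dog(payload)
    most_likely = max(probabilities, key=probabilities.get)
    return jsonify({"probabilities": probabilities, "mostLikely": most_likely})

if __name__ == "__main__":
    # When run directly, serve on port 8000, the port SherlockML APIs use.
    app.run(host="0.0.0.0", port=8000)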

For documentation on how to create APIs with these methods, see the following pages:

Test your API

In order to query your API, you need to create a server to run it. During testing, it’s best to use a development server, as it gives you a terminal on the server.

Go to the “Development” tab and create a new server. This server will automatically start serving your API.

../../_images/api-dev-server.png

Once it’s up and running, you can start testing the API. If you need to change the code for your API, simply make those changes to the source files and click Restart to re-run the API with the new code.

Test your API from the terminal

SherlockML APIs are served on port 8000, so while you are developing, you can test your API endpoints from the terminal on your development server, using e.g. cURL or httpie.

For the Cats vs. Dogs example API, whether written in Python, R, or node.js, you could query it as follows:

curl \
  -H "Content-Type: application/json" \
  --data '{"firstName": "Enrico", "lastName": "Fermi"}' \
  localhost:8000/predict

This should return the correct prediction ({"mostLikely": "dog"}).
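If you prefer, you can run the same check from a Python session on the server. This assumes the /predict endpoint used in the examples above:

import requests

# Query the API running locally on the development server.
response = requests.post(
    "http://localhost:8000/predict",
    json={"firstName": "Enrico", "lastName": "Fermi"},
)
print(response.json())  # expect {"mostLikely": "dog"}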

Test your new API from Postman

If you want to test a more complex API, you may want to use a graphical REST client such as Postman, or httpie from the command line.

There are two pieces of information you need to access the API from outside the server running it:

  • the API URL is the unique address of your server. It will look like https://cube-64e0d7d8-1a3d-4fec-82da-fa7afea9138e.api.sherlockml.io.
  • the API key for your server. You will need to provide the key in every request to the API.

You can get the URL and the API key for your development server from the Development tab:

../../_images/api-dev-server.png

Paste these into Postman (or another REST client). The API key needs to be passed as the value of the SherlockML-UserAPI-Key header.

../../_images/postman-example.png

Send a POST request to your API with body:

{
  "firstName": "Enrico",
  "lastName": "Fermi",
  "address": "22 Clark Street",
  "city": "Chicago",
  "country": "United States"
}
../../_images/postman-request-body-example.png

Your API will reply with the correct payload:

{
  "mostLikely": "dog"
}

Congratulations! You now have a working API that returns model predictions for input data. If you want to share it with the world, you only have one step left to do: deploy it.

Deploy your new API

Running the API on a deployment server, rather than a development server, will ensure that it always has the same URL, even when you restart the server. To deploy the API, simply head to the Deployment tab and click “Deploy”. You will see the logs for your API underneath the server.

As you’ll notice, you don’t have command line access to the API server. This is to avoid accidental termination or editing of the API.

In order to give a colleague access to your API, you will need to create an access key for them. You can do so in the “Configure” screen by clicking Generate API Key.

../../_images/api-keys.png

Users with a key can then query your API from anywhere by passing the key as a header in their requests:

curl \
  --header "SherlockML-UserAPI-Key: YUANqoJCEShXxHfI05kQsmMtESltviRdwoAu668svANtsBGxuU" \
  https://my-test.api.sherlockml.io
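A colleague could make the same kind of request from Python with the requests library. This is a sketch: the URL and key are the placeholder values used elsewhere on this page, and the /predict endpoint is assumed to match the earlier examples.

import requests

API_URL = "https://my-test.api.sherlockml.io/predict"  # your deployed API's URL
API_KEY = "YUANqoJCEShXxHfI05kQsmMtESltviRdwoAu668svANtsBGxuU"  # a key you generated

response = requests.post(
    API_URL,
    headers={"SherlockML-UserAPI-Key": API_KEY},  # required on every request
    json={"firstName": "Enrico", "lastName": "Fermi"},
)
print(response.json())  # e.g. {"mostLikely": "dog"}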

By generating a unique key for each user, you can closely manage who has access to your API.

Lifting the hood

SherlockML aims to remove a lot of the boilerplate around API creation. If you use APIs a lot, you will want to know exactly what that boilerplate is.

When you create an API, SherlockML creates a reverse proxy server. Any request to your API first goes through the reverse proxy. The proxy looks for the SherlockML-UserAPI-Key header and validates its value against an internal API. Assuming the key is valid, it forwards the request to port 8000 on the server running the API. The proxy also handles TLS termination.
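To make the mechanism concrete, the toy WSGI middleware below mimics what such a proxy does conceptually: check the key header, reject the request if the key is unknown, and otherwise pass the request through to the application. This is purely an illustration, not SherlockML's actual implementation, which validates keys against an internal service and runs as a separate reverse proxy in front of your server.

class APIKeyMiddleware:
    # Toy illustration of an authenticating proxy layer - not SherlockML's code.

    def __init__(self, app, valid_keys):
        self.app = app                # the downstream WSGI application
        self.valid_keys = valid_keys  # set of accepted API keys

    def __call__(self, environ, start_response):
        # WSGI exposes the SherlockML-UserAPI-Key header under this name.
        key = environ.get("HTTP_SHERLOCKML_USERAPI_KEY")
        if key not in self.valid_keys:
            start_response("401 Unauthorized", [("Content-Type", "application/json")])
            return [b'{"error": "invalid or missing API key"}']
        # Key is valid: hand the request on to the real API.
        return self.app(environ, start_response)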