.. _apis-main:

APIs
====

.. contents:: Contents
    :local:

You've built a great model. In fact, your model is so good that you want to
let other people access its predictions. Often, you will want to make your
predictions available to another program (e.g. a dashboard or a web
application). The de-facto standard for programs to communicate with each
other is through `REST APIs `_.

For instance, let's pretend you've built a model that determines whether
someone is a cat or a dog person depending on their address. Your
organisation wants to automatically classify customers who sign up to a
mailing list so that they can be sent animal-appropriate advertising
material. By wrapping your model in an API, you expose a fixed URI that a
software engineer can query to determine whether the user is a cat or a dog
person.

Typically, as soon as a user signs up, the application supporting your
website will query your model by sending an HTTP request with a JSON payload
like:

.. code-block:: json

    {
        "firstName": "Enrico",
        "lastName": "Fermi",
        "address": "22 Clark Street",
        "city": "Chicago",
        "country": "United States"
    }

Your API would read this payload and query your model to find out whether
Enrico Fermi is more likely to be a cat or a dog person. It might then
return:

.. code-block:: json

    {
        "probabilities": {
            "cat": 0.3,
            "dog": 0.7
        },
        "mostLikely": "dog"
    }

The application supporting your website can use this to send Fermi an email
with pictures of chihuahuas.

Wrapping your model in an API allows other programs in your organisation to
use it.

APIs on Faculty
---------------

Creating an API from the ground up involves a lot of boilerplate. You need
to:

- set up an authentication and authorisation system to let the right people
  access your API.
- set up DNS records so that the Internet knows how to find your API.
- set up TLS so that your API can be accessed over `https`.

Faculty aims to remove this boilerplate so you can concentrate on the work
that adds value. It generates API keys for you and hides your API behind a
program that filters out unauthenticated requests. You can then distribute
those keys to the people who need to access your API. It also gives you a
stable URI that clients of your API can target to get their predictions. The
target URI is only accessible over `https`, so communications with your API
are always encrypted.

Develop an API
--------------

How you develop an API on Faculty depends on what language you wrote your
model in.

For models written in Python, Faculty provides a method that takes in a WSGI
Python object, such as a Flask server object, and runs this object. For
models written in R, you can specify your API in an R file decorated with
`plumber` commands that define the endpoints of your API. For anything
else - anything written in another language, or using a different web server
framework - Faculty provides a method that simply requires you to create a
bash script that runs your API.

For documentation on how to create APIs with these methods, see the
following pages:

.. toctree::
    :titlesonly:

    flask_apis/index
    plumber_apis/index
    custom_apis/index

.. _develop-api:

Test your API
-------------

In order to query your API, you need to create a server to run it. During
testing, it's best to use a development server for this, as it gives you a
terminal on your server.

Go to the "Development" tab and create a new server. This server will
automatically start serving your API.

.. thumbnail:: images/api-dev-server.png

Once it's up and running, we can start to test the API.
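The examples that follow all query a ``/predict`` endpoint. As a point of
reference, a minimal Flask version of the Cats vs. Dogs API might look
something like the sketch below; the ``predict_probabilities`` helper is a
hypothetical stand-in for your own model code, and the Flask APIs page
linked above covers how such an app is actually run on Faculty.

.. code-block:: python

    from flask import Flask, jsonify, request

    app = Flask(__name__)


    def predict_probabilities(payload):
        """Hypothetical placeholder: featurise the payload and call your model."""
        return {"cat": 0.3, "dog": 0.7}


    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json()
        probabilities = predict_probabilities(payload)
        most_likely = max(probabilities, key=probabilities.get)
        return jsonify({"probabilities": probabilities, "mostLikely": most_likely})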
If you find that you need to change the code for your API, simply make those
changes to the source code files and click **Restart** to re-run the API
with the new code.

Test your API from the terminal
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Faculty APIs are served on port 8000, so while you are developing, you can
test your API endpoints using the terminal in the deployment window, with
e.g. `cURL `_ or `httpie `_ from the command line. For our `Cats vs. Dogs`
example APIs written in Python, R, or node.js, you could query your API as
follows:

.. code-block:: bash

    curl \
        -H "Content-Type: application/json" \
        --data '{"firstName": "Enrico", "lastName": "Fermi"}' \
        localhost:8000/predict

This should return the correct prediction (``{"mostLikely": "dog"}``).

Test your new API from Postman
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to test a more complex API, you may want to use a graphical REST
client such as `Postman `_, or `httpie `_ from the command line.

There are two important pieces of information that you need to access the
API from outside of the server running it:

- the `API URL`: the unique address of your server. It will look like
  ``https://cube-64e0d7d8-1a3d-4fec-82da-fa7afea9138e.api.example.my.faculty.ai``.
- the `API key` for your server. You will need to provide the key in every
  request to the API.

You can get the URL and the API key for your development server from the
`Development` tab:

.. thumbnail:: images/api-dev-server.png

Paste these into Postman (or another REST client). The API key needs to be
passed as the value of the ``UserAPI-Key`` header.

.. note::
    The ``Faculty-UserAPI-Key`` header is also accepted but has been
    deprecated. Please use the ``UserAPI-Key`` header in requests to your
    APIs instead.

.. thumbnail:: images/postman-example.png

Send a POST request to your API with the body:

.. code-block:: json

    {
        "firstName": "Enrico",
        "lastName": "Fermi",
        "address": "22 Clark Street",
        "city": "Chicago",
        "country": "United States"
    }

.. thumbnail:: images/postman-request-body-example.png

Your API will reply with the correct payload:

.. code-block:: json

    {
        "mostLikely": "dog"
    }

Congratulations! You now have a working API that returns model predictions
for input data. If you want to share it with the world, you only have one
step left: deploy it.

Deploy your new API
-------------------

Running the API on a deployment server, rather than a development server,
ensures that it always has the same URL, even when you restart the server.

To deploy the API, simply head to the ``Deployment`` tab and click "Deploy".
You will see the logs for your API underneath the server. As you'll notice,
you don't have command line access to the API server. This is to avoid
accidental termination or editing of the API.

In order to give a colleague access to your API, you will need to create an
access key for them. You can do so in the "Configure" screen by clicking
``Generate API Key``.

.. thumbnail:: images/api-keys.png

Users with a key can then query your API from anywhere by passing the key as
a header in their requests:

.. code-block:: bash

    curl \
        --header "UserAPI-Key: YUANqoJCEShXxHfI05kQsmMtESltviRdwoAu668svANtsBGxuU" \
        https://my-test.api.example.my.faculty.ai

By generating unique keys for each user, you can manage permissions and
access closely.
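Keys work the same way from any HTTP client, not just cURL. For example, a
colleague could query the deployed API from Python with the ``requests``
library; the URL and key below are placeholders for your own deployment's
values.

.. code-block:: python

    import requests

    # Placeholders: use your deployment's URL and a key you generated for the user.
    API_URL = "https://my-test.api.example.my.faculty.ai"
    API_KEY = "YUANqoJCEShXxHfI05kQsmMtESltviRdwoAu668svANtsBGxuU"

    response = requests.post(
        f"{API_URL}/predict",
        headers={"UserAPI-Key": API_KEY},
        json={
            "firstName": "Enrico",
            "lastName": "Fermi",
            "address": "22 Clark Street",
            "city": "Chicago",
            "country": "United States",
        },
    )
    response.raise_for_status()
    print(response.json())  # e.g. {"mostLikely": "dog"}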
Lifting the hood
----------------

Faculty aims to remove a lot of the boilerplate around API creation. If you
use APIs a lot, you will want to know exactly what that boilerplate is.

When you create an API, Faculty creates a reverse proxy server. Any request
to your API first goes through the reverse proxy. The proxy looks for the
``UserAPI-Key`` header and validates its value against an internal API.
Assuming the key is valid, it forwards the request to port 8000 on the
server running the API. The proxy also performs TLS termination.
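As a purely conceptual sketch (this is not Faculty's actual implementation;
the key set and forwarding details are invented for illustration), the
proxy's behaviour amounts to something like:

.. code-block:: python

    import urllib.request

    # Illustrative only: in reality, keys are validated against an internal API.
    VALID_KEYS = {"YUANqoJCEShXxHfI05kQsmMtESltviRdwoAu668svANtsBGxuU"}


    def handle(path, headers, body):
        """Reject unauthenticated requests; forward the rest to the API on port 8000."""
        if headers.get("UserAPI-Key") not in VALID_KEYS:
            return 401, b"Unauthorized"
        forwarded = urllib.request.Request(
            f"http://localhost:8000{path}", data=body, headers=headers, method="POST"
        )
        with urllib.request.urlopen(forwarded) as response:
            return response.status, response.read()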