How to access the OpenAI API service through endpoints

The OpenAI playground is an interactive environment where users can work with different artificial intelligence models, experiment with natural language processing tasks, generate completions against prompts, and learn about the capabilities of AI. The OpenAI playground uses the OpenAI API in the backend to call OpenAI models. These models can also be accessed outside of the playground using endpoints. Endpoints serve as a gateway for interacting with the OpenAI service in different programmatic environments. By sending HTTP requests to these endpoints, users can access the AI models’ capabilities to perform their desired tasks.

In this Answer, we’ll familiarize ourselves with the functionality of the OpenAI API endpoints.

OpenAI endpoints

Selecting the right endpoint for your API request is very important, as each endpoint serves a different purpose and returns a different kind of response from the OpenAI server. Here are some of the endpoints we can use in our API requests:

Models

To list and retrieve the different models provided by the OpenAI service, we use the models endpoint. We can use this endpoint to list the available models, retrieve information about a specific model, or delete a fine-tuned model.

  • List models: The list models endpoint provides a list of all the models available through the OpenAI API service. For each model, the response includes metadata such as its owner, object type, and ID. To use this endpoint, we send the following request:

GET https://api.openai.com/v1/models
  • Retrieve model: To retrieve the metadata for a specific model, we can use the retrieve model endpoint. The response includes the metadata of the model specified in the API request. Here’s how to implement this:

GET https://api.openai.com/v1/models/{model}

Instead of {model}, we specify the name of the model we want metadata for, for example, gpt-3.5-turbo-instruct.
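As a minimal sketch, the two requests above can be built with Python’s standard library. The helper name build_models_request is hypothetical, and we assume the API key is exported in an OPENAI_API_KEY environment variable:

```python
import os
import urllib.request

BASE_URL = "https://api.openai.com/v1"

def build_models_request(model=None):
    """Build a GET request: list models when model is None, else retrieve one.

    Hypothetical helper; assumes the key is in the OPENAI_API_KEY env variable.
    """
    url = f"{BASE_URL}/models" if model is None else f"{BASE_URL}/models/{model}"
    headers = {"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}"}
    return urllib.request.Request(url, headers=headers, method="GET")

req = build_models_request("gpt-3.5-turbo-instruct")
print(req.full_url)  # https://api.openai.com/v1/models/gpt-3.5-turbo-instruct

# Sending the request requires a valid key:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

The request object carries the URL, method, and authorization header; calling `urlopen` on it performs the actual HTTP request.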

Files

Files can be used to upload documents that can then be referenced in various API requests. The most common uses of files are with features such as Assistants and fine-tuning a model. With this endpoint, we can upload files, retrieve the content of the available files, and delete them when necessary.

  • List files: This endpoint lists all the files associated with the requesting OpenAI account. The response lists all the file objects along with their metadata. To use this endpoint, we send the following request:

GET https://api.openai.com/v1/files
  • Retrieve file: To retrieve information about a specific file, we can use the retrieve file endpoint. We must specify the file_id of the file we want to get information on. We can use the following request to access this functionality:

GET https://api.openai.com/v1/files/{file_id}

In the above request, instead of {file_id}, we specify the ID of the file object we want information on, for example, file-a123.
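The file endpoints follow the same pattern as the model endpoints. A small sketch, again assuming the key lives in an OPENAI_API_KEY environment variable and using a hypothetical helper name:

```python
import os
import urllib.request

BASE_URL = "https://api.openai.com/v1"

def build_files_request(file_id=None):
    """Build a GET request: list files when file_id is None, else retrieve one.

    Hypothetical helper; assumes the key is in the OPENAI_API_KEY env variable.
    """
    url = f"{BASE_URL}/files" if file_id is None else f"{BASE_URL}/files/{file_id}"
    headers = {"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}"}
    return urllib.request.Request(url, headers=headers, method="GET")

print(build_files_request("file-a123").full_url)
# https://api.openai.com/v1/files/file-a123
```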

Chat

The chat endpoints are used to create a conversation between the AI chat model and the user. To steer the conversation, we can specify the roles and prompts for the chat model to work with.

Chat completion: To create a chat completion, we specify the role of the chatbot and customize its behavior with the prompt. The model interprets the user’s request and generates a response to it. To create such a request, we can use the following command:

POST https://api.openai.com/v1/chat/completions

It is important to specify the model and messages in the body of the API request. An example request body looks like the following:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello!"
    }
  ]
}
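Since this is a POST request, the JSON body above is sent as the request payload. A minimal sketch with the standard library, assuming the API key is in an OPENAI_API_KEY environment variable:

```python
import json
import os
import urllib.request

# The payload mirrors the request body shown above.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Sending the request requires a valid key:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

The response JSON contains a choices list; the assistant’s reply text sits in the message of the first choice.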

Embeddings

Embeddings are central to many machine learning workflows because they convert the given input into high-dimensional vectors. Embedded data can be easily consumed by machine learning algorithms. We can create an embedding vector using the following command:

POST https://api.openai.com/v1/embeddings

It is also mandatory to specify the model and the input to be embedded in the body of the API request. We can also specify the encoding_format to define the format of the returned embeddings. The body of the request looks like this:

{
  "input": "The food was delicious and the waiter...",
  "model": "text-embedding-ada-002",
  "encoding_format": "float"
}

The OpenAI server sends a list of embedding objects as a response to the request.
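This request can be sketched the same way as the chat completion, again assuming the key is in an OPENAI_API_KEY environment variable:

```python
import json
import os
import urllib.request

# The payload mirrors the embeddings request body shown above.
payload = {
    "input": "The food was delicious and the waiter...",
    "model": "text-embedding-ada-002",
    "encoding_format": "float",
}

req = urllib.request.Request(
    "https://api.openai.com/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Sending requires a valid key; the response holds a list of embedding objects:
# with urllib.request.urlopen(req) as resp:
#     vector = json.loads(resp.read())["data"][0]["embedding"]
```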

Setting up the OpenAI request

To send a proper OpenAI request, the user needs to set the parameters and choose an appropriate model to generate accurate results. Here are the steps in setting up the OpenAI API request:

  1. Define the URL: We can send both GET and POST requests to the OpenAI API service. Start with the base URL, https://api.openai.com/, append the path of the desired endpoint, and select the type of request.

  2. Set the parameters: We can set parameters to customize the response from OpenAI. The parameters can limit the number of tokens generated, control how creative the responses are, and much more.

  3. Set the header: The header of the API request is where we specify the authorization key and the content type of the request body.

  4. Set the body: The body holds the request details, including the prompt and input, in JSON format. An output variable can also be left as an empty field.

You can add the OpenAI key in Headers as Authorization using the Bearer OPENAI_API_KEY format.
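The four steps above can be combined into one request-building helper. This is a sketch under the same assumptions as before: the helper name build_request is hypothetical, and the key is read from an OPENAI_API_KEY environment variable:

```python
import json
import os
import urllib.request

def build_request(path, body=None):
    """Hypothetical helper combining the four setup steps."""
    url = "https://api.openai.com/v1/" + path                  # 1. define the URL
    headers = {                                                 # 3. set the header
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    data = json.dumps(body).encode("utf-8") if body else None   # 4. set the body
    method = "POST" if body else "GET"
    return urllib.request.Request(url, data=data, headers=headers, method=method)

# Parameters (step 2) such as max_tokens go inside the body:
req = build_request("chat/completions",
                    {"model": "gpt-3.5-turbo", "max_tokens": 50,
                     "messages": [{"role": "user", "content": "Hi"}]})
print(req.get_method())  # POST
```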

Here is the execution of each of the endpoints and how we can efficiently set the header and parameters for customizing an API request to the OpenAI server:


Working example of the OpenAI API endpoints

Copyright ©2024 Educative, Inc. All rights reserved