Learn how to generate or manipulate text using the completions endpoint of the OpenAI API.
The completions endpoint
The completions endpoint can be used to perform many tasks on text, including classification, generation, transformation, completion of incomplete text, factual responses, and more. The input to this endpoint is a text message called a prompt. The output is also text, and it depends on how we design the prompt.
The completions endpoint is called with the POST method at the following URL:
https://api.openai.com/v1/completions
Request parameters
Let’s see some essential request parameters for this endpoint in the table below:
| Fields | Format | Type | Description |
|---|---|---|---|
| `model` | String | Required | The ID of the engine that will be used to perform the task. |
| `prompt` | String/Array of strings | Optional | The text for which the completions will be generated. |
| `max_tokens` | Integer | Optional | The maximum number of tokens to generate in the completion. Default value: 16 |
| `temperature` | Float | Optional | The token sampling temperature used during evaluation. A higher number indicates that the model will take greater risks: use 0.9 for a more creative response and 0 for one with a well-defined answer. Values between 0 and 1 inclusive can be used. Default value: 1 |
| `top_p` | Float | Optional | Nucleus sampling is an alternative to temperature sampling in which the model evaluates the tokens comprising the top p probability mass. So 0.1 indicates that only the top 10% probability mass tokens will be evaluated. Default value: 1 |
| `suffix` | String | Optional | Text that comes at the end of the generated text and helps the model produce a better completion. |
| `n` | Integer | Optional | The number of completions to generate. Default value: 1 |
| `logprobs` | Integer | Optional | The number of most likely tokens to return the log probabilities for. Default value: null |
| `best_of` | Integer | Optional | Generates this many completions server-side and returns the best one. Default value: 1 |
| `presence_penalty` | Float | Optional | A value between -2.0 and 2.0. A positive value penalizes new tokens if they already exist in the text, increasing the likelihood of generating new topics. |
| `frequency_penalty` | Float | Optional | A value between -2.0 and 2.0. A positive value penalizes new tokens according to their existing frequency in the text, decreasing the likelihood of repeating the same words. |
Note: You can learn more about the `engine` argument in this lesson.
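As a sketch of how these parameters combine, a request body might look like the following. All values here are illustrative, not recommendations:

```javascript
// Illustrative request body for the completions endpoint.
const body = {
  model: "text-davinci-002",          // model/engine ID (required)
  prompt: "Say hello to the reader.", // text to complete
  max_tokens: 32,                     // cap on generated tokens (default 16)
  temperature: 0.7,                   // higher = more creative output
  top_p: 1,                           // nucleus sampling probability mass
  n: 2,                               // number of completions to return
  presence_penalty: 0,                // penalize tokens already present
  frequency_penalty: 0                // penalize frequently used tokens
};

// The endpoint expects this object serialized as JSON:
const payload = JSON.stringify(body);
```

Only `model` is required; the remaining fields fall back to the defaults listed in the table when omitted.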
Let’s use the completions endpoint to write something about “artificial intelligence.” In the code widget below, we use the `text-davinci-002` model for the completions task with a `temperature` value of 0.9. Make sure you’ve added your `SECRET_KEY`, then press the “Run” button to see the response to our prompt.
```javascript
// Define endpoint URL here
const endpointUrl = "https://api.openai.com/v1/completions";

// Define Header Parameters here
const headerParameters = {
  "Authorization": "Bearer {{SECRET_KEY}}",
  "Content-Type": "application/json"
};

// Body Parameters
const bodyParameters = JSON.stringify({
  model: "text-davinci-002",
  prompt: "Write a tagline about artificial intelligence.",
  temperature: 0.9
});

// Setting API call options
const options = {
  method: "POST",
  headers: headerParameters,
  body: bodyParameters
};

// Function to make API call
async function createCompletion() {
  try {
    const response = await fetch(`${endpointUrl}`, options);
    // Printing response
    printResponse(response);
  } catch (error) {
    // Printing error message
    printError(error);
  }
}

// Calling function to make API call
createCompletion();
```
We have imported the `node-fetch` library to make an API call in the above code. Let’s see some code details:

- Line 2: We define the endpoint URL.
- Lines 5–8: We define the header, which includes the authorization token and content type.
- Lines 11–15: We define the request parameters required to make the API call.
- Lines 18–22: We set the options by specifying the `headers`, the `body` parameters, and the request method as `POST`.
- Lines 25–34: We create a function `createCompletion` to make an API call using `fetch` and handle any exception if it occurs. The `printResponse` and `printError` are custom functions to print the respective objects.
- Line 37: We invoke the `createCompletion` function.
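The `printResponse` and `printError` helpers are not shown in the widget. A minimal sketch of what they might look like, assuming `printResponse` receives a fetch-style `Response` object, could be:

```javascript
// Hypothetical helper: parse the Response body as JSON and pretty-print it.
// Returns the parsed data so callers can reuse it.
async function printResponse(response) {
  const data = await response.json();          // parse the JSON body
  console.log(JSON.stringify(data, null, 2));  // pretty-print with indentation
  return data;
}

// Hypothetical helper: print the error message from a failed request.
function printError(error) {
  console.error("Request failed:", error.message);
}
```

These are only assumptions for illustration; the actual helpers in the course environment may format the output differently.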
Response fields
The response is a JSON object. Some essential attributes are given below:
| Fields | Format | Description |
|---|---|---|
| `choices` | Array of objects | An array of objects, each containing valuable information about one completion. The size of the array will be equal to the `n` parameter that we provided in the request parameters. |
| `created` | Integer | The Unix timestamp of when the completion was created. |
| `model` | String | It contains the name of the model used to generate the completion. |
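To show where the generated text lives in this structure, here is a trimmed example of a response object (fields abbreviated, values illustrative) and how to pull the completion out of it:

```javascript
// A trimmed, illustrative completions response.
const response = {
  choices: [
    { text: "\n\nThe future is now.", index: 0, finish_reason: "stop" }
  ],
  created: 1677652288,
  model: "text-davinci-002"
};

// The generated text lives in choices[i].text; with n > 1 there
// would be one array element per completion.
const completionText = response.choices[0].text.trim();
console.log(completionText); // "The future is now."
```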
Prompt design
The completions endpoint is a simple text-in and text-out model. We can instruct it on what to do by providing examples. A well-written prompt will result in good output.
Basics of prompt design
OpenAI can perform many simple and complex tasks in text analysis. To get the most out of it, we have to be very specific about our prompt. We must take particular care about the following points:
- Provide one or more examples.
- Proofread the prompt for spelling and grammatical mistakes because that can affect the output.
- Check the settings of parameters such as `temperature` and `top_p`.
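As an illustration of the last point, the same prompt can be sent with different sampling settings. The two request bodies below are sketches with illustrative values, differing only in `temperature`:

```javascript
// Two request bodies that differ only in their sampling settings.
const creative = JSON.stringify({
  model: "text-davinci-002",
  prompt: "Write a tagline about artificial intelligence.",
  temperature: 0.9   // more varied, riskier output
});

const deterministic = JSON.stringify({
  model: "text-davinci-002",
  prompt: "Write a tagline about artificial intelligence.",
  temperature: 0     // sticks to the most likely tokens
});
```

With `temperature: 0.9` the taglines will vary noticeably between runs, while `temperature: 0` tends to return the same completion each time.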
The following prompt exhibits the generation use case by generating some text corresponding to the provided prompt:
Write a tagline about artificial intelligence.
We may get the following output:
The future is now.
Note: The output can be different for each run.
Try out different prompts and examine the varying responses. For example:
"Suggest one name for a cat."
"Suggest two names for a cat."
"Suggest names for a cat."
In the first example, the API will return only one name. In the second example, it will return two names. And for the third example, it will return multiple names.
We’ll see that the model is great at understanding the context of the sentences, and the prompt plays an important role in this regard. It understands what and how many names are required.
If the results of the API are not as satisfactory as you might expect, follow the checklist below:
- Ensure clarity about the intended generation of the text.
- Give multiple examples. (We’ll see this prompt design in the next lessons.)
- Check for mistakes in the provided examples.
- Use the `temperature` and `top_p` settings correctly.