OpenAI API "InvalidRequestError: Unrecognized argument: messages"

When using OpenAI's API, you may encounter an error message that reads: "InvalidRequestError: Unrecognized request argument supplied: messages". It typically appears when openai.Completion.create is called with the gpt-3.5-turbo model. The root cause is that gpt-3.5-turbo is a chat model, and chat models require a different function to generate completions.


Understanding the error

The InvalidRequestError is raised when your request to the OpenAI API is malformed or lacks required parameters. In this case, the messages argument is unrecognized because it is not a valid argument for the function being called.
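To see why the argument is unrecognized, it helps to compare the request bodies the two endpoints expect. The sketch below uses plain dictionaries and makes no actual API call; the model names are illustrative examples:

```python
# Sketch: the request bodies the two OpenAI endpoints expect.
# Completion endpoint (e.g. text-davinci-003): takes a `prompt` string.
completion_request = {
    "model": "text-davinci-003",
    "prompt": "Say hello",
}

# ChatCompletion endpoint (e.g. gpt-3.5-turbo): takes a `messages` list.
chat_request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello"}],
}

# Sending `messages` to the completion endpoint is what triggers
# "Unrecognized request argument supplied: messages".
print("messages" in completion_request)  # → False
print("messages" in chat_request)        # → True
```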

The solution

To bypass this error, use the openai.ChatCompletion.create function instead of openai.Completion.create. The openai.ChatCompletion.create function is explicitly designed to accommodate chat models such as gpt-3.5-turbo.

This small change to your code avoids the error:

import openai

class OpenAIAgent:
    def __init__(self, key, model="gpt-3.5-turbo"):
        openai.api_key = key
        self.model = model

    def generate_responses(self, prompts):
        return [self._get_chat_completion(prompt) for prompt in prompts]

    def _get_chat_completion(self, prompt):
        response = openai.ChatCompletion.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            max_tokens=20,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0
        )
        return response['choices'][0]['message']['content']

# Usage:
agent = OpenAIAgent('your-api-key')
prompts = ['What are your functionalities?', 'What is the best name for an ice-cream shop?', 'Who won the premier league last year?']
responses = agent.generate_responses(prompts)

Note: This code will only run once you enter your own OpenAI API key.

Explanation

  • Line 1: import openai - Import the OpenAI library.

  • Line 3: Define a class OpenAIAgent.

  • Line 4: Constructor of the class that sets the API key and the model.

  • Line 5: Set the OpenAI API key.

  • Line 6: Set the model for generating text.

  • Line 8–9: The method that returns generated responses for each prompt in the input list.

  • Line 11–21: A private method that interacts with OpenAI's API to get a single response.

  • Line 12–20: API call to OpenAI to generate a response.

  • Line 21: Extract and return the content of the generated message.

  • Line 24: Create an instance of OpenAIAgent. Replace 'your-api-key' with your secure OpenAI API key.

  • Line 25: Declare a list of prompts.

  • Line 26: Generate responses for a list of prompts.

In this code snippet, openai.Completion.create has been replaced with openai.ChatCompletion.create. Additionally, the loop variable item from the original code has been renamed prompt so that the messages argument correctly refers to the current prompt in the loop.
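The messages argument is not limited to a single user turn: chat models accept a list of role-tagged messages, typically with "system", "user", and "assistant" roles. A brief sketch of how a multi-turn conversation could be built up (the assistant reply here is a placeholder, and no API call is made):

```python
# Sketch: building a multi-turn `messages` list for a chat model.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What are your functionalities?"},
]

# After each API call, append the assistant's reply and the next user
# turn so the model sees the full conversation context.
assistant_reply = "I can answer questions and generate text."  # placeholder
messages.append({"role": "assistant", "content": assistant_reply})
messages.append({"role": "user", "content": "Name an ice-cream shop."})

print([m["role"] for m in messages])  # → ['system', 'user', 'assistant', 'user']
```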

Working through an example

Assume you have a collection of prompts:

prompts = ['What are your functionalities?', 'What is the best name for an ice-cream shop?', 'Who won the premier league last year?']

To generate responses for these prompts from the gpt-3.5-turbo model, call the generate_responses method on the OpenAIAgent instance defined above:

responses = agent.generate_responses(prompts)

This will return a list of responses from the model, one for each prompt in your list.

Remember that it's important when interacting with OpenAI's API to utilize the correct function and arguments tailored for the model in use. The openai.ChatCompletion.create function is constructed for chat models, whereas the openai.Completion.create function is intended for completion models. If the wrong function or arguments are used, you'll run into an InvalidRequestError.
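One way to guard against this class of error is to dispatch on the model name before making the call. The helper below is purely illustrative: the is_chat_model name and its prefix list are assumptions for this sketch, not part of the OpenAI library, and the prefix list would need updating as new models are released.

```python
# Hypothetical helper: decide which endpoint a model name belongs to.
# (Assumed prefix list -- not part of the OpenAI library.)
CHAT_MODEL_PREFIXES = ("gpt-3.5-turbo", "gpt-4")

def is_chat_model(model: str) -> bool:
    """Return True if `model` should use openai.ChatCompletion.create."""
    return model.startswith(CHAT_MODEL_PREFIXES)

print(is_chat_model("gpt-3.5-turbo"))     # → True
print(is_chat_model("text-davinci-003"))  # → False
```

A caller could then route to openai.ChatCompletion.create or openai.Completion.create based on the result, rather than discovering the mismatch at request time.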

Copyright ©2024 Educative, Inc. All rights reserved