Managing the Response from the OpenAI API

Understand various ways to manage the response from the API, including specifying the response format and ensuring deterministic outputs.

When integrating the OpenAI API, particularly the Chat Completions API, into your applications, understanding how to manage and interpret responses is important. This section covers handling responses, focusing on features like JSON mode and reproducible outputs, as well as managing tokens to optimize your usage and costs.

JSON mode

Here are a few scenarios where someone might want the OpenAI API to return a JSON object:

  1. Integration with web applications: When integrating the API into a web application, receiving data in JSON format allows for easy parsing and manipulation of the data on the frontend or backend.

  2. Data analysis: For users interested in analyzing the API's output, receiving it in JSON format makes it easier to structure the data for analysis tools or software.

  3. Chatbots: When developing chatbots, receiving responses in JSON format allows developers to easily extract and use specific parts of the response, such as extracting only the answer to a user's question.

  4. Automated reporting: In scenarios where the API's output is used to generate reports or summaries, receiving the data in JSON format facilitates the automated extraction and presentation of relevant information.

  5. Machine-to-machine communication: JSON is a widely accepted format in APIs for machine-to-machine communication, allowing for clear and structured data exchange between different systems or components of a system.

Enabling JSON mode

We could just tell the model in our prompt that we want the response to be JSON. However, there is a better and more consistent method.

To guarantee the output is a valid JSON object, set response_format to {"type": "json_object"} when using a model that supports it, such as gpt-4-turbo-preview or gpt-3.5-turbo-0125. With this setting, the model constrains every response to a string that parses into a valid JSON object.

When JSON mode is enabled, the response is a string that can be parsed into a JSON object. Note, however, that the output is only guaranteed to be valid JSON; it is not guaranteed to match any particular schema. Two precautions apply:

  1. Always instruct the model in your prompt to produce JSON output; without that instruction, you might end up with an unending stream of whitespace.

  2. Check the finish_reason before parsing the JSON to ensure the message isn’t cut off due to token limits.
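The two precautions above can be sketched as follows, assuming the official openai Python package (v1+) and an OPENAI_API_KEY environment variable; safe_parse is a hypothetical helper name used here for illustration:

```python
import json
import os

def safe_parse(content: str, finish_reason: str) -> dict:
    """Parse a JSON-mode reply, refusing responses cut off by the token limit."""
    if finish_reason != "stop":
        # "length" means the reply hit the token limit and may be truncated,
        # invalid JSON, so refuse to parse it.
        raise ValueError(f"Response incomplete (finish_reason={finish_reason!r})")
    return json.loads(content)

# Guard the live call so the sketch also runs without credentials.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        response_format={"type": "json_object"},
        messages=[
            # The prompt must explicitly ask for JSON (precaution 1).
            {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
            {"role": "user", "content": "List three primary colors under the key 'colors'."},
        ],
    )
    choice = response.choices[0]
    # Check finish_reason before parsing (precaution 2).
    print(safe_parse(choice.message.content, choice.finish_reason))
```

The helper rejects any reply whose finish_reason is not "stop", which catches responses truncated by the token limit before json.loads can fail on malformed output.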

