Exercise: Content Moderation Using OpenAI

Test the concepts we’ve learned with a coding exercise.


Problem statement

In this exercise, implement a content moderation system using the OpenAI Moderation API. The objective is to determine whether a given text input contains content that might warrant moderation. The OpenAI Moderation API provides the method openai.Moderation.create, which takes a text input and returns moderation results, including details about any flagged categories.
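For reference, a minimal sketch of what that call might look like with the pre-1.0 openai Python library is shown below. The API key placeholder and the exact response fields are assumptions based on that legacy SDK and may differ in newer library versions.

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumed: key supplied via config or environment

    # Send a text input to the Moderation endpoint (legacy pre-1.0 SDK call).
    response = openai.Moderation.create(input="Sample text to check")

    # Each result reports an overall flag plus per-category details.
    result = response["results"][0]
    print(result["flagged"])     # True if any moderation category was triggered
    print(result["categories"])  # per-category flags, e.g. "hate", "violence"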

Coding challenge

Write a Python function called is_content_safe(text_input) that uses the OpenAI Moderation API to check if the provided text input is considered safe or if it contains potentially objectionable content. The function should return a boolean value (True if safe, False if not).
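One possible solution sketch is shown below. It assumes the same legacy openai.Moderation.create call introduced above and that the API key has already been configured; the function simply inverts the flagged field to produce the boolean result.

    import openai

    def is_content_safe(text_input):
        # Ask the Moderation endpoint to evaluate the text (legacy SDK call).
        response = openai.Moderation.create(input=text_input)

        # "flagged" is True when any moderation category is triggered,
        # so the text is considered safe only when nothing was flagged.
        flagged = response["results"][0]["flagged"]
        return not flagged

    # Example usage (hypothetical input; actual output depends on the API response):
    print(is_content_safe("Have a wonderful day!"))  # expected: True

Try writing your own version first, then compare it against this sketch.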
