OpenAI’s research paper Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets led the company to launch a first-of-its-kind fine-tuning endpoint, which lets us get more out of GPT-3 than was previously possible by customizing the model for a particular use case. Customizing GPT-3 in this way improves its performance on any natural language task the model can perform for that use case.

Working with customized GPT-3

Let’s walk through how customizing GPT-3 works.

Pre-trained GPT-3

OpenAI pre-trained GPT-3 on a large, specially prepared text corpus in a self-supervised fashion. When given a prompt containing just a few examples, the model can often intuit what task we are trying to perform and generate a plausible completion. This is called few-shot learning.
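
Here is a minimal sketch of few-shot prompting, assuming the legacy openai Python package (pre-1.0). The sentiment-classification task, the example reviews, and the model name are illustrative assumptions, not part of the original text:

```python
import openai  # legacy openai-python (<1.0) interface; the API has since evolved

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# A few-shot prompt: two labeled examples, then the input we want classified.
# Task and examples are hypothetical, chosen only to illustrate the pattern.
prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The battery lasts all day and the screen is gorgeous.\n"
    "Sentiment: Positive\n\n"
    "Review: It stopped working after a week.\n"
    "Sentiment: Negative\n\n"
    "Review: Setup was quick and the support team was helpful.\n"
    "Sentiment:"
)

response = openai.Completion.create(
    model="davinci",   # a base GPT-3 model; no task-specific training involved
    prompt=prompt,
    max_tokens=3,      # a few tokens is enough for the label
    temperature=0,     # deterministic output suits classification
)

print(response["choices"][0]["text"].strip())  # expected: "Positive"
```

Note that no training happens here: the base model infers the task purely from the two labeled examples supplied in the prompt itself.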

Fine-tuning GPT-3

By fine-tuning GPT-3 on their own data, users can create a custom version of the model tailored to their specific project. Fine-tuning makes GPT-3 more reliable and efficient for that use case: instead of supplying examples in every prompt, we train the model once so that it performs the desired task consistently. The training data can be an existing dataset of any size, or it can be added incrementally based on user feedback.
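
A typical fine-tuning workflow, sketched below with the same legacy openai package, has three steps: prepare prompt–completion pairs as JSONL, upload the file, and start a fine-tune job. The file name, example records, and base model choice are assumptions for illustration:

```python
import json
import openai  # legacy openai-python (<1.0) interface

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# 1. Prepare training data as JSONL: one {"prompt", "completion"} object per line.
#    Two made-up records shown here; in practice you'd want hundreds of examples.
examples = [
    {"prompt": "Review: The battery lasts all day.\nSentiment:", "completion": " Positive"},
    {"prompt": "Review: It stopped working after a week.\nSentiment:", "completion": " Negative"},
]
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 2. Upload the dataset for fine-tuning.
upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")

# 3. Start a fine-tune job from a base GPT-3 model.
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"])  # poll this job; when it finishes, it reports the custom model's name

# 4. Once the job completes, query the custom model like any other, e.g.:
# openai.Completion.create(model="davinci:ft-your-org-...", prompt=..., max_tokens=3)
```

The payoff of step 4 is that the custom model no longer needs few-shot examples in every prompt; the behavior learned from the training data is baked in.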
