Navigating Risks: Misinformation, Misuse, and Quality Impact

From automation to misinformation

The potential misuse of GPT-3 introduces an entirely new category of risk. Possible use cases range from relatively trivial applications, such as automating term papers, clickbait articles, and social media interactions, all the way to the deliberate promotion of misinformation and extremism through those same channels.

The authors of the OpenAI paper that presented GPT-3 to the world in July 2020, "Language Models Are Few-Shot Learners" (Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al., Advances in Neural Information Processing Systems 33 (2020): 1877-1901), included a section called "Misuse of Language Models":

“Any socially harmful activity that relies on generating text could be augmented by powerful language models. Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing, and social engineering pretexting. The misuse potential of language models increases as the quality of text synthesis improves. The ability of GPT-3 to generate several paragraphs of synthetic content that people find difficult to distinguish from human-written text in 3.9.4 represents a concerning milestone in this regard.”
