Introduction: Detecting Customer Emotions to Make Predictions
Get an overview of what we will cover in this chapter.
Sentiment analysis relies on the principle of compositionality. How can we understand a whole sentence if we cannot understand its parts? Can NLP transformer models handle this tough task? We will try several transformer models in this chapter to find out.
Chapter overview
We will start with the Stanford Sentiment Treebank (SST). The SST provides datasets with complex sentences to analyze. It is easy to analyze sentences like “The movie was great.” However, what happens if the task becomes very tough with complex sentences like “Although the movie was a bit too long, I really enjoyed it”? This sentence is split into two segments with contrasting sentiments, forcing a transformer model to understand both the structure of the sequence and its logical form.
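To make this concrete, here is a minimal sketch of how SST sentences can be loaded with the Hugging Face datasets library. The dataset identifier and field names assume one public SST-2 copy on the Hub and may differ from the files used later in the chapter.

```python
# Minimal sketch (assumption: the Hugging Face `datasets` library is installed
# and the public "stanfordnlp/sst2" binary SST copy is available on the Hub).
from datasets import load_dataset

# Load the training split of the binary Stanford Sentiment Treebank.
sst = load_dataset("stanfordnlp/sst2", split="train")

# Each record pairs a sentence fragment with a 0 (negative) / 1 (positive) label.
sample = sst[0]
print(sample["sentence"], "->", sample["label"])
```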
We will then test several transformer models with complex sentences and simple sentences. We will find that no matter which model we try, it will not work if it isn't trained enough. Transformer models are like us: students who need to work hard to learn and reach real-life human baselines.
Running DistilBERT, RoBERTa-large, BERT-base, MiniLM-L12-H384-uncased, and BERT-base multilingual models is fun! However, we will discover that some of these students require more training, just like we would.
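As a preview of the kind of experiment we will run, the hedged sketch below sends the same contradictory sentence to sentiment pipelines built from two Hugging Face checkpoints. The model identifiers are illustrative and may not match the exact checkpoints used later in the chapter.

```python
# Sketch of comparing off-the-shelf sentiment models on one tricky sentence
# (assumption: the Hugging Face `transformers` library and these Hub checkpoints).
from transformers import pipeline

sentence = "Although the movie was a bit too long, I really enjoyed it."

for model_name in [
    "distilbert-base-uncased-finetuned-sst-2-english",   # DistilBERT fine-tuned on SST-2
    "nlptown/bert-base-multilingual-uncased-sentiment",  # multilingual BERT, 1-5 star labels
]:
    classifier = pipeline("sentiment-analysis", model=model_name)
    print(model_name, classifier(sentence))
```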
Along the way, we will see how to use the output of the sentiment tasks to improve customer relationships and explore a nice five-star interface you could implement on your website.
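As one possible flavor of that interface, the following hypothetical helper maps a positivity score to a star rating. The function name, input range, and thresholds are our own assumptions for illustration, not the chapter's implementation.

```python
# Hypothetical helper: turn a 0.0-1.0 positivity score into a five-star display.
def stars(score: float) -> str:
    """Map a 0.0-1.0 positivity score to a one-to-five star string."""
    rating = max(1, min(5, round(score * 5)))
    return "★" * rating + "☆" * (5 - rating)

print(stars(0.92))  # -> ★★★★★ for a strongly positive review
```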
Finally, we will use GPT-3’s online interface for sentiment analysis with an OpenAI account. No AI development or API is required!
This chapter covers the following topics:

- Sentiment analysis with the Stanford Sentiment Treebank (SST)
- Compositionality and complex sentences in sentiment analysis
- Running DistilBERT, RoBERTa-large, BERT-base, MiniLM-L12-H384-uncased, and BERT-base multilingual models on sentiment tasks
- Using the output of sentiment tasks to improve customer relationships, including a five-star interface
- Sentiment analysis with GPT-3's online interface