Sarcasm Classification Using BERT
Google released the Bidirectional Encoder Representations from Transformers (BERT) model in 2018. It consists of stacked encoder layers from the transformer architecture introduced in 2017, and it is pretrained on two tasks: masked language modeling and next-sentence prediction. BERT has seen massive success at modeling the linguistic and semantic features of text, and as a result it has been applied effectively to question answering, multi-genre text classification, and sentence completion tasks.
In this project, we'll fine-tune the BERT model to detect sarcastic tweets. To do this, we'll use pandas and NumPy to manipulate the dataset, seaborn and Matplotlib to create visualizations, and Keras and TensorFlow to implement the deep learning model. Finally, we'll use scikit-learn to evaluate the model and generate its classification report.
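To make the setup concrete, here is a minimal sketch of the whole pipeline. The source doesn't name a specific BERT implementation, so this sketch assumes the Hugging Face transformers library on top of TensorFlow; the file name sarcasm_tweets.csv, its tweet and sarcastic columns, and the training hyperparameters are hypothetical placeholders, not the project's actual values.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from transformers import BertTokenizer, TFBertForSequenceClassification

# Load the labeled tweets (hypothetical file and column names).
df = pd.read_csv("sarcasm_tweets.csv")  # columns: "tweet", "sarcastic" (0/1)

# Quick look at the class balance before training.
sns.countplot(x="sarcastic", data=df)
plt.title("Class distribution")
plt.show()

# Hold out a test set for the final classification report.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    df["tweet"].tolist(), df["sarcastic"].tolist(),
    test_size=0.2, random_state=42,
)

# Tokenize with the pretrained BERT tokenizer, padding/truncating
# all tweets to a fixed length.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
train_enc = tokenizer(train_texts, truncation=True, padding=True,
                      max_length=64, return_tensors="tf")
test_enc = tokenizer(test_texts, truncation=True, padding=True,
                     max_length=64, return_tensors="tf")

# Fine-tune BERT with a binary classification head on top.
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dict(train_enc), np.array(train_labels), epochs=2, batch_size=32)

# Evaluate on the held-out tweets with scikit-learn.
pred_logits = model.predict(dict(test_enc)).logits
preds = np.argmax(pred_logits, axis=1)
print(classification_report(test_labels, preds))
```

The small learning rate (2e-5) is typical when fine-tuning pretrained transformers: the pretrained weights already encode useful language features, and large updates would overwrite them.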