Let's learn how to fine-tune the BERT model to perform text summarization. First, we will understand how to fine-tune BERT for extractive summarization, and then we will see how to fine-tune BERT for abstractive summarization.

Extractive summarization using BERT

To fine-tune the pre-trained BERT model for the extractive summarization task, we slightly modify its input data format. Before looking into the modified format, let's first recall how input data is fed to the BERT model.

Say we have two sentences: 'Paris is a beautiful city. I love Paris'. First, we tokenize the sentences, adding a [CLS] token only at the beginning of the first sentence and a [SEP] token at the end of every sentence. Before feeding the tokens to BERT, we convert them into embeddings using three embedding layers: token embedding, segment embedding, and position embedding. We sum all of these embeddings element-wise and feed the result as input to BERT. The input data format of BERT is shown in the following figure:

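To make this input format concrete, here is a minimal sketch using the Hugging Face transformers tokenizer (an assumption; the lesson does not prescribe a particular toolkit). It shows the [CLS] and [SEP] tokens and the segment (token type) IDs produced for the two sentences above; the position embeddings are added later inside the model itself.

```python
from transformers import BertTokenizer

# Assumption: using the Hugging Face transformers library to inspect
# BERT's input format for the sentence pair from the example above.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

sentence_a = 'Paris is a beautiful city.'
sentence_b = 'I love Paris'

# encode_plus adds [CLS] at the start and [SEP] after each sentence,
# and builds the segment IDs: 0 for the first sentence, 1 for the second.
encoding = tokenizer.encode_plus(sentence_a, sentence_b)

print(tokenizer.convert_ids_to_tokens(encoding['input_ids']))
# ['[CLS]', 'paris', 'is', 'a', 'beautiful', 'city', '.', '[SEP]',
#  'i', 'love', 'paris', '[SEP]']

print(encoding['token_type_ids'])
# [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
```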