BERTSUM for Extractive Summarization
Explore how to apply BERTSUM for extractive summarization by classifying important sentences with a simple classifier, an inter-sentence transformer, or an LSTM. Understand how to fine-tune the pre-trained BERT model jointly with the summarization layer for effective text summarization.
In extractive summarization, we create a summary by selecting only the important sentences from the given text. To perform extractive summarization, we obtain the representation of every sentence in the given text using a pre-trained BERT model.
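As a concrete sketch of how the input is prepared for this step: in BERTSUM, a [CLS] token is inserted before every sentence (with [SEP] after it), so that the model produces one [CLS] representation per sentence. The helper function below is my own illustration of this formatting, not code from the text:

```python
def build_bertsum_input(sentences):
    """Prepend [CLS] and append [SEP] to every sentence so that the
    model yields one [CLS] representation per sentence."""
    return " ".join(f"[CLS] {s} [SEP]" for s in sentences)

sentences = [
    "Paris is the capital of France.",
    "It is known for the Eiffel Tower.",
    "The city hosts millions of tourists.",
]
print(build_bertsum_input(sentences))
```

The representation produced at each [CLS] position then serves as the representation of the sentence that follows it.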
Now let's see how to use BERTSUM in the following three ways:
BERTSUM with a simple classifier
BERTSUM with an inter-sentence transformer
BERTSUM with LSTM
BERTSUM with a simple classifier
We feed the representation of a sentence to a simple binary classifier, and the classifier tells us whether the sentence is important or not. That is, the classifier returns the probability of the sentence being included in the summary. The classification layer is often called the summarization layer. This is shown in the following figure:
From the preceding figure, we can observe that we feed all the sentences from the given text to the pre-trained BERT model. The pre-trained BERT model returns the representation of each sentence, $R_1, R_2, \ldots, R_N$.

For each sentence $i$, we feed its representation $R_i$ to the summarization layer and obtain the probability $\hat{Y}_i$ of including the sentence in the summary:

$$\hat{Y}_i = \sigma(W R_i + b)$$

From the preceding equation, we can observe that we are using a simple sigmoid classifier, with weights $W$ and bias $b$, to obtain the probability $\hat{Y}_i$ of including sentence $i$ in the summary.
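The summarization layer can be sketched in a few lines of NumPy. Here, random vectors stand in for the BERT sentence representations $R_i$ (in practice they come from the pre-trained model, and $W$ and $b$ are learned jointly with BERT during fine-tuning); the shapes and names are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Stand-ins for the BERT sentence representations R_1..R_N
# (in practice, these come from the pre-trained BERT model).
hidden_size, num_sentences = 768, 5
R = rng.standard_normal((num_sentences, hidden_size))

# Summarization layer: weight vector W and bias b, trained
# jointly with BERT using a binary classification loss.
W = rng.standard_normal(hidden_size) * 0.01
b = 0.0

# Probability of including each sentence in the summary.
probs = sigmoid(R @ W + b)
print(probs)  # one probability per sentence, each in (0, 1)

# At inference time, select the highest-scoring sentences.
top_k = np.argsort(probs)[::-1][:3]
print(top_k)
```

The sentences with the highest probabilities form the extractive summary.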