[1908.07026] Topic Augmented Generator for Abstractive Summarization
We believe this is a fruitful direction for abstractive summarization, where rephrasing and introducing new concepts not observed in the input text are essential.
Abstract: Steady progress has been made in abstractive summarization with
attention-based sequence-to-sequence learning models. In this paper, we propose
a new decoder in which the output summary is generated by conditioning on both
the input text and the latent topics of the document. The latent topics,
identified by a topic model such as LDA, reveal more global semantic
information that can be used to bias the decoder when generating words. In
particular, they give the decoder access to additional word co-occurrence
statistics captured at the corpus level. We empirically validate the advantage
of the proposed approach on both the CNN/Daily Mail and WikiHow datasets,
attaining substantially improved ROUGE scores compared to state-of-the-art
models.
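As a rough illustration of the idea, below is a minimal PyTorch sketch of a single decoding step whose vocabulary distribution is biased by the document's LDA topic mixture. The module and parameter names (TopicAugmentedDecoderStep, topic_to_vocab, the scalar gate) are hypothetical, and the paper's exact gating and attention details may differ.

```python
import torch
import torch.nn as nn

class TopicAugmentedDecoderStep(nn.Module):
    """One decoding step whose output word distribution is biased by the
    document's latent topic mixture (a sketch, not the paper's exact model)."""

    def __init__(self, hidden_size: int, num_topics: int, vocab_size: int):
        super().__init__()
        self.hidden_to_vocab = nn.Linear(hidden_size, vocab_size)
        # Projects the LDA topic mixture theta into vocabulary space,
        # injecting corpus-level word co-occurrence statistics.
        self.topic_to_vocab = nn.Linear(num_topics, vocab_size)
        # Scalar gate deciding how strongly topics bias this step.
        self.gate = nn.Linear(hidden_size + num_topics, 1)

    def forward(self, decoder_hidden: torch.Tensor, theta: torch.Tensor):
        # decoder_hidden: (batch, hidden_size) state from the seq2seq decoder
        # theta:          (batch, num_topics) LDA topic distribution
        g = torch.sigmoid(self.gate(torch.cat([decoder_hidden, theta], dim=-1)))
        logits = self.hidden_to_vocab(decoder_hidden) + g * self.topic_to_vocab(theta)
        return torch.log_softmax(logits, dim=-1)
```

The gate lets the model rely on topic statistics for content words while ignoring them for function words, one plausible way to realize the "bias the decoder" behavior described in the abstract.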
Figure 1: Top (CNN/DM), bottom (WikiHow). Boxplots of pairwise KL divergences between the topic distributions θ* of the original documents and those of the ground-truth (GT) and generated summaries; lower values imply better semantic coherence. (Quantitative results)
Figure 2: From CNN/DM. Our TAG+Cov model generates 3 main sentences, versus 2 for PG+Cov. (Qualitative results)
Figure 3: From WikiHow ("How to Make Vegetable Curry in Harvest Moon Animal Parade"). (Qualitative results)
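The semantic-coherence evaluation in Figure 1 compares topic distributions via KL divergence. A minimal sketch of that comparison, assuming the source document and its summary are both passed through the same trained LDA model to obtain θ vectors; the function name and toy arrays are illustrative, not taken from the paper's code:

```python
import numpy as np
from scipy.special import rel_entr

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q) between two topic distributions, with additive smoothing
    so topics absent from one distribution do not produce infinities."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(rel_entr(p, q).sum())

# Toy example: in practice theta_doc and theta_sum would come from
# running the trained LDA model on a document and its generated summary.
theta_doc = np.array([0.6, 0.3, 0.1])
theta_sum = np.array([0.5, 0.4, 0.1])
print(kl_divergence(theta_doc, theta_sum))  # lower = closer topic profiles
```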