What is it about?

Abstractive text summarization models are prone to generating summaries that are factually inconsistent with the source text. This paper introduces an auxiliary sequence tagging task so that abstractive summarization models can be trained in a multi-task setting. For this tagging task, we designed annotated data at both the entity and sentence levels, based on an analysis of the XSum dataset, with the aim of improving the factual consistency of generated summaries. Experimental results show that the optimized BART model performs favorably on both the ROUGE and FactCC metrics.
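The paper does not publish code here, but the multi-task idea it describes can be sketched as a standard summarization loss combined with a weighted token-tagging loss over the encoder. Below is a minimal, hypothetical sketch using Hugging Face Transformers' BART; the tag label scheme, the loss weight, and the class name are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn
from transformers import BartForConditionalGeneration

class BartWithTaggingHead(nn.Module):
    """Hypothetical sketch: BART summarizer trained jointly with a tagging head."""

    def __init__(self, model_name="facebook/bart-base", num_tags=2, tag_weight=0.5):
        super().__init__()
        self.bart = BartForConditionalGeneration.from_pretrained(model_name)
        # Linear classifier over the encoder's final hidden states; tags each
        # source token (e.g. consistent vs. hallucination-prone -- an assumed scheme).
        self.tag_head = nn.Linear(self.bart.config.d_model, num_tags)
        self.tag_weight = tag_weight  # auxiliary-loss weight (assumed value)

    def forward(self, input_ids, attention_mask, labels, tag_labels):
        # Standard seq2seq pass; `labels` are the reference summary token ids.
        out = self.bart(input_ids=input_ids,
                        attention_mask=attention_mask,
                        labels=labels)
        # Classify every encoder position: (batch, src_len, num_tags).
        tag_logits = self.tag_head(out.encoder_last_hidden_state)
        tag_loss = nn.functional.cross_entropy(
            tag_logits.view(-1, tag_logits.size(-1)),
            tag_labels.view(-1),
            ignore_index=-100,  # skip padding / unannotated positions
        )
        # Multi-task objective: summarization loss plus weighted tagging loss.
        return out.loss + self.tag_weight * tag_loss
```

In such a setup, the `tag_labels` would come from the entity- and sentence-level annotations the paper derives from its analysis of XSum, and the shared encoder is what lets the tagging signal influence the generated summaries.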

Read the Original

This page is a summary of: Optimization of the Abstract Text Summarization Model Based on Multi-Task Learning, October 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3650400.3650469.
