SummTriver: A new trivergent model to evaluate summaries automatically without human references

Citation data:

Data & Knowledge Engineering, ISSN: 0169-023X, Vol: 113, Page: 184-197

Publication Year:
2018
DOI:
10.1016/j.datak.2017.09.001
Author(s):
Luis Adrián Cabrera-Diego; Juan-Manuel Torres-Moreno
Publisher(s):
Elsevier BV
Tags:
Decision Sciences
Article description:
The automatic evaluation of summaries is a hard task that remains an open problem. The assessment aims to measure simultaneously the informativeness and readability of summaries. The scientific community has tackled this problem with partial solutions; in terms of informativeness, the standard approach is ROUGE. However, to use this method it is necessary to have multiple summaries written by humans (the references). Methods without human references have been implemented, but they are still far from being highly correlated with manual evaluations. In this paper we present SummTriver, an automatic evaluation method that aims to correlate better with manual evaluation by using multiple divergences. The results are promising, especially for summarization campaigns. Besides this, we also present a micro-level analysis of how correlated the manual and automatic summary evaluation methods are when a large quantity of observations is used.
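To illustrate the general idea behind divergence-based, reference-free evaluation (this is a generic sketch, not the actual SummTriver formula, which combines several divergences as described in the paper), one can compute a Jensen-Shannon divergence between the unigram distribution of a candidate summary and that of its source document; a lower divergence suggests the summary preserves more of the source's content. The function names below are illustrative choices, not part of the paper.

```python
import math
from collections import Counter

def distribution(text):
    """Unigram probability distribution over the words of a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl(p, q):
    """Kullback-Leibler divergence D(p || q), in bits.

    Assumes q[w] > 0 for every w with p[w] > 0 (true for the
    mixture distribution used below)."""
    return sum(pw * math.log2(pw / q[w]) for w, pw in p.items())

def js_divergence(summary, source):
    """Jensen-Shannon divergence between the unigram distributions
    of a candidate summary and its source document (0 = identical,
    1 = disjoint vocabularies, with log base 2)."""
    p, q = distribution(summary), distribution(source)
    vocab = set(p) | set(q)
    # Mixture distribution M = (P + Q) / 2
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    # D_JS(P, Q) = 0.5 * D(P || M) + 0.5 * D(Q || M)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

In this sketch, an identical summary and source yield a divergence of 0, while texts with disjoint vocabularies yield 1; a real system would typically use n-grams, smoothing, and stop-word handling on top of this basic scheme.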