What is it about?

This paper explores the use of crowdsourcing to classify statement types in film reviews as a way of assessing their information quality. Employing the Argument Type Identification Procedure, which uses the Periodic Table of Arguments to categorize arguments, the study aims to connect statement types to overall argument strength and information reliability. Focusing on non-expert annotators in a crowdsourcing environment, the research assesses their reliability based on factors such as language proficiency and annotation experience. The results underline the importance of careful annotator selection and training for achieving high inter-annotator agreement, and highlight the challenges of crowdsourcing statement classification for information quality assessment.
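As an illustration of what "inter-annotator agreement" means in practice, the sketch below (not taken from the paper, and using hypothetical labels) computes Cohen's kappa for two crowd annotators who labelled the same set of statements; values near 1 indicate strong agreement beyond chance.

```python
# Minimal sketch: measuring agreement between two annotators with Cohen's kappa.
# The labels and annotator data are hypothetical, for illustration only.
from sklearn.metrics import cohen_kappa_score

# Statement-type labels assigned by two non-expert annotators to the same statements
annotator_a = ["fact", "opinion", "fact", "prediction", "opinion"]
annotator_b = ["fact", "opinion", "opinion", "prediction", "opinion"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # closer to 1.0 means higher agreement
```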


Read the Original

This page is a summary of: Crowdsourcing Statement Classification to Enhance Information Quality Prediction, January 2024, Springer Science + Business Media, DOI: 10.1007/978-3-031-71210-4_5.
