What is it about?
Hate is prevalent on the Web, but how effective are the algorithms used to detect it? Our experiments show that they produce outputs of limited explanatory value. In response, we connect sociological theory with computational methods to create outputs of greater explanatory value.
Why is it important?
The consequences of decisions made using algorithms of questionable explanatory rigour remain an open question. Our experiments show how even the most apparently sophisticated technologies produce counterintuitive results. We need to start developing software informed by sociological theory, creating systems of improved explanatory relevance.
Read the Original
This page is a summary of: Social Science for Natural Language Processing: A Hostile Narrative Analysis Prototype, June 2021, ACM (Association for Computing Machinery), DOI: 10.1145/3447535.3462489.