What is it about?

We know hate is prevalent on the Web, but how effective are the algorithms used to detect it? Our experiments show they produce outputs of limited explanatory value. Our response is to connect sociological theory with computational methods to create outputs of improved explanatory value.


Why is it important?

The consequences of decisions made using algorithms with questionable explanatory rigour remain an open question. Our experiments show how even apparently sophisticated technologies produce counter-intuitive results. We need to start developing software informed by sociological theory to create systems of improved explanatory relevance.

Perspectives

Hate speech detection is a growing industry that attracts significant investment, and hate speech moderation is important for maintaining the value of social media companies. Most fundamentally, though, the Web and the society we want should be free from the fear and consequences of hate speech.

Current systems for detecting hate, and NLP technologies more broadly, are revolutionising how we interact with machines, but do they work well for analysing texts for meaning? Personal assistants can convert natural language inputs into generally useful outputs with reasonable reliability. Yet the same technologies powering these assistants are also found in systems used to extract meaning from text, and those meaning-extraction systems do not appear to be connected to sociological theory: consider whether there is actually any underpinning theory of sentiment informing the design of sentiment analysis systems.

Our research seeks to improve NLP by developing systems informed by theories from Peace Research. To do so, we believe, means creating systems with greater explanatory value than the current state of the art. We hope this system will improve how hate speech is controlled online, so that we get the Web we want.
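To illustrate the point about limited explanatory value, here is a minimal sketch of a lexicon-based sentiment scorer of the kind that underlies many simple sentiment analysis tools. The lexicon and function names are hypothetical, invented for this example; the sketch is not the prototype described in the paper. It shows how a system with no theory of meaning behind it can return counter-intuitive scores:

```python
# Hypothetical toy lexicon: word -> sentiment polarity.
# Real lexicons are larger, but share the same bag-of-words assumption.
LEXICON = {"love": 1, "great": 1, "hate": -1, "awful": -1}

def naive_sentiment(text: str) -> int:
    """Sum the lexicon scores of each token.

    The model has no notion of negation, context, or irony,
    so its output explains nothing about what the text means.
    """
    return sum(LEXICON.get(token, 0) for token in text.lower().split())

# Counter-intuitive results: negation and irony are invisible to the model.
print(naive_sentiment("I do not hate this"))     # -1, despite positive intent
print(naive_sentiment("what a great disaster"))  # +1, despite negative intent
```

The scores are produced mechanically from word counts, with no underpinning theory of sentiment to explain *why* a text is hostile; this is the gap the connection to sociological theory is meant to address.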

Stephen Anning
University of Southampton

Read the Original

This page is a summary of: Social Science for Natural Language Processing: A Hostile Narrative Analysis Prototype, June 2021, ACM (Association for Computing Machinery). DOI: 10.1145/3447535.3462489.
