What is it about?

The ongoing spread of information technology and social media has made it easier for people to access many types of news - political, economic, medical, social, and so on - through these platforms. The rapid growth in news outlets and the demand for information has blurred the line between real and fake news and led to the dissemination of misinformation, which is a dangerous state of affairs. The outbreak of the coronavirus pandemic, and rising awareness of the dangers it posed across the globe, saw a parallel rise in fake news and rumors, as well as unsubstantiated statements and deceptive ideas. This study sets out to address these problems by applying deep learning algorithms (LSTM, Bi-LSTM, BERT) to a large dataset (39,279 rows) to identify fake and genuine textual or verbal news. The results show that the BERT model performed best, achieving a text classification accuracy of 96.63%.
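The deep models named above (LSTM, Bi-LSTM, BERT) need large frameworks and trained weights, so as a self-contained illustration of the underlying text-classification task only, here is a minimal bag-of-words Naive Bayes sketch in Python. This is not the paper's method, and the tiny corpus of headlines is invented purely for demonstration:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Train a multinomial Naive Bayes text classifier on raw documents."""
    classes = set(labels)
    priors = {c: labels.count(c) / len(labels) for c in classes}
    word_counts = {c: Counter() for c in classes}
    for doc, lab in zip(docs, labels):
        word_counts[lab].update(doc.lower().split())
    vocab = {w for c in classes for w in word_counts[c]}
    totals = {c: sum(word_counts[c].values()) for c in classes}
    return priors, word_counts, vocab, totals

def predict_nb(model, doc):
    """Return the most likely class under log prior + Laplace-smoothed likelihoods."""
    priors, word_counts, vocab, totals = model
    scores = {}
    for c in priors:
        score = math.log(priors[c])
        for w in doc.lower().split():
            score += math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

# Tiny invented corpus (hypothetical headlines, NOT from the paper's dataset)
docs = [
    "vaccine approved by health ministry after trials",
    "officials confirm new safety guidelines for hospitals",
    "miracle cure discovered that doctors hide from you",
    "secret plot spreads virus through phone towers",
]
labels = ["real", "real", "fake", "fake"]
model = train_nb(docs, labels)
print(predict_nb(model, "doctors hide this miracle cure"))  # prints "fake"
```

A real detector would replace both the toy corpus and this word-count scorer: BERT-style models instead learn contextual token representations from thousands of labeled examples, which is what drives the high accuracy reported in the study.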


Why is it important?

This paper is crucial as it addresses the pressing issue of fake news proliferation in the era of widespread information technology and social media usage. The coronavirus pandemic highlighted the dangers of misinformation, making the need for effective detection methods more urgent. By employing advanced deep learning algorithms such as LSTM, Bi-LSTM, and BERT, the study leverages modern AI techniques to tackle this problem. The research is based on a substantial dataset of 39,279 rows, enhancing the validity and robustness of its findings. Notably, the BERT model achieved a remarkable text classification accuracy of 96.63%, demonstrating its efficacy in distinguishing between real and fake news. This high level of accuracy indicates that the model can be a powerful tool for mitigating the spread of misinformation. The implications of this study are significant, offering a pathway for developing more reliable fake news detection systems that can be adopted by social media platforms and news organizations to protect public discourse and trust.

Perspectives

This paper is vital as it tackles the critical issue of fake news proliferation amid the rise of information technology and social media. Using advanced deep learning algorithms like LSTM, Bi-LSTM, and BERT, the study analyzes a large dataset (39,279 rows) and achieves a high accuracy of 96.63% with the BERT model in identifying fake news. This high level of accuracy underscores the model's potential as a robust tool for combating misinformation. The research highlights the necessity of interdisciplinary efforts to develop reliable fake news detection systems, crucial for safeguarding public discourse and trust.

Yahya Layth Khaleel
Tikrit University

Read the Original

This page is a summary of: Application of deep learning algorithms detecting fake and correct textual or verbal news, Production Systems and Information Engineering, January 2022.
DOI: 10.32968/psaie.2022.2.4.
