What is it about?

This study investigates whether AI tools like ChatGPT reflect the political opinions of average Americans. The researchers asked ChatGPT to answer political survey questions and to write about sensitive topics from different political perspectives. They found that ChatGPT's answers were often more left-leaning than those of the general public, and that its essays and generated images showed a similar bias. At times it even refused to create content from a right-wing perspective. These findings suggest that popular AI tools may not be neutral and could influence public opinion. The authors call for greater transparency and accountability in how these systems are designed.

Why is it important?

As AI tools like ChatGPT become widespread in education, journalism, and politics, understanding their impact on public opinion is increasingly urgent. This study provides one of the first systematic, peer-reviewed investigations into political bias in AI-generated text and images. Uniquely, it uses real survey data and a range of methods to show that ChatGPT aligns more closely with left-leaning views than with those of the average American. The research also reveals that ChatGPT sometimes refuses to generate right-wing content, raising serious questions about freedom of expression, fairness, and transparency in AI. These insights are especially timely given upcoming elections worldwide and growing public concern about misinformation and bias on digital platforms.

Perspectives

This article was the result of a collaborative effort by authors who share a common concern about the social impact of emerging technologies. As researchers trained to think critically about incentives and systems, we were curious to see whether generative AI reflects the diversity of views present in a democratic society. We approached the topic with rigor and a commitment to impartial analysis, and what we found raised important questions about how these tools are shaping information and political discourse. We hope this work encourages thoughtful debate and contributes to the broader discussion on AI governance, bias, and transparency.

Dr Fabio Yoshio Suguri Motoki
University of East Anglia

Read the Original

This page is a summary of: Assessing political bias and value misalignment in generative artificial intelligence, Journal of Economic Behavior & Organization, February 2025, Elsevier. DOI: 10.1016/j.jebo.2025.106904.
