What is it about?
This study explores the impact of advanced AI systems, specifically Generative AI (GenAI), on the accountability relationship in public sector settings. GenAI systems such as ChatGPT can hold conversations that seem remarkably human-like. The study focuses on how these AI systems change the way government actors interact with the public. This paper shows that what was once a straightforward relationship between government actor and public citizen becomes more complex as GenAI systems become intertwined with existing public practices.
Why is it important?
The use of GenAI in government services could greatly enhance efficiency and citizen engagement. However, it also complicates accountability because it introduces a new layer (the AI system) into the interaction between government and citizens. Ensuring that these systems are used responsibly and that public servants are adequately prepared to manage them is crucial for maintaining trust and effective service delivery. Our findings show how we envision interactions between government actors and public citizens changing, captured in our dual-phase accountability relationship cycle. Finally, we offer recommendations for navigating the use of GenAI systems so that accountability is maintained in public processes.
Read the Original
This page is a summary of: Evolving Generative AI: Entangling the Accountability Relationship, Digital Government Research and Practice, May 2024, ACM (Association for Computing Machinery). DOI: 10.1145/3664823.