What is it about?

Many people around the world have health conditions that need regular care and support. Some of them can join online chat groups where they talk to other people with similar conditions and get advice from health professionals. These professionals, however, are often very busy: they have to manage the chat groups, answer questions, provide information, and offer emotional support, which can be hard and stressful. This paper is about how large language models (LLMs) could help these professionals do their work better. LLMs are a technology that can understand and generate natural language, like the words we use to communicate. We studied how a professional moderated two chat groups for young people living with HIV in Kenya. We found out what kind of work he did and what challenges he faced. We then explored how an LLM could assist him in different ways, such as creating engaging messages, summarizing chat topics, suggesting solutions, and analyzing emotions. We also discussed the potential benefits and risks of using LLMs in this context. We argued that AI should not replace the human professional, but rather work with him as a copilot. We also pointed out some limitations and ethical issues of AI, such as data quality, accuracy, privacy, and fairness. Finally, we suggested some design principles and future research directions to make AI more useful and trustworthy for health chat groups.

Why is it important?

This study is unique and timely because it investigates how artificial intelligence (AI) can help health professionals who moderate online chat groups for young people living with HIV in Kenya. It uses real-world observations to understand the work and challenges of the moderators, and how AI could assist or interfere with their tasks. It also discusses the ethical and practical issues of using AI in healthcare settings, especially in low- and middle-income countries (LMICs) where health resources are scarce. It contributes to the research on mobile messaging and AI-based medical support, and raises important questions for future work in this area.

Perspectives

This study explores the potential of using large language models (LLMs) as copilots to support the facilitation of peer support chat groups for young people living with HIV in Kenya. The authors use an ethnographic approach to understand the facilitator's work and the challenges they face in maintaining these groups. They then discuss the benefits and risks of employing LLMs to assist the facilitator in various tasks, such as co-producing engaging content, summarising chat history, providing sentiment analysis, and offering timely and accurate health information. The study contributes to the literature on mobile messaging and SMS-based medical support by advocating for the adoption of LLM-enabled copilots, while also acknowledging the complexities of sensitive medical and emotional contexts. The study also highlights the need to address the data divide, ethics, privacy, and security issues when utilizing LLMs for facilitation support.

Najeeb Abdulhamid
Microsoft Corp

Read the Original

This page is a summary of: Can Large Language Models Support Medical Facilitation Work? A Speculative Analysis, November 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3628096.3628752.
