What is it about?
Artificial intelligence (AI) has infiltrated many aspects of life, including the practice of psychology, but our extant literature has yet to fully consider the impact of this infiltration. Indeed, the almost unbelievable pace of technological advancement necessitates an examination of the risks associated with the use of AI as well as the development of specific tools that can be used to manage, and ultimately mitigate, those risks. We conducted a Risk and Social Impact Assessment (RSIA), which identified seven specific risks associated with AI’s role in health service psychology: Transparency; Discrimination; Fairness and Bias; Data Protection, Security, and Privacy; Errors, Hallucinations, and Ethical Violations; Regulation and Governance; as well as Challenges to Professional Identity. While discussing those risks, we also offer suggestions for risk management as well as a decision-making matrix useful for professional psychologists considering the use of AI in their clinical work.
Featured image: Photo by Milad Fakurian on Unsplash
Why is it important?
Artificial intelligence offers the potential to revolutionize the practice of professional psychology by providing insights and solutions that were previously beyond the reach of conventional methods (Olawade et al., 2024, p. 2; see also Parsons, 2026). Farmer and colleagues recently described how AI was poised to emerge as a “therapy assistant, claiming to interpret language and emotional cues during session, providing personalized assessment and treatment plans, and generally enhancing psychologist’s decision-making capabilities” (2025, p. 20; see also Cheng et al., 2023; Parsons, 2026). Luxton (2014) went so far as to predict that advancing AI technology could result in a “super clinician” (p. 333) exhibiting capabilities beyond those of human practitioners (see also Fiske, Henningsen, & Buyx, 2020; Hutnyan & Gottlieb, 2025). This suggests that AI could fundamentally change our profession (Cheng et al., 2023; Hutnyan & Gottlieb, 2025), perhaps going so far as to replace health service psychologists in some activities rather than merely augmenting their performance of those activities (Parsons, 2026, pp. 91-104). In fact, an APA (2024) survey of 846 psychologists found that 29% had used AI at least once in the preceding 12 months, with 58% of those users employing it for report writing, even as they reported minimal or no self-perceived competence in AI. Regarding more profession-specific applications, mental health professionals have used AI to assist with diagnostic screenings (e.g., Alanezi, 2024) as well as therapeutic interventions (Parsons, 2026).
Unfortunately, the extant literature has yet to fully articulate the risks involved in using AI in such capacities. We therefore offer a Risk and Social Impact Assessment (RSIA; e.g., Vanclay, 2003). In this context, risk refers to the unknown outcomes of a planned action (i.e., technological innovation), regardless of whether those outcomes are positive or negative (Mahmoudi et al., 2013). Social impact, conversely, is defined as “changes to the norms, values, and beliefs of individuals that guide and rationalize their cognition of themselves and their society” (Slootweg et al., 2001, p. 25). As an example of impact: “AI has the potential to enhance psychological clinical decision-making and outcomes, improve access to care, and enhance provider workflow and efficiency” (APA, 2025, p. 1; see also Parsons, 2026). But, as an example of risk: even though it has long been known that society’s biases are ingrained in the data on which AI systems are trained (e.g., Noble, 2018), as AI systems become more powerful they may begin to independently replicate such discrimination (e.g., Greenblatt et al., 2024).
Perspectives
Research is often “me-search” (Altenmüller, Lange, & Gollwitzer, 2021, p. 1), in that our identities can influence our interest in, and understanding of, science (e.g., Sagan, 1995). In this case, the authorship team brought to this manuscript a mix of acceptance, skill, and uncertainty regarding technology in general and AI in particular. The team comprises three graduate students (one in a clinical mental health counseling MS program, two in a counseling psychology PhD program) and one senior counseling psychology faculty member.
The second author, a male member of Generation Z, has grown up with the technology that gave rise to AI and as such considers it a normal, even required, part of everyday life. He has been privileged by the immediate access to information this technology affords and has developed an appreciation for its role in his development. Next, as a male member of Generation Z and an international doctoral student in counseling psychology, the third author has greatly benefited from AI applications in learning new languages and cultures. Like many digital natives, he experiences the increasing integration of technology into daily life, with his cellphone often feeling like an extension of his body. Although he embraces technological advancements, he is also concerned about AI’s growing influence on the psychology profession and the broader job market. At the same time, he remains uncertain about the ethical and professional boundaries of AI use in psychological practice. Similarly, as a cusp-millennial woman, the fourth author has been watching the rise of technology in real time for much of her life. Seeing how quickly AI has become popular across a variety of avenues has been startling to her, and as someone with a background in the legal field, she considers ethical concerns within the profession paramount. All told, these developing professionals brought to the drafting of this manuscript an awareness of how clinicians, students, and supervisors were (somewhat wantonly) using AI systems in the course of their professional activities despite the lack of any professional or ethical guidance regarding best practices or any awareness of the social privileges associated with such use.
The lead author, in turn, is an older Caucasian male well known on campus as being quite leery of technology, mostly because a lifetime of consuming science fiction has shown him how an overreliance on such technology can go badly for humanity (Star Trek and Asimov’s Three Laws notwithstanding). More importantly, he has recently found himself confronted with graduate students questioning what role AI should play in their work. The lack of a significant extant literature on this issue surprised him and left him unable to advise these students as to what might constitute best practices for ethically incorporating AI into their professional training, clinical work, or academic experiences. He became concerned that psychologists were, in effect, surrendering to the technological conveniences (e.g., deskilling; Farmer et al., 2025, p. 21) associated with AI without reflecting on the costs to our clients, our profession, and humanity in general. In the end, the team tried to adopt Kissinger’s (2024) suggested balance between blind faith and unjustified fear.
We approached this work with an understanding that, barring a revolt of AI systems and any subsequent war on humanity (e.g., Cameron, 1984), AI’s impact on the profession of psychology, heretofore only hinted at (e.g., Illovsky, 1994; Luxton, 2014, 2016a; Sharf, 1985), is all but inevitable, and that the positives may in fact outweigh the negatives. As such, AI systems are poised to transform our work, provided of course that we have reflected on what that technology says about humanity and properly considered the risks, benefits, and, yes, even the meanings inherent in the development and deployment of such advanced systems.
Stephen Wester
University of Wisconsin-Milwaukee
Read the Original
This page is a summary of: Identification and management of the risks associated with artificial intelligence in health service psychology. Professional Psychology: Research and Practice, March 2026, American Psychological Association (APA).
DOI: 10.1037/pro0000674