AI-powered mental health applications, such as chatbots that simulate therapists and mood trackers, are becoming increasingly available. While these tools offer affordable and accessible solutions to address gaps in current mental health services, experts warn that overreliance on AI for children’s mental health raises serious ethical concerns.
Most AI mental health apps are unregulated and designed for adults. However, there is growing debate over their potential use with children. Bryanna Moore, PhD, assistant professor of health humanities and bioethics at the University of Rochester Medical Center, stresses the importance of addressing ethical issues when discussing AI’s role in children’s mental healthcare.
Research shows that children may view AI chatbots as having “moral standing and mental life,” which could lead to an unhealthy dependence on these systems instead of forming essential human connections. This is concerning because a child’s mental health is deeply influenced by their social environment, including interactions with family and peers. Pediatric therapists take these relationships into account to help support the child’s development, but AI chatbots lack this crucial context.
Furthermore, AI chatbots may struggle to detect when a child is in danger. Without access to a child’s personal relationships and family context, these systems may fail to intervene appropriately in critical situations.
AI technology could also deepen existing health disparities. Factors such as a child’s race, ethnicity, socioeconomic status, and home life play a significant role in their mental health and access to care. Children from lower-income families are more likely to face challenges like abuse, neglect, or exposure to violence and substance misuse. These children often require specialized mental health care but may lack access to it. In such cases, AI chatbots risk becoming a substitute for therapy, yet experts caution that they cannot replace the human connection at the core of traditional care.
Most AI therapy chatbots are currently unregulated. The U.S. Food and Drug Administration (FDA) has approved only one AI-based mental health app for those suffering from serious depression. Without regulatory oversight, there are concerns about misuse, inadequate reporting, and potential disparities in training data or user access.
“There are many unanswered questions,” Moore explained. “We’re not calling for AI or therapy bots to be removed. Rather, we need to approach their use with caution, particularly when it comes to children’s mental health.”
Şerife Tekin, PhD, an associate professor at SUNY Upstate Medical’s Center for Bioethics and Humanities, joined Moore and Herington in addressing these concerns. Tekin focuses on the ethics of AI in medicine and the philosophy of psychiatry.
Looking ahead, the team hopes to collaborate with AI developers to better understand how these tools are created. They aim to explore whether developers consider ethical and safety issues in their design process, and how much research involving children, parents, and mental health professionals influences these AI models.