Speaking of Change: Navigating the Impact of Advanced AI Voice Models for Health Support Professionals

Background
OpenAI has recently unveiled an interactive, low-latency, multimodal, dual-channel voice model based on GPT-4o, currently in alpha (OpenAI, 2024). The model mimics human emotion and interacts in real time through lifelike, configurable voices. Similar advancements are expected to bring massive changes to many fields. Roles such as companions, life coaches, mentors, psychiatrists, psychologists, therapists, peers, and counselors, grouped here under the general term “Human Support Professionals,” already feel the impact of less powerful digital tools. Like past workers displaced by automation, many in these and related fields wonder how AI and AI companions (Roose, 2024) will affect their livelihoods. Here are four possible ways forward, along with tips and questions for those facing these new technologies’ challenges and opportunities.

Replacement Scenario

In the replacement scenario, AI takes over roles traditionally held by humans. With advancements in AI providing cost-effective, 24/7, personalized support, Human Support Professionals could face significant job displacement. This potential shift in the job market underscores the importance of being proactive, preparing for such changes, and adapting to new roles and responsibilities.

Regulatory Resistance Scenario

AI might face regulations that prevent it from taking on many of these roles. There are valid worries about job loss, privacy, ethical issues, effectiveness, bias, and service quality that must be addressed. While resistance through this route might prevail temporarily, AI’s widespread availability and affordability could lead users and providers to bypass such barriers. Also, not all professionals will feel the same level of threat, which could weaken and divide the alliances needed for effective regulatory control. Furthermore, given the fast pace of change, it is doubtful that a regulatory approach could keep up. Voluntary product certification might promote standards without strict regulation.

Integration Scenario

A collaborative approach involves working with AI to ensure the best possible experience for those in need. We have already invited AI into our virtual work meetings in limited but essential roles. Why not invite AI into one-on-one and group settings as a knowledgeable, tenacious, imperfect partner, help train and safeguard it to perform its assigned tasks better, and work with experts for the best outcomes? In this scenario, AI is a complementary tool, not a replacement. Human Support Professionals can leverage AI for real-time data retrieval and analysis, crisis monitoring, limited interaction, treatment, mood detection, scheduling, 24/7 support, administrative tasks, and other roles. This would allow professionals to focus on the more nuanced aspects of emotional and psychological support. Such collaboration could enhance the effectiveness of these professionals, making interventions more equitable, accessible, timely, and personalized.

Specialization Scenario

In this scenario, AI manages routine, repetitive, and administrative tasks, freeing human support specialists to focus on complex, creative, and highly personalized support, which requires deep human empathy, intuition, and unique problem-solving skills. Specialists may evolve their roles to focus on advanced support and therapeutic methods, cross-disciplinary approaches, and integrating cultural, contextual, and other nuances where AI falls short or remains biased (for now).

Recommendations

Many professionals with roots in behavioral and public health are becoming aware of the sharp rise in the use and potential of AI companions and therapists, and of their potential impact on individual and population mental health and behavior. These new voice models will significantly accelerate those trends. Such AI companions may be a short-term salve for loneliness (Freitas et al., 2024) and other problems, but they also risk becoming a long-term catalyst for isolation. However, we can help ensure that such tools include components and parallel approaches that encourage human social connection while addressing short-term needs and desires. Health professionals should continuously learn about AI technologies, participate in AI ethics discussions, and advocate for policies that ensure responsible AI use in healthcare. In this way, human support practitioners can strive to maintain desired human elements as they embrace the benefits of AI voices.
The pace and extent of AI’s impact on human support professionals are uncertain. However, two things are clear: this change will come sooner than many expect, and adaptability is crucial. Exploring, adopting, and embracing promising technology and striving toward the integration and specialization scenarios will likely be the most successful professional paths. Those who combine AI with people skills will be most likely to thrive. While these changes may seem daunting, and for some disconcerting and disorienting, professionals dedicated to helping others have a unique opportunity to take advantage of this multifaceted technology for the greatest benefit of their profession and clients.

Related Questions for Deeper Thought

ChatGPT (GPT-4o) supplied these follow-up questions using this prompt: “Create a list of supplementary questions for the audience that will enhance and complement the blog post for an audience of health professionals. Ensure these questions add value and provide deeper insights and meanings.”

“How can health professionals maintain ethical standards when integrating AI companions into their practice?”
“What tasks within your current role could be effectively augmented or automated by AI companions like ChatGPT 4o?”
“How can AI companions enhance the accessibility and equity of mental health services in underserved communities?”
“What are the potential risks of over-reliance on AI companions in healthcare, and how can these risks be mitigated?”
“How might introducing AI companions impact the patient-provider relationship, and what strategies can maintain the human touch in care?”
“What training and skills development will be necessary for health professionals to work effectively alongside AI companions?”
“How can regulatory frameworks balance innovation and protecting jobs and professional standards in the healthcare sector?”
“What role can AI companions play in addressing the current mental health crisis, particularly in the context of increasing societal isolation?”
“What are the long-term implications of AI integration for the professional identity and career progression of human support specialists?”

Acknowledgment and Disclaimer: This opinion blog used Grammarly AI to improve grammar, enhance conciseness, and maintain a consistent tone. ChatGPT was used to suggest structure, provide content, and clarify concepts. OpenAI. (2024). ChatGPT (Mar 24 version) [Large language model].
The Rural Communities Opioid Response Program Technical Assistance (RCORP-TA) is supported by the Health Resources and Services Administration (HRSA) of the U.S. Department of Health and Human Services (HHS) under grant number U6BRH32364, totaling $6,000,000 with 0% financed with non-governmental sources. This information or content and conclusions are those of the authors and should not be construed as the official position or policy of, nor should any endorsements be inferred by HRSA, HHS or the U.S. Government.
