Agentic AI systems that can autonomously make suggestions and decisions, or even take actions with minimal human oversight, will revolutionize healthcare in the years to come. These bleeding-edge systems hold immense potential to improve efficiency and accuracy within clinical environments, ultimately enhancing patient outcomes, reducing physician burden, and potentially even reducing the cost of care.
However, as Stan Lee taught us through the venerable words of Spider-Man, "with great power comes great responsibility." We must move forward with circumspection before integrating agentic AI into physician workflows. The healthcare industry needs to harness this powerful new technology with careful consideration and meticulous planning around clinical safety, regulatory compliance, explainability of system-generated decisions and actions, clinical workflows, and retention of human accountability. The following sections take a deeper look at these considerations and outline a framework for responsible and effective implementation of agentic AI in healthcare.
Ensuring clinical safety and efficacy
Healthcare organizations seeking to implement agentic AI must prioritize clinical safety and efficacy through a carefully orchestrated approach. Adopting a crawl-walk-run approach, organizations should begin by implementing AI agents for back-office administrative functions like billing reconciliation, invoicing, and fraud detection, where learning opportunities are abundant and errors do not directly impact clinical decision-making or patient safety. This initial phase allows organizations to develop foundational experience with agentic AI implementation and governance while minimizing patient risk. As confidence and capabilities grow, organizations can thoughtfully expand into clinical operations (e.g., appointment scheduling and rescheduling, clinical asset management), systematically collecting and analyzing evidence of effectiveness, reliability, and safety.
This evidence-based approach enables organizations to make informed decisions about the pace of agentic AI adoption and to fine-tune their implementation strategies based on real-world reliability and safety data. Success will hinge on robust AI governance frameworks that establish clear protocols for use case selection and evaluation, risk management, and monitoring, ensuring that each step forward in agentic AI implementation is grounded in demonstrated safety and measurable improvements in healthcare delivery.
Navigating the regulatory and compliance landscape
Integrating agentic AI systems into healthcare requires navigating novel regulatory challenges due to their autonomous decision-making capabilities. The Food and Drug Administration's (FDA's) framework for Software as a Medical Device (SaMD) takes on new complexity with agentic AI, as these systems can independently initiate actions and adapt their behavior over time. For example, an agentic AI that autonomously adjusts ventilator settings requires more stringent controls than traditional decision-support tools. The FDA's recent guidance emphasizes continuous monitoring of such autonomous systems, requiring documentation of both the initial decision-making parameters and any subsequent automated modifications.
Privacy compliance becomes more nuanced with agentic AI’s ability to independently access and process patient data. Beyond standard HIPAA requirements, organizations must implement granular access controls and audit mechanisms that track not only human access to protected health information, but also each autonomous action taken by the AI agent. This includes logging what patient data the agent accessed, why it accessed it, and what decisions it made based on that data. Organizations must establish clear protocols for when and how these AI agents can independently initiate data access or sharing.
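The audit requirements above can be sketched in code. The following is a minimal, illustrative example (all field names, agent identifiers, and the in-memory "sink" are hypothetical, not a reference to any real compliance system) showing one audit record per autonomous data access: what was accessed, why, and what decision followed.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AgentAccessRecord:
    """One audit entry per autonomous data access by an AI agent (illustrative)."""
    agent_id: str          # which agent acted
    patient_id: str        # whose protected health information was touched
    data_accessed: list    # fields read, e.g. ["labs", "medications"]
    purpose: str           # why the agent accessed the data
    decision: str          # what the agent decided based on that data
    timestamp: str = field(default="")

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_access(record: AgentAccessRecord, sink: list) -> None:
    """Append the record as a JSON line to an append-only audit sink."""
    sink.append(json.dumps(asdict(record)))
```

In practice the sink would be tamper-evident storage rather than an in-memory list, but the shape of the record, covering access, purpose, and resulting decision, is the essential compliance artifact.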
Lastly, documentation requirements should address the AI agent’s autonomous behaviors, including logs of independent actions taken, changes in decision-making patterns, and any instances where the agent operated outside expected parameters. Organizations need well-defined protocols for handling scenarios where agentic AI makes unexpected decisions or deviates from expected behavior, including incident reporting mechanisms and clear chains of accountability.
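One way to operationalize "operated outside expected parameters" is a bounds check that escalates out-of-range actions to a clinician and files an incident record. The sketch below is a simplified illustration; the parameter name, limits, and escalation status are hypothetical placeholders, not clinical values.

```python
# Illustrative operating limits for an agent-controllable parameter.
# These numbers are placeholders, not clinical guidance.
EXPECTED_RANGES = {"ventilator_fio2": (0.21, 0.60)}

def check_action(parameter: str, proposed_value: float, incident_log: list) -> bool:
    """Return True if the proposed action is within approved bounds;
    otherwise record an incident and signal escalation to a clinician."""
    low, high = EXPECTED_RANGES[parameter]
    if low <= proposed_value <= high:
        return True
    incident_log.append({
        "parameter": parameter,
        "proposed": proposed_value,
        "expected": (low, high),
        "status": "escalated_to_clinician",  # clear chain of accountability
    })
    return False
```

A real deployment would also capture drift in decision-making patterns over time, but even this simple gate gives the incident-reporting mechanism a concrete trigger.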
The importance of explainability and transparency
AI transparency and explainability are not just technical concerns. They are essential for building trust with healthcare workers, driving adoption of systems, and ensuring patient safety. Healthcare teams require a clear understanding of not only what AI agents decide, but also the reasoning behind their recommendations. Busy physicians need not only concise, readily accessible explanations of AI outputs that fit seamlessly into their existing workflows, but also insight into the AI agents' decision-making processes.
This approach to explainability serves a purpose beyond merely building trust. It creates a foundation for continuous system improvement as well. When healthcare professionals can comprehend and critique AI decisions, they actively help refine the system, ultimately leading to more effective and reliable AI solutions. The success of integrating AI agents in healthcare settings hinges on this fundamental commitment to making these systems understandable and accountable to those who use them.
Designing for a physician-centered workflow
Seamless integration into existing physician workflows is crucial for adoption of agentic AI systems. Even the most performant AI agent will face resistance if it adds cumbersome steps or disrupts established processes and physician habits. Systems that minimize extra clicks and thoughtfully incorporate AI functions into standard electronic health record (EHR) platforms increase the likelihood that physicians will use them consistently. Some factors hindering adoption by physicians include:
- Disruption of established routines, requiring new occupational habits to form
- Increased cognitive workload, taking attention away from patient diagnosis and care delivery
- Lack of real-time decision support, slowing the pace of care delivery
- Swivel-chair workflow steps requiring switching between screens or applications
- An unintuitive user interface requiring time-consuming training and FAQs
- Alert fatigue resulting in notifications being ignored
Before integrating agentic AI capabilities in a healthcare setting, due consideration must be given to these factors by placing physicians' needs and priorities at the center of ideation, design, development, testing, deployment, and maintenance.
Maintaining physician oversight and accountability
Contrary to what popular sci-fi shows like Star Wars and Star Trek would have us believe, AI (autonomous agentic or not) will not replace humans in delivering patient care in our lifetime. Ultimately, physicians must remain the final decision-makers in patient care, using AI to augment, rather than supplant, their expertise. When designing agentic AI systems, clear lines of accountability (e.g., documentation requirement for AI-assisted decisions, audit of AI recommendations, escalation paths when AI and physician assessments differ, etc.) must be baked into the design.
The design of agentic AI systems must incorporate risk-stratified guardrails. High-risk decisions, such as medication dosing or ventilator management, should require explicit physician approval, while lower-risk activities like data analysis and predictions may operate with greater autonomy under clear audit protocols. These systems should neither encroach on physicians' areas of expertise, nor reduce their autonomy to make clinical decisions for patient care. The role of these AI agents in helping physicians make clinical decisions is summarized below.
These roles will allow physicians to make even more nuanced and well-informed medical decisions while spending more time with patients and tending to their wellbeing. As AI agents mature and are afforded more autonomy, such as adjusting a patient's ventilator settings, designers need to ensure they bake in human-in-the-loop controls to maintain physician autonomy and oversight and to reduce patient risk.
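The risk-stratified guardrail pattern described above can be expressed as a simple dispatch rule: high-risk actions wait for explicit physician sign-off, while low-risk actions execute autonomously under audit. The action names and tier mapping below are hypothetical examples, not a prescribed taxonomy.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"    # e.g., data analysis, predictions
    HIGH = "high"  # e.g., medication dosing, ventilator management

# Illustrative mapping of agent actions to risk tiers (hypothetical examples)
RISK_TIERS = {
    "summarize_chart": Risk.LOW,
    "adjust_medication_dose": Risk.HIGH,
}

def dispatch(action: str, audit_log: list, approval_queue: list) -> str:
    """Route an agent action by risk tier: high-risk actions are held
    for explicit physician approval; low-risk actions run under audit."""
    if RISK_TIERS[action] is Risk.HIGH:
        approval_queue.append(action)   # physician must sign off first
        return "pending_approval"
    audit_log.append(action)            # autonomous, but fully audited
    return "executed"
```

Keeping the tier mapping as explicit configuration, rather than burying it in model logic, makes the human-in-the-loop boundary reviewable by governance teams and adjustable as evidence of safety accumulates.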
Conclusion
Agentic AI offers a transformative opportunity to enhance clinical decision-making, streamline workflows, and ultimately improve patient care. However, this technology should be introduced with a careful balance of innovation and caution. By prioritizing the considerations outlined in this article, healthcare organizations can confidently harness the potential of agentic AI while preserving the vital human touch at the heart of medicine. As these systems continue to evolve, they hold potential as transformative multipliers, elevating medical practice and supporting healthcare teams in delivering high-quality, equitable, and compassionate care.