Artificial intelligence (AI) in healthcare is undergoing a structural shift. While earlier generations of AI focused on prediction, classification, or narrow automation, a new paradigm—agentic AI—is emerging. Agentic AI systems are designed not merely to respond to prompts, but to reason, plan, act, and adapt over time in pursuit of clinical or operational goals. This evolution marks a transition from AI as a tool to AI as an active participant in healthcare delivery.
What Is Agentic AI?
Agentic AI refers to systems endowed with a degree of agency: the ability to make decisions about what actions to take next based on context, goals, and feedback. Technically, these systems combine large language models (LLMs), memory, planning modules, and tool-use capabilities to orchestrate multi-step workflows across digital environments. Recent advances in long-context reasoning, tool calling, and multi-agent architectures have made such systems feasible at scale [1–3].
In healthcare, this means an AI system can continuously monitor patient data, interpret evolving clinical contexts, retrieve guidelines, interact with electronic health records (EHRs), and escalate decisions to clinicians when predefined thresholds are crossed—rather than producing static outputs.
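To make the monitor-and-escalate pattern concrete, the following is a minimal sketch of a single observe-reason-act cycle. All names and the threshold value are illustrative assumptions, not part of any real clinical system or guideline:

```python
from dataclasses import dataclass, field

# Illustrative escalation threshold; a real system would derive
# thresholds from clinical guidelines and per-patient baselines.
HEART_RATE_ESCALATION_BPM = 120

@dataclass
class PatientContext:
    """Minimal evolving context an agent might persist between cycles."""
    patient_id: str
    heart_rate_history: list = field(default_factory=list)

def agent_step(ctx: PatientContext, new_heart_rate: int) -> str:
    """One observe-reason-act cycle: record the reading in the agent's
    memory, then decide whether to keep monitoring or escalate."""
    ctx.heart_rate_history.append(new_heart_rate)  # persistent state
    if new_heart_rate >= HEART_RATE_ESCALATION_BPM:
        return "escalate_to_clinician"
    return "continue_monitoring"
```

The essential difference from a static risk score is that the agent retains history across cycles and emits an action, not just a number.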
How Agentic AI Differs from Traditional Healthcare AI
Most deployed healthcare AI systems remain reactive. They generate predictions (e.g., risk scores) or recommendations when triggered by a user. Agentic AI, by contrast, is goal-directed and persistent. It operates across time, maintains memory of prior states, and dynamically selects actions.
This distinction is critical. As highlighted in Nature Medicine, clinical AI failures often stem not from model inaccuracy but from poor integration into workflows and decision processes [4]. Agentic AI directly addresses this limitation by embedding reasoning and orchestration into the system itself.
Key Healthcare Use Cases
1. Chronic Disease Management
Long-term conditions such as diabetes, cardiovascular disease, and heart failure require continuous monitoring and adaptive care. Agentic AI systems can integrate wearable data, patient-reported outcomes, medications, and behavioural signals to personalise interventions over time. Unlike dashboards, these agents can decide when and how to intervene, aligning closely with just-in-time adaptive intervention (JITAI) principles [5].
2. Clinical Operations and Workflow Automation
Healthcare delivery is fragmented across administrative and clinical systems. Agentic AI can coordinate appointment scheduling, discharge planning, documentation, and follow-up care by navigating multiple digital systems autonomously. Studies in JAMA and NEJM have repeatedly identified administrative burden as a major contributor to clinician burnout—suggesting substantial value in workflow-level AI automation [6,7].
3. Clinical Decision Support (CDS)
Traditional CDS tools rely on static alerts and rules, often leading to alert fatigue. Agentic AI enables context-aware CDS, where recommendations are reasoned through patient-specific data, guideline hierarchies, and uncertainty. Such systems can present options with explanations, track clinician decisions, and adapt future recommendations accordingly [8].
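One way such adaptive behaviour might look in code: a CDS component that remembers clinician dismissals and suppresses repeat alerts for the same patient. This is a hypothetical sketch; the class and alert identifiers are invented for illustration:

```python
class AdaptiveCDS:
    """Sketch of context-aware decision support that tracks clinician
    dismissals and suppresses repeat alerts, reducing alert fatigue."""

    def __init__(self):
        # (patient_id, alert_id) pairs the clinician has dismissed
        self.dismissals: set[tuple[str, str]] = set()

    def record_dismissal(self, patient_id: str, alert_id: str) -> None:
        """Remember that a clinician dismissed this alert for this patient."""
        self.dismissals.add((patient_id, alert_id))

    def should_fire(self, patient_id: str, alert_id: str,
                    condition_met: bool) -> bool:
        """Fire only if the triggering condition holds for THIS patient
        and the same alert was not already dismissed for them."""
        if not condition_met:
            return False
        return (patient_id, alert_id) not in self.dismissals
```

Suppression is scoped per patient, so dismissing an alert for one patient does not silence it for others.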
4. Patient-Facing Care Agents
Conversational agents powered by agentic AI can go beyond education and triage. They can conduct structured assessments, monitor adherence, coordinate care escalation, and maintain continuity between clinical encounters. When governed appropriately, these systems may improve access and equity in care delivery [9].
Risks, Regulation, and Governance
Despite its promise, agentic AI introduces new risks. Errors can propagate across workflows, autonomy can obscure accountability, and hallucinations may have clinical consequences. As noted in The Lancet Digital Health, increasing AI autonomy amplifies the need for robust governance frameworks [10].
Many agentic systems will likely qualify as Software as a Medical Device (SaMD), requiring regulatory oversight from bodies such as Australia's Therapeutic Goods Administration (TGA) and the US Food and Drug Administration (FDA). Human-in-the-loop controls, auditability, bounded autonomy, and post-market surveillance are therefore essential.
Design Principles for Safe Agentic AI
Evidence from high-impact clinical AI research suggests several guiding principles:
1. Reliability and usability are patient-safety issues, not engineering afterthoughts [4,6].
2. Autonomy should be graduated, with clear escalation pathways to clinicians.
3. Failure modes must be explicitly designed and tested.
4. Governance, ethics, and accountability must be embedded from the outset [10].
5. Interoperability is foundational to safe deployment at scale.
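The principle of graduated autonomy (2) with safe failure modes (3) can be sketched as a policy that routes each proposed action to an autonomy tier. The action names and tiers below are illustrative; real tiering would come from clinical governance, not engineering:

```python
# Illustrative autonomy tiers, mapped per action type.
AUTONOMY_POLICY = {
    "draft_documentation": "autonomous",    # low risk: act, log for audit
    "schedule_follow_up": "confirm",        # medium risk: act after clinician confirms
    "adjust_medication": "clinician_only",  # high risk: recommend only, never act
}

def route_action(action: str) -> str:
    """Bounded-autonomy router: any action not explicitly tiered defaults
    to the most restrictive level, so unknown cases fail safe."""
    return AUTONOMY_POLICY.get(action, "clinician_only")
```

The key design choice is the default: an unrecognised action escalates to a clinician rather than executing, which is the "explicitly designed failure mode" the principles call for.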
Conclusion
Agentic AI represents a fundamental evolution in healthcare AI—from isolated intelligence to continuous, goal-oriented care orchestration. When thoughtfully designed, it offers a pathway to more personalised, efficient, and scalable healthcare systems. However, its success will depend less on algorithmic sophistication and more on system design, clinical governance, and trust.
Agentic AI is not about replacing clinicians. It is about building reliable, accountable digital agents that work alongside them—safely enhancing care in an increasingly complex healthcare landscape.
References
1. Bommasani R, et al. On the opportunities and risks of foundation models. Nature Medicine. 2023;29:311–324.
2. Wei J, et al. Emergent abilities of large language models. Transactions on Machine Learning Research. 2022.
3. Yao S, et al. ReAct: Synergizing reasoning and acting in language models. NeurIPS. 2023.
4. Kelly CJ, et al. Key challenges for delivering clinical impact with artificial intelligence. Nature Medicine. 2019;25:47–53.
5. Nahum-Shani I, et al. Just-in-time adaptive interventions (JITAIs) in mobile health. Annals of Behavioral Medicine. 2018;52:446–462.
6. Rotenstein LS, et al. Physician burnout and its drivers. JAMA. 2018;320:1131–1150.
7. National Academy of Medicine. Taking Action Against Clinician Burnout. NEJM Catalyst. 2019.
8. Sutton RT, et al. An overview of clinical decision support systems. Journal of the American Medical Informatics Association. 2020;27:141–146.
9. Bickmore TW, et al. Automated interventions and conversational agents in healthcare. The Lancet Digital Health. 2020;2:e305–e312.
10. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. The Lancet Digital Health. 2019;1:e12–e13.
