
AI Lifecycle Management: Assurance from Governance to Surveillance for Healthcare Providers
Published: Apr 11, 2025
In the complex healthcare provider ecosystem, organizations must adopt innovative technologies to keep up with evolving patient care standards. Artificial intelligence (AI) helps organizations enhance efficiency, reduce costs, improve patient outcomes, and deliver robust, personalized care. Your organization can integrate AI systems such as predictive analytics, machine learning, and large language models to assist employees with clinical, operational, and financial tasks, including:
- Medical note generation, such as through ambient listening
- Diagnostics and imaging (e.g., image reconstruction, de-noising, segmentation, and labeling the region of interest)
- Clinical summaries
- Patient risk outcome predictions
- Remote monitoring
- Personalized treatment plans
- Clinician inbox messaging (drafted responses)
- Chatbots and automated processes for prescription refills, scheduling, triage, past due balances, and patient follow-ups
- Referral leakage reduction and referral loop closure
- Specialty applications in gastroenterology, cardiovascular, hematology, and other services
Why AI Lifecycle Management is Important
Healthcare provider organizations are encountering a wide range of AI tools from various vendors, including those native to Electronic Health Record (EHR) systems, each with unique capabilities and requirements. Although this breadth of tools offers benefits, it also brings challenges, such as integrating AI tools with existing EHR systems and other technologies, fragmented workflows, and increased cognitive load for clinicians. These challenges stem from vendor sprawl and growing tech-stack complexity, which call for rigorous validation to find the best fit for specific clinical contexts, comprehensive training programs to promote effective use, and ongoing system maintenance. The end-to-end AI lifecycle consists of the policies, practices, and oversight mechanisms designed to keep AI systems fair, transparent, and aligned with governing standards, going beyond machine learning operations (MLOps) alone. Establishing AI governance gives organizations the foundation for successful AI use, advancing safe AI practices, reducing patient harm, and preventing diagnostic and treatment errors.
Risks Associated with AI Deployment
Although AI is a powerful tool that can seem all-knowing, it is not without risks, which makes establishing AI governance across your healthcare delivery organization imperative. From questioning the accuracy of your AI's responses to confirming that patient data is secure, AI governance provides peace of mind and confidence in your systems' abilities. Common risks associated with AI in healthcare include:
- Adverse patient outcomes resulting from errors in diagnosis and treatment recommendations, along with rising medical malpractice (MedMal) claims
- Perpetuation of existing healthcare biases if AI systems are trained on biased data
- Difficulty understanding and explaining AI tool functionality and decision-making capabilities
- Shadow AI (unsanctioned AI use outside approved channels)
- Physician burnout
- A high velocity of third-party and EHR vendor offerings that have yet to be vetted
- Tools turned on automatically by EHR vendors
Governance Frameworks for AI in Healthcare
There is no one-size-fits-all solution, as there are various AI governance models, some specifically tailored to healthcare settings. Review your current systems to identify inefficiencies and gaps and determine the necessary tools for success. From there, you can decide which framework best meets your unique needs.
Three Common AI Frameworks
NIST AI Risk Management Framework (AI RMF)
The National Institute of Standards and Technology (NIST) developed guidelines for responsibly identifying, assessing, and managing AI risks. The framework is organized around four core functions:
- Govern – Establish clear AI policies and accountability.
- Map – Understand AI risks and vulnerabilities.
- Measure – Continuously assess model performance and fairness.
- Manage – Actively mitigate risks and refine AI usage.
AI controls can be created for specific workflows and applicable use cases based on the NIST framework (e.g., defining acceptable use and outlining mechanisms for incident response).
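As a concrete illustration, the sketch below shows one way an organization might catalog such controls for each AI use case, aligned to the four RMF functions. It is a minimal, hypothetical Python example; the class, field names, and sample values are our own assumptions and are not prescribed by NIST.

```python
# Hypothetical sketch: a minimal registry recording, for each AI use case, the
# controls an organization might define under the four NIST AI RMF functions
# (Govern, Map, Measure, Manage). All names and values are illustrative only.
from dataclasses import dataclass


@dataclass
class AIUseCaseControls:
    use_case: str                   # e.g., "ambient clinical documentation"
    owner: str                      # accountable role (Govern)
    acceptable_use: list[str]       # permitted workflows (Govern)
    known_risks: list[str]          # mapped risks and vulnerabilities (Map)
    performance_metrics: list[str]  # what is measured and how often (Measure)
    incident_response: str          # escalation path when the tool misbehaves (Manage)


registry: dict[str, AIUseCaseControls] = {}


def register(controls: AIUseCaseControls) -> None:
    """Add a use case to the registry; governance review happens before go-live."""
    registry[controls.use_case] = controls


register(AIUseCaseControls(
    use_case="clinician inbox draft responses",
    owner="CMIO office",
    acceptable_use=["draft replies reviewed and signed by a clinician"],
    known_risks=["hallucinated clinical advice", "tone mismatch with patients"],
    performance_metrics=["draft acceptance rate", "edit distance before send"],
    incident_response="disable drafts for the affected template; notify the AI governance committee",
))
```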
Coalition for Health AI (CHAI)
The CHAI framework focuses on developing standards and best practices for AI in healthcare, emphasizing collaboration among stakeholders to promote ethical AI deployment. The Coalition for Health AI released a “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare,” highlighting recommendations to increase assurance standards on trustworthiness in AI.
Trustworthy and Responsible AI Network (TRAIN)
TRAIN is a consortium dedicated to operationalizing trustworthy AI in healthcare; its members develop, adopt, and share best practices for ethical and responsible AI use while working toward effective, equitable, and efficient AI-driven healthcare for all.
Recommendations for Implementing AI Lifecycle Management
Due to its complex and evolving nature, implementing AI lifecycle management is challenging across all industries, especially within healthcare organizations. Although it may require trial and error, there are key steps and recommendations that healthcare practices can follow to implement AI smoothly.
EisnerAmper has developed a novel framework that integrates best practices from NIST, CHAI, TRAIN, and our experience in risk and safety at healthcare organizations. We have curated over 100 best practices, deployed across stages of maturity, with an emphasis on the following:
- Governance
- Technology
- Finance
- Clinical Controls Integration
- Model Testing
- Transparency
- Monitoring (see the sketch following this list)
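To illustrate the model testing and monitoring emphasis above, the sketch below shows one way a team might recompute a headline performance metric and a simple subgroup gap on recent, labeled production data, then flag a model for review. The metric choices, thresholds, and column names are illustrative assumptions, not part of the framework itself.

```python
# Hypothetical monitoring check: recompute overall AUROC and a simple subgroup
# gap on a recent window of labeled production data, and flag the model for
# review when either drifts past a threshold. Assumes each subgroup in the
# window contains both outcome classes; thresholds are illustrative only.
import pandas as pd
from sklearn.metrics import roc_auc_score


def monitoring_check(df: pd.DataFrame,
                     auroc_floor: float = 0.80,
                     max_subgroup_gap: float = 0.05) -> dict:
    """df needs columns: y_true (0/1 outcome), y_score (model risk score), group."""
    overall_auroc = roc_auc_score(df["y_true"], df["y_score"])

    # AUROC per demographic or site subgroup, to surface uneven performance.
    by_group = df.groupby("group").apply(
        lambda g: roc_auc_score(g["y_true"], g["y_score"])
    )
    gap = by_group.max() - by_group.min()

    return {
        "overall_auroc": overall_auroc,
        "subgroup_auroc": by_group.to_dict(),
        "subgroup_gap": gap,
        "needs_review": overall_auroc < auroc_floor or gap > max_subgroup_gap,
    }
```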
Implementing AI Lifecycle Management: The Next Steps
Although establishing an AI lifecycle management program that promotes safe, ethical, and practical AI use might seem overwhelming, know that you are not alone. EisnerAmper’s framework is tailored to fit different healthcare organizations’ needs and addresses the following:
- Communication across stakeholders from Boards to end users, including legal and compliance
- Clarity of roles and responsibilities that drives reliability and accountability
- Monitoring from onboarding through continuous learning, with a keen eye on adoption and the agility to recalibrate
- Financial investment estimates that capture total cost of ownership (TCO) and tie it to return on investment (ROI) (a simple illustration follows this list)
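As a simple illustration of tying estimated total cost of ownership to return on investment, the sketch below computes a first-pass, three-year TCO and ROI for a hypothetical AI tool; every line item and dollar figure is an assumption for illustration only.

```python
# Hypothetical first-pass TCO/ROI estimate for a single AI tool.
# All line items and dollar figures are illustrative assumptions.

def total_cost_of_ownership(license_per_year: float,
                            integration_one_time: float,
                            training_per_year: float,
                            monitoring_per_year: float,
                            years: int) -> float:
    """Sum one-time and recurring costs over the evaluation horizon."""
    recurring = (license_per_year + training_per_year + monitoring_per_year) * years
    return integration_one_time + recurring


def simple_roi(annual_benefit: float, tco: float, years: int) -> float:
    """(total benefit - total cost) / total cost over the horizon."""
    total_benefit = annual_benefit * years
    return (total_benefit - tco) / tco


tco = total_cost_of_ownership(license_per_year=120_000,
                              integration_one_time=80_000,
                              training_per_year=25_000,
                              monitoring_per_year=30_000,
                              years=3)
roi = simple_roi(annual_benefit=220_000, tco=tco, years=3)  # e.g., clinician time saved
print(f"3-year TCO: ${tco:,.0f}; ROI: {roi:.0%}")
```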
From implementing a framework that fits your organization’s unique needs to ongoing evaluation and adaptation of governance frameworks for continued improvement, EisnerAmper’s team of experienced professionals is here to help at every step. Contact us below to learn how we can support and empower your organization with AI governance.