HITRUST this week unveiled its new AI Risk Management Assessment, which it bills as a comprehensive assessment approach for mitigating the risks of artificial intelligence deployments in healthcare and other organizations.
WHY IT MATTERS
The assessment is meant to help ensure that organizations have adequate governance in place for implementing AI tools, and that those guardrails can be effectively communicated by companies to management teams and boards of directors.
HITRUST says its approach aligns with standards issued by both NIST and ISO/IEC, and is supported by an assessment framework and SaaS platform to help adopters demonstrate that AI risk-management outcomes are met.
“The overall effort to address risk management at scale can take weeks or months of work just to design and maintain an assessment approach, socialize that approach and prepare for the assessment work itself,” said Bimal Sheth, EVP of standards development and assurance operations at HITRUST. “Even then, there can be questions about completeness and quality, and the work can be exhausting where the organization wants to align with multiple industry standards.”
Aimed at any organization using such tools – including machine learning algorithms and large language models for generative AI – the framework is designed to help healthcare and other leaders validate their approach to risk management for these fast-evolving technologies.
“The AI RM solution can be used as a self-assessment and benchmarking tool, or companies can engage one of more than 100 HITRUST external assessor firms to validate and verify implementation,” said Jeremy Huval, chief innovation officer at HITRUST, in a statement.
THE LARGER TREND
The new risk-management tool comes less than a year after HITRUST announced its AI Assurance Program in October 2023. That project seeks to offer an approach, inspired by the HITRUST Common Security Framework, to help healthcare organizations develop strategies for secure, sustainable and trustworthy AI models.
HITRUST says it also plans to launch a new AI Security Certification Program – which will include AI-specific control specifications in the HITRUST CSF and enhancements to the company’s assurance methodologies, systems and ecosystem – toward the end of this year.
Earlier this month, another organization, NIST, unveiled an open-source platform for AI safety assessments. The free tool, called Dioptra, aims to help developers understand and mitigate some of the unique data risks of AI and machine learning models.
ON THE RECORD
“Standards for AI risk management are evolving rapidly, and it’s important for companies to address these concepts with a thoughtful and comprehensive approach,” said Robert Booker, chief strategy officer at HITRUST, in a statement announcing the AI Risk Management Assessment. “Governance of this important and powerful capability is vital to unlocking the potential that AI presents, and risk management is essential to implementing AI responsibly.”
Mike Miliard is executive editor of Healthcare IT News
Email the writer: [email protected]
Healthcare IT News is a HIMSS publication.
The HIMSS AI in Healthcare Forum is scheduled to take place September 5-6 in Boston. Learn more and register.