How to properly evaluate AI technologies? UVA Health analytics leader has answers



Evaluating vendors’ AI technologies within an AI governance framework is of critical importance in a healthcare environment, many experts say. Vendor-provided tools must be safe, effective and ethical to provide the best outcomes for all patients.

As part of a governance structure, there should be an effective system for understanding and evaluating the risks associated with AI systems in clinical use. One such approach defines four categories of risk and a grading system to assess the risk in each category.

These categories are:

  1. Correctness and transparency
  2. Fairness and equity
  3. Integrated workflow
  4. Safety and privacy

By highlighting these areas of risk, a health system can make more informed decisions about which tools to use, and technology deployment teams can develop risk mitigations appropriate to those tools or workflows.
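To make the idea concrete, the four-category grading could be sketched as a simple scoring structure. This is a hypothetical illustration only: the category names come from the article, but the 1–5 grade scale, the aggregation rule and the mitigation threshold are assumptions, not UVA Health's actual method.

```python
# Hypothetical sketch of a four-category AI risk grading.
# Categories are from the article; the 1-5 scale (1 = low risk,
# 5 = high risk) and the threshold are illustrative assumptions.

CATEGORIES = [
    "correctness_and_transparency",
    "fairness_and_equity",
    "integrated_workflow",
    "safety_and_privacy",
]

def grade_tool(grades: dict) -> dict:
    """Summarize per-category risk grades for a vendor AI tool."""
    missing = [c for c in CATEGORIES if c not in grades]
    if missing:
        raise ValueError(f"ungraded categories: {missing}")
    # The category with the worst (highest) grade drives the decision.
    highest = max(grades, key=grades.get)
    return {
        "grades": grades,
        "highest_risk_category": highest,
        "needs_mitigation_plan": grades[highest] >= 4,  # assumed threshold
    }

summary = grade_tool({
    "correctness_and_transparency": 2,
    "fairness_and_equity": 4,
    "integrated_workflow": 1,
    "safety_and_privacy": 3,
})
```

A structure like this lets the deployment team see at a glance which category drives the overall risk and whether a mitigation plan is required before go-live.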

Establishing a robust governance framework to interact with vendor products will allow healthcare organizations to harness the benefits of commercial AI systems while minimizing risk and maintaining public trust, said Glenn Wasson, administrator of analytics at UVA Health. He holds a doctorate in computer science.

Wasson will be addressing this subject in a HIMSS25 educational session titled “Dear AI Vendors: This Is What We Need.”

At UVA Health, Wasson oversees the way the health system curates and analyzes data for patient care and research. His responsibilities include data operations, analytics, data science and data visualization.

In this role, he works on everything from at-the-bedside predictive care to population risk modeling to untangling hospital ranking algorithms. He is particularly interested in how to create problem-solving cultures.

We interviewed Wasson for a preview of his HIMSS25 session.

Q. What do hospitals and health systems need to understand about AI tools compared with previous software?

A. AI is moving into every aspect of today’s healthcare enterprise, opening the door to potential improvements in everything from diagnoses and treatment planning to billing and resource management to research and drug discovery.

But AI systems also carry novel risks not present in previous generations of software. Organizations need to understand these risks and their potential impacts and mitigations to effectively govern the AI within their enterprise.

Understanding the risks and benefits of AI systems is a complex undertaking. Provider organizations seldom have the access, bandwidth or talent to analyze vendor code themselves.

Instead, this session discusses a dialog between provider organizations and vendors that can highlight sources of risk. This dialog will require increased transparency around data, algorithms and workflows, but can also help vendors develop provider confidence in their tools.

Q. How will you be focusing on AI in your HIMSS25 session?

A. This session is about artificial intelligence, which comes in many variations, generative AI being the latest. Within healthcare, AI’s ability to analyze evidence based on past human experience allows it to be used in many different scenarios.

These include diagnosis prediction and treatment selection, personalized medicine, staff scheduling, bed allocation, supply chain operation, remote monitoring, improvements in billing and coding, cost reductions through efficiency, and many more.

In this session we want to discuss different use cases for AI and the workflows that allow AI and humans to collaborate in making decisions. We will discuss how understanding the actions that result from those decisions – and the risks associated with those actions – can build confidence that an AI system will safely and correctly provide the support needed.

Q. What is one of the various takeaways you hope HIMSS25 attendees will leave your session with and be able to apply when they return home to their organizations?

A. Attendees will leave with an understanding of the novel aspects of AI system evaluation that have not been a part of traditional technology assessment. We will provide a framework for AI system evaluation through a dialog (a series of questions) between providers and vendors to understand risks and mitigations.

The questions will consider different sources of risk – data sets, workflows, etc. – and will require qualitative and quantitative answers.

Therefore, this dialog is not meant to be simply a discussion between AI professionals – for example, data scientists or statisticians – but must include leaders and operators who understand the intended deployment environment and workflows.

Finally, we will discuss how the dynamic learning nature of today’s advanced AI means that this dialog must be ongoing.

Wasson’s session, “Dear AI Vendors: This Is What We Need,” is scheduled for Tuesday, March 4, from 10:15-11:15 a.m. at HIMSS25 in Las Vegas.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication

