Artificial intelligence (AI) can refer to a variety of technology-assisted applications. At its essence, it refers to systems or machines that mimic human intelligence to perform tasks and, in some cases, iteratively improve themselves based on the information they collect. In healthcare, AI now appears in a wide range of clinical and administrative applications.
However, AI comes with numerous challenges that, if not identified and corrected, can generate incorrect findings and, at worst, threaten patient health and safety.
Bias can be introduced at every stage of AI development and use. AI systems learn from their training data and absorb whatever biases that data contains. Studies show that AI developed with data from a particular setting or constituency (such as academic medical centers or Medicare patients) performs best for that population and is not always effective for others. Even when AI systems learn from accurate, representative data, problems can arise if that information reflects underlying biases and inequities in the health system. For example, African American patients receive, on average, less treatment for pain than white patients. An AI system learning from health system records might learn to suggest lower doses of painkillers to African American patients, even though that pattern reflects systemic bias, not a biological reality. Resource allocation AI systems could likewise increase inequality by assigning fewer resources to patients deemed less desirable or less profitable by health systems.
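To make this concrete, here is a minimal sketch using entirely synthetic data; the groups, doses, and numbers are hypothetical inventions, but they show how a model trained on historically biased records reproduces the disparity rather than correcting it:

```python
# Minimal sketch with synthetic data: a model trained on biased records
# reproduces the bias. All groups, doses, and numbers are hypothetical.
from collections import defaultdict

# Synthetic training records: (pain_score, group, dose_prescribed).
# The historical records systematically under-dose group "B" for the
# same reported pain level.
def historical_dose(pain, group):
    return 10 * pain * (0.7 if group == "B" else 1.0)

records = [(pain, group, historical_dose(pain, group))
           for pain in range(1, 11)
           for group in ("A", "B")
           for _ in range(50)]

# "Train" a trivial model: average dose per (pain, group) bucket.
totals = defaultdict(lambda: [0.0, 0])
for pain, group, dose in records:
    totals[(pain, group)][0] += dose
    totals[(pain, group)][1] += 1

def predict_dose(pain, group):
    dose_sum, count = totals[(pain, group)]
    return dose_sum / count

# Identical pain, different groups: the model faithfully echoes the
# historical disparity, not any biological reality.
print(predict_dose(7, "A"))  # 70.0
print(predict_dose(7, "B"))  # 49.0
```

The model is not "wrong" about its training data; it is faithfully learning a pattern that should never have existed.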
Machine learning algorithms are often chosen by trying several options and picking the best fit. Humans make that choice and bring their own biases to it. Further, if the people making the selection don't understand the potential biases of each candidate approach, those biases can't be weighed in the selection process. People also make decisions about thresholds for matches and other aspects of an application that affect its results; these, too, can introduce new biases or perpetuate existing ones.
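As an illustration, here is a hedged sketch of that "try several, keep the best scorer" workflow. The candidate models, metric, and data below are hypothetical stand-ins; the point is that which candidates to try and which metric defines "best" are human choices:

```python
# Hypothetical sketch of model selection: try several algorithms, keep
# the best scorer. The candidates, metric, and synthetic data are
# illustrative assumptions, not a description of any specific system.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}

# "Best fit" is whatever maximizes the chosen metric (here, accuracy by
# default), and the metric itself is a human decision.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(scores)
print("selected:", best_name)
```

Nothing in this loop asks how each candidate behaves across patient subgroups; that question has to be added deliberately.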
In addition, bias and inequity can result from the way AI is applied to patients. Many people don't understand that AI is inherently probabilistic, how its results are produced, or how they are best interpreted. Someone, usually without clinical expertise, decides what score is good enough to flag as a match, and they don't always weigh the risk of including false matches against the risk of omitting true ones. When physicians apply these results to their patients, they need to consider this and act accordingly rather than simply assume that the AI accurately determined whether a patient has X or needs Y.
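A small, hypothetical sketch of that decision: the scores and cutoffs below are invented, but they show how a single number chosen by a human shifts the balance between false matches and missed patients:

```python
# Hypothetical match scores for five patients; the threshold is a human
# choice that trades false matches against missed patients.
scores = [0.42, 0.55, 0.61, 0.78, 0.90]

def flag_matches(scores, threshold):
    return [score >= threshold for score in scores]

# A permissive cutoff flags more patients (more false positives);
# a strict cutoff flags fewer (more missed true matches).
print(flag_matches(scores, 0.5))  # [False, True, True, True, True]
print(flag_matches(scores, 0.8))  # [False, False, False, False, True]
```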
MHDC is working with payers and providers to address several of these AI problems. Our Data Governance Collaborative is increasing the adoption of common clinical vocabularies and application programming interfaces (APIs) that enable the sharing of information among organizations. These efforts produce more coherent, structured clinical information and reduce the risks created by the current fragmentation of clinical data across numerous systems, within and outside the healthcare enterprise. The Collaborative also explores how AI works, some of the ways bias can be introduced, and how the industry as a whole can try to minimize it.
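For a flavor of what a standards-based exchange can look like, here is a hypothetical sketch; the FHIR-style endpoint, resource type, and field names are illustrative assumptions on our part, not a description of any specific Collaborative deliverable:

```python
# Hypothetical sketch of a standards-based API query. The endpoint is
# invented; the FHIR-style resource and headers are one common pattern
# for structured clinical data exchange, assumed here for illustration.
import requests

BASE = "https://example-payer.org/fhir"  # hypothetical endpoint

resp = requests.get(
    f"{BASE}/Condition",
    params={"patient": "123", "_count": "10"},
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()

# Each condition is coded against a common vocabulary, so every
# organization reading it interprets the same code the same way.
for entry in resp.json().get("entry", []):
    coding = entry["resource"].get("code", {}).get("coding", [{}])[0]
    print(coding.get("system"), coding.get("code"), coding.get("display"))
```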
We are also working to automate the burdensome prior authorization process. With a leading payer and a specialty provider, we are identifying and documenting when prior authorization is required, reducing the volume of prior authorization requests the provider submits to the payer. Part of this work aims to replace bots that try (and fail) to make complex operations routine, such as accessing multiple payer portals for prior authorization information, with simpler, more direct machine-to-machine exchanges that are more robust, reliable, and scalable.
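To illustrate the contrast with portal-scraping bots, here is a hypothetical machine-to-machine check; the endpoint, route, and payload are invented for illustration and do not describe the actual exchange being built:

```python
# Hypothetical sketch of a direct machine-to-machine prior authorization
# check, in place of a bot scraping payer portals. The endpoint, route,
# and payload are invented for illustration.
import requests

def prior_auth_required(payer_base_url, member_id, procedure_code):
    """Ask the payer's API directly whether prior authorization is needed."""
    resp = requests.post(
        f"{payer_base_url}/prior-auth/requirements",  # hypothetical route
        json={"member": member_id, "procedure": procedure_code},
        timeout=10,
    )
    resp.raise_for_status()
    # Fail closed: if the payer doesn't answer clearly, assume auth is needed.
    return resp.json().get("priorAuthRequired", True)

# Example call with hypothetical identifiers:
# if prior_auth_required("https://example-payer.org", "M12345", "27447"):
#     print("Submit a prior authorization request before scheduling.")
```

A single well-defined exchange like this is easier to secure, monitor, and scale than a fleet of bots emulating human clicks across many portals.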
We are also actively participating in industry discussions about AI, including recent sessions where ONC solicited ideas for improving AI use in healthcare. We provided suggestions in several areas, including data collection, reporting, education, and technical and user support. Don't be surprised if new certification requirements and other HHS regulations or programs show up around AI.
MHDC believes that AI has the potential to dramatically improve clinical care, reduce administrative complexity, and improve the patient and member experience. We are mindful of the risks that ill-considered, imprudent use of AI can present and, by collaborating with our many constituents, are confident we can navigate this complex process successfully.