Implementing Change Part Two: What AI Can’t Do For Healthcare

Rick Newell, MD, MPH, is CEO of Inflect Health, Chief Transformation Officer at Vituity, and passionate about driving change in healthcare.

I’ve talked about what artificial intelligence can do to improve healthcare for patients, physicians, other healthcare providers and payers. Just as important is understanding what AI can’t do—to prevent tech entrepreneurs and investors from pouring time and resources down rabbit holes that won’t bring about better care, improved working conditions or lower costs. Based on my experiences as a practicing emergency physician and an executive for a healthcare innovation and investment hub, here are my thoughts and observations on what AI can’t do for healthcare:

Patients need a human connection.

The first step a physician takes in healing is actively listening to the patient and making them feel seen, heard and understood. While technology can help people communicate, patients typically want a kind eye and comfort when they are in distress—not a thumbs up from a chatbot.

As AI is inevitably used more in disease detection, analysis and treatment, I think we must avoid the urge to push patients onto self-service apps for everything. Quite the opposite: We should look to free physicians and providers to put even more time and focus into being the human face of medicine.

AI solutions are only as good as the data.

Much of AI works by crunching vast collections of data. But often, there’s either conflicting data or a total lack of it. I routinely have patients present to my emergency department as undifferentiated blank slates, yet the patients themselves are aware that something is wrong. An elderly man who presents to the emergency department saying “I know something’s wrong” is usually correct. If all available data says he’s fine, we’ve learned to keep digging, and more often than not, we’ll find that he is right.

Machine logic can’t think like a human.

Machine learning has proven superior to human minds in the early detection of disease and in correlating symptoms with likely diagnoses, but it can’t replace human thinking—which is often nonlinear and creative—in recognizing when things just don’t add up. I think it would be wrong to think of AI as eventually doing the physicians’ work rather than assisting them to do it better.

In the emergency department, we often see nontextbook cases that AI as it stands today might mismanage. Automation has already proven its ability to analyze digitized images and information, but it hasn’t yet been successfully applied to controlling tactile tasks such as joint reductions, suturing or placing central lines. Those acts still require a skilled practitioner to guide the work manually.

Engineers aren’t physicians.

Moreover, there’s a specific problem I still see in healthcare AI systems under development: Machine learning systems need to be trained on very large datasets. A human expert needs to tell the AI: “This is a positive reading. This is a negative reading. This number indicates that possibility.” Too often that training is done without the full involvement of physicians with relevant experience. As the saying in software goes: Garbage in, garbage out.
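To make that labeling step concrete, here is a minimal sketch in Python using scikit-learn of how expert-assigned labels become the ground truth a model learns from. The file name, column names and dataset are hypothetical placeholders for illustration, not a real clinical pipeline:

```python
# Minimal illustration: a supervised model only learns what the labels teach it.
# "labeled_readings.csv" and the column names below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("labeled_readings.csv")

# "label" is 1 (positive reading) or 0 (negative reading), assigned by a
# physician reviewing each case -- the human-expert step described above.
X = df[["reading_value", "patient_age"]]
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# If the labels were assigned without clinical expertise, this score is
# meaningless: garbage in, garbage out.
print("held-out accuracy:", model.score(X_test, y_test))
```

Every row a clinician mislabels, or labels without the relevant specialty experience, is a row the model will faithfully learn from anyway.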

Solving healthcare’s challenges requires human framing.

While AI can solve the most challenging mathematical problems, it cannot determine the best mental model to use for a situation, nor recognize when to replace that model with a superior one. The book Framers: Human Advantage in an Age of Technology and Turmoil lays out how the biggest issue facing society is framing our problems, not solving them. I’ve noticed that much of the current AI in healthcare does not solve a useful problem because the problem was not framed by practicing healthcare experts.

For example, we could ask AI to determine optimal masking and isolation protocols to minimize spread of Covid-19 while maximizing economic growth. But a human mental model needs to decide the importance of individual autonomy versus group safety. Furthermore, a human has to decide how much a life is worth and how to weigh that value against the economic gain from fewer restrictions. A human also must decide the time horizon over which to optimize those trade-offs: one month or 10 years?

Once we frame the problem, we must then create boundaries for how the AI can solve it. In this example, race/ethnicity is closely tied to economic opportunity, so a model may be more accurate if the AI is allowed to use race/ethnicity data in its calculations. But doing so may encourage an AI solution that promotes even more inequality. There is a legitimate concern that AI could propagate inequity if its boundaries are set by data scientists seeking the most accurate model rather than the fairest and most just one.
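As a rough illustration of what setting such a boundary can look like in practice, the sketch below (Python with scikit-learn, hypothetical dataset and column names) simply withholds the sensitive attribute from the model’s inputs. Real fairness work is far more involved, since other variables can act as proxies for the excluded one:

```python
# Hypothetical sketch of one crude "boundary": keep a sensitive attribute out
# of the model's inputs. The dataset and column names are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("policy_training_data.csv")

SENSITIVE = {"race_ethnicity"}
TARGET = "outcome"
features = [c for c in df.columns if c not in SENSITIVE and c != TARGET]

# The model may score as less "accurate" without the sensitive column; whether
# that trade-off is acceptable is a human framing decision, not the optimizer's.
model = LogisticRegression(max_iter=1000).fit(df[features], df[TARGET])
```

The point is not the code itself but who decides what goes into `SENSITIVE`: that choice is a value judgment, not an engineering optimization.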

I believe AI has enormous untapped potential to make healthcare better for all involved. But to get there, we need to keep clearly in mind that there are things AI can’t and may never do.

