Artificial intelligence (AI) promises to transform everything from baseball to college admissions. And this week, AI filled the news cycle with corporate intrigue when OpenAI fired, and then apparently rehired, its co-founder and CEO.
But what will AI do for healthcare, and how do consumers feel about it? That’s the subject of two recent reports. The short answer is that, despite reservations, consumers see upside: many have high hopes for AI to improve healthcare.
According to new survey results from the Deloitte Center for Health Solutions, about half (53%) of consumers surveyed believe generative AI could improve access to healthcare. Another 46% said they think it could make healthcare more affordable. These figures were even higher—69% and 63%, respectively—among respondents who reported already using AI.
The Deloitte survey was conducted in the fall of 2023 with a nationally representative sample of more than 2,000 U.S. adults. The majority (84%) of respondents had heard of generative AI and 48% reported that they’re already using it. Health and wellness was the third most common reason respondents reported using AI, after fun and work tasks. Specifically, 19% of respondents who are using AI for health and wellness said they use it to learn about specific health conditions. Nearly as many said they use it to research treatment options (16%) or to understand medical or health insurance terms (15%).
Respondents without health insurance were more likely than insured respondents to report using generative AI for healthcare purposes—47% compared to 38%. Uninsured respondents were more likely to use generative AI for mental health support (17% vs. 10%), to find a healthcare provider (13% vs. 9%), and to get recommendations about medications to ask a doctor about (10% vs. 7%).
These discrepancies may suggest that AI is filling gaps uninsured people face in accessing healthcare and health-related information. They also underscore how important it is that generative AI provide reliable results, which most consumers who use it believe it does. The Deloitte survey showed that 69% of respondents using AI for health-related purposes find the information they get to be very or extremely reliable, while just 5% say the results are not at all or not very reliable (the balance are neutral).
This confidence may not always be well founded, according to Bill Fera, M.D., principal at Deloitte Consulting LLP, who contributed to the report.
“Consumers should be careful with over-reliance on [generative] AI tools at this juncture and should always validate findings with a clinician,” Fera said. “Many models have been shown to be propagating bias, so customers should always be aware of potential bias until we can remediate models and eliminate bias.”
Despite limitations, consumers who have used AI related to health or healthcare tended to be optimistic about its potential; 71% said they agree that generative AI could revolutionize how healthcare is delivered, compared with 50% of respondents who haven’t used AI for health reasons.
“Some of consumers’ optimism, I believe, comes from their desire for more personalized healthcare experiences,” Fera said. “They see generative AI as increasing personalized access to the healthcare system with automated personalized follow-ups, motivational nudging tailored to them, and appropriate triaging for immediate needs to the appropriate level of care whether it’s an instant virtual visit versus an in-person office visit or a need for urgent care.”
Fera suggested that generative AI may be especially well suited to administrative simplification tasks between health plans, providers, and consumers, such as billing transparency to clarify consumer financial responsibilities.
“[That’s] what consumers want as they take greater control of their healthcare,” he said.
The survey showed that consumers are comfortable using AI to inform them about new treatments when they become available (71%), to help review and interpret lab results (66%) and imaging results (62%), to determine the best treatments and medications for their condition (58%), to determine how urgently they need treatment (54%), and to diagnose their condition (51%).
That comfort with AI depends on a key factor: transparency. Eighty-three percent of respondents said that it’s very or extremely important that their healthcare providers disclose when they’re using generative AI for treatment or clinical support.
This finding is consistent with a previous survey conducted by Carta Healthcare, a clinical data company. In that survey, also fielded this fall with more than 1,000 U.S. consumers, 80% said that knowing whether their healthcare provider is using AI is important to improving their comfort with it.
Overall in the Carta Healthcare survey, three out of four respondents said they don’t trust AI in a healthcare setting, but levels of trust varied widely by generation: 61% of Millennials said they trust the use of AI in healthcare, while 62% of Baby Boomers and 54% of Gen X respondents said they do not.
One in four respondents said they believe AI would provide better information than their provider and 60% said they think AI could ease burdens created by healthcare labor shortages.
On the less positive end, a majority (63%) worried that AI would put their healthcare data at risk and the same proportion worried that AI would lead them to get less time with their provider.
The Carta Healthcare survey revealed some confusion about AI. Nearly three-quarters (71%) said they don’t know if their provider uses AI tools and 43% admitted that their understanding of AI is limited. They’re open to learning more, though, with 47% reporting that if they understood it better, they’d be more likely to trust it. Two-thirds (65%) also said that an explanation from their healthcare provider would make them more comfortable with their provider’s use of AI.
“We are entering an exciting phase of accelerating evolution in artificial intelligence,” Fera said. “That future will be extremely bright if we proceed with the appropriate transparency and trustworthy frameworks that support and elevate critical thinking for human beings who remain in the middle of complex, sensitive tasks and decision making.”