How AI could disrupt the future of health care regulations [PODCAST]




Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!

We dive deep into the evolving role of artificial intelligence in health care with Muhamad Aly Rifai, a practicing internist and psychiatrist. We explore how AI is not only shaping clinical care but also transforming the enforcement of health care regulations. With a focus on the ethical dilemmas and potential risks of AI in medicine, Muhamad Aly shares his insights on the future of physician autonomy and patient care in an increasingly AI-driven world.

Muhamad Aly Rifai is a practicing internist and psychiatrist in the Greater Lehigh Valley, Pennsylvania.

He discusses the KevinMD article, “The use of artificial intelligence in the enforcement of health care regulations.”


Our presenting sponsor is DAX Copilot by Microsoft.

Do you spend more time on administrative tasks like clinical documentation than you do with patients? You’re not alone. Clinicians report spending up to two hours on administrative tasks for each hour of patient care. Microsoft is committed to helping clinicians restore the balance with DAX Copilot, an AI-powered, voice-enabled solution that automates clinical documentation and workflows.

70 percent of physicians who use DAX Copilot say it improves their work-life balance while reducing feelings of burnout and fatigue. Patients love it too! 93 percent of patients say their physician is more personable and conversational, and 75 percent of physicians say it improves patient experiences.

Help restore your work-life balance with DAX Copilot, your AI assistant for automated clinical documentation and workflows.

VISIT SPONSOR → https://aka.ms/kevinmd

SUBSCRIBE TO THE PODCAST → http://kevinmd.com/podcast

RECOMMENDED BY KEVINMD → http://kevinmd.com/recommended

GET CME FOR THIS EPISODE → http://kevinmd.com/cme

I’m partnering with Learner+ to offer clinicians access to an AI-powered reflective portfolio that rewards CME/CE credits from meaningful reflections. Find out more: http://kevinmd.com/learnerplus

Transcript

Kevin Pho: Hi, and welcome to the show. Subscribe at KevinMD.com/podcast. Today we welcome back Muhamad Aly Rifai. He is an internal medicine physician and psychiatrist. Today’s KevinMD article is “The Use of Artificial Intelligence in the Enforcement of Health Care Regulations.” Muhamad Aly, welcome back to the show.

Muhamad Aly Rifai: Thank you for having me.

Kevin Pho: So let’s jump straight into this article, but before talking about it, what led you to write it in the first place?

Muhamad Aly Rifai: So, I have been on the receiving end of investigations and prosecutions by the United States Department of Justice and the Department of Health and Human Services Office of Inspector General. During that prosecution, I discovered that both agencies had been using artificial intelligence to target physicians who are outliers. In my area of expertise, even before COVID made telehealth widespread, I was doing telehealth, and I was an outlier because nobody else was. I was also providing psychiatric services to rural nursing homes in Pennsylvania. So, unfortunately, the “precogs” of the artificial intelligence caught me because I was an outlier. But the software did not recognize that I was the only one providing these services and that there were no comparators.

So that really prompted me to dig deeper into the subject of artificial intelligence. Some of the things that I discovered were very shocking, and I detail them in the article.

Kevin Pho: All right, and before talking about that article, for those who aren’t familiar with your story, just tell us what kind of accusations the Department of Justice levied against you.

Muhamad Aly Rifai: So the Department of Justice indicated that I was billing for services that were not provided. It wasn’t that we didn’t see the patients; we did see them, but we provided specialized psychiatric services to severely psychiatrically ill individuals in rural nursing homes in Pennsylvania. The government said we were providing only bare-bones services, while in reality we were providing detailed, very specialized services. And they could not distinguish that. The AI picked me up because I was an outlier: I was the only one providing these services, and there were no comparators.

Kevin Pho: All right, so let’s talk about your article. You explore the role of AI in targeting you. Tell us what you found.

Muhamad Aly Rifai: So, the United States Department of Justice and the Office of Inspector General for the Department of Health and Human Services, as well as the Centers for Medicare & Medicaid Services, have really started homing in on the utilization of artificial intelligence. Artificial intelligence is basically a computerized, machine learning construct that mimics human intelligence. There are multiple versions of it, but specifically, they were targeting the billing codes. They were sending software to sweep through the billing codes and identify codes that were aberrant: people who were different from others, as well as people who were doing things that were not appropriate.
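To make the kind of screen he describes concrete, here is a minimal sketch of peer-comparison outlier detection over billing-code volumes. The data shape, z-score threshold, and scoring rule are illustrative assumptions, not the government’s actual system; note that a code billed by only one provider has no peer distribution to score against, which is exactly the comparator problem described above.

```python
from collections import defaultdict
import statistics

def flag_outliers(volumes, z_threshold=2.5):
    """Flag (provider, code) pairs whose claim volume is a z-score
    outlier among peers billing the same code.

    volumes: dict mapping (provider, code) -> number of claims.
    """
    by_code = defaultdict(list)
    for (provider, code), n in volumes.items():
        by_code[code].append((provider, n))

    flags = []
    for code, rows in by_code.items():
        counts = [n for _, n in rows]
        if len(counts) < 2:
            # A lone provider has no comparators -- the exact failure
            # mode described above. A sound screen must skip here,
            # not flag the only person doing the work.
            continue
        mean = statistics.mean(counts)
        sd = statistics.stdev(counts)
        for provider, n in rows:
            if sd > 0 and (n - mean) / sd > z_threshold:
                flags.append((provider, code, n))
    return flags

# Provider D bills 99215 far more often than eight peers; provider A is
# the only one billing 90792, so that code is skipped rather than flagged.
volumes = {("A", "99215"): 5, ("B", "99215"): 6, ("C", "99215"): 4,
           ("D", "99215"): 40, ("E", "99215"): 5, ("F", "99215"): 7,
           ("G", "99215"): 5, ("H", "99215"): 6, ("I", "99215"): 4,
           ("A", "90792"): 12}
print(flag_outliers(volumes))  # [('D', '99215', 40)]
```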

Specifically, in my case, they had software that checked for billing on deceased patients. We had three errors where two individuals shared the same date of birth and we billed on the deceased one instead of the living one. But beyond those three, the software found an additional ten patients who had died before I was even born, or while I was in kindergarten or high school. And they charged me, or indicated that I had billed on those patients, when in reality I wasn’t even practicing at that time.
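The deceased-patient check he describes amounts to joining claims against a death-records file. Below is a minimal sketch with hypothetical field names. It includes the sanity guard whose absence produces impossible hits like the ones above: a recorded death that predates the provider’s entire career signals a record-linkage error (for example, two patients sharing a date of birth), not billing on the deceased.

```python
from datetime import date

def deceased_billing_flags(claims, date_of_death, practice_start):
    """Split claims billed after a patient's recorded death into
    genuine flags versus likely record-linkage errors.

    claims: list of dicts with "patient_id" and "service_date".
    date_of_death: dict mapping patient_id -> date of death, if any.
    practice_start: date the provider began practicing.
    """
    flags, review = [], []
    for claim in claims:
        dod = date_of_death.get(claim["patient_id"])
        if dod and claim["service_date"] > dod:
            if dod < practice_start:
                # A patient who died before the provider ever practiced
                # points to bad record linkage, not billing on the
                # deceased. Route to human review instead of accusing.
                review.append(claim)
            else:
                flags.append(claim)
    return flags, review

claims = [
    {"patient_id": "P1", "service_date": date(2018, 3, 2)},
    {"patient_id": "P2", "service_date": date(2018, 3, 2)},
]
date_of_death = {"P1": date(2017, 11, 5),   # plausible flag
                 "P2": date(1961, 6, 1)}    # linkage error
flags, review = deceased_billing_flags(claims, date_of_death,
                                       practice_start=date(2001, 7, 1))
print(len(flags), len(review))  # 1 1
```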

Kevin Pho: So when you brought up these rebuttals and inaccuracies, what was the response you received?

Muhamad Aly Rifai: A response of embarrassment and a lack of explanation. Actually, the jury was very sympathetic to those findings, and that was one of the factors that led them to come back with a verdict of not guilty: they realized that the software the government was using was basically 80 to 90 percent inaccurate and in its infancy. That is one of the reasons I wrote about this topic.

Kevin Pho: Now, is your case an outlier scenario? For the most part, when the government uses artificial intelligence to look for potential fraud, is it generally effective, or is your case the norm?

Muhamad Aly Rifai: My case is the norm. It’s typical. The software, the artificial intelligence, is not ready yet. It hasn’t learned. I’m actually in the process of writing an article about another event, a year before my case, involving aberrant billing for urinary catheters. Individuals and companies billed for $3 billion worth of urinary catheters, and the Medicare software failed to detect it; it was individuals in the community who alerted Medicare to that billing. The artificial intelligence is not ready for prime time, and it’s producing faulty results that lead to unjust prosecutions of physicians.

Kevin Pho: Do you have any insight into the technology behind the artificial intelligence and why it isn’t as accurate as it should be?

Muhamad Aly Rifai: Sure, sure. Number one, it is in its infancy; it is still learning. During my prosecution, I learned that one of the Medicare contractors that conduct audits using artificial intelligence, Safeguard Services, is a subsidiary of a larger company called Peraton (P-E-R-A-T-O-N). I discovered that Peraton was a contractor for various federal agencies, like the FBI and CIA, providing artificial intelligence for tracking terrorism and other security threats. They were adapting that software for use in Medicare billing.

When you look at their advertising, it can be unnerving to see that this kind of technology is being used to target physicians for Medicare billing. So, it’s software tailored for other purposes being adapted to probe for billing abnormalities in the Medicare system.

Kevin Pho: So if a physician gets caught up in one of these cases where they are falsely accused, what kind of options do they have in terms of next steps?

Muhamad Aly Rifai: They have to scrutinize what was done in terms of the statistical analysis. We’re seeing cases where a small number of aberrant billings are identified and then extrapolated to much larger numbers. For example, in the case of Dr. Rajendra Batra in Detroit, they reviewed only six charts out of 25,000 patients, extrapolated from that review, and ultimately asked for $455 million back. Dr. Batra was found not guilty after the experts he brought in uncovered issues with how artificial intelligence had been used to inflate the Department of Justice’s findings.
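The statistical mechanics at issue are simple to illustrate. The sketch below uses entirely hypothetical figures, not numbers from the case, to show how an error rate from a handful of audited charts scales into an enormous repayment demand, and how wide the uncertainty on such a tiny sample really is.

```python
import math

# All figures are hypothetical, chosen only to show the mechanics.
sampled_charts = 6
charts_disputed = 3              # say half the tiny sample is disputed
avg_overpayment = 250.00         # assumed average overpayment per chart
total_patients = 25_000

rate = charts_disputed / sampled_charts
demand = rate * total_patients * avg_overpayment
print(f"error rate {rate:.0%} -> extrapolated demand ${demand:,.0f}")

# A six-chart sample carries enormous uncertainty: the binomial
# standard error on the rate, sqrt(p*(1-p)/n), is about 0.20 here,
# so a naive 95% interval spans roughly 10% to 90% -- far too wide
# to support a precise repayment figure.
se = math.sqrt(rate * (1 - rate) / sampled_charts)
print(f"standard error ~= {se:.2f}")
```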

Kevin Pho: Are there specific areas of medicine or typical cases more prone to these types of false AI accusations?

Muhamad Aly Rifai: Yes. For example, physicians accused of overprescribing opioids were often targeted and screened using artificial intelligence. Now, some of those physicians are defending themselves by bringing in experts to show they were not outliers and that the AI targeted them inappropriately. When you dig into the actual evidence, which was built on AI algorithms, you can sometimes prove that the targeting was wrong, leading to not-guilty verdicts.

Kevin Pho: You told your story in our last episode, but for those who didn’t hear it, can you share how long it took to clear your name, the emotional difficulties, and the time and effort it required?

Muhamad Aly Rifai: It was a very difficult journey that took a lot of hard work, emotion, and sleepless nights sifting through the evidence provided by the Department of Justice to figure out how the software was applied. Because of my knowledge of computers and my experience as an expert witness and psychiatrist, I was able to identify inconsistencies with the help of other experts in the field. We found the discrepancies in the software and in how the evidence was presented, which allowed us to challenge it effectively.

However, the trend we’re seeing is that the Office of Inspector General, the Centers for Medicare & Medicaid Services, and the Department of Justice are implementing software that proactively predicts who might overcode or cause issues, which is heading in a very dangerous direction.

Kevin Pho: What is your solution from the government’s perspective? Given the size of our current health care system, some automation is necessary to sift out bad actors. Is there a role for artificial intelligence? If so, how would you improve its use?

Muhamad Aly Rifai: Sure, sure. I believe there is a tremendous role for artificial intelligence, but it has to be well trained and backed by human oversight to verify its findings. In my case, the AI model produced results without any human verification. They just presented the data in court, and it was embarrassing when we pointed out the inaccuracies. It’s crucial to have trained individuals validate these findings. For example, the Office of Inspector General for the Department of Health and Human Services has only one physician on staff out of 1,600 employees. Involving more physicians to verify the findings would be a significant improvement. If Medicare approached me with findings reviewed by several physicians, I would understand. But when college or high school graduates review the billing and call it fraudulent without understanding its nuances, that becomes a big problem.

Kevin Pho: We’re talking to Muhamad Aly Rifai. He is an internal medicine physician and psychiatrist. Today’s KevinMD article is “The Use of Artificial Intelligence in the Enforcement of Health Care Regulations.” Muhamad Aly, we’ll end with some of your take-home messages for the KevinMD audience.

Muhamad Aly Rifai: I think artificial intelligence is here to stay, and health systems and practitioners need to be aware that they could be perceived as outliers based on the codes or procedures they use. It’s important for them to have their own systems to compare and see where they stand relative to their peers. AI has great potential, and I hope we continue discussing how it can be used effectively in other areas of medicine.

Kevin Pho: Well, thank you again for sharing your story, time, and insight. Thanks for coming back on the show.

Muhamad Aly Rifai: Thank you.





