Debate Considers Whether Harnessing AI in Medicine Is Unlocking Potential or Unleashing Chaos


Artificial intelligence (AI) is rapidly transforming the healthcare landscape, bringing both promise and challenges. Its judicious use can deliver significant benefits to clinical practice, but it is not without pitfalls. During The Great AI Debate: Unlocking Potential or Unleashing Chaos? at ACR Convergence on Monday, presenters debated the pros and cons of AI use.

Recorded sessions at ACR Convergence 2025, including the debate, will be available on demand to all registered meeting participants within 72 hours of the live presentation through October 31, 2026, by logging into the meeting website.

AI Augments the Clinician

Jeffrey R. Curtis, MD, MS, MPH

Jeffrey Curtis, MD, MS, MPH, Professor of Medicine in the Division of Clinical Immunology and Rheumatology at the University of Alabama at Birmingham, said AI can help clinicians perform time-consuming or rote tasks more efficiently, tackle challenging tasks more effectively, and take on otherwise “impossible” tasks.

“AI excels at handling time-consuming, repetitive tasks that require minimal cognitive load,” Dr. Curtis said. “For example, a tool like OpenEvidence can quickly provide clinical reasoning and literature searches tailored to specific patient scenarios.”

These platforms are particularly useful for generating letters of medical necessity or prior authorization (PA) requests, streamlining administrative burdens that consume valuable physician time, he said.

“However, it’s important to remember that while AI can support these processes, insurance companies may still reject PA requests regardless of the evidence presented,” Dr. Curtis noted.

AI can augment, rather than replace, clinical intelligence.

“For instance, differential diagnosis generators can remind us to look for rare conditions — scurvy in a young patient with an unusual diet — that might otherwise be overlooked,” Dr. Curtis said.

AI-powered ambient listening tools, like virtual medical scribes, can transcribe patient encounters and filter out irrelevant content, freeing up clinicians to focus on patient care.

“Integration with electronic health records (EHRs) is crucial for maximizing the utility of these tools, enabling automatic extraction of relevant data, such as ICD-10 (International Classification of Diseases, Tenth Revision) codes and treatment histories,” Dr. Curtis emphasized.
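To make the integration point concrete, here is a minimal sketch of the kind of extraction Dr. Curtis alluded to: pulling ICD-10-CM codes from a FHIR-style Condition bundle. The sample bundle and the extract_icd10_codes helper are hypothetical, and a real integration would query an EHR vendor’s FHIR API and contend with far messier data.

```python
# Minimal sketch: extracting ICD-10-CM codes from a FHIR-style Bundle.
# The sample data is synthetic; a real integration would fetch the Bundle
# from an EHR's FHIR endpoint and validate it properly.

ICD10_SYSTEM = "http://hl7.org/fhir/sid/icd-10-cm"  # standard FHIR system URI

sample_bundle = {  # hypothetical patient record
    "resourceType": "Bundle",
    "entry": [
        {"resource": {
            "resourceType": "Condition",
            "code": {"coding": [
                {"system": ICD10_SYSTEM, "code": "M05.79",
                 "display": "Rheumatoid arthritis with rheumatoid factor"}]}}},
        {"resource": {
            "resourceType": "Observation",  # not a diagnosis; skipped below
            "code": {"coding": [
                {"system": "http://loinc.org", "code": "4548-4"}]}}},
    ],
}

def extract_icd10_codes(bundle: dict) -> list[tuple[str, str]]:
    """Collect (code, display) pairs from Condition resources in a Bundle."""
    codes = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Condition":
            continue  # only diagnoses carry the codes we want here
        for coding in resource.get("code", {}).get("coding", []):
            if coding.get("system") == ICD10_SYSTEM:
                codes.append((coding["code"], coding.get("display", "")))
    return codes

print(extract_icd10_codes(sample_bundle))  # [('M05.79', 'Rheumatoid arthritis...')]
```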

When it comes to tackling complex and perhaps impossible tasks, AI’s ability to process vast datasets opens new frontiers in medicine.

“AI can help phenotype patients with rheumatoid arthritis (RA) using EHR data, potentially predicting disease trajectories and informing personalized management plans,” Dr. Curtis said. “In imaging, AI-driven thermography and finger fold analysis can remotely assess disease activity in RA patients, reducing the need for in-person visits and improving access to care.”

AI tools can more efficiently screen patients for clinical trials, especially when complex phenotypes are involved. This accelerates research and ensures that eligible patients are identified more accurately, Dr. Curtis noted.

“AI chatbots can draft responses to patient portal messages; studies show that while these tools may not save time, clinicians report that they help maintain empathy and consistency in communication,” he said.

On the research and education front, generative AI can draft institutional review board documents, grant proposals, and lay summaries for research participants, making research more accessible and transparent. Dr. Curtis added that AI is also being used to screen fellowship and staff applications, saving significant administrative time.

AI Needs Guardrails

Jinoos Yazdany, MD, MPH

“Despite AI’s promises, its adoption is outpacing our understanding of long-term impacts, echoing previous waves of medical innovation that brought unintended consequences years later,” said Jinoos Yazdany, MD, MPH, Chief of the Division of Rheumatology at UCSF San Francisco General Hospital. “For rheumatologists and other clinicians, it is crucial to recognize the risks inherent in this technology and advocate for responsible implementation.”

AI is not a neutral tool, she asserted.

“Unlike a stethoscope, it acts as a force amplifier, magnifying the data, assumptions, and biases fed into it,” Dr. Yazdany said. 

She described early AI models that demonstrated glaring biases, such as the infamous Google image classifier that mislabeled photos of Black individuals as gorillas, or Amazon’s hiring algorithm that screened out resumes from women’s colleges. While these examples were obvious and eventually corrected, today’s AI models are more complex and their biases more subtle, often rooted in unrepresentative training data, Dr. Yazdany explained.

“Studies have shown that clinical trials and datasets are overwhelmingly composed of white individuals, with other racial and ethnic groups, lower-income populations, and rural residents significantly underrepresented. These disparities are then embedded in the AI models, potentially perpetuating inequities in diagnosis and care,” she said.

A widely used healthcare algorithm in the U.S. was found to under-prioritize Black patients for additional resources because it used healthcare spending as a proxy for medical need — overlooking the fact that lower spending often reflects reduced access to care, not lesser need.
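The mechanism behind that finding is easy to see in miniature. The sketch below uses synthetic numbers invented purely for illustration: two equally sick patients differ only in access to care, and ranking by spending rather than by measured need pushes the low-access patient down the priority list.

```python
# Toy illustration of proxy bias: ranking patients for extra resources by
# healthcare spending instead of medical need. All numbers are synthetic.

patients = [
    # equally sick (same need score), but unequal access drives unequal spending
    {"id": "A", "need_score": 8, "annual_spending": 12_000},  # good access to care
    {"id": "B", "need_score": 8, "annual_spending": 4_000},   # poor access to care
    {"id": "C", "need_score": 3, "annual_spending": 9_000},
]

def prioritize(patients, key):
    """Return patient IDs, highest priority first, ranked by the given key."""
    return [p["id"] for p in sorted(patients, key=lambda p: p[key], reverse=True)]

# Spending as a proxy for need: the low-access patient B falls behind C,
# who is far less sick but spends more because care is easier to reach.
print(prioritize(patients, "annual_spending"))  # ['A', 'C', 'B']

# Ranking on measured need keeps the equally sick patients at the top.
print(prioritize(patients, "need_score"))       # ['A', 'B', 'C']
```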

Similarly, AI models interpreting chest X-rays have been shown to systematically underdiagnose young women, Black and Hispanic patients, and Medicaid recipients. The root cause: training data that does not reflect the diversity of the general population, compounded by lower-quality images from under-resourced settings, Dr. Yazdany noted.

AI tools can also pose risks to professional expertise, as over-reliance on AI can lead to “cognitive offloading,” where clinicians lose critical thinking skills. Evidence from other specialties shows that when AI is unavailable, clinicians’ performance can drop below pre-AI levels, suggesting skill atrophy from dependence on technology, Dr. Yazdany said.

Medical education faces additional challenges: trainees may never fully develop essential skills (“never-skilling”) or may learn incorrect patterns from flawed AI outputs (“mis-skilling”). Automation bias, the tendency to trust algorithmic recommendations blindly, can increase errors, as seen in studies where medical students accepted false alerts from e-prescribing systems.

“To safeguard patients and professional integrity, clinicians must demand transparency in AI development,” Dr. Yazdany said.

This includes insisting on representative training data, rigorous bias testing, and clear regulatory standards for liability and privacy. Only by implementing AI tools that truly serve clinicians and patients — and by remaining discerning in their adoption — can we avoid embedding and scaling existing inequities, she said.