AI And Its Potential Contribution To Healthcare

Posted by Peter Rudin on 30. May 2025 in Essay

AI Healthcare (Credit: medtigo.com)

Introduction

The utilization of artificial intelligence (AI) in clinical practice has increased and is evidently contributing to improved diagnostic accuracy, optimized treatment planning and better patient recovery. The rapid evolution of AI, especially with respect to generative AI and large language models (LLMs), has reignited discussions concerning its potential impact on the healthcare industry, particularly regarding the role of healthcare providers. Questions such as ‘can AI replace doctors?’ or ‘will doctors who use AI replace those who do not?’ have been raised with increasing intensity. The following discusses benefits and risks related to AI’s potential contribution to healthcare, hopefully providing guidance regarding its future development.

Today’s Problems

AI has been used in medicine for decades, and experts now compare its future potential to the decoding of the human genome or the rise of the internet. The impact is expected to show up in doctor-patient interactions, physicians’ paperwork load, medical research and medical education. Most of these effects are likely to be positive: increasing efficiency, reducing mistakes, easing the nationwide crunch in primary care and creating space for longer, deeper person-to-person interactions.

But there are serious concerns as well. The U.S. healthcare system, long criticized as costly, inefficient and inordinately focused on treatment over prevention, has been showing cracks. Current data sets too often reflect societal biases that reinforce gaps in access and quality of care for disadvantaged groups. Without correction, these data sets risk cementing existing biases that will increasingly influence how healthcare operates. Another important issue, experts say, is that AI systems remain prone to ‘hallucination’, making up ‘facts’ and presenting them as if they were real. There is a danger, then, that medicine will not be bold enough in facing these problems. The latest AI has the potential to remake healthcare from top to bottom, but only if given a chance. The wrong priorities, with a focus on money instead of health, could easily reduce the AI ‘revolution’ to an underwhelming exercise in tinkering around the edges.

Will AI Replace Medical Doctors?

Considering current applications in clinical practice, AI is already an integral part of health services without replacing doctors. Examples include AI-aided decision-support systems for MRI machines that assist diagnosis, and voice recognition in dictation devices that eases documentation. Recent developments in AI are highly complex and rapidly evolving, yet their contribution has been overwhelmingly positive. By collaborating with doctors, AI can contribute to a more efficient and streamlined healthcare system. A decision by a doctor aided by AI can be more accurate than one made without it, minimizing risk for patients, improving decision-making processes and raising the quality of service. The concept of ‘collaboration’ offers another interesting way to further advance the use of AI. The Human-in-the-Loop (HITL) approach emphasizes a collaborative partnership between AI and human expertise in solving a health problem: AI offers insights, and individuals apply their knowledge for the final judgment, establishing oversight and quality control that validate AI predictions and reduce potential errors or biases. This, in turn, builds trust and acceptance, provided ethical practices ensure transparency, accountability and explainability in AI decisions.
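The HITL workflow described above can be sketched in a few lines of code. The sketch below is purely illustrative: the class names, the confidence threshold and the routing rule are assumptions made for this example, not features of any real clinical system. The point it demonstrates is structural: the AI only proposes, a clinician always makes the final call, and low-confidence suggestions are explicitly flagged for closer human review.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class AISuggestion:
    """A proposal from an AI model (hypothetical structure)."""
    diagnosis: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0


@dataclass
class Decision:
    """The final, clinician-approved outcome."""
    diagnosis: str
    decided_by: str  # "clinician" (AI flagged for close review) or "clinician+ai"
    ai_suggestion: Optional[AISuggestion]


def hitl_decision(suggestion: AISuggestion,
                  clinician_review: Callable[[AISuggestion], str],
                  confidence_threshold: float = 0.9) -> Decision:
    """Human-in-the-loop gate: the AI never decides on its own.

    A clinician reviews every suggestion; suggestions below the
    (illustrative) confidence threshold are marked as needing closer
    scrutiny rather than routine confirmation.
    """
    needs_close_review = suggestion.confidence < confidence_threshold
    final_diagnosis = clinician_review(suggestion)  # human judgment is final
    return Decision(
        diagnosis=final_diagnosis,
        decided_by="clinician" if needs_close_review else "clinician+ai",
        ai_suggestion=suggestion,
    )


# A confident suggestion the clinician simply confirms:
confirmed = hitl_decision(AISuggestion("pneumonia", 0.95),
                          lambda s: s.diagnosis)

# A low-confidence suggestion the clinician overrides:
overridden = hitl_decision(AISuggestion("pneumonia", 0.40),
                           lambda s: "bronchitis")
```

The design choice worth noting is that the clinician callback is invoked unconditionally; the threshold only changes how the case is labeled, never whether a human is involved.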

AI Tools

A new tool developed by biomedical informatics researchers at the University at Buffalo demonstrates that it can outperform most human physicians taking the U.S. Medical Licensing Examination (USMLE), considered a benchmark of clinical competence. The USMLE is a three-step exam required for medical licensing in the United States, with each step evaluating different aspects of whether a doctor has the knowledge and skills to provide safe and effective medical care. Step 1 measures an understanding of basic science and foundational knowledge; Step 2 evaluates clinical knowledge of health and disease in the context of patient care; Step 3 measures whether the physician can apply that clinical knowledge in the unsupervised practice of medicine. The tool, called Semantic Clinical Artificial Intelligence (SCAI), can support doctors in clinical decision-making by offering medically reasoned responses rooted in evidence-based knowledge. Details of SCAI’s performance are published in JAMA Network Open. SCAI integrates data from authoritative sources, including peer-reviewed literature, clinical guidelines, pharmacology databases and safety protocols, while deliberately excluding potentially biased inputs such as clinical notes. This semantic reasoning approach is designed to reflect how physicians synthesize knowledge in medical training. “As physicians, we are used to using computers as tools, but SCAI is different; it can add to your decision-making and thinking based on its own reasoning,” said Peter L. Elkin, MD, chair of biomedical informatics at the university and lead author of the study.

This research adds to a growing body of work suggesting that AI can exceed human-level performance on standard medical evaluations. However, this study goes further by showing that semantically enriched AI can ‘reason’ in a structured, medically informed way, which points to its potential beyond test-taking to include assisting in clinical decision-making and diagnostics. Despite these powerful new findings, the researchers are clear that SCAI is not a replacement for physicians. Instead, it is envisioned as a digital assistant that can provide just-in-time knowledge and reasoning support to clinicians, helping improve diagnostic accuracy and access to specialty care. “AI is not going to replace doctors,” Elkin said. “But a doctor who uses AI may replace a doctor who does not.”

The Risk of Dependence on AI in Surgery

AI is transforming surgery, advancing robotic-assisted procedures, preoperative risk prediction and intraoperative decision-making. However, increasing reliance on AI raises concerns, particularly regarding the potential deskilling of surgeons and overdependence on algorithmic recommendations. This over-reliance risks diminishing surgeons’ skills, increasing surgical errors, and undermining their decision-making autonomy. The ‘black-box’ nature of many AI systems also presents ethical challenges, as surgeons may feel pressured to follow AI-driven recommendations without fully understanding the underlying logic. Additionally, AI biases from inadequate datasets can result in misdiagnoses and worsen healthcare disparities. While AI offers immense promise, a cautious approach is vital to prevent overdependence. Ensuring that AI enhances rather than replaces human skills in surgery is critical to maintaining patient safety. Ongoing research, ethical considerations, and robust legal frameworks are essential for guiding AI’s integration into surgical practice. Surgeons must receive comprehensive training to incorporate AI into their work without compromising clinical judgment. By taking these steps, healthcare systems can harness the benefits of AI while preserving the essential human aspects of surgical care.

Future Development

At a time when comments regarding AI swing between unchecked hype and dystopian paranoia, we need to find the middle ground, focusing on real problems and solving them with the right tools. First, AI is not here to take over clinical care; it is here to support it. And if we frame the future of medicine around replacing physicians instead of empowering them, we risk missing the most important opportunity healthcare has had in decades. The danger lies in assuming that because AI can deliver facts, it can replace care. That kind of thinking leads to over-reliance, which then leads to underinvestment in the clinical workforce and ultimately to worse outcomes. Doctors are not a problem in the system; they are the system. AI, when used correctly, makes them faster in their decision-making, more informed and less burned out. It does not replace medical judgment; it enhances it. Nor does it eliminate the need for empathy, experience or clinical insight. Yet too many leaders are still stuck in the wrong conversation. A recent NVIDIA survey found that 83 percent of healthcare and life sciences professionals believe AI will revolutionize care, and nearly as many are increasing their AI investments. The organizations that will lead this next phase of healthcare are not the ones chasing automation; they are the ones building systems where AI and clinicians work together to deliver faster and more accurate results for better care. This is the shift we need: fostering partnership and abandoning hype.

Conclusion

The advancements in AI are reassuring, showing promise in creating a paradigm shift in healthcare by complementing and enhancing the skills of doctors and healthcare providers rather than replacing them. To successfully harness the power of AI, healthcare organizations must be proactive, especially now, when generative AI and LLMs are highly accessible but still in need of control and guidance. As AI becomes an essential component of modern healthcare, it is vital for organizations to invest in the necessary infrastructure, training, resources and partnerships to support its successful adoption and ensure equitable access for all.
