Computer Science
A Perspective on Artificial Intelligence in Healthcare

Soham Shah '29
Apr 18, 2026
Artificial intelligence has rapidly become one of the most transformative forces in modern healthcare. As health systems are increasingly strained by rising patient volume and mounting demands for efficiency, AI has emerged as a promising tool to help clinicians process information faster, detect patterns more accurately, and expand access to care (Amalberti et al. 2019). Its greatest successes so far have appeared in areas such as medical imaging, screening, and diagnostic support, where large datasets and pattern-recognition tasks allow algorithms to perform with remarkable speed and precision (Miller and Brown 2018). At the same time, the growing power of these systems has raised an important question: how much decision-making authority should AI actually be given in medicine?
Although AI has demonstrated clear value in improving diagnostic performance and streamlining lower-risk clinical tasks, its role becomes far more problematic when applied to treatment decisions that require empathy, ethical judgment, and accountability. Therefore, the most effective role for AI in healthcare is as a regulated clinical support tool that strengthens diagnosis and treatment decisions while preserving human authority over final medical judgment.
AI in Diagnosis and Screening
Over the past decade, AI has proven capable of matching or exceeding human accuracy and speed, especially in pattern-recognition tasks. Given the increasing burden on healthcare systems driven by population growth and a shrinking provider workforce, AI has emerged as a potential solution to improve efficiency, expand access to care, and support clinical decision-making (Sahni and Carrus 2023). Importantly, AI tools can reduce the variability and error that can arise from human fatigue or oversight.
As AI’s capabilities advance faster than many experts predicted, many AI tools have demonstrated diagnostic performance that rivals human expertise across numerous domains. In particular, screening and diagnostics have been among the most widely studied and implemented applications of AI. For example, McKinney et al. demonstrated that AI can surpass radiologist accuracy in breast cancer screening with mammography. In their study, the AI system outperformed six expert radiologists, achieving an AUC-ROC (a performance metric) that exceeded the average radiologist’s by an absolute margin of 11.5%. The model also reduced false positives by 5.7% in the U.S. dataset and 1.2% in the U.K. dataset, while reducing false negatives by 9.4% and 2.7%, respectively, indicating improved diagnostic accuracy across both sensitivity and specificity metrics (McKinney et al. 2020). These findings show that AI is not merely a faster alternative but, in some cases, a more reliable one for detecting subtle disease patterns that even highly trained physicians may miss. In settings where early detection is critical, such accuracy could directly improve outcomes by enabling patients to begin treatment sooner and with greater confidence in the diagnosis.
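For readers unfamiliar with AUC-ROC, the following is a minimal sketch of how the metric is computed in practice using scikit-learn. The labels and scores are invented for illustration and are not drawn from the McKinney et al. study:

```python
# Minimal sketch of computing AUC-ROC with scikit-learn.
# The labels and scores below are hypothetical, not McKinney et al. data.
from sklearn.metrics import roc_auc_score

# 1 = biopsy-confirmed cancer, 0 = healthy (invented ground truth)
y_true = [0, 0, 0, 1, 0, 1, 1, 0, 1, 1]

# Model's predicted probability of cancer for each mammogram
y_score = [0.05, 0.20, 0.35, 0.25, 0.10, 0.80, 0.65, 0.30, 0.90, 0.75]

# AUC-ROC is the probability that a randomly chosen positive case is
# scored higher than a randomly chosen negative case (1.0 = perfect,
# 0.5 = chance). An "absolute margin of 11.5%" means the AI's AUC
# exceeds the average reader's by 0.115 on this 0-to-1 scale.
print(roc_auc_score(y_true, y_score))
```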
Beyond mammography, AI’s diagnostic capabilities have extended to more complex pathologies, such as brain MRI. Rauschecker et al. developed a system for generating differential diagnoses solely from brain MRI, achieving 91% top-three differential diagnosis accuracy. This performance was comparable to that of top academic neuroradiologists (86% accuracy) and exceeded that of radiology residents (56%), general radiologists (57%), and neuroradiology fellows (77%) (Rauschecker et al. 2020). This is especially important because brain MRI is far less straightforward than many screening tasks and often involves interpreting a wide range of possible conditions. AI’s strong performance here suggests that it can serve as a meaningful support tool even in more demanding diagnostic settings where physicians must sort through complex possibilities.
AI has also demonstrated substantial diagnostic speed and consistency, addressing a major barrier to patient care in high-volume healthcare settings. In visually intensive, pattern-recognition specialties such as radiology and pathology, recent studies suggest that AI can reduce diagnostic time by approximately 90% compared to traditional workflows (Jeong et al. 2025). In a prospective study by Yacoub et al., AI assistance alone reduced chest CT interpretation time by a mean of 93 seconds, corresponding to a 22.1% reduction. Radiologist efficiency was markedly enhanced, likely expediting downstream clinical decision-making and surgical intervention (Yacoub et al. 2022). Time reduction matters because even small gains in efficiency can compound across dozens of cases in a single day, especially in overburdened hospitals. In that sense, AI’s value is not only in improving accuracy, but also in helping clinicians move more quickly without sacrificing consistency in patient care.
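As a quick back-of-the-envelope check of the Yacoub et al. figures (assuming the 22.1% reduction is measured against the unassisted baseline, as the phrasing implies):

```latex
t_{\text{baseline}} \approx \frac{93\ \text{s}}{0.221} \approx 421\ \text{s},
\qquad
t_{\text{assisted}} \approx 421\ \text{s} - 93\ \text{s} \approx 328\ \text{s}
```

At roughly 93 seconds saved per study, a radiologist reading 40 chest CTs in a day would recover about an hour (93 s × 40 ≈ 62 min), which illustrates how these per-case gains compound.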
When paired with diagnostic speed, AI’s consistency has made it especially useful and scalable in high-volume, lower-risk clinical settings (Gulshan et al. 2016). Beyond demonstrating strong sensitivity and specificity, AI systems can reduce inter-observer variability caused by fatigue, distraction, or individual judgment, which often affect human interpretation. For example, Rajpurkar et al. created a neural network that achieved an area under the receiver operating characteristic curve (AUC) of 0.862 for pneumonia detection on chest radiographs, statistically significantly higher than the radiologists’ AUC of 0.808 for this pathology (Rajpurkar et al. 2018). This finding suggests that, in targeted image-recognition tasks, AI can identify subtle radiographic patterns with a level of consistency and accuracy that equals or even surpasses expert human interpretation.
The implications of this consistency extend beyond individual cases to population-level equity. In settings that rely on repeated screening, triage, or image review, applying the same standards each time can make care more reliable and efficient. This is precisely where AI offers the greatest practical value: not in replacing physician judgment, but in strengthening routine decision support where uniformity and speed are essential.
Limits of AI in Treatment Decision-Making
While AI has demonstrated clear value in medical diagnosis and screening, its application to treatment guidance poses far greater risks and requires substantial caution. Whereas diagnosis and screening are largely objective, pattern-recognition-based tasks, medical treatment is holistic and must consider patient preferences, the legal implications of decision-making, and clinical uncertainty: domains where current AI systems remain fundamentally limited.
Often, treatment decisions are made based on a patient’s personal values, quality-of-life priorities, or emotional needs. These “soft” dimensions of patient care are vitally important to shared decision-making, a core tenet of medicine that current AI neither perceives nor weighs (Kerasidou 2020). Human empathy is an essential piece of the puzzle, as it helps integrate the non-quantifiable dimensions of patient experience and fundamentally shapes what constitutes appropriate care for each individual.
A vignette-based experimental study by Kim illustrated this phenomenon (Kim 2025). The study found that physicians’ empathic communication helped align patients’ perceptions of their illness and treatment with clinical reality, reducing anxiety, decisional conflict, and intentions to seek a second opinion. By improving patients’ sense of control and understanding of their treatment, empathy also increased adherence, underscoring that effective treatment decision-making depends not only on clinical accuracy but also on emotionally attuned physician-patient communication. Thus, while AI may provide statistically grounded treatment recommendations based on population-level averages, those recommendations may not be implemented at the individual patient level.
Ethical and Legal Challenges
From an ethical standpoint, the “black box” problem creates an additional barrier to the use of AI to guide medical treatment. As described by Chau et al., the “black box” phenomenon refers to the opacity of many AI systems, which can generate clinical recommendations without providing a clear, human-understandable explanation of how those recommendations were reached (Chau et al. 2025). This creates a dilemma for the informed consent process, through which a patient receives a clear and meaningful explanation of the reasoning, risks, benefits, and alternatives underlying a specific treatment recommendation. In cases where AI provides treatment recommendations, patients are often not given clear explanations of the algorithm’s reasoning, the data on which it was trained, or the biases that may shape its output, undermining the informed consent process (De-Giorgio et al. 2025).
Leaders in the field of medicine-driven AI have echoed this sentiment, arguing that AI’s opacity alters the point at which a physician’s duty to inform meets the patient’s right to autonomy (Giacobello 2025). This ethical uncertainty also highlights a related issue: treatment decisions must account for legal responsibilities and consequences that AI systems cannot fully consider.
In cases where AI-driven medical decisions lead to unintended patient harm, existing legal frameworks are tasked with assigning liability. In today’s legal system, it is unclear who should bear responsibility: AI developers, the treating hospital system, or the physician who acted on the AI recommendation. This unclear delineation, apparent both in the United States and abroad, leaves liability for generative and black-box systems largely unresolved (Duffourc and Gerke 2023). Proving harm or negligence in such cases can become exceptionally complex when AI is an intermediary step between action and outcome. In a United States healthcare system heavily dependent on malpractice doctrine, evidentiary burdens, and clearly defined professional accountability, such ambiguity poses a substantial challenge to the safe adoption of AI in treatment decision-making.
Although AI is typically regulated as a medical device, current legal frameworks remain poorly equipped to assign liability, especially for generative or black-box systems. This concern is heightened by the limited clinical testing behind many AI tools currently entering medical practice. A 2024 review found that by October 2023, the U.S. Food and Drug Administration had authorized 691 AI/ML-enabled medical devices, yet only 9 of them (roughly 1.3%) had been evaluated in an interventional trial. The same study reported that 96.7% were cleared through the 510(k) pathway, a regulatory process that often does not require the submission of new clinical data (Khera et al. 2024). This gap is significant because treatment decisions carry direct consequences for patient safety and legal responsibility. When a tool has not been rigorously tested in real clinical settings, it becomes much harder to determine who is accountable when harm occurs. For that reason, AI remains poorly suited to independent decision-making in a system that relies on clear standards of evidence and accountability.
A Tiered Framework for Clinical AI Use
Over the past decade, it has become evident that AI plays a clear role in delivering high-quality healthcare. Nevertheless, there has been considerable disagreement regarding how, where, and to what extent AI should be utilized. Rather than granting AI independent authority or rejecting it outright, healthcare organizations should adopt a tiered framework in which the degree of AI autonomy and utilization is matched to the associated risk, reversibility, and ethical complexity of each clinical decision.
At present, AI has the greatest potential to improve patient care in lower-risk contexts, such as screening, image triage, and preliminary risk stratification. AI-based screening fits this role well because it can flag patients early based on symptoms and lab results, including people who might not otherwise seek care, and help guide them to a physician for further evaluation. This type of preliminary review is often too time-consuming for physicians to perform consistently, particularly when their attention is needed for direct patient care. By automating repetitive, manual work, AI can reduce workload, save time, and preserve physicians’ energy for more complex and meaningful clinical tasks. Over time, those cumulative gains in efficiency may translate into better overall patient care.
Within this tiered framework, it may be prudent to delay broader incorporation of AI into high-stakes areas of patient care, where the consequences of error are more serious and often more difficult to detect before harm occurs. In areas such as treatment selection, surgical planning, or end-of-life decision-making, a flawed recommendation can shape outcomes in ways that are medically, ethically, and emotionally significant. This concern is heightened by the fact that AI systems are only as reliable as the data on which they are trained; when those data reflect systematic biases, incomplete representation, or historical inequities in care, the model may reproduce those distortions at scale while still appearing objective (Jabbour et al. 2023). These limitations reinforce the need for a tiered framework in which the degree of AI autonomy is matched to the risk and ethical complexity of the clinical decision involved.
Unlike lower-risk settings, where errors may be caught through downstream physician review, high-stakes decisions leave far less room for correction and demand much more than pattern recognition alone. They require clinicians to weigh uncertainty, communicate tradeoffs, interpret patient values, and accept responsibility for outcomes in ways that extend beyond algorithmic predictions. Until AI systems become more transparent, more rigorously validated, and better equipped to avoid biases in individual cases, their role in these contexts should remain supportive and supplementary rather than determinative.
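To make the tiered idea concrete, the sketch below encodes one hypothetical policy that maps a decision type to the level of AI autonomy permitted. The tier names, task categories, and autonomy levels are illustrative assumptions, not an established clinical or regulatory standard:

```python
# Hypothetical sketch of a tiered clinical-AI policy. The tiers, task
# assignments, and autonomy levels are illustrative assumptions only.
from enum import Enum

class Autonomy(Enum):
    AUTONOMOUS_FLAGGING = 1  # AI may flag/triage; physician reviews downstream
    ADVISORY_ONLY = 2        # AI may suggest; physician must actively confirm
    HUMAN_ONLY = 3           # AI output informational at most; no recommendation

# Decision type -> permitted autonomy (reversible, low-stakes tasks get more)
POLICY = {
    "screening_triage": Autonomy.AUTONOMOUS_FLAGGING,  # e.g., mammography worklist
    "diagnosis_support": Autonomy.ADVISORY_ONLY,       # e.g., brain MRI differential
    "treatment_selection": Autonomy.HUMAN_ONLY,        # e.g., regimen choice
    "end_of_life_planning": Autonomy.HUMAN_ONLY,
}

def permitted_autonomy(task: str) -> Autonomy:
    """Default to the most restrictive tier when a task is unclassified."""
    return POLICY.get(task, Autonomy.HUMAN_ONLY)

print(permitted_autonomy("screening_triage"))  # Autonomy.AUTONOMOUS_FLAGGING
print(permitted_autonomy("surgical_planning")) # Autonomy.HUMAN_ONLY (unlisted)
```

The key design choice is the default: any decision type not explicitly classified falls through to the most restrictive tier, mirroring the argument that AI autonomy should be earned per decision type rather than granted by default.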
Edited by Taanvi Gowdar '28
References
Amalberti R, Vincent C, Nicklin W, Braithwaite J. Coping with more people with more illness. Part 1: The nature of the challenge and the implications for safety and quality. Int J Qual Health Care. 2019;31(2):154-158. doi:10.1093/intqhc/mzy235
Chau M, Rahman MG, Debnath T. From black box to clarity: Strategies for effective AI-informed consent in healthcare. Artif Intell Med. 2025;167:103169. doi:10.1016/j.artmed.2025.103169
De-Giorgio F, Benedetti B, Mancino M, Sala E, Pascali VL. The need for balancing “black box” Systems and explainable artificial intelligence: A necessary implementation in radiology. Eur J Radiol. 2025;185:112014. doi:10.1016/j.ejrad.2025.112014
Duffourc MN, Gerke S. The proposed EU Directives for AI liability leave worrying gaps likely to impact medical AI. NPJ Digit Med. 2023;6(1):77. doi:10.1038/s41746-023-00823-w
Giacobello ML. Informed consent and bioethical advances in clinical settings. Front Psychol. 2025;16:1654586. doi:10.3389/fpsyg.2025.1654586
Gulshan V, Peng L, Coram M, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216
Jabbour S, Fouhey D, Shepard S, et al. Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study. JAMA. 2023;330(23):2275-2284. doi:10.1001/jama.2023.22295
Jeong J, Kim S, Pan L, et al. Reducing the workload of medical diagnosis through artificial intelligence: A narrative review. Medicine (Baltimore). 2025;104(6):e41470. doi:10.1097/MD.0000000000041470
Khera R, Oikonomou EK, Nadkarni GN, et al. Transforming Cardiovascular Care With Artificial Intelligence: From Discovery to Practice: JACC State-of-the-Art Review. J Am Coll Cardiol. 2024;84(1):97-114. doi:10.1016/j.jacc.2024.05.003
Kerasidou A. Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ. 2020;98(4):245-250. doi:10.2471/BLT.19.237198
Kim S. Bridging clinical assessment-patient perception gaps through physician empathy: A vignette-based experimental study in South Korea. Patient Educ Couns. 2025;139:109230. doi:10.1016/j.pec.2025.109230
McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89-94. doi:10.1038/s41586-019-1799-6
Miller DD, Brown EW. Artificial Intelligence in Medical Practice: The Question to the Answer? Am J Med. 2018;131(2):129-133. doi:10.1016/j.amjmed.2017.10.035
Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018;15(11):e1002686. doi:10.1371/journal.pmed.1002686
Rauschecker AM, Rudie JD, Xie L, et al. Artificial Intelligence System Approaching Neuroradiologist-level Differential Diagnosis Accuracy at Brain MRI. Radiology. 2020;295(3):626-637. doi:10.1148/radiol.2020190283
Sahni NR, Carrus B. Artificial Intelligence in U.S. Health Care Delivery. N Engl J Med. 2023;389(4):348-358. doi:10.1056/NEJMra2204673
Yacoub B, Varga-Szemes A, Schoepf UJ, et al. Impact of Artificial Intelligence Assistance on Chest CT Interpretation Times: A Prospective Randomized Study. AJR Am J Roentgenol. 2022;219(5):743-751. doi:10.2214/AJR.22.27598
Image: Gerd Altmann. Artificial Intelligence, Network, Programming. Pixabay. Published September 30, 2018. https://pixabay.com/illustrations/artificial-intelligence-network-3706562/