AI in Health: New Reviews Show Big Promises and Risks

A wave of reviews published between 2024 and 2026 examines the use of AI in health. Collectively, they find that AI can help doctors and researchers, but that it also carries significant risks.

Recent examinations of large language models (LLMs) in healthcare reveal a landscape rich with potential yet shadowed by significant, often unaddressed, concerns. An umbrella review, "Concerns of Using Large Language Models in Health Care Research and Practice," published in the Journal of Biomedical Science on May 7, 2025, synthesized findings from multiple systematic reviews, painting a picture of LLMs like ChatGPT being explored across diverse medical fields, from dermatology to public health dentistry. Similarly, a January 15, 2026, Nature Health article delves into the broader "Global Initiative on AI for Health," assessing who is actually deploying generative AI and its purported utility in boosting health literacy, particularly in areas such as reproductive health.

Applications and Oversight Gaps

These LLMs are being pitched as powerful tools for medical research and patient care. A systematic review released February 1, 2026, via ScienceDirect, highlights "current trends, challenges, and future innovations" in LLM applications, while a January 21, 2025, piece in Communications Medicine meticulously cataloged existing applications and hurdles in patient care. Examples range from evaluating AI's performance on medical licensing exams to assessing its accuracy in patient education for conditions like thyroid nodules and shoulder stabilization surgery. LLMs are seen as invaluable for "accelerating medical research and discovery" due to the sheer volume of medical literature and clinical data they can process, according to a report from July 15, 2025, titled "Large Language Models in Healthcare: Impact, Challenges, and Ethical Considerations." However, this same report notes that "regulators grapple with AI’s potential impact," suggesting a persistent lag in establishing robust oversight.
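To make the literature-processing use case concrete, the sketch below shows one way a researcher might ask an LLM to condense a clinical abstract into plain language. It is a minimal illustration assuming the OpenAI Python SDK; the model name, prompt, and abstract text are placeholders rather than details drawn from any of the reviews cited above.

```python
# Minimal sketch: condensing a clinical abstract with an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

abstract = (
    "Background: Thyroid nodules are a common incidental finding on imaging. "
    "We evaluated chatbot-generated patient education materials for accuracy "
    "and readability against society guidelines."  # placeholder abstract
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {
            "role": "system",
            "content": "Summarize clinical abstracts in three plain-language bullet points.",
        },
        {"role": "user", "content": abstract},
    ],
)

print(response.choices[0].message.content)
```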

Public Perception and Trust Deficits

The public's engagement with LLMs for health information also presents a complex narrative. A study published on January 17, 2024, on arXiv indicated that a significant majority of individuals using LLMs for health queries – 105 out of 123 participants, roughly 85 percent – felt compelled to cross-validate the information with other sources. This highlights an inherent distrust or, perhaps, a pragmatic awareness of the LLMs' limitations. The research also probed the public's motivations for turning to these models, hinting at a desire for accessible, more immediate health guidance.

The Ethical Minefield

Beyond technical accuracy and public reception, the ethical dimensions loom large. The July 15, 2025, medtechnews.uk report flags the unsettling prospect of "The Weaponization of AI in Healthcare" and underscores the critical need for "trust and transparency." The underlying mechanics of how LLMs learn and present information, such as "training a reward model" to predict human preferences, raise questions about the biases embedded within these systems. The very data fueling these models—its availability and inherent characteristics—remain a point of inquiry, as noted by the ScienceDirect review from February 1, 2026.
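To ground the bias concern, the sketch below illustrates the pairwise "reward model" training step in general terms: the model is rewarded for ranking annotator-preferred responses above rejected ones, which is one route by which annotators' preferences, and any biases they carry, can become embedded in the system. This is a minimal PyTorch illustration; the class, function names, and hyperparameters are hypothetical rather than taken from any cited source.

```python
# Minimal sketch of pairwise reward-model training (the "training a reward
# model" step). Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a response embedding; higher means 'preferred by annotators'."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push annotator-preferred responses above
    # rejected ones. Whatever biases the annotators hold are learned as well.
    return -F.logsigmoid(chosen - rejected).mean()

# One toy optimization step over a batch of (chosen, rejected) embedding pairs.
model = RewardModel()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
chosen_emb = torch.randn(8, 768)    # embeddings of responses annotators preferred
rejected_emb = torch.randn(8, 768)  # embeddings of responses they rejected

optimizer.zero_grad()
loss = preference_loss(model(chosen_emb), model(rejected_emb))
loss.backward()
optimizer.step()
```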

A Tapestry of Reviews

The academic discourse on LLMs in healthcare is characterized by a flurry of systematic and umbrella reviews. These investigations, spanning early 2024 to early 2026, collectively examine the burgeoning role of these technologies. Key publications include:

  • May 7, 2025: Journal of Biomedical Science (Umbrella Review on ChatGPT in Healthcare)

  • January 15, 2026: Nature Health (Large Language Models in Global Health)

  • February 1, 2026: ScienceDirect (Impact on Medical Research and Patient Care)

  • January 17, 2024: arXiv (Public Concerns and Choices)

  • January 21, 2025: Communications Medicine (Applications and Challenges in Patient Care)

  • July 15, 2025: medtechnews.uk (Impact, Challenges, and Ethical Considerations)

Frequently Asked Questions

Q: What do new reviews say about AI like ChatGPT in healthcare?
New reviews published between 2024 and 2026 show that AI like ChatGPT has many potential uses in healthcare, such as helping with medical research and patient education. However, these reviews also highlight important risks and challenges.
Q: How are AI models being used in healthcare according to recent studies?
Studies show AI is being tested for many uses, like helping doctors understand medical information faster, improving patient education on health issues, and even performing well on medical exams. It can process large amounts of medical data quickly.
Q: Are people trusting AI for health information?
A study from January 2024 found that most people (105 out of 123) using AI for health questions check the information elsewhere. This shows a lack of full trust, even though people want quick health advice.
Q: What are the main risks of using AI in healthcare?
The risks include potential misuse of AI, a lack of clear rules and safety checks, and embedded biases in the AI systems. There is a need for more trust and openness in how these AI models work and the data they use.
Q: What happens next with AI in healthcare?
Regulators are still trying to figure out how to manage AI's impact on healthcare. There is a strong need for more research into the challenges and ethical issues to ensure AI is used safely and effectively for patients and doctors.