Can AI Create New Science? Experts Question LLM Limits

AI can predict, but can it invent new science the way Einstein did? Experts are debating whether LLMs can move beyond their training data to make genuinely new discoveries.

Current discussions probe the boundaries of artificial intelligence, particularly Large Language Models (LLMs), questioning their capacity for genuine scientific innovation. The crux of the debate centers on whether these systems, trained on existing data, can ever transcend their foundational knowledge to forge entirely new paradigms.

Columbia CS Professor Vishal Misra articulates a core concern: the development of truly novel scientific theories, exemplified by Einstein's departure from Newtonian physics, demands an ability to move beyond the established corpus. LLMs, by their very nature, are confined by the data they ingest. Their potential to generate knowledge that lies "outside the scope of that data" remains a significant point of contention, with Misra positing that the ultimate test for Artificial General Intelligence (AGI) would be such a capacity for radical, data-transcending discovery.

It's very possible that an LLM trained on Newtonian physics may never come up with relativity ...

THE PHYSICS OF PREDICTION VERSUS DISCOVERY

While LLMs exhibit a certain proficiency in predicting phenomena, as seen in their ability to calculate orbits, their grasp of underlying physical laws appears superficial. This distinction is critical. A model may accurately forecast a celestial body's path based on learned patterns, but this does not equate to a deep, conceptual understanding that would allow for the formulation of new physical principles.


  • The challenge is not merely about crunching numbers or recognizing correlations. It is about the conceptual leap required for paradigm shifts.

  • This limitation is not unique to physics; it extends to any domain where true originality is the benchmark.
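The gap between pattern-matching prediction and conceptual understanding can be made concrete with a toy experiment (a hypothetical illustration, not from the source): fit a generic curve-fitting model to one coordinate of a circular "orbit" and compare its accuracy inside versus outside the training range.

```python
import numpy as np

# Toy "orbit": one coordinate of a body in uniform circular motion, y(t) = cos(t).
t_train = np.linspace(0.0, 4.0, 40)
y_train = np.cos(t_train)

# A pattern-matcher: a degree-6 polynomial, standing in for any model
# that fits the data without any notion of the underlying dynamics.
coeffs = np.polyfit(t_train, y_train, deg=6)

# Inside the training range, the fit looks like "understanding"...
t_in = 2.0
err_in = abs(np.polyval(coeffs, t_in) - np.cos(t_in))

# ...but outside it, the polynomial diverges, while the real orbit stays bounded.
t_out = 10.0
err_out = abs(np.polyval(coeffs, t_out) - np.cos(t_out))

print(f"interpolation error at t={t_in}:  {err_in:.2e}")
print(f"extrapolation error at t={t_out}: {err_out:.2e}")
```

The interpolation error is tiny while the extrapolation error blows up: the model has captured the data's shape, not the law (conservation of energy, periodic motion) that would let it generalize beyond what it has seen.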

ENGINEERING A DIFFERENT PATH?

In the realm of mechanical engineering, there's an acknowledgment of LLMs' limitations, leading to the development of specialized models. Companies like Leo AI are creating "Large Mechanical Models" (LMMs) designed with an intrinsic understanding of mechanical relationships and physical constraints. This suggests a hybrid approach, where general AI capabilities are augmented with domain-specific knowledge and perhaps different architectural designs, such as Physics-Informed Neural Networks (PINNs), to bridge the gap between mere prediction and actual physical comprehension.



THE BROAD IMPLICATIONS FOR SCIENCE

The integration of AI and Machine Learning into scientific practice is anticipated to become "routine." This raises fundamental questions not only about what AI can achieve for science but also about the very nature of AI itself. Researchers are actively investigating "the physics of AI/ML," seeking to understand why these systems work and, crucially, when they fail. This self-reflexive inquiry is paramount as AI's role in scientific endeavors expands.

The discussion points to an ongoing dialogue about the fundamental architecture and training methodologies required for AI to move from being a sophisticated pattern-matching engine to a genuine agent of scientific advancement. The very possibility of an LLM trained solely on Newtonian physics independently formulating relativity remains a hypothetical boundary, one that highlights the currently perceived chasm between data-driven prediction and foundational scientific innovation.


Frequently Asked Questions

Q: Can Large Language Models (LLMs) create truly new scientific ideas?
Experts like Professor Vishal Misra are not sure. LLMs learn from old data, and it's hard for them to think of ideas completely outside that data, unlike scientists who made big leaps like Einstein.
Q: What is the difference between AI predicting things and creating new science?
AI can be good at predicting, like where a planet will move. But this is like following a map. Creating new science means understanding the rules of the map and maybe making a new map, which AI might not be able to do yet.
Q: Are companies building special AI for science and engineering?
Yes, some companies are making special AI, like 'Large Mechanical Models.' These are built to understand specific rules, like how machines work, to help bridge the gap between just predicting and truly understanding physical things.
Q: Why is it important to understand how AI works in science?
As AI becomes more common in science, researchers need to know why it works and when it might fail. This helps ensure AI is a useful tool for discovery and not just a complex calculator.
Q: Could an AI trained on old physics invent new theories like relativity?
It's a big question. Right now, it seems unlikely. Inventing new major theories requires a kind of thinking that goes beyond just using existing information, which is a challenge for current AI.