Why an AI should never receive a Nobel Prize

It would mean confusing the instrument with the author, the result with the intention, computing power with thought. It would not be a triumph of science, but a defeat of reason.

Since the launch of ChatGPT and generative artificial intelligence (AI) in late 2022, we've been immersed in an attribution frenzy. The media frequently speak of "AI having discovered," "AI having created," "AI having decided." But these phrases contain a dangerous illusion: that of seeing not only intelligence, but even consciousness, where none exists. As a consequence of this frenzy, every October, when the Nobel Prize winners are announced, there's no shortage of voices asking, "When will an artificial intelligence be honored?"
The idea may seem provocative, even inspiring, but at its core it harbors a profound philosophical and moral flaw: an AI is not a person, not a moral agent, and therefore cannot assume responsibility for its actions. In science, authorship and recognition are not granted solely for producing results, but for being accountable for them. Signing an article or accepting a prize implies answering for the methods employed, the decisions made, and the resulting consequences. A scientist can explain why they did what they did, correct their errors, defend their interpretation, or rectify it in accordance with the scientific method. An artificial intelligence, on the other hand, does not understand what it does. It has no intention or consciousness. It cannot lie or tell the truth; it only generates data or texts that we interpret as meaningful.
The example of AlphaFold, the artificial intelligence system developed by DeepMind, is illustrative. Its ability to predict the three-dimensional structure of proteins revolutionized molecular biology and paved the way for crucial advances in medicine. But when the Nobel Prize in Chemistry committee recognized this excellent result in 2024, the prize did not go to AlphaFold, despite it being a formidable and essential tool, but to the scientists who laid the foundations of computational protein design and made the key contributions to protein folding prediction.
The committee members acted not only with common sense, but with moral sense. They knew how to distinguish between instrumental power and intellectual responsibility. A machine cannot receive the Nobel Prize for the same reason it cannot be judged or acquitted: because it is not a moral agent.
As Kant said, and Hannah Arendt reminded us, a moral agent is defined not by what it does, but by the consciousness with which it acts. And that consciousness—that capacity to deliberate, to accept consequences, to distinguish between good and evil—is what AI doesn't have and never will. When a scientist commits fraud or manipulates data, they are accountable to their peers and to society. When an AI fabricates data, it does so without intent or remorse. It doesn't lie: it simply doesn't know what it's talking about. Its "error" isn't moral, it's statistical.
Confusing the workings of AI with true intelligence is a sign of our times: a mixture of dazzlement and fatigue. Perhaps we're seduced by the idea that machines think because we're tired of thinking ourselves, because thinking requires so much effort. We delegate moral decisions, medical diagnoses, job hiring, even court rulings to algorithms, not because they're wiser, but because they relieve us of the burden of responsibility.
Awarding an AI would be taking this delegation to the extreme of abdicating human authorship. It would be turning intelligence into an automatic function, detached from judgment, experience, or risk. Science, at its best, is the opposite: an exercise in doubt, an act of intellectual courage and responsibility. The Nobel Prizes, beyond their limitations, embody precisely those values. They celebrate not only discoveries, but the selfless pursuit of knowledge and the moral commitment to truth. They do not reward those who process billions of data points at high speed, but those who deeply understand. The merit lies not in efficiency, but in understanding, and understanding implies responsibility.
Awarding a Nobel Prize to an AI would mean confusing the instrument with the author, the result with the intention, computing power with thought. It would not be a triumph of science, but a defeat of reason.
Ramón López de Mántaras is a research professor at the Artificial Intelligence Research Institute (CSIC).