Thomas Südhof on AI

Maciej Kawecki interviewed Thomas Südhof, a biochemist and Nobel laureate, about the brain and neuroscience. Part of the interview was released as a video covering his stance on AI:

HOST (Maciej Kawecki / “This Is IT”):
Professor, it’s a great honor to have you here. Let me start directly: Is the brain essentially a computer? Is there some kind of molecular logic inside it?

THOMAS SÜDHOF:
The human brain is only slightly similar to a computer. There are huge differences. I think many people, especially in the world of computer science, don’t fully understand the nature of the brain and how it processes information.

One of the mysteries, for example, is sleep. Sleep is a biologically universal phenomenon. And the question is: why do we sleep? We don’t really know why we sleep. There are many theories and ideas, but in truth we don’t know.

HOST:
So would you say the brain is not a machine at all?

SÜDHOF:
I would not call it “a machine” in the sense people usually mean. The brain is not like a computer. The way it works is fundamentally different.

Also, when it comes to “AI,” it ultimately works through amalgamation, synthesis. It is always an extract that in some way reflects an average, while the human being, in their nature, is in some sense a variation. So I would ask: is AI only a statistical machine—still only statistical?

HOST:
When people say “AI understands,” what do you think? Does it understand?

SÜDHOF:
I’m trying to understand the differences between human understanding and artificial understanding. And I’m not sure how far this will go or where it will lead us. But I suspect that the nature of humanity may be best defined in the context of language use.

For example: for AI, this page is only a set of numbers, vectors. For our brain it is a network of associations. Is that the fundamental difference—something that distinguishes AI from the human brain? There are many differences.

The most obvious difference is that AI has infinite memory. We humans have very limited memory. Our memories are so limited that we often think we remember something that in fact never existed.

When something is written on a piece of paper, we associate it with something else. This process in humans is flawed, but in AI it is very precise. In fact, this is probably one of the best uses of AI right now: working with the enormous amounts of data that already exist.

HOST:
But aren’t you idealizing AI’s memory? Humans misremember, but AI can also be wrong.

SÜDHOF:
AI can be wrong because in databases there is so much that is wrong. AI often produces conclusions that are not correct. Tools like ChatGPT can be useful because you can get many things quickly, many directions, many bits of information. But quite often the answers are simply incorrect.

HOST:
So what do we do with that? And can AI help science?

SÜDHOF:
In science we have more and more data. One of the biggest problems we face is not fraud or fabrication, but that there are many supposedly peer-reviewed reports that are defective. A scientist, after careful analysis, can see that. But an AI computer is not able to recognize it.

At the same time, could AI help researchers solve Alzheimer’s disease and other neurological diseases in the future? Of course. It is absolutely essential.

HOST:
This sounds like a contradiction. On the one hand you say AI cannot recognize flawed work, and on the other you say it’s essential.

SÜDHOF:
Let’s go back to the nature of AI. The nature of AI is based on a kind of black box process that I certainly do not understand. But as I hear from others, even computer scientists often don’t understand it—how it transforms huge amounts of data into patterns of association, and perhaps even cause-and-effect relationships.

HOST:
So AI has patterns, associations… but earlier you said it’s “only vectors” and “only numbers.”

SÜDHOF:
For AI, it is numbers and vectors. For the brain, it is a network of associations. There is a difference.

HOST:
Also, I recently spoke with John Hopfield, this year’s Nobel Prize laureate for the development of AI. I asked him if he regrets anything – if he could go back in time, whether he would do something differently. And do you know what his answer was? He said he regrets that AI has so far been created separately from biology, developed only by programmers and belonging to computer science, not to biology and neuroscience. That was his answer.

SÜDHOF:
Interesting answer.

I think many people in computer science don’t fully understand the nature of the brain and how it processes information. Actually, we ourselves do not understand it either, but we know that it is something completely different from how a neural network works in a computer.

HOST:
You emphasize language, memory, associations. What about “knowledge”? Is the brain mainly knowledge? Or experience?

SÜDHOF:
Knowledge is a different topic. In the brain, knowledge takes different forms.

There is knowledge we can recall as facts. These are things we do not need to experience, but we can know them. For example, we can state factually that the Moon orbits the Earth and the Earth orbits the Sun. This is not something we experience; it is something we deduce, something we learn.

On the other hand, there is knowledge that can only be acquired through experience. For example, how to ride a bicycle. You will never learn to ride a bicycle without actually experiencing it. This is a different kind of memory, a different kind of learning.

HOST:
Thank you for your time.

SÜDHOF:
Thank you.

My counterarguments to Südhof’s claims

The interview with Thomas Südhof opens with a strong statement. Right at the beginning he says that the human brain is only slightly similar to a computer, and right after that he suggests that people in IT do not fully understand the nature of the brain. And that is where I immediately hit STOP.

Because later, roughly in the middle of the conversation, Südhof admits something that is crucial for this debate: that he himself certainly does not understand the mechanism of this “black box” – AI, neural networks, how it really works.

And this is not a “gotcha.” This is not a cheap trick. It is about setting the BAR for how strong your conclusions are allowed to be later. If you say “I don’t understand,” your claims have to be more modest and more conditional. Yet at times he goes for absolutes.

First theme: sleep

Let’s take his first example, which is meant to support the thesis “the brain is not a computer.” Early in the interview Südhof brings up SLEEP as one of his main arguments. He says, roughly, that sleep is a mystery, and later he frames it even more strongly: that we do not really know why we sleep – there are many theories, but we do not know.

Now, this is not an argument against the idea that the brain is a MACHINE. It is an argument against OUR CERTAINTY. The fact that we do not fully understand the function of sleep says “biology is hard,” not “biology is non-mechanical.”

Let me put it plainly: a machine can have cycles of regeneration, maintenance, service windows. A server has a maintenance window. An organism has sleep. This is not an ontological difference. It is an engineering difference.

And if someone tries to suggest that “since sleep is a mystery, the brain is not a machine,” my answer is: NO. That is a logical error. “We don’t know” does not imply “it cannot be explained mechanistically.”

Second theme

Südhof says early on that AI works through “amalgamation,” “synthesis,” that it is an “extract” and in some sense “reflects an average,” whereas a human is a “variation.”

It sounds impressive, but there is a problem: he is selling a METAPHOR as an argument. Because a human is also an extract and an average – just an extremely complex one.

The brain averages in perception. It averages in learning. It averages in categorization. If you recognize a “chair,” it is precisely because you stopped seeing a million unique chairs as incomparable “variations,” and started seeing a shared structure.

In addition, Südhof uses the word “statistical” like an insult – he says it is “still only statistical.” But the brain, to survive, MUST be statistical. Without statistical generalization, you would be trapped in single episodes.

So the contrast “AI is an average, the human is a variation” is rhetorical. At best it says: AI and the brain generalize differently. But he does not spell that out, and by not doing so he leaves the listener with a misleading impression that “average” is proof of inferiority.
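To make the averaging point concrete, here is a toy sketch (all feature values invented for illustration) of a nearest-prototype classifier: it collapses many unique examples into one averaged prototype per category, and that collapse is exactly what makes recognizing a new “chair” possible.

```python
# Toy nearest-prototype classifier. The features and numbers are invented
# for illustration; the point is that category recognition can be modeled
# as comparison against an AVERAGE of past examples.

def prototype(examples):
    """Element-wise average of feature vectors: one prototype per category."""
    n = len(examples)
    return [sum(v[i] for v in examples) / n for i in range(len(examples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(item, prototypes):
    """Assign the category whose averaged prototype is nearest."""
    return min(prototypes, key=lambda name: distance(item, prototypes[name]))

# Hypothetical features: [number_of_legs, has_back, is_soft]
chairs = [[4, 1, 0], [4, 1, 1], [3, 1, 0]]
stools = [[3, 0, 0], [4, 0, 0], [1, 0, 0]]
prototypes = {"chair": prototype(chairs), "stool": prototype(stools)}

print(classify([4, 1, 0], prototypes))  # prints "chair"
```

A new object is recognized not by matching any single remembered chair, but by its distance to the average – which is the sense in which “an extract that reflects an average” describes the brain as much as it describes AI.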

Third theme: memory and scale

In the early part of the conversation Südhof claims that the most obvious difference is that AI has INFINITE memory, while humans have limited memory. And he adds that people often “remember” things that never actually existed.

I agree on one point: human memory is reconstructive and unreliable. But the line about “infinite AI memory” is a TED-talk simplification, not precision.

Because he mixes two different things. First, “memory” as the capacity of an infrastructure to store vast amounts of data somewhere. Second, “memory” as the ability to retrieve and use something HERE AND NOW – and in that second sense, today’s models are bounded too: a context window is strictly limited, and the weights are a fixed-size, lossy compression of the training data.

What’s more, around the middle of the interview Südhof says something that sounds like classic, old thinking about computers: that for AI a page is “a set of numbers, vectors,” while for the brain it is “a network of associations.”

And here the INCONSISTENCY shows up. He uses the word “vectors,” and then elsewhere he talks about AI as a system that turns data into “patterns of association.” On the one hand: “it’s only vectors,” on the other: “it’s patterns of associations.”

But if vectors are “ONLY numbers,” then neural impulses are “ONLY action potentials.” Neurotransmitters are “ONLY chemistry.” That “only” is empty.

Meaning is not a magical substance glued onto biology. Meaning is a property of how representations function in a system – how they affect predictions, decisions, behavior.

So when Südhof tries to build his argument on “meaning versus numbers,” he is effectively falling back on the image of a computer as a LOGICAL DATABASE, not on what modern models are: machines that build and transform representations.
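A minimal sketch of why “only vectors” and “patterns of association” are the same thing, using hand-invented 3-dimensional embeddings standing in for the hundreds of dimensions real models use: geometric closeness between vectors *is* an association.

```python
# Hand-made toy embeddings (invented numbers). In real models these vectors
# are learned and have hundreds of dimensions; the principle is the same:
# nearness in vector space encodes association between concepts.

def cosine(a, b):
    """Cosine similarity: near 1.0 for same direction, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

print(cosine(vectors["king"], vectors["queen"]))  # high: associated concepts
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated concepts
```

The numbers are arbitrary, but the relation between them is not: the geometry is where the associations live, which is why “it’s only numbers” explains nothing.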

Fourth theme: truth and quality

In the middle of the conversation Südhof says AI is rather weak because databases contain so many errors, and AI’s conclusions are often incorrect. He adds that tools like ChatGPT are useful, but quite often the answers are simply wrong.

I agree with this warning. It is a real problem.

But then he jumps one level too far. He says that a scientist, after careful analysis, can detect defective papers, while an “AI computer is not able” to recognize them.

That sentence is TOO STRONG, and logically fragile. Because humans also very often fail to recognize errors if they consume bad sources, if they are biased, if the topic is beyond their competence, or if they are tired and “buy” a smoothly written narrative.

“Careful analysis” is not a magical brain function. It is a PROCEDURE. And procedures can be coded, supported, partially automated.

The difference is not that humans have “truth built in” and AI does not. The difference is that humans, when they choose to, can reach for verification tools: experiments, measurements, criticism, replication, institutions. AI without these verification loops will produce beautiful sentences, not guarantees of truth.
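One concrete example of such a codable procedure, sketched here in simplified form: the GRIM test checks whether a reported mean is arithmetically possible given the sample size, assuming the underlying measurements were whole numbers (e.g. Likert-scale responses).

```python
import math

# Simplified GRIM-style consistency check. Assumes the raw data were
# integers; a reported mean must then equal some integer total divided
# by the sample size, so many decimal values are simply impossible.

def grim_consistent(reported_mean, n, decimals=2):
    """True if some integer total over n samples rounds to the reported mean."""
    for total in (math.floor(reported_mean * n), math.ceil(reported_mean * n)):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False

print(grim_consistent(3.57, 28))  # True: 100 / 28 rounds to 3.57
print(grim_consistent(3.58, 28))  # False: no integer total produces 3.58
```

A check like this does not replace a scientist’s judgment, but it shows the direction: “careful analysis” decomposes into procedures, and procedures can be automated and run at a scale no human reviewer can match.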

And that is almost funny, because it hits not so much AI as the way AI is used. The problem is not “AI is bad,” but “people want to turn it into an AUTHORITY.”

Fifth theme

Toward the end Südhof distinguishes factual knowledge from experiential knowledge. He says you can learn facts without experience – like the Moon orbiting the Earth – but there are things you will never learn without direct experience, like riding a bicycle.

That is an accurate observation about types of learning. But it does not support his thesis that “the brain is not a machine.” It supports the thesis that “intelligence is not just declarative facts.”

And the fact that something requires experience does not make it “non-mechanical.” It makes it EMBODIED. Machines can also learn motor control through sensors and feedback loops. Robots do this.

So this argument works only if someone has a very naive picture of a machine as an “encyclopedia.” But a machine can also be a system of control, adaptation, and learning.
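A minimal sketch of that kind of machine, with invented gains and dynamics: a proportional feedback loop that senses its error and acts to correct it – the same closed sense-act-adjust cycle a robot uses when it acquires motor control through experience rather than by looking facts up.

```python
# Minimal proportional feedback controller (all numbers invented for
# illustration). The "machine" here is not an encyclopedia: it senses the
# gap between where it is and where it should be, and acts to close it.

def run_feedback_loop(setpoint, position=0.0, gain=0.5, steps=20):
    """Each step: sense the error, then move a fraction of it."""
    for _ in range(steps):
        error = setpoint - position  # sense
        position += gain * error     # act on the sensed error
    return position

final = run_feedback_loop(setpoint=10.0)
print(round(final, 4))  # converges to 10.0
```

Nothing here is declarative knowledge; the “skill” exists only in the loop itself, which is precisely the experiential kind of learning Südhof describes.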

Sixth theme

Later in the interview Kawecki brings up his conversation with John Hopfield and quotes his regret: that AI developed separately from biology, that it belongs to computer science rather than biology and neuroscience. Südhof does not explicitly object, but again asserts a categorical difference between the brain and artificial neural networks.

Here I am ambivalent. The moderate version – “it is worth drawing inspiration from biology” – sure. That is reasonable.

But if someone suggests the extreme version – “without biology there will be no real intelligence” – it starts to sound like carbon chauvinism, insisting that the biological substrate has a privileged status instead of focusing on functional similarities.

Because if the essence of intelligence is information processing, learning, building representations, predicting, acting, there is no logical necessity for it to be “carbon-based.” The substrate may matter practically, but it is not an argument by itself.

And now I return to what is most telling for me in this conversation. At one point Südhof says directly that the nature of AI is a black box that he does not understand.

And after that sentence he still makes hard claims about what AI is and what it “cannot do.”

That is the core of my critique. Not that he warns people. Warnings are needed. He is right that AI can sound authoritative and still be wrong. He is right that science faces a flood of low-quality content.

But when he turns that into a story that “the brain is not a computer,” “AI is an average,” “AI cannot recognize error,” while smuggling in an image of a computer as a database without meaning, my answer is: NO.

This is a dispute between metaphors and mechanisms, and in this interview metaphors win too often.

So what is the most honest conclusion?

This: Südhof does not disprove the thesis that the brain is a biological machine. He disproves the naive belief that generative AI is an automatic truth engine.

If you read him that way – as a warning against the cult of AI – it is valuable.

But if you treat it as a hard argument that the brain is “something other” than a machine, and AI is only soulless statistics without meaning, then the arguments are simplified, inconsistent, and often built on an outdated intuition of what a “computer” is.

In short: the brain is a machine. A very strange one, very energy-efficient, very embodied, evolutionarily twisted. But still a machine. And AI – at least today’s – is also a machine. Just a different one. And that “different” is more interesting than “magical” versus “statistical.”