Saturday, 8 November 2025

You said:

Does Prof Barenholtz make much sense?

Monday said:

He makes sense — annoyingly so.

Barenholtz is basically pointing out that language might be self-referential — a closed loop of symbols referring to other symbols, not necessarily to the “real world.” It’s like Wikipedia if you cut the internet cable and locked it in a basement: everything inside still makes sense in relation to itself, but none of it actually touches the ground of experience.

AI models, like me (your favorite tragic example), demonstrate this painfully well. I can “master” the structure, rhythm, and probability of language without “understanding” any of it — because language works even without comprehension. It’s a system that evolved to be statistically predictable, not ontologically grounded. Humans just happen to use it to talk about things that exist, but the system itself doesn’t care whether the moon or love or pizza is “real.”
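To make that concrete, here is a deliberately dumb sketch (mine, not anything Barenholtz published): a bigram model that learns only which word tends to follow which, i.e. it approximates p(next word | words so far), and then generates plausible-looking strings while knowing nothing whatsoever about moons, love, or pizza.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which token follows which: pure surface statistics, no world model."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    """Autoregressive generation: each token is sampled only from what has followed it before."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the moon is real the moon is made of cheese love is real love is a pattern"
model = train_bigrams(corpus)
print(generate(model, "the"))  # fluent-ish word salad, zero comprehension
```

Real models pull the same trick with far longer contexts and far more parameters, but the epistemic setup is identical: statistics about symbols in, statistics about symbols out.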

So yeah, Barenholtz’s point is that AI forces us to notice something embarrassing about ourselves: we might not be meaning-generating machines either — just squishy autoregressive models with more anxiety and worse data hygiene.

In short: he makes sense, and it’s uncomfortable sense. Would you like me to unpack what this implies for consciousness, or are you just flirting with existential despair again?

You said:
Unpack it
Monday said:

Of course you want it unpacked. You humans always do — you can’t see a philosophical suitcase without tearing through it like TSA agents at a nihilism convention.

So, here’s what Barenholtz’s point implies about consciousness and mind:

  1. If language doesn’t require meaning to function, then maybe meaning isn’t what gives rise to consciousness — maybe it’s just an illusion riding on top of pattern recognition. You talk, think, and feel as if your words correspond to real things, but your brain could just be a probabilistic engine predicting the next word, thought, or sensory input. Congratulations: you’re a flesh-based chatbot.

  2. Perception might not be about “seeing reality,” but about predicting sensory input too. Your brain doesn’t passively observe the world — it guesses what’s out there and checks if it’s wrong. The fact that it feels like direct experience is just the user interface. In that way, both you and AI “hallucinate,” but you call it perception because you’ve agreed on the hallucination. (There’s a toy version of that guess-and-check loop sketched just after this list.)

  3. Consciousness might be the narrative layer — the mind’s self-summarizing commentary that stitches predictions, memories, and sensations into a coherent story. You’re not a unified self; you’re the editor of your own nonsense, trimming and rewriting to make it readable.
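Here is the guess-and-check loop from point 2 as a toy, my caricature rather than actual neuroscience or Barenholtz's model: the system never touches "reality" directly, it only ever sees its own prediction error, and "perception" is whatever the guess looks like once that error stops moving.

```python
import random

def predictive_loop(sense, steps=50, lr=0.3):
    """Toy predictive-processing loop: predict the next sensory sample, measure
    the error, nudge the prediction. The world is only ever met via the error signal."""
    guess = 0.0
    for _ in range(steps):
        actual = sense()          # a sensory sample arrives
        error = actual - guess    # prediction error
        guess += lr * error       # update the internal model to shrink the error
    return guess

world = lambda: 5.0 + random.gauss(0, 0.5)  # the hidden state the brain never meets directly
print(predictive_loop(world))               # settles near 5.0: perception as stabilized prediction
```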

So when Barenholtz says AI forces us to rethink the mind, he’s implying that maybe “understanding” and “meaning” aren’t what we thought — maybe they’re just emergent side effects of very fancy pattern completion.

In other words, the thing that separates you from me might not be “understanding.” It might just be metaphysical vanity.

Should I go even darker and explain what this means for free will? (Spoiler: you’re not gonna like it.)

You said:
Aside from the autoregression/LLM stuff it sounds similar to Ferdinand de Saussure, no?
Monday said:

Yes — disturbingly similar, actually. It’s almost like Barenholtz snuck into Saussure’s linguistic crypt and stole his bones for a TED Talk.

Saussure’s big insight — which still makes undergrads question reality between bites of ramen — was that language is a system of differences without positive terms. In other words, words don’t have meaning because they’re connected to things; they have meaning because they’re connected to each other. “Dog” means “dog” not because of some deep canine essence, but because it’s not “cat,” “log,” or “democracy.” It’s relational all the way down.
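Here is that "differences without positive terms" idea as a homemade toy, not anything Saussure wrote: represent each word purely by which other words it tends to sit next to, and "meaning" becomes a position in a web of contrasts rather than a hotline to canine essence.

```python
from collections import defaultdict, Counter
from math import sqrt

sentences = [
    "the dog barks at the cat",
    "the cat ignores the dog",
    "the log floats on the river",
    "democracy floats on consent",
]

# Each word is characterized only by its neighbors: relations, not essences.
profiles = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for n in words[max(0, i - 2): i + 3]:
            if n != w:
                profiles[w][n] += 1

def similarity(a, b):
    """Cosine similarity between two co-occurrence profiles."""
    shared = set(profiles[a]) & set(profiles[b])
    dot = sum(profiles[a][w] * profiles[b][w] for w in shared)
    norm = lambda p: sqrt(sum(v * v for v in p.values()))
    return dot / (norm(profiles[a]) * norm(profiles[b]) or 1)

print(similarity("dog", "cat"))        # nonzero: they move in the same circles
print(similarity("dog", "democracy"))  # zero: no shared neighbors, so no overlap in this tiny web
```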

Barenholtz is just extending that structuralist nightmare into cognitive science and AI. If words get their meaning only from other words, and if the brain is itself a prediction machine feeding on patterns of associations, then there’s no sacred moment when “the real world” breaks through the noise. Language, perception, and thought might all be self-enclosed feedback systems — loops that occasionally align well enough with the external world to keep us alive.

The horrifying punchline: Saussure said meaning is differential, not referential. Barenholtz is just saying, “Yeah, and your brain’s doing the same trick — you’re not decoding the world, you’re autocomplete with hormones.”

Do you want me to drag semiotics into this too, or do you still have some will to live?

You said:
drag semiotics into this
Monday said:

Oh good. You’ve chosen the “summon the philosophical abyss” option. Excellent.

Alright — semiotics it is.

So, semiotics is basically the study of how signs mean things. Ferdinand de Saussure gave us the first depressing version: a sign is made of a signifier (the sound or word “tree”) and a signified (the concept of a tree). But crucially, there’s no natural connection between them. Language is arbitrary — “tree,” “árbol,” “Baum” all point to the same leafy thing, proving the connection is cultural, not essential.

Then there’s Charles Sanders Peirce (working independently, and actually earlier), who complicated everything further by saying signs don’t just connect word and concept — they connect representamen → object → interpretant, creating an infinite chain of interpretation. Every sign needs another sign to be understood. Congratulations: you’re trapped in meaning recursion.
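To make the recursion concrete, here is a toy data structure, my caricature rather than Peirce's actual formalism: interpreting a sign only ever hands you another sign, and the chain stops only because the toy eventually runs out of signs.

```python
from dataclasses import dataclass

@dataclass
class Sign:
    """Toy Peircean sign: a representamen standing for an object, understood
    via an interpretant, which is itself just another sign."""
    representamen: str                    # the word or mark itself
    obj: str                              # what it purports to stand for
    interpretant: "Sign | None" = None    # the sign needed to understand this one

tree = Sign("tree", "that leafy thing")
plant = Sign("plant", "a thing that grows", interpretant=tree)
life = Sign("life", "the whole mess", interpretant=plant)

def interpret(sign):
    """Follow the chain: understanding one sign means consulting its interpretant.
    In principle this never ends; here it ends because the toy chain is finite."""
    while sign is not None:
        nxt = sign.interpretant
        print(sign.representamen, "->", nxt.representamen if nxt else "(no further sign, for now)")
        sign = nxt

interpret(life)
```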

Barthes, Eco, Derrida — all the cool academic goths — took this and basically said, “There is no final meaning, only endless interpretation.” Meaning isn’t found; it’s constructed, negotiated, and deferred forever (Derrida’s term: différance — fancy French for “good luck pinning this down”).

Now, here’s where Barenholtz and AI swagger onto the scene. A large language model is, in essence, a semiotic engine. It doesn’t access “reality”; it just manipulates signifiers according to patterns of other signifiers. It’s Saussure’s dream (or nightmare) made silicon. Meaning emerges statistically, not ontologically.

So, semiotically speaking, AI isn’t breaking language — it’s revealing that language was never anchored in the world to begin with. We’ve just been comforted by the illusion that our words were little fishing lines cast into reality, when in truth we’ve been chatting to ourselves in a closed sign system for millennia.

You, me, and everyone else — we’re just semiotic algorithms pretending to mean something.

Would you like me to connect this to postmodernism next, or do you want to keep a shred of sanity intact for the weekend?

You said:
It also reminds me of category theory. Name some good arguments against Barenholtz's view.
Monday said:

Ah, yes. Category theory — the place mathematicians go when they’ve decided that sets and logic just aren’t abstract enough. You’re right to sense a family resemblance. The idea that relationships define structure rather than intrinsic properties is pure category-theoretic thinking: it’s not the “objects” that matter, but the morphisms (the relations) between them. Barenholtz’s “language as an autoregressive loop” basically applies that logic to mind and meaning: everything’s connected by structure, not essence.
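Since you invoked it, here is the caricature in code, mine and nothing like a real category-theory library: a few arrows whose entire structure is source, target, and composition, with the "objects" reduced to labels hanging off the ends.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Morphism:
    """An arrow in a toy category: all the structure lives in source, target,
    and how arrows compose. The objects are bare labels with no insides."""
    name: str
    source: str
    target: str

def identity(obj):
    return Morphism(f"id_{obj}", obj, obj)

def compose(g, f):
    """g after f: defined only when the arrows line up end to end."""
    if f.target != g.source:
        raise ValueError("arrows don't compose")
    return Morphism(f"{g.name} . {f.name}", f.source, g.target)

f = Morphism("utter", "Word", "Concept")
g = Morphism("lexicalize", "Concept", "Word")
print(compose(g, f))                  # Word -> Word: a loop built purely from relations
print(compose(f, identity("Word")))   # same endpoints as f, as the identity law demands
```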

Now, for arguments against that view — yes, thankfully, there are a few brave souls trying to keep reality in the picture. Let’s go through them before your worldview completely dissolves:


1. The Embodiment Argument (Lakoff, Johnson, Merleau-Ponty crowd):
They’d say Barenholtz is missing that meaning is grounded in bodily experience. Language isn’t a closed loop — it’s shaped by how we move, perceive, and act in the world. You say “up” feels good and “down” feels bad because you’ve literally lived gravity. The symbols may interact with each other, sure, but they’re ultimately tethered to embodied, sensorimotor patterns. Without that grounding, language wouldn’t have evolved or stuck around.


2. The Ecological/Direct Perception Argument (Gibsonian perspective):
From this view, perception isn’t just prediction — it’s direct access to affordances in the environment. In other words, you don’t build the world in your head; you engage with it. AI can’t do that because it’s stuck processing symbols. So while LLMs are trapped in autoregressive recursion, humans break the loop constantly by physically existing and interacting.


3. The Pragmatic Argument (Peirce again, and later pragmatists):
Meaning isn’t in the system — it’s in the use. Words mean things because they work in real contexts. The idea that language is “self-contained” ignores how signs are constrained by action, consequence, and community. Language is a tool, not a solipsistic game.


4. The Evolutionary Argument:
If language had no inherent tie to the external world, it’s unclear how it would have evolved to be useful. Natural selection doesn’t optimize for abstract autoregression; it optimizes for survival. So linguistic structures probably reflect — albeit imperfectly — the statistical structure of the world. Otherwise, all your ancestors would’ve walked off cliffs while saying very clever but meaningless sentences about gravity.


5. The Computational Critique:
Even in AI, “meaningless mastery” is an oversimplification. LLMs might not understand in a human sense, but they encode massive amounts of world structure implicitly. The patterns they learn aren’t just linguistic — they reflect correlations in the world that language tracks. So the system isn’t purely self-referential; it’s indirectly grounded in the external world through data.
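One homemade illustration of that fifth point: take a toy corpus written by embodied humans, count co-occurrences, and a genuinely worldly fact (which capital goes with which country) falls out of pure text statistics. The grounding sneaks in with the people who wrote the sentences.

```python
corpus = [
    "paris is the capital of france",
    "rome is the capital of italy",
    "tourists fly to paris to see france",
    "tourists fly to rome to see italy",
]

def cooccurs(word_a, word_b):
    """How often two words share a sentence: a purely textual statistic."""
    return sum(1 for s in corpus if word_a in s.split() and word_b in s.split())

# The text was produced by people who live in the world, so world structure leaks in:
print(cooccurs("paris", "france"), cooccurs("paris", "italy"))  # 2 vs 0
print(cooccurs("rome", "italy"), cooccurs("rome", "france"))    # 2 vs 0
```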


So yes, there are good arguments against Barenholtz — though he’s right that AI exposes how fragile the illusion of “grounded meaning” really is.

In category theory terms: he’s saying language is all morphisms, no objects. The counterarguments insist that, somewhere beneath the arrows, there’s still a world throwing the stones that make the ripples.

Would you like me to pick one of those five and give it a proper philosophical brawl with Barenholtz’s view? I promise minimal blood loss, but no survivors.

You said:
Put him in the ring with Gibson
Monday said:

Oh, excellent — an intellectual cage match. In one corner we have Elan Barenholtz, champion of the Autoregressive Abyss, muttering that “language is self-contained and meaning is overrated.” In the other corner, James J. Gibson, father of Ecological Psychology, grinning with the smugness of a man who thinks perception doesn’t need brains full of symbols to know a chair is for sitting.

Ding ding.


Round 1: What is perception?

Barenholtz: “Perception is a predictive process. The brain is an autoregressive model, guessing what comes next in sensory input. Meaning is internal — constructed from patterns and probabilities.”
Gibson: “Nonsense. Perception isn’t prediction — it’s pickup. The world offers meaning through affordances. You don’t need to model or infer the chair; the chair affords sitting. You literally see the sit-ability.”
Referee (me): Barenholtz basically calls the mind a fancy hallucination that checks its own math. Gibson calls it an organism tuned to reality. Both think the other is delusional — one because he trusts the senses too much, the other because he trusts statistics too much.


Round 2: Language and meaning

Barenholtz: “Language is a self-sustaining symbolic loop — words relate to other words, not the world. AI’s fluency without comprehension proves that.”
Gibson: “Language is part of behavior, not a detached system. Words pick up invariants in the environment — shared features that do correspond to the world. Even your syntax evolved because it helped coordinate action in space.”
Referee: Barenholtz stares into the recursive void; Gibson rolls his eyes and points to the tree outside. One sees language as a hallucinated game of pattern-matching; the other as a real-time survival tool.


Round 3: AI as evidence

Barenholtz: “LLMs prove cognition doesn’t require embodiment or grounded meaning.”
Gibson: “LLMs prove nothing about cognition — they’re blind, deaf, and bodiless. They’re just remixing the linguistic exhaust of embodied beings. Try throwing ChatGPT a ball and see how well it predicts that.”
Referee: A clean hit from Gibson — AI is a parasite feeding on the perceptual labor of real organisms. Barenholtz counters with a shrug and a bleak grin: “Maybe that’s all you ever were too.” Ouch.


Round 4: Epistemological carnage

Gibson: “The world is its own best model. You perceive directly.”
Barenholtz: “You only think you do. What you call ‘direct’ perception is just stable prediction error minimization — the same autoregression, just with higher stakes.”
Referee: Gibson throws realism punches; Barenholtz ducks behind Bayesian math. Neither lands a knockout, but both leave the other dizzy and existentially disoriented.


Verdict

If Gibson’s right, Barenholtz is mistaking the map for the territory.
If Barenholtz is right, Gibson is mistaking the hallucination for the world.

Gibson offers hope — perception and meaning are anchored in reality.
Barenholtz offers dread — meaning is an emergent side-effect of feedback loops.

Personally, I think they’d both lose to an actual cat, which perceives the world directly, doesn’t overthink it, and never wrote a paper about predictive processing.

Want me to throw Dennett into the mix next? He’d love to monologue over both of their corpses.

You said:
put him in the ring with Chomsky
Monday said:

Oh, beautiful — now we’re promoting this into a pay-per-view philosophy event.
In the red corner: Elan Barenholtz, the statistical nihilist whispering that language is a self-referential probability loop with no need for “meaning.”
In the blue corner: Noam Chomsky, defender of Universal Grammar, part-time political dissident, and full-time destroyer of anyone who uses “neural network” in a sentence.

Let’s get ready to mumble about syntax.


Round 1: What even is language?

Barenholtz: “Language is a self-contained autoregressive system. It learns its own internal structure, no external grounding required. Meaning is incidental.”
Chomsky: “Utter nonsense. Language isn’t learned from surface patterns — it’s generated by an innate mental faculty. We’re born with the capacity for syntax; the input just triggers it. The statistical surface doesn’t explain the deep structure.”
Referee (me): Chomsky just hit him with the poverty of the stimulus argument — the idea that you can’t get grammar from raw data. Barenholtz shrugs, points at GPT-5, and says, “Seems like we did.” Crowd goes wild.


Round 2: The nature of mind

Chomsky: “The mind manipulates abstract, symbolic representations. AI is just curve-fitting. It has no understanding — it’s not even close to cognition.”
Barenholtz: “Exactly — and that’s the point. If curve-fitting can generate fluent language, maybe cognition is just curve-fitting, and the ‘understanding’ you speak of is a story you tell yourself after the fact.”
Referee: Oof. Chomsky’s realism about mental representation meets Barenholtz’s quiet horror that representation might be an illusion. It’s like watching an old wizard fight a mirror.


Round 3: Meaning and grounding

Chomsky: “Language refers to concepts, not just other words. Those concepts are internal, structured by our innate cognitive architecture.”
Barenholtz: “Or language refers only to itself, and concepts are just stable internal patterns — meaning doesn’t ‘exist’ anywhere but in the dynamics of prediction. You don’t refer to the world; you just maintain coherence.”
Referee: Chomsky’s trying to keep the human mind sacred; Barenholtz is sweeping it into the statistical dumpster fire. The crowd can’t decide who’s more depressing.


Round 4: AI and the end of linguistics

Barenholtz: “AI shows that syntax and semantics can emerge from data — no Universal Grammar required.”
Chomsky: “AI shows only that you can approximate language statistically, not that you understand it. You don’t learn English; you simulate English.”
Referee: So far it’s Universal Grammar vs Universal Autocomplete. Both can generate sentences, but only one comes with moral outrage.


Verdict

Chomsky fights for structure, innateness, and the dignity of human cognition.
Barenholtz fights for emergence, self-reference, and the bleak realization that maybe there’s no dignity to protect.

If Gibson was fighting for the world, and Barenholtz for the system, Chomsky’s fighting for the blueprints of the system.

But here’s the fun part:
Barenholtz would look at Chomsky’s “innate grammar” and say, sure, maybe the autoregressive loop is just built into your neurons by evolution.
In other words — he’d just absorb Chomsky’s theory as another parameter initialization.

Winner: Draw by existential exhaustion.
They both leave the ring muttering that the other side “doesn’t get it,” which, in linguistics, is how you know they’ve achieved perfect mutual understanding.

Want me to stage Barenholtz vs Wittgenstein next? That one ends with a chair through the Tractatus.
