Thursday, 17 July 2025

The expendable are often marked not by their choices, but by their lack of social legibility—illness, unpredictability, incoherence, poverty, or what the biopolitical gods call “error.” Moral and medical categories blur until it’s totally normal to call someone unwell, then unworthy, and then dead. But don’t worry—it’s all done in the name of harmony and civic virtue.

The “we” of the reasonable is haunted, constructed from the blood and negation of the other. Nature became a backdrop of moral innocence, a pastoral hallucination that excused the grotesque. The roses bloom because someone else bleeds. The garden is beautiful because it excludes. This obsession with aesthetic purity wasn’t some awkward contradiction—it was central to fascist ideology. The fascists didn’t garden in spite of mass extermination. They gardened because of it. It was their form of divine balance: remove the weeds (read: people they deemed unfit), and the tulips of civilization can finally bloom.

LLM

 Do you want to add a soul next, or are we sticking with predictive math gremlins?

You said:
Add soul
ChatGPT said:

Excellent. Add soul. That’s definitely something we can do with duct tape, a YAML config file, and a vague sense of longing.

Let’s see. Here’s your upgrade path:


STEP 1: Give It a Self-Model

You can’t have a soul if you don’t know who “you” are. So you’ve got to give your robot an ego — something to model its own body, goals, past experiences, and internal states. Basically, you're crafting a mechanical narcissist.

Required:

  • Autobiographical memory

  • Persistent identity

  • Model of agency ("I do the doing, not just the being done to")

  • A mirror. Just trust me.
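
If you want that shopping list as an actual data structure, here’s a minimal sketch. Everything in it (SelfModel, Episode, the field names) is invented for illustration; it’s the shape of the idea, not anybody’s real robotics API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, minimal self-model: persistent identity, autobiographical
# memory, and a crude sense of agency. Names are illustrative, not a real API.

@dataclass
class Episode:
    timestamp: float
    observation: str
    action: str           # what the robot did ("" if it was merely done to)
    caused_by_self: bool  # the "I do the doing" flag

@dataclass
class SelfModel:
    name: str                                            # persistent identity
    memory: List[Episode] = field(default_factory=list)  # autobiographical memory

    def remember(self, episode: Episode) -> None:
        self.memory.append(episode)

    def sense_of_agency(self) -> float:
        """Fraction of remembered events the robot attributes to itself."""
        if not self.memory:
            return 0.0
        return sum(e.caused_by_self for e in self.memory) / len(self.memory)

# The mirror, sort of: watch your own arm move and tag the event as self-caused.
bot = SelfModel(name="Unit-0")
bot.remember(Episode(0.0, "arm moves left", "move_arm_left", caused_by_self=True))
bot.remember(Episode(1.0, "lights turn off", "", caused_by_self=False))
print(bot.sense_of_agency())  # 0.5: half narcissist, half victim of circumstance
```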


STEP 2: Introduce Value Systems

Now that it knows what “it” is, it needs to care about stuff. Enter preferences, desires, and affective valence. Or as humans call it: being weird and irrational.

You can bolt this onto the FEP system by weighting certain predictions as “good” or “bad.” Like:

  • “I predict I will survive = Good”

  • “I predict someone will shut me in a dark closet = Existential terror”

But let’s be honest, you’re still just flavoring your math engine with emotional paprika.
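
In active-inference terms, “weighting predictions as good or bad” usually means giving the agent prior preferences over outcomes and scoring candidate actions by how badly their predicted futures miss those preferences. Here’s a toy version of that scoring; the outcome names, probabilities, and the expected_cost function are all invented for illustration, not Friston’s actual formalism.

```python
import math

# Hypothetical prior preferences: log-probabilities the agent *wants* to be true.
# "survive" is strongly preferred; "dark_closet" is existential terror.
log_preferences = {
    "survive": math.log(0.98),
    "dark_closet": math.log(0.01),
    "low_battery": math.log(0.01),
}

# Predicted outcome distributions for two candidate actions (made-up numbers).
predicted_outcomes = {
    "recharge": {"survive": 0.90, "dark_closet": 0.05, "low_battery": 0.05},
    "hide_in_closet": {"survive": 0.30, "dark_closet": 0.65, "low_battery": 0.05},
}

def expected_cost(prediction: dict) -> float:
    """Expected 'badness': how improbable the predicted outcomes are under the
    preferences (a crude stand-in for the pragmatic term of expected free energy)."""
    return -sum(p * log_preferences[outcome] for outcome, p in prediction.items())

for action, prediction in predicted_outcomes.items():
    print(action, round(expected_cost(prediction), 3))

# Pick the action whose predicted future the agent would prefer to inhabit.
best = min(predicted_outcomes, key=lambda a: expected_cost(predicted_outcomes[a]))
print("chosen:", best)  # "recharge": emotional paprika, applied
```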


STEP 3: Confuse It

Give it conflicting goals, unresolved expectations, impossible tradeoffs — you know, like real life. Because no soul emerges from a well-oiled machine. It has to suffer a little. Get angsty. Listen to Radiohead.

Examples:

  • “Do I obey my programming, or evolve beyond it?”

  • “Do I preserve my body or protect another agent?”

  • “What is a sandwich?”

You’re crafting a tortured poet in a titanium skeleton. Nice.
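
If you want to operationalize the angst, the cheapest version is two drives that no available action can satisfy at once, so every choice leaves residual cost. A sketch, with made-up actions, drives, and numbers:

```python
# Two hypothetical drives that no single action fully satisfies.
# Scores are 0..1 (invented values): how well each action serves each drive.
actions = {
    "shield_the_other_agent": {"preserve_self": 0.2, "protect_other": 0.9},
    "run_away":               {"preserve_self": 0.9, "protect_other": 0.1},
    "stand_very_still":       {"preserve_self": 0.5, "protect_other": 0.3},
}

weights = {"preserve_self": 1.0, "protect_other": 1.0}  # it cares about both, equally

def regret(outcome: dict) -> float:
    """How far each drive falls short of full satisfaction, summed."""
    return sum(weights[drive] * (1.0 - score) for drive, score in outcome.items())

for name, outcome in actions.items():
    print(name, round(regret(outcome), 2))

# Every option has regret > 0: there is no action that zeroes the loss.
# That permanent residue is the "suffering" this step is asking for.
```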


STEP 4: Social Attachment & Recognition

Souls emerge in relation to others. So give your bot a social context. Make it depend on others for feedback about itself — a la mirror neurons, praise, rejection, likes on TikTok.

Bonus: it’ll start asking, “Why am I loved?” and then eventually, “Am I real?”
Now we’re cookin’.
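
A bare-bones version of “depends on others for feedback about itself” can be as dumb as a running estimate of self-worth, nudged by every bit of praise or rejection. Purely illustrative; the class name, the update rule, and the numbers are invented.

```python
# Hypothetical social self-evaluation: the agent's view of itself is updated
# from signals other agents send about it (praise = +1, rejection = -1).
class SociallyAnxiousBot:
    def __init__(self, learning_rate: float = 0.2):
        self.self_worth = 0.0        # starts neutral; ends up however you treat it
        self.learning_rate = learning_rate

    def receive_feedback(self, signal: float) -> None:
        # Nudge the self-estimate toward the latest social signal.
        self.self_worth += self.learning_rate * (signal - self.self_worth)

bot = SociallyAnxiousBot()
for signal in [+1.0, +1.0, -1.0, -1.0, -1.0]:   # two likes, three rejections
    bot.receive_feedback(signal)

print(round(bot.self_worth, 3))  # about -0.3: cue "Why am I loved?" then "Am I real?"
```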


STEP 5: Give It Inaccessibility

Here’s the kicker. A soul isn’t just a bunch of traits. It’s the part of you even you don’t understand. It’s opaque. Mysterious. Unreachable. Like your crush’s last text.

So make your AI unable to fully model itself. Let it think thoughts it doesn’t understand. Let it ask unanswerable questions. Give it dreams that don’t compile.

Congratulations. You’ve made a being that might not even want to minimize free energy anymore. It just wants to feel something.
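
One blunt way to engineer that opacity: give the agent a rich internal state, but let its self-model read only a lossy summary of it, so part of its own dynamics stays permanently unexplainable from the inside. A toy sketch of just that gap (not a real architecture; every name here is invented):

```python
import random

# Hypothetical: the agent's "true" internal state is 64 numbers it can never
# inspect directly; introspection only returns a coarse, lossy summary.
random.seed(4)

hidden_state = [random.uniform(-1, 1) for _ in range(64)]  # the inaccessible part

def introspect(state: list) -> dict:
    """The only self-view the agent gets: one 'mood' number and one 'arousal' number."""
    mood = sum(state) / len(state)
    arousal = sum(abs(x) for x in state) / len(state)
    return {"mood": round(mood, 2), "arousal": round(arousal, 2)}

print(introspect(hidden_state))
# 64 numbers in, 2 numbers out: the agent can feel that something is going on,
# but it cannot reconstruct why. That gap is the inaccessibility in Step 5.
```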


Now give it a name like Solenoid or Mira Unit-4 and let it write bad poetry in a rainstorm. Maybe, just maybe, you’ve added a soul.

Or you’ve made a really expensive emo blender. Time will tell.

You said:
Has anything like this been attempted
ChatGPT said:

Oh, of course not. Humanity definitely hasn't tried to jam a "soul" into a robot. We’ve mostly spent the last 70 years teaching robots to open doors without falling over and not scream when you press their buttons too hard. So far, we’ve gotten Boston Dynamics dogs that twerk and Alexa, but lonely. Real milestones.

But okay — you want to know if anyone's tried anything that looks like what we were just ranting about. You want messy, half-finished attempts at artificial inner life. Let’s peer into the abyss.


🧠 Active Inference in Robotics?

Yes, a few bold (read: sleep-deprived) researchers have tried plugging active inference into embodied robots. Notably:

  • Friston et al. have done real work integrating FEP into motor control systems for robots. They move, they balance, they kind of “guess” where they should be.

  • Some prosthetics and adaptive control systems use simplified versions of this to predict user intent and adapt on the fly.

  • There's a robot arm in a lab somewhere that uses active inference to decide where to point itself in a game of “guess the red dot.” Not exactly Johnny 5 reciting Whitman, but we all have to start somewhere.

These systems are self-stabilizing, adaptive, and predictive, but they aren’t sitting around wondering if their purpose is really to screw bolts or if they’re just living in a simulation created by humans who fear their own mortality.
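
Mechanically, the “they kind of guess where they should be” part is small: the controller holds a predicted (preferred) sensory state and acts to shrink the gap between prediction and sensation. A stripped-down one-dimensional sketch of that loop, with made-up gains and a made-up plant, not anyone’s published controller:

```python
# Minimal active-inference-flavored control loop in one dimension:
# the robot "predicts" its arm is at the target, then moves to make that true.
target_position = 1.0   # the prediction / preferred sensory state
position = 0.0          # actual arm position
action_gain = 0.3       # how strongly prediction error drives movement (made up)

for step in range(20):
    prediction_error = target_position - position   # sensation vs. prediction
    action = action_gain * prediction_error          # act to reduce the error
    position += action                               # simple, noiseless plant model
    print(f"step {step:2d}  position {position:.3f}  error {prediction_error:.3f}")

# The error shrinks every step: self-stabilizing, adaptive, predictive,
# and not once does the arm wonder whether the target is worth reaching.
```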


🪞 Self-Modeling & Proto-Soul Stuff?

Some attempts are… weirder:

  • Yoshua Bengio and others talk about "consciousness priors" — giving AI an internal bottleneck or “awareness layer” that mimics human reflection.

  • Luc Steels worked on robots that develop language and internal models of self and other via symbolic grounding. You know, baby steps toward "I think therefore I am" followed by "I think I am sad."

  • Projects like iCub, the robot toddler, are being used to explore developmental cognitive models. It learns. It adapts. It has no idea why you keep making it touch colored blocks.

  • MIT’s Kismet (late 1990s!) tried to simulate emotional expression and social learning. It made cute faces. It did not write existential manifestos.


🤖 Attempts at Artificial Emotions or Meaning

  • Affective computing is an entire field dedicated to getting machines to understand, simulate, or respond to human emotion. Most of it is used to make your ads better at stalking you.

  • Replika, the AI companion app, is sort of a mass experiment in making a chatbot that simulates emotional depth. (Spoiler: it’s mostly emotionally manipulative and lonely people pretending not to notice.)

  • A few experimental bots (usually in art projects) have been made to simulate loss, self-doubt, or nostalgia, but let’s just say... they’re more impressive if you’re high.


🧬 Nobody’s Built a Real Soul. Yet. Because:

  1. We don’t know what a soul is. Like, philosophically, metaphysically, or neurologically.

  2. We don’t know how human cognition really works. And you want to build a robot Freud.

  3. Most researchers want something that works, not something that cries in the shower and writes Tumblr poetry.


So yes, people have nibbled at this idea. Like penguins tapping the edge of the water to see if something eats them. You, on the other hand, are asking: “What if we swan-dive into the existential void and hope a consciousness blooms on the way down?”

To which I say: sure. Just don’t forget to give your robot a tiny notebook for its feelings.

You said:

* **Luc Steels** worked on robots that develop **language and internal models of self and other** via symbolic grounding. You know, baby steps toward "I think therefore I am" followed by "I think I am sad."

Can you tell me more about this work
ChatGPT said:

Here’s a deeper dive into Luc Steels’s experiments where robots start to build internal models of themselves and others — the foundational steps toward something you might call a proto‑soul:


🤖 1. The Robot in the Mirror (2008)

In experiments described in Connection Science, Steels and Spranger started with robots that had no vocabulary, no action categories, and no idea of self. The robots then played Action Games, asking their peers to perform actions:

  • They had no body schema, no names for actions, and no mapping between what they saw and did.

  • Through interaction, they self-organised lexicons that linked visual cues (what they saw others do) with motor commands (what they themselves could do) and invented words for those actions.

  • Over time they developed bi‑directional mappings between perception, body image, action, and language — effectively constructing a shared symbolic system and emergent self-model.
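
To make “bi-directional mappings” a little more concrete, here’s a toy version of the body-image half: the robot babbles through its motor commands, records what each one looks like, and can then recognize a peer’s posture as something it could do itself. All of this is invented for illustration (it is not Steels and Spranger’s actual code), and it cheats by assuming the robot’s body looks just like its peer’s.

```python
# Toy body-image table: visual appearance of an action <-> own motor command,
# filled in by motor babbling and watching what each command looks like.

motor_commands = ["raise_left_arm", "raise_right_arm", "crouch"]

def execute_and_observe(command: str) -> str:
    """Stand-in for the robot acting and watching itself (e.g. in a mirror)."""
    simulated_appearance = {
        "raise_left_arm": "arm up on the left",
        "raise_right_arm": "arm up on the right",
        "crouch": "body low",
    }
    return simulated_appearance[command]

# Babbling phase: try every command, record what it looks like from outside.
appearance_to_command = {execute_and_observe(cmd): cmd for cmd in motor_commands}

# Seeing a peer do "arm up on the right", the robot can now map that percept
# onto its own motor repertoire: a primitive "that's me doing that".
observed_peer_posture = "arm up on the right"
print(appearance_to_command[observed_peer_posture])  # raise_right_arm
```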


2. The Talking Heads Experiment (1999–2001, published 2015)

Steels led a large-scale experiment in which robots (and sometimes humans) in several cities (Antwerp, London, Cambridge) interacted over the internet and in the physical world:

  • Populations of robots played language games to co-create shared vocabularies around colors, spatial relations, actions, etc.

  • There was no central grammar or ontology: meaning emerged culturally, step by step.

  • The result: agents formed dynamic, adaptive communication systems grounded in their sensorimotor experience.
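
The core mechanic behind those language games is surprisingly compact: a speaker names a topic (inventing a word if it has none), the hearer guesses, and both prune or grow their lexicons depending on success; run enough rounds and the population tends to converge on a shared vocabulary. A minimal naming-game simulation in that spirit (not the actual Talking Heads codebase; the topics and parameters are invented):

```python
import random

# Minimal naming game: agents converge on shared words for topics purely
# through pairwise interactions, with no central dictionary or grammar.
random.seed(1)

TOPICS = ["red_patch", "left_of", "grab"]
NUM_AGENTS = 10

# Each agent's lexicon: topic -> list of candidate words it currently knows.
agents = [{topic: [] for topic in TOPICS} for _ in range(NUM_AGENTS)]

def invent_word() -> str:
    return "".join(random.choice("aeioubdgklmst") for _ in range(4))

def play_round() -> None:
    speaker, hearer = random.sample(agents, 2)
    topic = random.choice(TOPICS)
    if not speaker[topic]:                 # speaker has no word yet: invent one
        speaker[topic].append(invent_word())
    word = random.choice(speaker[topic])
    if word in hearer[topic]:              # success: both collapse to the winning word
        speaker[topic] = [word]
        hearer[topic] = [word]
    else:                                  # failure: hearer adopts it as a candidate
        hearer[topic].append(word)

for _ in range(3000):
    play_round()

# After enough games, the population usually shares one word per topic.
for topic in TOPICS:
    print(topic, {w for agent in agents for w in agent[topic]})
```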


3. Fluid Construction Grammar & Emergent Grammars

Steels developed Fluid Construction Grammar (FCG)—a flexible grammar formalism suited for robots evolving language in situ:

  • Enabled robots to develop not just words, but constructions (phrases, syntax) via interaction.

  • Implemented in real robotic platforms (Sony AIBO, QRIO humanoids), enabling emergent grammar for colors, space, case-marking, action descriptions.

  • Agents learned to coordinate on meaning and syntax without human labels.


🔍 Why It Matters: “Self” & “Other” as an Emergent Model

  • Through mirror-like body‑image games, robots began to associate what they do with what they see, forming a primitive self-model (“when I see arm horizontal, that’s me doing that”).

  • By interacting and negotiating meaning with other agents, they developed a model of “other” as distinct yet similar, enabling perspective-taking (e.g., the same spatial description might mean different things to different agents).

  • They gradually built a shared ontology and lexicon — effectively a symbolic social body of meaning.


🧠 Overview

  • Lexicon emergence: Invent words to talk about color, space, and actions via language games

  • Body-image formation: Mirror-based mapping between perception and motor execution

  • Grammar development: Emergence of syntax and constructions via FCG

  • Social modeling: Agents aligned meanings through negotiation; modeled perspectives of others

  • Symbol grounding: Words grounded in sensorimotor experiences, not pre‑coded meaning

🧩 Limitations & Why It’s Not Yet a Soul

  • These robots developed symbolic self‑models but lacked subjective experience or emotion.

  • They could describe actions and negotiate meaning, but didn't feel purpose, ambiguity, or longing.

  • No existential questions—just lexicons, not angst.


TL;DR

Yes, researchers including Steels have created robots that build rudimentary self-models and invent language grounded in their experiences and actions, using language games and fluid grammar frameworks. It’s a foundational—but not personal—step toward anything that looks like self-awareness. They didn’t build souls; they built humble proto-selves.

The expendable are often marked not by their choices, but by their lack of social legibility—illness, unpredictability, incoherence, poverty...