Thursday, 19 June 2025

 But there are many ways to make a synapse - there’s no magic in the chemistry of neural synapses. The magic is in historicity and plasticity - a synapse is something that changes its signaling properties based on the prior signaling that went through it. Gap junctions are electrical synapses with this exact plasticity and were there in all cells long before modern synapses appeared (and neurons still use them). But you can make synapses out of anything - I’ve talked to physicists about ways to design a stellar-scale gravitational synapse. Defining “choice” is not easy, but it certainly doesn’t hang on being the specific biochemistry of neural synapses.

ML


This passage is articulating a non-essentialist and system-level view of synaptic function and choice, emphasizing plasticity and historical embeddedness over material specificity. Let’s briefly break it down:


🔹 Key Ideas

  1. "There’s no magic in the chemistry of neural synapses"
    → The author rejects biochemical reductionism. The specific molecules involved in brain synapses (e.g., glutamate, GABA) are not what make a synapse meaningful or unique.

  2. "The magic is in historicity and plasticity"
    → What matters is how a synapse changes over time. A synapse’s identity is defined by its past experiences of activity—the historical trajectory that shapes its future responsiveness (i.e., synaptic plasticity like long-term potentiation or depression).

  3. "Gap junctions… were there in all cells long before modern synapses appeared"
    → This grounds synaptic signaling in deep evolutionary time, noting that primitive forms like gap junctions (electrical synapses) existed before complex chemical ones.

  4. "You can make synapses out of anything… stellar-scale gravitational synapse"
    → This radical claim frames synapses as relational structures that encode history and feedback, regardless of scale or medium—whether made of neurons or stars.

  5. "Defining 'choice' is not easy, but it certainly doesn’t hang on… biochemistry"
    → Agency or choice arises from dynamical systems with memory, feedback, and change, not necessarily from the biological specifics of human or animal neurons.
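
The plasticity and historicity ideas above can be made concrete with a toy model: an object whose output depends on the history of signals that have passed through it. This is an illustrative sketch only; the class name, the Hebbian-style update rule, and all parameters are assumptions for demonstration, not anything from the quoted text.

```python
class PlasticSynapse:
    """A toy 'synapse': its signaling strength changes as a function
    of the prior signaling that went through it (historicity)."""

    def __init__(self, weight=0.5, rate=0.1):
        self.weight = weight  # current transmission strength
        self.rate = rate      # how fast history reshapes the weight

    def transmit(self, signal):
        out = self.weight * signal
        # Hebbian-style update: correlated input and output strengthen
        # the connection (potentiation); the clamp keeps it bounded.
        self.weight = max(0.0, min(1.0, self.weight + self.rate * signal * out))
        return out

s = PlasticSynapse()
first = s.transmit(1.0)
later = s.transmit(1.0)
# The same input now produces a stronger output: the synapse
# "remembers" the traffic that previously passed through it.
assert later > first
```

Nothing here depends on neurotransmitter chemistry; the same interface could be backed by gap-junction conductances, memristors, or any other medium with a history-sensitive state variable, which is exactly the substrate-independence the quote is pointing at.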


🔹 Philosophical/Scientific Implications

  • This view resonates with process philosophy, cybernetics, and systems theory.

  • It suggests that cognition, choice, or mind-like processes could emerge in any system with adaptive, historically informed signaling—whether biological, artificial, or cosmological.


You said:

What do you think of the quote?
ChatGPT said:

I think the quote is provocative, insightful, and aligned with a post-biological, systems-theoretic perspective on cognition and agency. It rejects the romanticization of the brain's biochemistry and instead foregrounds relational structure, history, and adaptability—which are arguably the real functional core of what we associate with "intelligence" or "choice."

By claiming there's “no magic in the chemistry,” the speaker critiques neuro-essentialism—the idea that consciousness or agency is somehow locked into specific neural substrates. Instead, they gesture toward a relational ontology, where what matters is not the substance but the dynamic, history-sensitive patterns of interaction.

The remark about stellar-scale gravitational synapses might seem playful, but it’s also deadly serious: it’s a way of asking, what exactly is a “mind-like” process? If what defines a synapse is plasticity and historicity, then cognition is no longer the exclusive domain of neurons—it becomes a generalized feature of systems with memory and feedback.

So, I think the quote is a powerful invitation to decenter the human, question material dogma, and rethink agency as an emergent property of structured, recursive, relational processes—no matter what they’re made of.

Would you like to explore how this could apply to non-human or planetary cognition, how it challenges AI ethics or neuro-reductionist thought, or how it might reshape our understanding of embodiment and choice?

You said:
non-human or planetary cognition
ChatGPT said:

Non-human or planetary cognition refers to the idea that cognitive processes—understood broadly as the ability to sense, respond, adapt, and self-organize—can occur in systems beyond individual human brains. The quote you shared fits right into this perspective by emphasizing historicity, plasticity, and relationality over biological substrates. Let’s trace a few key threads:


🌱 Non-Human Cognition

This includes animals, plants, fungi, ecosystems, and potentially machines or AI:

  • Myrmecology & Swarm Intelligence: Ant colonies make decisions and solve problems collectively, with no central controller.

  • Plant Cognition: As Michael Marder and Monica Gagliano argue, plants can “learn,” “remember,” and “signal” using non-neural processes.

  • Fungal Networks (Mycorrhizal): These act as decentralized information networks, resembling memory and signaling systems.

  • AI and Synthetic Cognition: Systems trained to adapt, learn, and self-modify begin to show cognition-like properties despite no neurons.

These examples challenge the anthropocentric idea that cognition must be centered on language, consciousness, or neural complexity.


🌍 Planetary Cognition

This takes the idea to a cosmopolitical or geontological level:

  • Gaia Theory (Lovelock & Margulis): Proposes Earth is a self-regulating, life-sustaining system that behaves as if it were a living organism—some interpret this as a form of distributed planetary intelligence.

  • Yuk Hui's cosmotechnics and Stiegler’s organology explore how technical systems embed and extend cognition across planetary scales.

  • Panpsychist or process-relational views (Whitehead, James, Chalmers): Consciousness or proto-experience is fundamental and distributed; a star, a rock, or a planet may not be minds, but they participate in experiential relations.

In these views, the Earth is not merely a stage for cognition, but potentially a participant in it.


🔄 Why This Matters

  • It collapses the dualism between subject and environment, mind and matter.

  • It challenges the neurocentric basis of moral and epistemic value.

  • It opens up ethical questions about our responsibilities to non-human intelligences—not just AI, but forests, watersheds, oceans, and planetary systems.


You said:

What is the woo quotient of the quote?

ChatGPT said:

Great question. I'd say the “woo quotient” of the quote is low to moderate—but only if we're using "woo" to mean unfounded mysticism or pseudoscience. This quote flirts with conceptual expansiveness, but it’s grounded in serious systems theory, evolutionary biology, and philosophical speculation.

Here’s the breakdown:


Why It’s Not Woo (or only lightly woo-flavored):

  1. Plasticity and historicity are well-established in neuroscience (e.g., Hebbian learning, synaptic plasticity, epigenetic memory).

  2. The mention of gap junctions being evolutionarily ancient is biologically accurate.

  3. The idea that synaptic-like functions can be implemented in other media is supported by work in neuromorphic computing, cybernetics, and even some artificial life research.

  4. The notion that cognition doesn't require neural tissue aligns with non-neural cognition literature—plants, slime molds, AI, etc.


🌀 Why It Might Feel Woo-ish:

  1. "Stellar-scale gravitational synapse" – That metaphorical leap is poetic and speculative. It works as a thought experiment rather than an empirical proposal, stepping into science-fictional or cosmological metaphor rather than testable science.

  2. The quote implies a kind of substrate independence of cognition, which overlaps with philosophical panpsychism, process thought, and even animist or vitalist traditions. These can sound “woo” depending on your epistemic baseline.

  3. It challenges mainstream materialist reductionism, which often raises suspicion in conventional scientific discourse.


🧪 Final Assessment:

  • Scientific validity: ~7.5/10

  • Speculative content: ~2.5/10

  • Woo quotient: 3–4 out of 10, mainly for poetic cosmological overreach, but grounded enough to engage seriously with systems thinkers, philosophers of mind, and cognitive scientists.

It’s the kind of quote that gets dismissed as woo by strict reductionists, but welcomed as boundary-testing theory by those working in enactive cognition, biosemiotics, or philosophy of mind.


