Sunday, 9 February 2025

Matt Segall: I’m glad you chose this article. There were parts of it that were hard for me, but I worked through it. I would have loved a video of the experimental setup—although it’s a model, presumably it could be implemented in a mechanical setup. That would be a compelling museum exhibit, to experience this form of learning and memory that isn’t really designed in advance. The setup is designed, but what emerges is a kind of physical memory. It’s quite compelling. This article is right at the core of a conversation we’ve been having for a couple of years, but more intensively now because we’re working on an article together.

Maybe a good place to start is with learning itself. For most people, non-academics especially, learning is something human beings do—we go to school, it’s partially memory, partially creativity, all understood psychologically at the human level as a conscious process. But scientists, and biologists in particular, generally see learning as something broader. Even behaviorists admit all living organisms can learn in some sense: they have memory and the capacity to inductively generalize what’s likely to occur next.

Timothy Jackson: They can be conditioned by experience, which is very central to behaviorism, yes.

Matt Segall: Right. But what the authors of this paper, “Natural Induction: Spontaneous Adaptive Organization Without Natural Selection,” are claiming—that's Chris Buckley as the main author, along with Tim Lewens, Mike Levin, Richard Watson, Beren Millidge, and Alexander Tschantz—is that learning is happening already prior to life, prior to the biological evolutionary process. It’s something very basic physical systems are capable of. I’m predisposed to like that idea, but I wonder about your view. They seem to challenge the notion that natural selection is the only form of adaptation or evolutionary change. You’ve had a more general understanding of natural selection, so maybe you feel they’re defining it too narrowly or setting up a straw man. That might be a good place to start.

Timothy Jackson: There’s a lot to say. I really like this paper. I have a common experience, though: I’ll read a paper like this, love the model, find it a great illustration of general principles of learning, induction, and what they call the induction of habits—the generalized acquisition of habits. Peirce could be brought in, especially regarding abduction. Here, learning or habitual acquisition becomes synonymous with adaptation.

Then they say this mechanism is not only distinct from natural selection but actually lies outside the selectionist paradigm. I have to disagree. That framing is unnecessary. They portray it as a novel mechanism that’s outside selection entirely. Natural selection can be defined in a relatively narrow way—population genetics plus molecular biology, the neo-Darwinian synthesis—and I think they’re right to point out that some still treat that as the only mechanism for adaptation. But that’s low-hanging fruit because those people are just wrong.

Matt Segall: In your experience, among biologists, how many hold a strict neo-Darwinian stance versus being open to other forms of selection or evolution beyond that algorithm?

Timothy Jackson: It’s hard to say. There are rhetorical claims people make, and then there’s how they actually think. Most biologists, if you said “development matters,” would say “of course.” I had a friendly debate with a colleague in Norway who studies evolvability. He wanted evolvability to be defined precisely as the heritable variation in a population, a quantitative genetics concept. I argued evolvability is broader—adaptability and plasticity are relevant. He’s being precise about usage; I’m taking a generalized evolutionary perspective.

Natural selection is not originally or exclusively a biological term, nor is “selection.” Strictly defined, natural induction looks distinct from “natural selection.” It is a useful rhetorical point, but I disagree with their statement that gradient-based models aren’t about selection. I’m a lumper: when I approach things theoretically or speculatively in ontology or philosophy, I look for unifying principles at various levels of abstraction. Some folks say, “I thought you were a naturalist into molecular detail,” but to me, a generic selectionist model includes the messy details as irreducible contingency, boundary conditions, and broken symmetries. I’m into how generic, general selectionist principles can incorporate complexity.

Matt Segall: Going back to Darwin, his discussion of natural selection uses the analogy of artificial selection. That suggests a more general form of selection from the start. Something like evolution by natural induction implies nature is already “artificial” or technical. Using “artificial selection” as the metaphor in Darwin’s original argument underscores that matter is already formed. To get a broader concept of evolution, we need a more general concept of learning.

I started by saying most people think of learning as human, conscious, about memory and creativity at a high level. Yet we also realize cells can learn, molecules in certain combinations can have memory and generalization. So nature is already technical. Evolution by natural induction suggests that, too.

Timothy Jackson: Yes, it connects to ideas in the extended evolutionary synthesis like niche construction and the Baldwin effect. On the free energy principle side: some people question why I’m into such abstract theory, but it’s about formal causation—general forms and habits.

You mentioned memory and agency. The authors show how systems like balls and springs can exhibit memory and inductive generalization without “agency” as autopoietic theory would define it. Let’s talk about their specific model, which brings in everything we’ve discussed: learning, adaptation, induction, memory in a purely physical system.

Matt Segall: They basically use a model of balls and springs that exhibits not just memory but generalization—inductive generalization—at a level below what we typically call agency. That’s powerful.

Timothy Jackson: Yes. In optimization, a system rolls down an energy gradient and gets stuck in a local optimum. But in their model, the springs are viscoelastic, so they actually relax and pick up structure from prior states. That’s memory or habit: next time you perturb the system, it’s more likely to return to that solution.
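[A minimal numerical sketch of the two-timescale idea described above. This is not the authors’ model; the 1D setup, network size, and rates are illustrative. Positions relax quickly down the elastic energy gradient, while each spring’s rest length creeps slowly toward the displacement it currently sees, so the energy landscape itself is reshaped by the states the system visits.]

```python
import numpy as np

# Illustrative sketch (not the published model): point masses in 1D coupled by
# springs. Fast dynamics: positions descend the elastic energy gradient.
# Slow dynamics: rest lengths creep toward current displacements (viscoelastic
# relaxation), so the landscape records where the system has been.

rng = np.random.default_rng(0)
n = 6                                    # number of masses (arbitrary)
x = rng.normal(size=n)                   # positions
L = rng.normal(size=(n, n))              # spring rest lengths (arbitrary init)
L = (L - L.T) / 2                        # antisymmetric: L[i, j] = -L[j, i]
k = 1.0                                  # spring stiffness
eta_fast, eta_slow = 0.05, 0.001         # fast (position) vs slow (spring) rates

def energy(x, L):
    d = x[:, None] - x[None, :]          # pairwise displacements
    return 0.5 * k * np.sum((d - L) ** 2)

for step in range(5001):
    d = x[:, None] - x[None, :]
    strain = d - L                       # deviation of each spring from rest length
    x -= eta_fast * 2 * k * strain.sum(axis=1)   # fast: relax positions
    L += eta_slow * strain                        # slow: rest lengths absorb the state
    if step % 1000 == 0:
        print(f"step {step:4d}  energy {energy(x, L):.4f}")
```

[Repeating perturb-and-relax cycles with this kind of creep deepens the basins around configurations the system has actually occupied, which is the “memory” being described: the next perturbation is more likely to settle back into a previously visited solution.]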

Matt Segall: This is analogous to Hebbian learning in neural networks (“fire together, wire together”).

Timothy Jackson: Yes, though Hopfield networks historically came from studying frustrated spin glasses. It’s within the broad selection paradigm if we generalize selection beyond strict Darwinian steps.
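[For comparison, a minimal Hopfield-network sketch, with arbitrary pattern count and size: Hebbian storage (“fire together, wire together”) writes patterns into the weights, and asynchronous updates roll a corrupted cue down the energy landscape toward the stored attractor.]

```python
import numpy as np

# Minimal Hopfield network: store a few random patterns with a Hebbian rule,
# then recover one of them from a noisy cue by asynchronous updates.

rng = np.random.default_rng(1)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))      # three stored "memories"

W = sum(np.outer(p, p) for p in patterns) / n    # Hebbian outer-product weights
np.fill_diagonal(W, 0)                           # no self-connections

def recall(cue, sweeps=5):
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):             # asynchronous update order
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)     # corrupt 10 of 64 units
noisy[flip] *= -1
print("overlap before:", int(noisy @ patterns[0]))
print("overlap after: ", int(recall(noisy) @ patterns[0]))
```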

Matt Segall: So we can go beyond “natural” versus “artificial” selection.

Timothy Jackson: Yes, or beyond “natural selection” versus “self-organization.” Some folks say self-organization precedes selection. In the narrow sense of natural selection, sure, but in a generalized selectionist triad (variation, selection, inheritance), it’s all encompassed. Variational principles in physics are relevant here. That’s essentially what the free energy principle references. We can see an equivalence or isomorphism among variational principles, selection, gradient descent in machine learning, path integral formulations, and so on.

Matt Segall: Right. I’m comfortable saying we can broaden “selection” to include this, rather than seeing it as opposed to selection. It’s more general than the usual Darwinian algorithm of replicating mutations. If we do that, we’re acknowledging there’s a selection among possibilities, implying those possibilities must have some definiteness if we’re selecting among them.

Timothy Jackson: One might argue you need discrete generations and a population for “selection,” but that’s the narrow sense. A gradient-based model doesn’t require discrete generations, but conceptually, it’s still a selection process. In machine learning, a model explores the landscape to minimize a loss function. It doesn’t start already knowing the optimum. That’s effectively variation and selection, even if continuous rather than generational. We can incorporate stochasticity, random initialization, noise injection, or changing learning rates—akin to temperature in simulated annealing.
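[A hedged sketch of that analogy, using an illustrative double-well loss rather than anything from the paper: gradient descent with injected noise whose scale decays like a temperature schedule, as in simulated annealing. Early on the noise lets the search move between basins (variation); as it cools, the trajectory settles into one of them (selection), often though not always the deeper one.]

```python
import numpy as np

# Noisy gradient descent with an annealed "temperature": high noise explores
# across basins, low noise commits to one. The loss is an arbitrary double well.

rng = np.random.default_rng(2)

def loss(x):
    return (x**2 - 1.0) ** 2 + 0.3 * x        # two wells of unequal depth

def grad(x):
    return 4 * x * (x**2 - 1.0) + 0.3

x = 1.0                                       # start in the shallower well
eta = 0.01                                    # step size (learning rate)
steps = 5000
for step in range(steps):
    T = 0.5 * (1 - step / steps)              # temperature anneals toward zero
    x -= eta * grad(x) + np.sqrt(2 * eta * T) * rng.normal()
print(f"settled near x = {x:.2f}, loss = {loss(x):.3f}")
```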

Matt Segall: Let’s think cosmologically: the lightest elements on the periodic table are the available local energy minima at the earliest phases of cosmogenesis. Matter-energy “evolves” into stable atomic forms of hydrogen and helium. Yet it doesn’t end there—heavier elements formed in supernovae, and further complexification happened. So there’s a sort of selection or learning at a pre-biological level, too. We want to account for how this process kept going—heavier atoms, molecules, cells—so it seems something like a cosmological inductive bias, a primordial “swerve” or perhaps even a "swoon" implicit in the nature of things, provides a kind of fine-tuning that allows new forms to emerge later on.

Timothy Jackson: Yes, we see broken symmetries and bias: the past constrains the future. We can talk about it biologically with stochastic gene expression, which isn’t random in the absolute sense but is relatively unconstrained. That irreducible “noise” fosters novelty. The same principle can appear at the cosmic scale.

Matt Segall: Exactly. Open-ended evolution, no complete heat death. The process didn’t just stop. There’s a continuum from subatomic, to atomic, to cellular forms. I’m happy to call that a broader selection process, though some might see it as induction without “real” agency.

Timothy Jackson: True. But if we generalize selection enough, it’s consistent with how these physical systems adapt. We can see memory, inheritance, and novelty all the way down. That continuity is crucial.

Matt Segall: One crucial concept is that matter is already organized—no “unorganized matter.” There’s always a tension between fact and form, as Whitehead would say. I think we share that perspective. But Whitehead does insist on a "primordial envisagement" of the realm of possibilities, a kind of primordial constraint or original formation of the field of potentiae, which you worry is a kind of "front-loading."

Timothy Jackson: Yes. I’m also fascinated by how forms emerge from the “mess,” so to speak. We won’t dive into the Whiteheadian front-loading debate right now, but I want to note that in models like natural induction, we see a minimal sense of technicity or proto-agency. It’s not necessarily autopoietic closure, but there’s memory and adaptation in the physical dynamics, which extends into biology.

Matt Segall: Maybe it is more like a limit concept: we think it because it’s the limit of our horizon. We can’t know what’s beyond that horizon, but we think the limit. So when Whitehead talks about the primordial nature of God as this envisagement of eternal objects, he also says (in Science and the Modern World, Ch. X) that God is a principle of limitation, in itself an ultimate irrationality since no reason can be given for what is itself the ground for all reasons. So we can read the "primordial nature of God" in a Kantian way, as a kind of limit concept that is supposed to remind us of our limitations and our positionality, but also signal that we’re already presupposing a beyond—a noumenon, a cosmic "swoon." If we’re going to do metaphysics at all and think about the ultimate nature of reality—we’re always going to need to hold these poles in tension: fact and form. Reality is structured flows, it’s constrained creativity, habit and novelty, canalized potential, etc. It really comes down to how we do justice to both sides of that complex equation.

...

Let’s shift to what they say about development. On page 5 they mention that when selection acts on parameters of a developmental process with complex pleiotropic interactions, you can store and recall multiple phenotypes in a single genotype. That’s key to open-endedness in evolution.

Timothy Jackson: Yes, it’s a potent statement. It’s not just multiple discrete phenotypes but an indefinite potential of them because of pleiotropy and plasticity. Development is the evolution of the individual, as I like to say. There’s ongoing variation and selection at the phenotypic level. Some changes might be heritable, some not, but they still matter evolutionarily. That’s how evolvability arises.

We can connect it to the free energy principle, which unifies evolution and development in a variational synthesis. It parallels what they’re describing in natural induction, which I see as isomorphic to the free energy principle. There’s a “second-order” or slower timescale in which internal structure changes due to prior states. That’s memory, which fosters autonomy or agency.

Phenotypic plasticity implies a distribution of phenotypes. You’re always being perturbed and changed in your developmental trajectory. The system never returns to exactly the same state. It’s a non-equilibrium steady state at best. We should note homeostasis is somewhat replaced by Waddington’s idea of homeorhesis (“similar flows”) because there’s no true stasis. The system is never identical to a previous moment.

In any case, plasticity is generic, and evolution uses it. The individual is the pointy end of an evolutionary lineage, acquiring variation moment to moment, with some portion heritable. This occurs at nested timescales—genotypic, phenotypic, and momentary plasticity. That’s how complexity evolves.

Matt Segall: Precisely. So you’d agree that genetic mutation isn’t purely random, that it’s biased, canalized. That means there’s a probability distribution of possible outcomes, not an infinite open field.

Timothy Jackson: Right. “Random” can mean “not specifically directed toward a particular function,” but there are still biased probability distributions. That’s the main point: we have something like a population of potential phenotypes, with complex pathways to each, not a simple discrete set. We see that in genotype-phenotype mappings. Then selection happens continuously across timescales, not just in discrete generational steps.

I’m curious how you view it. Peirce’s abduction is about a creative leap to a hypothesis not strictly determined by data. Transduction, from Simondon, involves transforming a signal into structure, or one structure into another. They call this “natural induction,” yet they acknowledge abduction and transduction. How does that land for you?

Matt Segall: In my view, Peirce’s abduction is another way to articulate inductive bias and how organisms guess correctly. Richard Watson talks about anticipation in these systems as if they form internal models of environmental rhythms. The system that looks predictive is essentially shaped by repeated cycles. It’s “as if” it’s predicting, but really it’s adapting based on prior imprints.

Metabolism is a strong example: organisms internalize environmental cycles, building autonomy. It’s predicated on the assumption that future patterns will resemble past ones—there’s the inductive bias.

Sometimes we talk about it as “minimizing surprise,” which might sound like keeping things safe. But Whitehead or Peirce’s abduction suggests there’s also creativity and novelty, not just a stable solution. Living systems often take risks, explore, move beyond local minima. That’s where abduction, rather than mere induction, comes into play.

Timothy Jackson: Indeed, one can say living systems don’t just sit in the “dark room.” They explore, driven by environmental changes and internal drives. It’s about learning how to handle novelty. The free energy principle can incorporate that through path-based accounts.

Your point about abduction is that there’s always something in excess of data, a creative hypothesis that then gets tested. The data underdetermines the theory. There’s a leap, which is abduction. Then induction tests it. The same logic applies physically if a system has internal frustration or sequestered degrees of freedom. Many ways of generalizing the same data. That’s novel formal causation in the world.

Natural induction, as they describe it, shows how second-order structural changes can encode habits and produce new solutions. It’s basically abduction plus induction physically instantiated. It resonates with the idea that nature itself does the “coarse-graining,” so we see genuine novelty. That’s how matter can behave in a mind-like manner under certain conditions.

Matt Segall: Right. It’s reminiscent of Darwin’s distinction between natural and artificial selection—humans inform the process consciously, while nature does so unconsciously. But we want a more general account that doesn’t require that stark separation. Nature itself is already technical or “artistic,” so to speak, so formal causality is active at all levels. That means our own cognitive capacities aren’t an exception; they’re a continuum of the same processes.

Timothy Jackson: Yes. Darwin gave us the logic, but he didn’t fully generalize it. He stayed focused on biology, leaving broader metaphysics to others. People like Haeckel or Spencer ran with the bigger picture. There’s also human exceptionalism in Darwin’s milieu, even though Darwin himself pulled humans down into nature. We’re still grappling with whether humans are just machines or whether mind is everywhere. I think the second is more accurate—we see mind-like processes all the way down.

Matt Segall: Exactly. I recall John Cobb framing it as, post-Darwin, either we reduce human consciousness to the machine level or we raise organisms to the level of mind, with continuity up to us. Darwin forces that choice.
