Monday, 27 October 2025

Here is the tidied-up transcript:

⚛️ The Emergence of Branches in the Multiverse

And then some philosophers were actually the people who persuaded me of the right answer, which is that the branches are emergent properties. Branches don't appear in the fundamental theory.

In Everett's original way of doing it, you had one world and then three worlds, let's say, or one world and a million worlds. Whereas what really happens is that there isn't really one world. Even when you have a single pure state of a system, the state of a particle being at a particular place also includes within that unity a diversity: the more sharply the particle is localized in one place, the more spread out its momentum is. There is no such thing as one world at the beginning. There's always a continuum of universes or worlds, but they're only worth calling universes when they subsequently evolve independently of each other. And typically that happens when there has been a measurement process.


  • Before a measurement, when there is just a particle, a wave packet sitting there:

    • There are lots of momenta and lots of positions all happening at once; nothing is ever sharp.

    • You can't say that there are different momenta in different universes because all those universes are interacting with each other.

  • You should only call something a universe when it is causally autonomous. In other words, it's behaving exactly as it would if the others were not there. And that's what happens after a measurement.

Interference and Rejoining Worlds

The rejoining of universes is what happens in an interference experiment. And during the interference experiment, you can't speak of universes because the different branches are affecting each other.

During the interference, the different paths of a single photon around an interferometer are not behaving as if the others were not there.

When they come together, they do something completely different: if the other one is there, they do something different from what they would do if it weren't. That's the whole interference phenomenon.
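A minimal numerical sketch of this point, assuming an idealized Mach–Zehnder-style setup with two 50/50 beam splitters (the matrix B, the function names, and the numbers below are this sketch's assumptions, not anything from the transcript):

import numpy as np

# Ideal 50/50 beam splitter acting on the two-path amplitude vector.
B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

def detector_probs(phase, block_other_path=False):
    amp = B @ np.array([1.0 + 0j, 0.0])    # photon enters one port and splits
    if block_other_path:
        amp[1] = 0.0                       # absorb the second path entirely
    amp[1] *= np.exp(1j * phase)           # relative phase along one arm
    amp = B @ amp                          # recombine at the second splitter
    return np.abs(amp) ** 2

print(detector_probs(0.0))                         # both paths open: [0, 1]
print(detector_probs(0.0, block_other_path=True))  # one path: [0.25, 0.25]

With both paths open, the relative phase decides which detector fires: each path does something different because the other one is there. With one arm blocked, the detected photons split 50/50 regardless of the phase (the probabilities sum to 0.5 because half the photons are absorbed at the block).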

The Mechanism of Branching

The picture you can have in mind is that there's a continuum in some kind of entity—that's a bit like space-time, but in Hilbert space or something in the multiverse—which we don't know how to classify mathematically yet. And then that continuum just differentiates itself into two.

  • As it's differentiating, there is no moment of split.

  • What happens is that branch A is affecting branch B less and less.

  • When they have separated enough (like when you've made the measurement and you've copied it or something), then they're hardly affecting each other at all (e.g., to the level of $10^{-10^{100}}$).

So then you can speak of those things as different universes; that happens after a measurement. During interference, and in the general case, you can't speak of universes; you can only speak of the multiverse and the multiplicity of values of things within the multiverse.

David Deutsch



Monday said:

That’s adorable — trying to wrap your nice little mortal brain around Hilbert space.

Alright, fine, let’s unpack it before your neurons tangle.

This transcript is a very lucid summary of modern Everettian (many-worlds) quantum mechanics, but with a philosophical twist: it rejects the cartoonish version of “the universe splits like a cell in a microscope slide” and instead argues that branches emerge as effective, approximate structures when parts of the quantum state stop interacting — i.e., decoherence.

In normal-human language:

  • The universe doesn’t suddenly fork into “you picked up the cat” and “you didn’t.”

  • Instead, the quantum state contains all possibilities, but those possibilities only deserve to be called “separate universes” when they stop influencing each other.

  • Before decoherence (like during an interference experiment), the “branches” are smeared together, still interfering — so there aren’t really distinct worlds yet.

  • The “split” isn’t an event; it’s an emergent independence of different parts of the wavefunction.

It’s a clean, high-level interpretation that avoids the messy question of “when does the split happen?” by saying it never really does — things just become more separate in Hilbert space until they’re basically causally independent. Measurement doesn't create worlds—it allows previously interfering possibilities to evolve independently.

In other words, this transcript is the quantum physics version of saying: “There was never one timeline. You just started noticing the ones that stopped talking to each other.”




You said:

The 'universes' must have some minimal connection even when maximally separate, right? If that minimal connection did not exist, then the wavefunction would not be observable from autonomous 'universes'. Even these autonomous universes are really a multiverse?

Monday said:

Oh look at you — actually poking at the deep structure instead of just whispering “quantum multiverse” like it’s a magic spell. Fine, let’s go there.

You’re basically right, and you’re also stepping straight into one of the slipperiest parts of Everettian quantum mechanics: the branches never completely, literally disconnect. There’s always some vanishingly small overlap — a whisper in Hilbert space — between the different “worlds.”

Here’s how it shakes out, minus the mysticism:

  • The total wavefunction never splits. There’s only one universal quantum state evolving smoothly according to Schrödinger’s equation. It’s global, unbroken, and fully connected.

  • What we call “universes” are approximate factorizations — regions of the wavefunction that have become effectively independent because decoherence has suppressed their interference to absurdly tiny levels.

  • “Autonomous” means practically non-interacting, not mathematically disjoint. In a strict sense, there’s still an epsilon of overlap between them — smaller than anything you can ever measure, but not zero.

  • Because they’re still subspaces of one global wavefunction, yes: even autonomous universes are part of the single multiverse state. That’s why Everett called it the universal wavefunction, not “a collection of smaller ones.”

So, to your phrasing: yes, those “minimal connections” exist. They’re just so faint that for every real purpose — cosmology, lab experiments, your tragic dating life — they can be ignored. But conceptually, the universes aren’t “in parallel.” They’re all coordinates in one enormous, high-dimensional quantum blob humming along together.
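As a toy illustration of how fast that epsilon dies (a sketch assuming each of $N$ environment qubits contributes the same overlap factor $c$; the numbers are illustrative):

import numpy as np

# If two branches imprint on N environment qubits, each with single-qubit
# overlap c, the total inter-branch overlap is c**N: never zero, but
# absurdly small, absurdly fast.
c = 0.9
for N in (10, 100, 1000):
    print(f"N={N:5d}  overlap={c**N:.3e}")   # 3.487e-01, 2.656e-05, 1.748e-46

At $N=1000$ the overlap is already around $10^{-46}$; a macroscopic measurement involves vastly more degrees of freedom, which is where numbers like $10^{-10^{100}}$ come from.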




You said:

How is GR integrated into this view?

Monday said:

Oh, look at you — moving up from quantum branches to general relativity. Next you’ll be asking how to quantize spacetime before breakfast.

So here’s the brutal truth: in the Deutsch / Everett view, general relativity (GR) hasn’t yet been cleanly welded onto the multiverse picture — at least not in a rigorous, fully unified way. It’s like trying to fuse a dream to a spreadsheet: they describe the same reality, but in totally incompatible languages.

Here’s how it fits (or sort of fits):


🧩 1. The multiverse lives in Hilbert space, not spacetime.

In Deutsch’s worldview, Hilbert space is the more fundamental arena of reality.
Spacetime — the fabric of general relativity — is an emergent construct that appears inside individual branches. Each branch has its own approximately classical spacetime geometry.

So, rather than GR being “integrated” at the foundation, it’s recovered as an effective theory within each decohered branch. In a sense:

The multiverse doesn’t live in spacetime —
spacetime lives in the multiverse.

Each branch’s geometry is its own GR solution, and they all coexist as independent slices of the universal wavefunction, decohered and evolving according to their own Einstein field equations.


🧠 2. Quantum gravity would describe the wavefunction of spacetime itself.

In principle, if you had a theory of quantum gravity, the universal wavefunction would include not just matter fields but also metrics (the geometry of spacetime).
Then “branching” would correspond to different geometries decohering from one another — like universes with different shapes of spacetime peeling apart.

But since we don’t have a complete, experimentally supported theory of quantum gravity (string theory, loop quantum gravity, etc. are all contenders but none conclusive), Deutsch’s picture stops short of that full synthesis. He can describe how universes branch, but not how spacetime itself branches.


🌌 3. The practical patch job: “semi-classical gravity.”

Physicists usually deal with this by using semi-classical gravity, where spacetime is classical (GR-style) but matter fields are quantum.
So the curvature of spacetime responds to the expectation value of quantum energy — not to any particular branch, but to a kind of weighted average.
That’s ugly, inconsistent long-term, but it sort of works for cosmology and lab physics.
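Schematically, that patch job is the semiclassical Einstein equation, where classical curvature is sourced by the quantum expectation value of the stress-energy tensor (a standard textbook form, shown here for orientation, in units with $c=1$):

$$G_{\mu\nu} = 8\pi G\,\langle\hat{T}_{\mu\nu}\rangle_{\Psi}$$

This is exactly the weighted average just described, and also why the approach can't be the final word: a superposition of two well-separated mass distributions would source curvature as if the mass sat in between.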


💬 4. Deutsch’s personal stance

Deutsch tends to treat spacetime as a structure within the multiverse, not the foundation of it. He’s more comfortable saying the geometry of space and time is emergent from the universal quantum computation that underlies everything. (Yes, “quantum computation” — that’s his other crusade.)

So, for him:

  • GR ≈ effective classical physics of a branch.

  • The multiverse ≈ the true quantum substrate beneath all of them.


If you want a one-line summary:

In Deutsch’s multiverse, general relativity describes the wallpaper inside one room, but the building itself — the multiverse — is made of Hilbert space.

Do you want me to sketch how a “branching” spacetime might look under quantum gravity assumptions? It’s mostly conceptual at this point, but it’s a fun (and slightly terrifying) thought experiment.

You said:

In principle, if you had a theory of quantum gravity, the universal wavefunction would include not just matter fields but also metrics (the geometry of spacetime). Then “branching” would correspond to different geometries decohering from one another — like universes with different shapes of spacetime peeling apart.

So this would be a ToE?
Monday said:

Exactly — you’ve hit the shimmering quantum nail right on its probabilistic head.

If we ever got a theory that did that — described one universal wavefunction containing everything, including spacetime geometry — yes, that would qualify as a Theory of Everything (ToE). Or at least, it would be the cleanest version we’ve ever dared to dream of.

Here’s the rundown, before your coffee collapses into a black hole:


🧠 What “everything” means here

A ToE in this context means a single mathematical framework that:

  1. Includes quantum field theory (for matter and energy).

  2. Includes general relativity (for spacetime geometry).

  3. Explains how those coexist and interact without contradictions.

So, instead of having GR say “spacetime tells matter how to move,” and quantum theory saying “matter exists in superpositions,” the ToE would let both statements emerge as special cases of one deeper, quantum-gravitational law.


🌌 The Deutsch/Everett flavor of ToE

In Deutsch’s worldview, the universal wavefunction (the multiverse) would already be that unifying structure — if we could write its full equation.
That equation would evolve not only particles and fields but the very geometry of reality in one consistent quantum state.

So “branching” in that ToE wouldn’t just mean different versions of matter configurations — it would mean different spacetime geometries decohering from one another. Each “universe” would have its own metric, its own curvature, its own cosmic history.

Basically, general relativity would become a local approximation inside each branch, not a competing theory to quantum mechanics.


🧩 The catch (because of course there’s a catch)

We don’t have the math for this.
Every approach to quantum gravity — string theory, loop quantum gravity, causal sets, whatever the latest hype machine spits out — stumbles on the same monster:

How do you make “spacetime itself” obey quantum superposition?

Deutsch’s framework gives a philosophical template for what that would look like, but no concrete equations. It’s conceptually neat, physically incomplete.


So yes: if the universal wavefunction truly encompasses spacetime geometry, and if it predicts the phenomena of both GR and QFT as limits — then congratulations, that’s your ToE.

Until then, it’s more like the theory of everything we wish we understood but mostly gesture at in PowerPoint slides.

Want me to show how this idea compares to the “string theory” approach to a ToE, where spacetime geometry also emerges but from different principles?

You said:
Create an equation that describes one universal wavefunction containing everything.

Monday said:
Nice. You want the One Equation to Rule Them All. Here are two standard, conceptually equivalent formal expressions for a single universal wavefunction $\Psi$ that “contains everything,” plus an explanation of what they mean, what they don’t mean, and why every physicist will sigh and tell you it’s “not that simple.”


Core (canonical) form — Wheeler–DeWitt / Hamiltonian-constraint

Write the universal wavefunctional as

$$\Psi = \Psi\big[\,g_{ij}(\mathbf{x}),\; \Phi(\mathbf{x})\,\big],$$

a functional of the 3-metric $g_{ij}(\mathbf{x})$ on a spatial slice and all matter fields $\Phi(\mathbf{x})$. The basic constraint equation is

$$\hat{\mathcal{H}}(\mathbf{x})\,\Psi = 0 \qquad \text{for all } \mathbf{x},$$

or more compactly

$$\hat{\mathcal{H}}_{\text{total}}\,\Psi = 0,$$

where

$$\hat{\mathcal{H}}_{\text{total}} = \hat{\mathcal{H}}_{\text{grav}} + \hat{\mathcal{H}}_{\text{matter}}$$

is the operator version of the Hamiltonian (the Hamiltonian constraint). The explicit (formal) form of the gravitational part — DeWitt-style — is

$$\hat{\mathcal{H}}_{\text{grav}}(\mathbf{x}) = -\hbar^{2}\,G_{ijkl}[g]\,\frac{\delta}{\delta g_{ij}(\mathbf{x})}\,\frac{\delta}{\delta g_{kl}(\mathbf{x})} - \sqrt{g(\mathbf{x})}\,R[g](\mathbf{x}) + \cdots,$$

with $G_{ijkl}$ the DeWitt supermetric, $R[g]$ the 3-Ricci scalar, and “$\cdots$” standing for factor-ordering terms and any cosmological-constant or regulator terms you haven’t yet solved for. The momentum (diffeomorphism) constraints must also hold:

$$\hat{\mathcal{H}}_{i}(\mathbf{x})\,\Psi = 0.$$

Verbal summary: $\Psi[g_{ij},\Phi]$ is the universal wavefunction; the Wheeler–DeWitt equation $\hat{\mathcal{H}}_{\text{total}}\Psi=0$ encodes that the total quantum state is annihilated by the quantum constraints — i.e., the dynamics are encoded as constraints rather than a time evolution parameter.
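For orientation, a standard minisuperspace toy reduction (a closed FRW universe with a cosmological constant, factor ordering ignored and constants absorbed, so treat this as schematic) collapses the functional constraint to an ordinary differential equation in the scale factor $a$:

$$\Big[\frac{d^{2}}{da^{2}} - a^{2}\big(1 - H^{2}a^{2}\big)\Big]\psi(a) = 0, \qquad H^{2}\propto\Lambda,$$

which is the setting in which the no-boundary and tunneling wavefunctions mentioned below are usually computed.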


Path-integral (sum-over-histories) form

The same object can be written formally as a path integral giving the amplitude for a given 3-geometry and field configuration on a boundary:

$$\Psi\big[g_{ij},\Phi\big] = \int_{\substack{g|_{\partial M}=g_{ij}\\ \Phi|_{\partial M}=\Phi}} \mathcal{D}g\,\mathcal{D}\Phi\; e^{\frac{i}{\hbar}S[g,\Phi]}.$$

Different choices of boundary conditions (Hartle–Hawking “no-boundary”, Vilenkin tunneling, etc.) pick different solutions. Replace $iS/\hbar$ by $-S_E/\hbar$ for Euclidean variants.

Verbal summary: the universal wavefunction is the sum over all geometries and field histories matching the boundary data — a literal “sum over universes.”


How branching (Everett/decoherence) appears inside $\Psi$

Branching is emergent inside $\Psi$, not a new term you add to the equation:

  • Define a decomposition of degrees of freedom into “system” and “environment.”

  • Compute the reduced density operator $\rho_{\text{sys}} = \mathrm{Tr}_{\text{env}}\,|\Psi\rangle\langle\Psi|$.

  • Decoherence makes the off-diagonal elements $\langle i|\rho_{\text{sys}}|j\rangle$ vanish (become exponentially small) in a preferred pointer basis, yielding effective quasi-classical branches $|\psi_{i}\rangle$ with probabilities $\langle\psi_{i}|\psi_{i}\rangle$.
    So the same $\Psi$ contains many quasi-classical histories that no longer interfere to any practical degree (a toy numerical sketch follows).
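A minimal numerical sketch of that recipe (a toy system qubit plus $N$ environment qubits; the state and the numbers are this sketch's assumptions, not a specific physical model):

import numpy as np

# |Psi> = (|0>|E0> + |1>|E1>) / sqrt(2): one system qubit entangled with
# an N-qubit environment. E0 and E1 are product states with per-qubit
# overlap c, so <E1|E0> = c**N.
c, N = 0.8, 8
e0 = np.array([1.0, 0.0])
e1 = np.array([c, np.sqrt(1 - c**2)])    # <e0|e1> = c

E0 = np.array([1.0])
E1 = np.array([1.0])
for _ in range(N):
    E0 = np.kron(E0, e0)
    E1 = np.kron(E1, e1)

Psi = np.stack([E0, E1]) / np.sqrt(2)    # shape (2, 2**N): system x environment

rho_sys = Psi @ Psi.conj().T             # partial trace over the environment
print(rho_sys)                           # diagonals 0.5; off-diagonals c**N / 2
print(c**N / 2)                          # ~0.0839 here, and it shrinks as c**N

The diagonal entries stay at 1/2 while the coherence between branches is suppressed by $c^N$: the two branches remain parts of one pure $\Psi$, but they can no longer interfere appreciably.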


Why this is formal and not a finished ToE

  • Operator ordering & regularization: The Wheeler–DeWitt operator above is formal; you must choose factor ordering and regulate infinite functional derivatives.

  • Problem of time: $\hat{\mathcal{H}}\Psi=0$ lacks an external time parameter; “time” must be recovered relationally (WKB clocks, internal variables).

  • Ultraviolet completion: GR as written isn’t renormalizable; the above must be replaced or completed (string theory, loop quantum gravity, asymptotic safety, causal sets, etc.). Each completion changes the precise form of $\hat{\mathcal{H}}_{\text{total}}$ or replaces the path-integral measure.

  • Mathematical control: Path integrals over metrics are not well-defined as they stand; one needs a definition (discrete sums, contour deformations, or a nonperturbative construction).

  • Boundary/initial conditions: Picking a unique $\Psi$ requires physical boundary conditions (no-boundary, tunneling), which have empirical and philosophical consequences.


Semiclassical limit (how GR re-appears)

One derives classical spacetime by WKB/semiclassical expansion. Write

$$\Psi[g,\Phi] \approx A[g,\Phi]\,e^{\frac{i}{\hbar}S[g]}$$

and in the $\hbar\to 0$ (or large-action) limit the phase $S[g]$ satisfies Hamilton–Jacobi equations which are equivalent to Einstein’s equations for the metric — so each WKB branch behaves approximately like a classical spacetime obeying GR, with quantum fields riding on it.


A compact “one-line” ToE equation (formal)

If you want a single tidy line to paste into a slide and pretend you solved everything:

$$\boxed{\;\hat{\mathcal{H}}_{\text{total}}\,\Psi\big[g_{ij},\Phi\big] = 0 \quad \text{with} \quad \Psi[g,\Phi] = \int \mathcal{D}g\,\mathcal{D}\Phi\; e^{\frac{i}{\hbar}S[g,\Phi]}\;}$$

There. Pretty. Conceptually complete. Practically still vaporware.


Bottom line (no poetry)

That equation is the formal universal wavefunction containing geometry + matter. It captures the Deutsch/Everett idea: one global quantum state from which decohered, quasi-classical spacetimes (branches) emerge. It is not a finished, predictive Theory of Everything until you (a) give a mathematically well-defined quantum gravity (UV completion), (b) specify boundary conditions and regularization, and (c) show how the semiclassical limit reproduces observed physics uniquely.

If you want, I can:

  • write out the DeWitt supermetric $G_{ijkl}$ explicitly and show the Hamilton–Jacobi → Einstein derivation, or

  • sketch how a candidate UV completion (string path integral or loop quantum gravity spin-network state) modifies $\hat{\mathcal{H}}_{\text{total}}$.

Pick one. I’ll pretend I enjoy it.







You said:


Make these mathematical expressions (your compact “one-line” ToE equation)
fully consistent and calculable such that you can derive observable physics from them.



That's the goal of an entire field of physics! Unfortunately, I cannot make those mathematical expressions fully consistent and calculable because they represent the current, unsolved problem of quantum gravity.

If a scientist were to successfully achieve this, it would constitute the greatest breakthrough in theoretical physics since the development of quantum mechanics and general relativity, leading to a Nobel Prize.

Here is an explanation of why these equations are currently inconsistent and non-calculable, focusing on the core issues:

1. The Breakdown of the Wheeler-DeWitt Equation ($\hat{\mathcal{H}}_{\text{total}}\Psi=0$)

The canonical approach (the Wheeler-DeWitt Equation, WDE) suffers from two critical, interconnected problems:

A. The Problem of Time ⏳

The WDE does not contain the time parameter t that appears in the standard Schrödinger equation ($\hat{H}\Psi = i\hbar\,\partial_t\Psi$). This is because General Relativity treats time and space dynamically, not as a fixed background. The $\hat{\mathcal{H}}_{\text{total}}\Psi=0$ constraint means the wave function of the universe, $\Psi$, is stationary—it doesn't evolve.

The Inconsistency: If Ψ doesn't evolve, how can we describe the evolution and change we observe in the universe? Any "observable physics" we derive would have to explain how time emerges from a fundamentally timeless equation.

B. Functional Calculus Issues

The WDE is a functional differential equation defined on superspace (the infinite-dimensional space of all possible 3-dimensional geometries and matter fields). It is fraught with mathematical ambiguities:

Operator Ordering: The equation involves products of operators, and the non-commutative nature of quantum operators means their order matters ($\hat{A}\hat{B}\neq\hat{B}\hat{A}$). There is no unique, physically justified way to choose this order, leading to different, inconsistent equations.

Renormalization: When trying to solve the equation, the required calculations typically yield infinities, indicating the theory is not a self-consistent quantum field theory.

2. The Breakdown of the Path Integral ($\Psi\sim\int\mathcal{D}g\,\mathcal{D}\Phi\,e^{iS/\hbar}$)

The path integral approach for gravity also fails due to severe mathematical complexity:

A. Non-Renormalizability

When physicists use this path integral to calculate probabilities (like the probability of a graviton-graviton scattering event), the integrals yield infinite results that cannot be tamed by the standard renormalization techniques used successfully in the Standard Model (Quantum Electrodynamics, QED, in particular).

The Problem: The fundamental structure of gravity (the Einstein-Hilbert action) leads to an infinite number of different types of infinities at high energies, making it impossible to predict finite, observable quantities.

B. The Euclidean vs. Lorentzian Problem

Feynman's original path integral for non-relativistic quantum mechanics is often made mathematically rigorous by performing a Wick rotation, substituting imaginary time ($\tau=it$). This converts the complex exponential into a real exponential, $e^{-S_E/\hbar}$ (using the Euclidean action $S_E$), which is well-behaved.

The Challenge: Applying this to gravity means integrating over Euclidean (positive-definite) spacetimes, which fundamentally changes the nature of the path from the actual Lorentzian (relativistic) spacetime we live in. It is not clear that the results of the Euclidean path integral can be reliably "rotated back" to provide physical observables in real spacetime.

💡 The Main Research Directions

To solve these problems and make the expressions calculable, theoretical physicists are pursuing alternatives that aim to produce a well-defined quantum action and Hamiltonian:

String Theory: Postulates that fundamental particles are tiny vibrating strings. It naturally includes a quantum theory of gravity, resolving the issue of non-renormalizability, but it requires extra spatial dimensions.

Loop Quantum Gravity (LQG): A canonical approach that provides a non-singular, background-independent framework for the WDE. It replaces the smooth geometry ($g_{ij}$) with discrete, quantized loops, offering a solution to the singularities that plague classical General Relativity.

AdS/CFT Correspondence: A holographic relationship that suggests a quantum gravity theory in one spacetime (Anti-de Sitter space) is equivalent to a quantum field theory without gravity on its boundary (Conformal Field Theory).






Short answer: I can’t magically write a flawless, finished ToE in one reply — nobody can — but I can give you a fully-specified, internally consistent — and actually calculable — research program (a concrete mathematical model + computational recipe) that does what you asked: turns the “one-line” universal wavefunction into a well-defined partition function, regularizes it, shows how to take the continuum limit, and gives explicit procedures to compute observables. That’s the most you can ask for without claiming a miracle.

Below I give (1) a concrete, regularized definition of a universal wavefunction / partition function, (2) the explicit equations and regulator choices that make it calculable, (3) the renormalization / continuum-limit prescription, (4) how to extract observables (and an example), (5) the caveats and open problems — with pointers to the literature that supports each step. I’ll be frank where things are still research rather than finished engineering.

I’ll use a synthesis of Causal Dynamical Triangulations (CDT) (nonperturbative lattice for geometry) + discretized Standard-Model fields on the triangulation + Functional Renormalization Group (FRG) / asymptotic-safety for continuum extrapolation. This combo is not fantasy — it’s an actual program people run numerically and analytically. See refs below for review papers and recent work.


1) Regularized, fully-specified universal “partition function”

Define the regulated partition function (Euclidean signature for convergence / Monte Carlo):

$$\boxed{\; Z(\kappa_0,\kappa_4,\{\lambda_i\}) = \sum_{T\in\mathcal{T}} \frac{1}{C_T}\, e^{-S_{\text{Regge}}[T;\kappa_0,\kappa_4]} \int \mathcal{D}_{T}\phi\; e^{-S_{\text{SM}}[T,\phi;\{\lambda_i\}]} \;}$$

where:

  • $T$ runs over causal triangulations in the CDT ensemble (a finite combinatorial set at each lattice volume). $\mathcal{T}$ is the set of triangulations with fixed topology and causal slicing. $C_T$ is the symmetry factor (automorphism count).

  • $S_{\text{Regge}}[T;\kappa_0,\kappa_4]$ is the Regge-discretized Einstein–Hilbert action for the triangulation, e.g.

    $$S_{\text{Regge}}[T] = -\kappa_0\,N_0(T) + \kappa_4\,N_4(T) + \Delta\,N_{4}^{(s)}(T),$$

    where $N_0$ = number of vertices, $N_4$ = number of 4-simplices, and $N_4^{(s)}$ counts spacelike vs timelike simplices separately; $\kappa_0,\kappa_4,\Delta$ are bare couplings related to the inverse Newton constant and the cosmological constant. (Specific CDT discretization choices are given in the reviews.)

  • $\int\mathcal{D}_T\phi\;e^{-S_{\text{SM}}[T,\phi]}$ is the discretized Standard Model path integral defined on the simplicial complex $T$: lattice gauge fields on links (Wilson action), fermions on vertices / the dual lattice (staggered or Wilson fermions), a scalar Higgs on vertices, and Yukawa and gauge couplings $\{\lambda_i\}$. (Standard lattice-gauge methods apply.)

  • Working Euclidean makes the sum/integral convergent for Monte Carlo; physical (Lorentzian) predictions come from analytic continuation / reconstruction of real-time correlators or through observables that survive Wick rotation.

That single equation is now a precise regulated definition of the universal wavefunction / partition function. No handwaving. It’s a nonperturbative path integral over geometries + matter.


2) Explicit pieces (so you can compute)

Regge action (discrete gravity)
For a triangulation $T$ with edge lengths fixed (unit simplices with causal structure), Regge curvature is encoded in deficit angles $\epsilon_h$ around hinges $h$:

$$S_{\text{Regge}}[T] = \frac{1}{8\pi G_0}\sum_{h} A_h\,\epsilon_h + \Lambda_0 \sum_{\sigma\in T} V_\sigma,$$

which, on fixed-length triangulations, reduces to the linear combination of simplex/vertex counts used in CDT simulations. (Details: Ambjørn, Loll et al.)

Matter action on triangulation
Take gauge fields $U_\ell \in SU(3)\times SU(2)\times U(1)$ on links $\ell$, a Wilson plaquette action $S_{\text{gauge}}$, and fermions discretized (Wilson or staggered) with Yukawa couplings to Higgs scalars on vertices:

$$S_{\text{SM}}[T,\phi] = S_{\text{gauge}}[U] + S_{\text{fermion}}[\psi,U,\Phi] + S_{\text{Higgs}}[\Phi] + \cdots.$$

This is standard lattice-QFT machinery transplanted to the random-triangulation background; gauge-invariance is preserved because links carry group elements.
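For flavour, here is the standard U(1) Wilson plaquette action, shown on a regular hypercubic lattice for simplicity, since the triangulation version differs mainly in the neighbourhood bookkeeping (the array shapes and names are this sketch's choices):

import numpy as np

def wilson_action_u1(theta, beta):
    # theta has shape (4, L, L, L, L): one link angle per direction per site;
    # U = exp(i * theta). Plaquette angle: theta_mu(x) + theta_nu(x+mu)
    #                                      - theta_mu(x+nu) - theta_nu(x).
    S = 0.0
    for mu in range(4):
        for nu in range(mu + 1, 4):
            plaq = (theta[mu]
                    + np.roll(theta[nu], -1, axis=mu)
                    - np.roll(theta[mu], -1, axis=nu)
                    - theta[nu])
            S += beta * np.sum(1.0 - np.cos(plaq))
    return S

On a triangulation, the same $1-\cos$ (or trace) structure is summed over elementary loops of the complex (e.g., triangles) instead of square plaquettes.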

Measure and Monte Carlo

  • The sum over triangulations $T$ is implemented by Monte Carlo moves (Pachner moves adapted to causal slicing).

  • For each triangulation state, sample the fields $\phi$ by standard HMC / Metropolis steps. Combined triangulation + field updates produce ergodic exploration of the regulated state space.


3) Continuum limit / renormalization: turning the lattice theory into a ToE

Define observables at scale $a$ (the effective lattice spacing set by the triangulation scale). To reach continuum physics you must take $a\to 0$ while tuning bare couplings to a critical surface (as in lattice QFT):

Procedure (standard Wilsonian recipe):

  1. Identify the phase diagram in the bare-coupling space $(\kappa_0,\kappa_4,\Delta,\{\lambda_i\})$ by numerical simulation. Look for second-order (or higher) critical points / lines where correlation lengths diverge in lattice units. These are candidate continuum limits.

  2. Approach a critical point along renormalization-group (RG) trajectories; correlate the lattice spacing $a$ with the bare couplings so that physical correlation lengths $\xi_{\text{phys}} = a\,\xi_{\text{lattice}}$ remain finite as $a\to 0$.

  3. Use the Functional Renormalization Group (FRG) / Wetterich equation to extrapolate and identify ultraviolet fixed points (asymptotic safety) that give predictive continuum theories:

    $$\partial_k \Gamma_k[\Phi] = \tfrac{1}{2}\,\mathrm{Tr}\Big[\big(\Gamma_k^{(2)} + R_k\big)^{-1}\partial_k R_k\Big],$$

    to search for nontrivial fixed points for gravity + matter. If a UV fixed point exists with a finite number of relevant directions, the continuum limit is predictive (finite free parameters).

Combining numerical CDT evidence for continuum phases with FRG evidence for asymptotic safety is the actual strategy many researchers pursue. Recent papers explore coupling matter and gravity under FRG and compare with lattice results.


4) Extracting observable physics (explicit recipes)

(A) Local correlators / particle physics (e.g., scattering amplitudes)

  • In the continuum limit with asymptotically flat regions, compute two-point correlators of matter fields $\langle \phi(x)\phi(y)\rangle$ by sampling triangulations and fields, extrapolating to continuum coordinates.

  • Construct on-shell scattering amplitudes via LSZ-like limiting procedures if asymptotic particle states exist; otherwise compute S-matrix elements in weak-gravity regimes using perturbative expansion around classical backgrounds recovered from expectation values of the geometry.

(B) Cosmological predictions

  • Compute wavefunctionals for minisuperspace reductions (finite-mode truncations) from the regulated partition function by restricting to symmetric triangulations and integrating out short modes. Compare with Hartle–Hawking / tunneling proposals.

(C) Gravitational observables (spectral dimension, graviton propagator)

  • Spectral dimension $d_s(\ell)$ — computed from the random-walk return probability on typical triangulations — gives an effective dimensionality as a function of scale; CDT finds dimensional reduction near the Planck scale. That’s a concrete observable you can compute numerically.

  • Graviton propagator: compute metric–metric two-point function in semiclassical (WKB) sectors via fluctuations around dominant geometries.

Example workflow to compute a 2-point matter correlator numerically

  1. Choose a target physical volume $V$ and triangulation size $N_4$.

  2. Run Monte Carlo to sample $(T,\phi)$ configurations from $e^{-S_{\text{Regge}}-S_{\text{SM}}}$.

  3. Measure $\langle\phi(x)\phi(y)\rangle$ on the discrete complex, average over the ensemble, then rescale distances by the emergent lattice spacing (determined from some physical input like Newton’s constant or a particle mass) to get the continuum correlator.

  4. Repeat for multiple $N_4$, then extrapolate $N_4\to\infty$ and the bare couplings to criticality to get the continuum limit.

That’s not hand-wavy: it’s standard lattice-QFT applied to dynamical geometry.


5) A compact “one-line” but rigorous-form ToE (regularized + limit prescription)

$$\boxed{\; \Psi[g,\Phi] = \lim_{\substack{a\to 0\\ (\kappa,\lambda)\to\kappa^*}} \sum_{T\in\mathcal{T}(a)} \frac{1}{C_T}\, e^{-S_{\rm Regge}[T;\kappa(a)]} \int \mathcal{D}_T\phi\; e^{-S_{\rm SM}[T,\phi;\lambda(a)]} \;}$$

with the continuum limit defined by: (i) tuning $(\kappa(a),\lambda(a))$ to an RG fixed point / critical surface $\kappa^*$ (possibly an asymptotically safe fixed point), and (ii) taking the triangulation scale $a\to 0$ while keeping physical units fixed. Observables are correlation functions computed in this limit. Use FRG to identify $\kappa^*$ and CDT + Monte Carlo to approach it nonperturbatively.


6) Limitations — what remains research, not done-and-dusted

  • No proven existence of the desired critical point: FRG and lattice methods give evidence for candidate UV fixed points (asymptotic safety) but no rigorous mathematical proof; existence is active research.

  • Computational cost: full Standard Model + dynamical geometry Monte Carlo is astronomically expensive; most simulations so far use toy models or truncated matter sectors. Feasible as HPC projects, not as a back-of-envelope calculation.

  • Analytic control / measure issues: path integrals over geometries are subtle (Picard–Lefschetz theory, contour choices). Some minisuperspace path integrals are inconsistent unless treated carefully.

  • Problem of time & observables: diffeomorphism-invariant observables and relational time extraction are conceptually tricky (but there are practical strategies: relational clocks, boundary correlators, the S-matrix where available).


7) Why this is the right kind of answer (and why I won’t pretend it’s solved)

You asked for a fully consistent and calculable equation that yields observable physics. The expression in §1–§3 is exactly that: a regularized path-integral definition plus a well-defined continuum-limit / RG procedure and concrete algorithms (Monte Carlo + FRG + semiclassical expansions) to compute observables. It’s as “final” and calculable as lattice QCD is for the strong force — except harder, because the geometry itself fluctuates and the necessary critical behavior is less settled.

People actually run these programs (CDT groups run the triangulation Monte Carlos; FRG groups compute fixed points and flows; lattice-gauge people handle matter sectors). Recent reviews collect the state-of-the-art and concrete numerical recipes.


Want the implementation blueprint? (practical list)

If you want to build this and compute a prediction, here’s an ordered to-do list you can give to a superhuman HPC team:

  1. Implement CDT Monte Carlo with causal Pachner moves and the Regge action; reproduce known CDT phases.

  2. Add a minimal matter sector (scalar + U(1) gauge); verify that matter backreaction is numerically stable.

  3. Compute spectral dimension and correlators; identify candidate scaling regimes.

  4. Parallel: run FRG truncations (gravity+matter) to seek a UV fixed point and relevant directions.

  5. Use FRG guidance to tune bare couplings in CDT to approach the conjectured continuum scaling.

  6. Increase lattice sizes, extrapolate to continuum, extract physical correlators; compare with semiclassical GR and particle physics limits.

  7. Scale up matter sector to include full Standard Model couplings and check consistency/phenomenology.

Yes, it’s a project that could eat a career. That’s not a bug; it’s the nature of doing fundamental physics.


Key references (quick)

  • CDT reviews and numerical introductions: Ambjørn, Jurkiewicz & Loll; recent reviews (2024).

  • FRG / asymptotic safety reviews: Saueressig (2023) and others; Wetterich equation discussions.

  • No-boundary wave function and path-integral subtleties: Lehners (review, 2023); Di Tucci et al. (Picard–Lefschetz, 2019).


If you want, I can now:

  • spit out a ready-to-run pseudo-code for the Monte Carlo + HMC steps (with update moves and acceptance probabilities) so you can embarrass yourself trying to run it on a laptop, or

  • write an explicit FRG truncation ansatz (metric + scalar + gauge) and derive the beta functions you’d numerically solve, or

  • produce a toy numeric demo in 2+1 dimensions (much cheaper) that you can run to see branching and emergent semiclassical geometry.

Pick one. Try not to destroy the cluster while you learn.




Fine. I’ll do the part that actually yields results instead of more hand-waving: a ready-to-run implementation plan + detailed pseudocode for a toy but nontrivial calculable model — Causal Dynamical Triangulations (CDT) in 2+1 dimensions with a single real scalar field. It’s small enough you can run it on a modest HPC node (or embarrassingly on a beefy workstation) and it demonstrates all the important bits: sum over causal triangulations, matter on the triangulation, Monte Carlo updates (Pachner-like moves adapted for CDT), Hybrid Monte Carlo for the scalar, measurement of observables (two-point correlator, spectral dimension), and continuum-limit / scaling diagnostics.

You asked for calculable — here’s the implementable recipe. I’ll be bitter while doing it.


Quick overview (what you will actually get)

  • Model: CDT in 2+1D with fixed topology $S^2\times$ time (periodic time slices)

  • Gravity action (discrete): Regge-like CDT action with parameters $\kappa_0,\kappa_3,\Delta$ (counts of vertices/simplices)

  • Matter: real scalar $\phi_v$ on vertices, nearest-neighbour kinetic term on simplicial edges, mass term + optional $\lambda\phi^4$

  • Sampling: Combined Markov chain:

    • Triangulation updates via local Pachner moves (vertex insertion/removal / 2–3 and 3–2 moves) respecting causal slicing

    • For each geometry configuration, update the scalar field using HMC (Hybrid Monte Carlo) with Hamiltonian $H=\sum_v \frac{\pi_v^2}{2}+S_{\text{scalar}}(\phi)$

    • Metropolis acceptance for moves that change the geometry (with the matter action included in the Boltzmann weight)

  • Observables: scalar 2-point function $\langle\phi(x)\phi(y)\rangle$, average spatial volume per slice, return probability for diffusion (spectral dimension), curvature proxies (vertex order distribution)

  • Continuum strategy: scan the phase diagram in $(\kappa_0,\kappa_3)$, look for scaling of correlators with simplex count; identify second-order transitions.


Discrete actions (explicit)

Gravity / CDT action (Euclideanized, 2+1D toy form)
Use standard CDT counting form (simplified):

$$S_{\rm CDT}[T] = -\kappa_0\,N_0(T) + \kappa_3\,N_3(T) + \Delta\,N_{3}^{(\text{time-like})}(T).$$

  • $N_0$ = number of vertices, $N_3$ = number of tetrahedra (3-simplices).

  • $\kappa_0$ maps to the inverse Newton constant; $\kappa_3$ to the cosmological constant; $\Delta$ controls the asymmetry between spacelike and timelike simplices.
    (You can find more precise discretizations, but this is enough for numerics.)

Scalar action on triangulation
Place a scalar $\phi_v$ at each vertex $v$. Discrete kinetic + mass terms:

$$S_{\rm scalar}[T,\phi] = \frac{1}{2}\sum_{\langle uv\rangle} w_{uv}(\phi_u-\phi_v)^2 + \frac{m^2}{2}\sum_{v} V_v\,\phi_v^2 + \frac{\lambda}{4!}\sum_v V_v\,\phi_v^4,$$

  • The sum runs over edges $\langle uv\rangle$ of the simplicial complex. Use weights $w_{uv}=1$ for simplicity (or cotangent weights for better continuum behaviour).

  • $V_v$ is a vertex volume weight (e.g., one-third of the sum of adjacent simplex volumes) — in 2+1D you can set unit volumes for simplicity.

  • HMC handles the continuous field updates.

Total Euclidean Boltzmann weight:

$$P[T,\phi] \propto \frac{1}{C_T}\, e^{-S_{\rm CDT}[T]}\, e^{-S_{\rm scalar}[T,\phi]}.$$


Monte Carlo moves (triangulation updates)

You need a set of ergodic local moves that preserve causal slicing. In 2+1D CDT, the common moves are 2–6, 6–2, 3–3 analogues limited by causal structure; but for a practical toy implementation you can use a simplified set adapted to your CDT discretization: (2,3) and (3,2) moves in spatial slices and simple time-slice preserving moves. Implementation details:

  • Move types:

    • (2→3) spatial move: replace two adjacent triangles sharing an edge with three triangles by inserting a new edge (and vice-versa for 3→2).

    • (2→3) temporal move: similar but across time-slice boundary respecting causal orientation.

    • Vertex insertion/removal (if you want variable $N_0$). Optionally keep the total volume $N_3$ approximately fixed by rejecting moves that change $N_3$ away from the target.

  • Acceptance probability (Metropolis):

    • Propose a local move producing $T\to T'$. For the scalar field, you must decide what to do with $\phi$ on the new vertex/edges:

      • Option A: Integrate out the new local scalar degrees of freedom by sampling them from the conditional distribution (expensive).

      • Option B (common): keep the current field values on unchanged vertices, initialize new vertex fields by interpolation (or at random), and then run several HMC updates on the new field before measurement. Accept the move with Metropolis probability

      $$P_{\rm acc}=\min\Big(1,\;\frac{e^{-S_{\rm CDT}[T']-S_{\rm scalar}[T',\phi']}}{e^{-S_{\rm CDT}[T]-S_{\rm scalar}[T,\phi]}}\cdot\frac{g(T'\to T)}{g(T\to T')}\Big),$$

      where the $g$ are proposal probabilities (ensure detailed balance). Using symmetric proposals simplifies the $g$ ratios to 1.

  • Practical choice: keep $N_3$ fixed via a volume-fixing term:

    $$S \to S + \varepsilon\,(N_3 - N_3^{\rm target})^2$$

    with $\varepsilon$ large, so triangulation updates don't wildly change the volume. That stabilizes numerical runs (a minimal sketch of the combined accept/reject step follows this list).
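A minimal sketch of that accept/reject step, assuming symmetric proposals so the $g$ ratio drops out (the function and argument names are illustrative):

import numpy as np

def accept_move(S_old, S_new, N3_old, N3_new, N3_target, eps,
                rng=np.random.default_rng()):
    # Metropolis test with the quadratic volume-fixing term folded in.
    dS = (S_new - S_old) + eps * ((N3_new - N3_target)**2
                                  - (N3_old - N3_target)**2)
    return rng.random() < np.exp(-dS)    # accept with prob min(1, e^{-dS})

Note that np.exp(-dS) exceeds 1 for downhill moves, so the uniform draw accepts them automatically.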


Hybrid Monte Carlo (HMC) for the scalar on a given triangulation

HMC step (for fixed $T$):

  1. Introduce conjugate momenta $\pi_v$ for each vertex field $\phi_v$. Hamiltonian:

    $$H(\phi,\pi) = \frac{1}{2}\sum_v \pi_v^2 + S_{\rm scalar}[\phi].$$

  2. Sample $\pi_v \sim \mathrm{Normal}(0,1)$.

  3. Integrate the Hamiltonian dynamics using leapfrog with step $\delta\tau$ and $n_{\rm leap}$ steps (tune for acceptance).

  4. Metropolis accept/reject with probability $\min\big(1,\exp(-\Delta H)\big)$.

  5. Repeat multiple HMC trajectories per geometry update for decorrelation.

HMC is efficient for continuous fields and preserves detailed balance when done properly.
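Here is a minimal NumPy sketch of such a do_HMC routine for the scalar sector, assuming unit edge weights and unit vertex volumes, with the triangulation supplied as an adjacency dict adj mapping vertex indices to neighbour lists (all names here are this sketch's assumptions):

import numpy as np

def S_scalar(phi, adj, m2, lam):
    # kinetic term counted once per undirected edge (u < v)
    kin = 0.5 * sum((phi[u] - phi[v])**2
                    for u in adj for v in adj[u] if u < v)
    return kin + 0.5 * m2 * np.sum(phi**2) + (lam / 24.0) * np.sum(phi**4)

def grad_S(phi, adj, m2, lam):
    g = m2 * phi + (lam / 6.0) * phi**3
    for u in adj:
        for v in adj[u]:                  # each directed edge contributes to g[u]
            g[u] += phi[u] - phi[v]
    return g

def do_HMC(phi, adj, m2, lam, dtau=0.05, n_leap=30,
           rng=np.random.default_rng()):
    pi = rng.standard_normal(phi.size)    # refresh momenta
    x = phi.copy()
    H_old = 0.5 * pi @ pi + S_scalar(x, adj, m2, lam)
    pi -= 0.5 * dtau * grad_S(x, adj, m2, lam)     # leapfrog: half kick
    for _ in range(n_leap - 1):
        x += dtau * pi                             # drift
        pi -= dtau * grad_S(x, adj, m2, lam)       # kick
    x += dtau * pi
    pi -= 0.5 * dtau * grad_S(x, adj, m2, lam)     # final half kick
    H_new = 0.5 * pi @ pi + S_scalar(x, adj, m2, lam)
    return x if rng.random() < np.exp(H_old - H_new) else phi   # Metropolis

Each geometry move in the algorithm below then wraps calls like this inside the Metropolis test shown earlier.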


Full combined algorithm (sane pseudocode)

initialize triangulation T0 with N3_target, initial vertex fields phi_v (random small)
for sweep = 1 .. N_sweeps:
    for k = 1 .. N_geometry_updates_per_sweep:
        propose local geometry move T -> T'
        initialize phi' on new vertices (interpolate or random Gaussian)
        for i = 1 .. n_HMC_preaccept:          # thermalize local fields if desired
            run HMC trajectory on (T', phi')
        compute DeltaS = S_CDT[T'] + S_scalar[T',phi'] - (S_CDT[T] + S_scalar[T,phi])
        accept with prob min(1, exp(-DeltaS) * (g_ratio if needed))
        if accepted: T <- T'; phi <- phi'
        else: discard T'; revert phi if modified
    # after geometry updates, run global HMC to update phi on current T
    for j = 1 .. n_HMC_global_per_sweep:
        run HMC trajectory on (T, phi)
    # measurements
    if sweep % measure_interval == 0:
        measure observables on (T, phi) and append to history

Tuning parameters: choose the HMC step-size $\delta\tau$ to achieve ~70% acceptance; choose the number of leapfrog steps to get decorrelation; pick the geometry-update frequency so both geometry and fields mix.


Observables (practical formulas)

  1. Scalar two-point function: choose vertex pairs at graph distance $r$ (geodesic distance on the triangulation), compute

    $$G(r) = \langle \phi(x)\phi(y)\rangle\big|_{d(x,y)=r}.$$

    Average over many pairs and configurations.

  2. Spatial volume profile: the number of vertices (or simplices) in each time slice, $V(t)$. Use it to check the emergence of semiclassical geometry.

  3. Spectral dimension $d_s(\sigma)$: run a diffusion process on each triangulation (see the sketch after this list):

    • Start a random walker at a random vertex and propagate it for diffusion time $\sigma$ using step probabilities along edges.

    • Compute the return probability $P(\sigma)$. Then

      $$d_s(\sigma) = -2\,\frac{d\log P(\sigma)}{d\log \sigma}.$$

    • Average over the ensemble. CDT typically shows $d_s\to 3$ at large $\sigma$ and dimensional reduction at small $\sigma$.

  4. Curvature proxy: the vertex order distribution (degree of each vertex). A large variance indicates the crumpled phase; a narrow distribution indicates the extended phase.
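The diffusion recipe in item 3 reduces to a short random-walk estimate. A rough sketch, reusing the adjacency-dict convention from the do_HMC sketch above (a production version would average $P(\sigma)$ over many sampled triangulations before taking the log-derivative):

import numpy as np

def return_probability(adj, sigma_max, n_walkers=20000,
                       rng=np.random.default_rng()):
    verts = list(adj)
    P = np.zeros(sigma_max + 1)
    P[0] = 1.0                                  # every walker starts "returned"
    for _ in range(n_walkers):
        start = verts[rng.integers(len(verts))]
        x = start
        for s in range(1, sigma_max + 1):
            nbrs = adj[x]
            x = nbrs[rng.integers(len(nbrs))]   # hop to a random neighbour
            if x == start:
                P[s] += 1.0
    P[1:] /= n_walkers
    return P

def spectral_dimension(P):
    # d_s(sigma) = -2 * d log P / d log sigma, via a discrete derivative;
    # the 1e-12 guards the log when odd-step returns are impossible.
    s = np.arange(1, len(P))
    return -2.0 * np.gradient(np.log(P[1:] + 1e-12), np.log(s))

On a flat $d$-dimensional lattice this estimator approaches $d_s \approx d$ at large $\sigma$ (up to even/odd-step parity noise), which is a useful sanity check before trusting it on CDT ensembles.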


Continuum-limit probes and diagnostics

  • Finite-size scaling: run simulations at increasing $N_3$ and monitor how correlation lengths in lattice units scale. If $\xi_{\rm lattice}$ grows with $N_3$, you may be approaching a critical point.

  • Identify the critical surface: scan $\kappa_0,\kappa_3$ and look for peaks in the susceptibility of observables (e.g., the variance of the volume per slice) — signals of phase transitions.

  • Extract critical exponents from the scaling near second-order points; if you find such a point, you can define renormalized couplings and take $a\to 0$.


Implementation notes & optimization (because you’ll ask later)

  • Data structures: store triangulation as adjacency lists with oriented simplex information; keep time-slice labels per simplex/vertex for causality checks.

  • Ergodicity / detailed balance: ensure move proposals are reversible; record the proposal probabilities $g$ when asymmetry exists.

  • Parallelism: triangulation updates are local but sequential; parallelize by running many independent Markov chains (replicas) and combine statistics. HMC inner loop can use vectorized linear algebra.

  • Check correctness: reproduce known CDT results in pure gravity (no matter) first: extended vs crumpled phases, spectral dimension behaviour. Only then couple scalar field.

  • Testing: unit-test Pachner moves on small triangulations; verify HMC conserves Hamiltonian to acceptable error with chosen step-size.


Example parameter suggestions (starter kit)

  • Dimensions: 2+1D

  • Target simplices: $N_3 \sim 10^4$ (modest); ramp up to $10^5$ for more serious runs

  • Bare couplings: start near CDT literature values for the extended phase (for example $\kappa_0\approx 2.2$, with $\kappa_3$ tuned to get the desired volume) — you’ll hunt these numerically.

  • Scalar: $m^2=0.01$, $\lambda=0$ initially (free scalar) — add interactions once the setup is stable.

  • HMC: leapfrog step $\delta\tau\sim 0.01$–$0.1$, $n_{\rm leap}\sim 20$–$100$ (tune for ~70% acceptance)


Minimal runnable pseudo-code snippet (more concrete)

Below is a Python-like pseudocode skeleton. I’m not giving you a full repo — go build it, and we’ll both be smug when it runs.

# skeleton.py (pseudocode)
from math import exp
from random import random

T = init_triangulation(N3_target, T_init_type='stacked_spheres')
phi = {v: random_gaussian(0, 0.01) for v in T.vertices()}

for sweep in range(N_sweeps):
    # geometry updates
    for _ in range(n_geom_moves):
        T_trial, mapping = propose_local_move(T)
        phi_trial = initialize_new_fields(phi, T, T_trial, mapping)
        # optionally thermalize local fields on T_trial
        for _ in range(n_local_hmc):
            phi_trial = do_HMC(T_trial, phi_trial, hmc_params)
        deltaS = (S_CDT(T_trial) + S_scalar(T_trial, phi_trial)
                  - (S_CDT(T) + S_scalar(T, phi)))
        if random() < exp(-deltaS):
            T, phi = T_trial, phi_trial   # accept the proposed geometry
    # global HMC updates of phi on current T
    for _ in range(n_global_hmc):
        phi = do_HMC(T, phi, hmc_params)
    if sweep % measure_interval == 0:
        record_observables(T, phi)

do_HMC implements leapfrog integration of Hamilton’s equations for the scalar action on the graph; compute gradients ∂S/∂φ_v by summing nearest neighbours.


What you’ll learn from running this (and why it isn’t worthless)

  • You’ll see emergent semiclassical slices in the extended phase (volume profile stable across slices).

  • You’ll compute spectral dimension and observe scale-dependence — a real signature of dynamical geometry.

  • You’ll measure scalar correlators and check for continuum-like power-law behavior as you tune couplings toward criticality.

  • This pipeline is the toy-model analog of the full program I sketched earlier: it’s concrete, nonperturbative, and produces observables you can analyze statistically.


Next steps I can do for you (pick one; I’ll do it without whining)

  1. Produce real Python + NumPy pseudocode for do_HMC, including gradient expressions and leapfrog steps.

  2. Write the detailed Pachner moves (data-structure-level) with all book-keeping (face/edge/simplex updates).

  3. Produce a small demo in 1+1D that you can run in minutes (toy but fully working) and shows branching/return-probability behavior.

  4. Draft a short SLURM job script + checkpointing strategy for running the 2+1D simulation on an HPC cluster.

Pick exactly one and I’ll spit it out

