"The key idea is that...brains encode a Bayesian recognition density, with neural dynamics determined by the internal (generative) model that predicts sensory data based upon alternative hypotheses about their causes. These dynamics are interpretable as an inference of the probable cause of observed sensory data that “invert” the generative model, finding the best “fit” to the environment. Crucially, there are two ways of ensuring the model and environment are a good fit to one another. The first is the optimization of the recognition density to capture the most probable environmental configurations. The second is by changing the environment through action to make it more consistent with model predictions".
"In the Helmholtz model the brain makes its own world. Our sense organs, external and internal, are constantly bombarded by a vast range of stimuli from an ever-changing environment. To operate with maximum efficiency, the brain selects out the ‘meaning’ of its sensations, attending only to those that are relevant to its ‘affordances’6 – its specific ecological niche – and especially to input that is anomalous or novel".
"Working in the 1950s at the Bell telephone company laboratory, Claude Shannon saw that this ‘meaning’ could be quantified – as ‘bits’ of information...White noise is chaotic, entropic and devoid of information. Language, whether spoken, sung or gestured, is structured, ordered, negentropic. The measure of informational energy is ‘surprise’, i.e. how unexpected a signal is. In the board game Scrabble, the letter ‘x’ conveys more information than ‘e’ because it is relatively unusual, applying to a smaller range of words, and so in calculating the score, is ‘worth’ more. The brain's aim is constantly to reduce informational entropy and maximise meaning".
Friston, Parr, Demakis