The point is precisely NOT to anthropomorphize other agents, but rather to de-mystify human capacities and ask where else, and to what extent, they might be found (and could be implemented).
The worry over anthropomorphizing arises from the belief that humans have some sort of special “true cognition” (or true decision-making, true minds, etc.) that other systems lack; it also expresses the view that true cognition is binary (you have it or you don’t), and that it would be bad to wrongly assign it to systems that don’t have it. There are several major problems with this view.
It is a form of magical thinking: no one has a good account of what this special ingredient might be, such that it would be totally inaccessible to other types of agents, whether simple or complex, evolved or designed, pure or integrated with human brain tissue. Given what we know about evolution, it is clear that humans didn’t suddenly wink into existence with these amazing capacities, and that brains didn’t suddenly appear; they are elaborations of things cell networks have always done. Just start with us and work backwards, through early hominids and before, and ask where ‘true cognition’ appeared. There is no single Rubicon.

Moreover, developmental biology and bioengineering data show that it is possible to make hybrids: combination constructs that are some percentage human brain and some percentage something else (non-human animals, electronic interfaces, etc.). There is a huge diversity of possible agents, in all sorts of combinations of bodies and brains. What percentage human brain/body does an agent need in order to have this true cognition? In light of the modern understanding of evolutionary history and the advances of bioengineering, it is a totally untenable view that humans have some sort of unique oomph that doesn’t have its origins in humbler, ancient versions of the same capacity.

Cognition is a continuum, with an evolutionary history like every other aspect of life. The only question is: *how much* cognition does it make sense to attribute to a given system, to better explain and predict its function, and what features of physics and computation did evolution exploit to implement it? A ball on top of a hill is well described and readily controlled by knowing Newton’s laws. A living mouse on top of a hill is not; it is best described and controlled by a story about its past experiences and motivations. A Roomba or a thermostat is somewhere in between.
This is an empirical question, not pure philosophy. How much agency to attribute to a system is determined by which level of agency story gets us to better experiments at the bench, with greater explanatory and control capacity and more useful applications.
Thus, there is no such thing as anthropomorphizing, because the belief that humans have unique, magical cognition (which could be mistakenly bestowed on other systems) is a fairy tale. The real question, to be taken on a case-by-case basis and decided by empirical success, is how much agency a given system warrants. And we would be as badly mistaken to attribute too little as to attribute too much. Pretending cognition doesn’t exist where it would surely be useful to admit (and make use of) it leaves a lot on the table in science, engineering, and philosophy. This teleophobia is unbalanced in the direction of preferring to miss agency rather than attribute too much, and there are many examples in the history of civilization where failing to attribute agency, based on trivial differences of bodily implementation or behavior, has led to massive ethical lapses.
Our piece is part of a larger series of works, by both of us, that try to do three things.
1) To de-mystify cognition as a continuum, and to get comfortable with various degrees of cognitive powers in a wide range of material embodiments (whether evolved, engineered, or a mix of the two). This is critical because, as a society, we are soon going to be surrounded by an incredible variety of agents with novel bodies and minds, mixing biological structures and engineered technology into “endless forms most beautiful”. We have to drop the comforting, persistent, but pre-technological notion that some systems are “mindful” and others are “not”, as an easy binary choice; that notion leads to conceptual, technological, and moral dead ends.
2) To shift the “benefit of the doubt” more toward the middle, and to show some cases where agency is not normally invoked but could very profitably be applied. I believe the true promises of regenerative medicine will not arrive until we apply this mode of thinking in the biosciences. The genomics winter will be a harsh one if we don’t work to understand how cell groups know what to do, in addition to the mechanisms by which they do things.
3) To discover some of the mechanisms that evolution exploited to scale agency from very simple forms to very complex ones (multi-scale autonomy is one key aspect). After all, human minds are unlikely to be the top-of-the-line agents. Much as we know what “diminished capacity” means (in a legal, moral sense), what will creatures with “expanded capacity” be like? We will surely keep moving along the continuum, so let’s think about what even more impressive cognitive agents would be like, at the same time as we think about the simpler ones that got us here.
And a final point: many of the ideas discussed in this piece are tightly connected to science, to empirical experiments at the bench focused on information processing by cells and tissue-level agents, carried out in M.L.’s laboratory and many others in an attempt to reach practical healing and repair modalities in our lifetime. These ideas make specific predictions, they are falsifiable, and they are giving rise to discoveries that we hope will advance regenerative medicine and other applications toward a better-informed, more ethical, and healthy future.