The Need for Complementarity in Models of Cognitive Behavior: A Response to Fowler and Turvey

H. H. Pattee
Department of Systems Science, State University of New York at Binghamton

The principal themes Fowler and Turvey have presented in their chapter are: (1) the need for commensurability of observed events and the representation of these events, and (2) the need for compatibility of the representation of intentions and the representation of the corresponding actions. The observer's differing perspectives and intentions in differing situations are assumed to lead to multiple representations and classifications of events, often forming hierarchical layers of resolution or function and ordered levels of coordination and constraint; but Fowler and Turvey stress the need for compatibility of these representations as the overriding constraint. The authors support this view with commonsense functional arguments of simplicity and efficiency. Why do we need more detail in our descriptions of events than we need to respond appropriately to events? Why should intentions be represented in more or less detail than the actions to be controlled? Why should the description of the ends not always be compatible with the description of the means?

This sounds like a reasonable argument from the functional or metaphorical perspective, but as Fowler and Turvey point out, other partitionings of our experience are possible; and I argue that incompatible representations also play an essential role in our models and explanations. What does it mean to explain a "coordinative structure," which Fowler and Turvey invoke to achieve compatibility between goals and actions? One part of the explanation is to describe the function; but surely another necessary part of our explanation is to show a structure that will actually execute the functional activity.
These two modes of description will generally not be compatible. For example, the keyboard of a calculator has key labels that are function descriptions, but to explain what the function means or how it is executed in detail requires a new description that is not compatible with the simplicity of function labels.* Fowler and Turvey are correct in emphasizing the need for such macrofunctional control and for the coordinative microstructures that autonomously execute the functions, but I differ with them on how these different modes of description are epistemologically related, and on whether one can claim an explanation of cognitive behavior only by compatible sets of descriptions.

I outline my arguments at three levels—at the level of physics, where I explain in what sense description of laws and description of rules (constraints, measurements, or controls) are incompatible; at the level of biology, where I show in what respect description of structure and description of function are incompatible; and at the level of epistemology, where I argue that description of intentions (mind) and description of action (body) are incompatible. These examples are illustrations of a generalized complementarity principle asserting that explanation or understanding requires the articulation of two formally incompatible representations or modes of description.[1]

How do these ideas apply to the area of cognitive psychology? I am not opposing the ecological attitude elaborated by Fowler and Turvey, nor am I supporting it in opposition to the information-processing approaches.

*The companion Fowler-Turvey-Pattee discussion and the target article, "Observational Perspective and Descriptive Level in Perceiving and Acting" by Carol A. Fowler & Michael T. Turvey (same volume), are appended.
I am claiming that these are two complementary modes of description that have not yet been completely articulated and, more fundamentally, have not been recognized as essentially complementary, in the sense that any explanation of cognitive behavior will require both modes of description.

COMPLEMENTARITY IN PHYSICAL DESCRIPTIONS

Among the principles of physics, there is no explicit recognition of the need for the concept of control; and even constraints are not considered fundamental. Neither fundamental particles nor stars nor galaxies are controlled or constrained. When a physicist uses the concept of constraint, it is either as a boundary condition or as a practical alternative representation of the structures that interact with the variables of primary interest. However, there is one fundamental exception, the "measuring device," which is a form of control constraint, but one that physicists avoid like psychologists avoid the homunculus, and for very much the same reason; namely, both devices raise more questions than they answer. Like Thurber's Golux, they are not "mere devices." Cognition and volition are unavoidable aspects of measuring devices and homunculi.

Control constraints arose in the context of design and engineering, where there is always a cognitive organism or homunculus lurking in the background. This is Michael Polanyi's primary criticism of Laplacean reductionism, which asserts total predictability or determinism when total information of initial conditions is supplied. But this total information requires perfect measuring devices, which in turn implies perfect control constraints and, hence, perfect design. Although this line of argument may have been convincing to scholastics as a proof of the existence of God, it is not much help to psychologists in explaining how we walk and talk.
The basic distinction that must be made is between laws of nature and rules of constraints. One cannot usefully apply the concept of control to laws nor to all types of constraints. The same is true for information. Informational and control constraints must be describable in terms of alternative states that are not dynamically related. That is, they must change in time but not change as a function of rates, as do the laws of nature. Such constraints are nonintegrable or nonholonomic, and their behaviors can be said to execute a control rule.[2]

I agree that the commonsense engineering concepts of control are useful for modeling many aspects of life where evolution has replaced the cognitive design activity—or, if you wish, where evolution is the design—but the engineering concepts are not enough to explain cognitive control. The basic distinction between laws and rules can be made by these incompatible sets of criteria: laws are inexorable, incorporeal, and universal; rules are arbitrary, structure-dependent, and local. In other words, we cannot alter or evade laws of nature, whereas we can redesign or eliminate a rule; laws do not need a device or structure to execute them, whereas rules can only be executed by specific physical structures that we call control constraints; and finally, laws hold at all times and all places in the universe, whereas rules hold only where and when there is a physical structure to execute them.

Since laws are inexorable and universal, no physical structures can exist without laws; nor can structures, however complex, evade laws in any sense. Therefore all rules and their control constraints depend on laws. Furthermore, an "unconstrained degree of freedom" is not an "undetermined degree of freedom," for it must always follow the laws of nature.
In fact, good biological as well as good engineering design makes the maximum use of natural (noninformational) constraints and laws of nature, so that the control information can be kept to a minimum. The laws are inescapable, and many natural constraints are there anyway, like the surface of the earth; so we should expect that the evolution of coordinated structures has made full use of these laws of nature and natural constraints.

Nevertheless, our mode of description of natural laws is fundamentally incompatible with our mode of description of control or measurement, since control and measurement require a volitional coordination of events as distinct from the events themselves (i.e., the events as described by natural laws).

How do these views differ from Fowler and Turvey's? Why can we not say with them that laws and measurements are simply an example of the brain's partitioning the world in two ways that "must be compatible, but they need not be deducible from, nor translatable into, one another"? The problem is that our description of laws requires the concepts of rate-dependence, reversibility (time symmetry), continuity, and causality (determinism or the inexorable flow of events); whereas our description of measurements requires the concepts of rate-independence, irreversibility, discrete events, and acausality (natural selection or, at the highest cognitive levels, selection by free will). In other words, in order to predict or control events successfully, the brain of the physicist has been led to partition the world into formally contradictory languages. It is from the apparent inescapability of this situation that the principle of complementarity became acceptable, even though its appreciation still produces a profound cognitive dissonance.
COMPLEMENTARITY IN BIOLOGICAL STRUCTURE AND FUNCTION

One of the most elementary examples of this minimum control strategy that we mentioned earlier is found, as one might expect, at the molecular genetic level, where natural selection of control began, so to speak. The enzyme molecule is the paradigm case of a control constraint, where we see only some hundreds of bits of genetic information specifying a highly coordinated structure with over 20,000 atomic degrees of freedom. This machine can recognize a specific type of molecule among thousands of similar types and speed up a particular dynamic reaction by a factor of 10^6 to 10^12, persistently and reliably.[3] This incredible performance is not explained only by an information-processing model in which each degree of freedom is simulated in detail, like a computer program. I have no doubt that such a simulation could be done as a technical exercise, and it would be an impressive computational task for even the smallest enzyme and the largest computer. But very little would be learned about nature's design and control of enzymes.

The point is that the cell's control strategy, which is what we want to explain, does not use this approach at all. The cell specifies only the information necessary to string together a few hundred amino acids. From then on, the noninformational constraints and the laws of nature do the rest. Without further instruction or control, this string folds into a three-dimensional machine that recognizes its unique substrate molecules and catalyzes a particular bond, repeating this process until it receives additional control signals or until it is poisoned or denatured. The enzyme's folding, recognizing, and catalyzing activities are not explainable only as information processing.
They are "autonomous behavior patterns" that have been very efficiently "built in" by the sequence of its primary structure specified in the genetic description. All this is a consequence of natural selection over some 3 billion years of evolution.

Why are the descriptions of information processing in the genetic program incompatible with the description of the enzyme's catalytic action? The fundamental incompatibility between information and action is that the rules of symbol manipulation, like measurement, are inherently rate-independent, whereas the laws of action are inherently rate-dependent. The sequence of amino acids in an enzyme's primary structure is not influenced by the rate at which the DNA is read. On the other hand, the action of the enzyme is largely defined by its rate of activity. More generally, as we have said, all physical laws are expressible as functions of rates—that is, as derivatives of some variable with respect to time. Rules, on the other hand, depend on order or sequence but cannot be expressed as functions of rates. All linguistic operations and all computations, insofar as they are defined by rules, cannot be functions of the rate of reading, writing, or computing. What we mean by a statement or calculation cannot depend on how fast we speak or calculate. Structure-function complementarity arises from this incompatibility, since the structural basis of functional organization requires some informational constraints that are rate-independent, whereas the function depends to some extent on rate-dependent dynamics.

I must emphasize that the amount of measurement or information in the constraint is not relevant for the complementarity principle. The amount of information is therefore incommensurable with the amount of action it controls.
This is because the description of information is in a language that is different from and incompatible with the language describing the action.[4] Fowler and Turvey's demand that the resolutions or grain-sizes of descriptor sets be commensurate is reasonable provided that descriptor sets are entirely in one informational mode. In this case, Ashby's (1956) law of requisite variety[5] will hold, precisely because of this compatibility of descriptor sets or codes. Similarly, Fowler and Turvey's analogy of automobile steering linkages to a coordinated structure is sound provided one remains in a dynamical constraint mode of description. The incompatibility I am speaking about would occur when describing an intentional control policy and how this interacts with the deterministic dynamical activity of what is controlled. For example, the informational content of an ignition key is incommensurable with the degrees of freedom of the automobile that it starts. This is because the information in the key is determined by the number of people you do not wish to use your automobile (i.e., an intentional policy) and has virtually nothing to do with the complexities of the deterministic mechanisms of automobiles. This example also illustrates the complementarity principle in the following sense: It is obvious that an ignition key cannot be explained as nothing but a slotted brass object with bumps on its edge, no matter how detailed the structural description may be. Nor can the key be explained as nothing but an informational representation of a population of potential car thieves, no matter how accurate our statistics on stolen cars.
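The rule/law distinction underlying these examples, rate-independent symbol manipulation versus rate-dependent dynamics, can be made concrete in a small computational sketch. The function names and the choice of exponential decay as the "law" are illustrative assumptions, not drawn from the chapter: a symbolic rule yields the same result no matter how fast or slowly it is executed, whereas a dynamical law, written as a rate equation, yields a state that depends essentially on rates and elapsed time.

```python
# Illustrative sketch (not from the original text): a rule versus a law.
import time

def transcribe(sequence, delay=0.0):
    """A rule: rate-independent symbol manipulation. The output depends
    only on the order of the symbols, never on how fast they are read."""
    result = []
    for symbol in sequence:
        time.sleep(delay)           # reading speed varies...
        result.append(symbol.lower())
    return "".join(result)          # ...but the result does not

def decay(x0, k, t, steps=10000):
    """A law: rate-dependent dynamics. Exponential decay dx/dt = -k*x,
    integrated by Euler steps; the final state depends essentially on
    the rate constant k and on elapsed time t."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += -k * x * dt
    return x

# Executing the rule ten times slower changes nothing about its meaning:
assert transcribe("GATTACA") == transcribe("GATTACA", delay=0.001)

# Changing the rate in the law changes the outcome itself:
assert decay(1.0, k=1.0, t=1.0) != decay(1.0, k=2.0, t=1.0)
```

The same contrast holds for any computation defined by rules: slowing the interpreter does not alter what a program computes, while slowing a chemical reaction alters exactly the quantity (its rate) by which the enzyme's action is defined.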
Returning to more elementary biological behavior, at the simplest level of biological control, we see that the DNA molecule cannot be explained as nothing but an ordinary chemical structure obeying the laws of quantum mechanics; nor can it be explained as nothing but a data structure that is fed into an information processor. At the other extreme of biological complexity, we cannot explain a newborn mammal finding a nipple as nothing but a physical structure finding an optimum binding site, nor can we explain the action as nothing but information processing.