
Saturday, June 2, 2012

Meaning Like We Mean It


by Benny Mattis


This is a response to John R. Searle's "Chinese Room" argument that I originally wrote for Louise Antony's Philosophy of Mind class in Fall 2011. This paper also won the UMass Amherst Philosophy Department's Jonathan Edwards Prize in 2012.
*****

Meaning Like We Mean It

The Chinese Room
Philosopher of mind John R. Searle, in his article “Minds, Brains, and Programs,” argues that minds are not merely programs that can be run on digital computers, but rather effects of some “causal powers” of the brain. Searle makes clear that, while the causal powers of the brain are capable of producing something called “semantic meaning” or “intentionality” or “mental states,” mere programs for digital computers have no way of producing those phenomena, since they are only syntactical abstractions devoid of any semantic content.
He makes this point by means of a thought experiment called the “Chinese Room.” The experiment places Searle in a room with a “large batch of Chinese writing” (417), which is indistinguishable to him (unable to read Chinese) from “meaningless squiggles” (418). This assortment of squiggles, unbeknownst to Searle, is called a “story” by the organizers of the experiment (418). Additionally, he is given a second batch of squiggles along with rules in English on how to “correlate the second batch with the first batch” (this second set is secretly called “questions”), as well as English instructions on how to reply to combinations of squiggles from the second batch with members of a third batch of squiggles; this third batch is called “answers” by the hypothetical organizers of the experiment (418). In short, Searle is given all the tools he needs to answer questions in Chinese about a story written in Chinese, all without actually “understanding” a single character of Chinese.
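To make the procedure explicit, here is a toy sketch of the room as a pure lookup exercise. The Python code, symbols, and rules below are my own invented placeholders rather than Searle's actual examples; they are only meant to show that the operator can hand back correct “answers” without attaching meaning to anything.

# A toy sketch of the room as Searle describes it: three batches of opaque
# symbols plus an English rulebook. All symbols and rules here are invented
# placeholders for illustration only.

batch1_story = ["S1", "S2", "S3"]        # the "story" (meaningless to the operator)
batch3_answers = ["A1", "A2"]            # the stock of possible "answer" squiggles
rulebook = {                             # English instructions: (story symbol, question) -> answer
    ("S1", "Q1"): "A1",                  # "if the story contains S1 and you receive Q1, hand back A1"
    ("S2", "Q2"): "A2",
}

def operator_reply(question):
    """Apply the rulebook to the story and the incoming 'question' symbol."""
    for symbol in batch1_story:
        if (symbol, question) in rulebook:
            return rulebook[(symbol, question)]
    return batch3_answers[0]             # a default shape to copy out

print(operator_reply("Q1"))              # prints A1: a correct "answer" with no understanding at all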
Searle compares this “Chinese subsystem” with an “English subsystem” in which he is simply given a story in English, followed by questions about the story, which he can answer with full knowledge of what his words describe. Searle maintains that no matter how complex the English instructions (analogous to a computer program) are, the Chinese squiggles will never be “understood” in their semantic meaning the way the story, questions, and answers in English are understood (418). This notion stands in opposition to the thesis of Strong A.I., which views the problem of creating artificial mental states as simply a problem of designing the correct program; if Strong A.I. is correct, the Chinese subsystem could be said to “understand” the semantic meaning of the conversation if only the program were written properly. Searle suggests that the essence of intentionality lies not in the realization of a computer program, but rather in some other “causal powers” of the brain (424). This conclusion evoked a plethora of responses, to which he replies somewhat unsatisfactorily at times, especially regarding the specific requirements for semantic meaning to obtain in any given biological or mechanical machine.
Clarifying the Problem
Programs as syntactical structures could very well exist as thinking minds in the same way that shapes “exist” as real objects or groups of objects. Shapes require no “causal powers” to realize them other than functional organization; likewise, there is no reason to believe that some “causal powers” beyond mere functional organization are required to produce an actual mind. The sun, for example, is (roughly) a sphere. Does this mean that anything which causes a sphere to exist must have causal powers at least equal to those of the sun? Of course not; all that is needed to be a sphere is for some matter or energy to fit a certain functional pattern. Likewise, on the view of Strong A.I., all that is needed to realize intentionality is for a machine to run a certain functional program.
If nothing realizes a mind-program, then there are no minds instantiated, and quite obviously no meaning attached to any symbols at all.  The program is just an abstract syntactical structure, which in fact is incapable of referring to anything in reality.  Even if another step is taken, and this program is realized, it is incapable of semantic meaning if it is not given anything real to attach meaning to—a string of inputs, for example, via a keyboard, light sensor, or other object sensitive to the outside world.  Bruce Bridgeman thoughtfully remarked in his reply to Searle that, “A program lying on a tape spool in a corner is no more conscious than a brain preserved in a glass jar” (427).  A computer program is not an existing mind until it has been instantiated and given inputs.  However, once it starts getting tangled up in reality, semantics could become not only possible but even inevitable.
The Secret Life of Thermostats
When a calculator performs a calculation on some inputs, it is actually carrying out certain cognitive processes (regardless of whether it is self-aware) and giving an output that really means something about reality. It may only manipulate symbols based on how they are represented in relation to each other (much like the squiggles Searle manipulates in the thought experiment), but that is all mathematicians are doing when they manipulate quantities (which are known only in relation to each other) according to memorized rules. In the same way, the warming and cooling activity of a thermostat means something about the belief- and desire-states of the thermostat, which are directly related to the world around it. In Searle’s Chinese room, semantic meaning is attached to the input and output symbols by virtue of the system itself, regardless of whether anyone actually understands that meaning.
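As a concrete (and admittedly crude) illustration of those “belief- and desire-states,” consider the following sketch; the names, setpoint, and thresholds are my own assumptions, not anything Searle or his commentators describe. The device's output tracks the room whether or not anyone is around to interpret it.

# A minimal thermostat sketch: its "belief" is the sensed temperature,
# its "desire" is the setpoint. All names and numbers are illustrative assumptions.

class Thermostat:
    def __init__(self, setpoint_c: float, tolerance_c: float = 0.5):
        self.setpoint_c = setpoint_c      # the "desire-state"
        self.tolerance_c = tolerance_c

    def act(self, sensed_temp_c: float) -> str:
        """Map a 'belief' about the room onto an action directed at the room."""
        if sensed_temp_c < self.setpoint_c - self.tolerance_c:
            return "heat"
        if sensed_temp_c > self.setpoint_c + self.tolerance_c:
            return "cool"
        return "idle"

t = Thermostat(setpoint_c=20.0)
print(t.act(17.0))   # "heat": an output that is about the room, whoever (if anyone) reads it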
It may not be surprising that Searle does not agree with this ascription of semantic ability to an object as rudimentary as a calculator. In his rebuttals, Searle differentiates between “intrinsic” and “observer-relative” ascriptions of intentionality (452). Human beings have intrinsic intentionality because they know what their actions and words will mean to an observer, but the thermostat does not understand its own actions at all; an intelligent observer is necessary for its actions to be understood in relation to the temperature and settings in objective reality. Thus, Searle would say that any intentionality a calculator has is strictly observer-relative. Searle remarks that we speak as if thermostats have beliefs “not because we suppose they have a mental life much like our own; on the contrary, we know they have no mental life at all” (452). It seems quite obvious to Searle that the beliefs of simple mechanisms are not merely different from those of people, but entirely nonexistent.
The dichotomy between “a mental life much like our own” and “no mental life at all” is questionable at best. No functionalist or believer in Strong A.I. would say that the experience of a thermostat is similar to that of a human; on those views, the two experiences would be similar only to the extent that the thermostat’s functional organization resembles that of the human brain. The thermostat is not aware of how its actions appear to an outside observer, and therefore cannot intend to express an idea the way one person can intend to communicate with another; it does, however, by its very nature take in thermal inputs and respond appropriately, and this could very well be considered a proper type of mentality.
At this point, it makes sense to question whether full consciousness is necessary for intention or semantics. An intoxicated, barely-conscious individual presents a problem for the assumption that it is: if “Bob” becomes too well acquainted with his favorite substance, for example, he may begin to say things without actually intending to express those ideas beforehand. It is clear that a certain type of intention is missing in Bob’s case; he is incapable of assessing himself through an approximated observer’s perspective and acting appropriately to express his feelings. I do not think, however, that Searle would suggest that Bob lacks the ability to utter semantically meaningful statements.
Bob’s words semantically mean the same thing that they would mean if he uttered them sober. Bob could not change this fact even if he wanted to; hence the inevitability of semantic meaning. If we picture Bob doing math, his mathematics would not be meaningless or unintentional merely because he is not in a state of perpetual reflection on the concept of quantity and the axioms on which his math is based. Likewise, a computer’s calculations or the calculated heat adjustments of a thermostat are not rendered semantically meaningless by the mere fact that these devices are not conscious of their own cognitive acts, or of what those acts are ‘about.’ It would be absurd to claim that a calculator has the same mind as a mathematician, but it is plausible that the aspect of a mathematician’s mind involved in adding and subtracting numbers could be instantiated in a calculator with sufficient operational similarity to the mathematician’s mental problem-solving method.
The fact that there is a fixed causal relation between the inputs and the outputs in the Chinese room, akin to the relation between the problems given to a drunken mathematician and his or her reflexively scribbled solutions, is the source of the semantic content of what the room says; it does not merely make it possible for people with brains to ascribe meaning to its symbols, as Searle suggests, but is itself the source of the meaning.
From Calculators to Terminators
Searle would likely respond to this idea in the way he responded to a number of other replies to his article in Behavioral and Brain Sciences (452):
Even if formal tokens in the program have some causal connection to their alleged referents in the real world, as long as the agent has no way of knowing that, it adds no intentionality whatever to the formal states.
There are two likely readings of what Searle means when he says that “the agent has no way of knowing that.” The first is that the symbol does not inspire a mental representation of the referent when it reaches the person in the room, or the central processing unit in a computer. The second is that the person in the room has no idea that the symbols given to him are connected to the outside world at all.
The former interpretation suggests that Searle’s problem, and indeed the problem with common intuition, is that “The English subsystem knows that ‘hamburger’ means hamburger. The Chinese subsystem knows only that squiggle squiggle is followed by squoggle squoggle” (453). I do not believe there is as big a difference between these two semantic relations as Searle suspects. The Chinese system gets a squiggle squiggle, relates it to a series of rules on the grammar of “squiggle squiggle” and its relation to other symbols recorded in the system’s memory (for example, “squiggle squiggle has relation X to snuggle duggle”), and outputs “squoggle squoggle” in an appropriate way. The human sees “hamburger,” decodes the visual stimuli and matches them with an associated cluster of stimuli (the “hamburger-concept”), and outputs a response based on his or her memory or “belief-state.” Thus, it appears that concepts of concrete objects, like formal mathematical quantities and Chinese “squiggles,” are known only in relation to each other.
Could not the networked relations of squiggles in the Chinese room (or, indeed, those of binary symbols in a digitally programmed computer) be made similar to those that the English-speaking human draws upon intuitively upon hearing “hamburger”? The Chinese room is disposed to relate inputs to outputs with the use of past inputs; the human is disposed to relate sense-data to actions based on past sense-data (for example, the memory of eating or hearing about a hamburger). There is no reason to doubt that an isomorphic similarity between the two would be sufficient to produce intentionality, whether such similarity is realized in a death-dealing android or a Turing machine made of toilet paper.
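One way to picture this isomorphism is a pair of toy association networks with the same shape but different tokens. The entries below are invented for illustration only, echoing the squiggle vocabulary above rather than any real Chinese or English lexicon; the point is that the lookup procedure is identical over both networks.

# Two association networks with the same structure, different tokens.
# Relations and entries are invented purely for illustration.

english = {
    "hamburger": {"goes-with": "ketchup", "reply": "sounds tasty"},
}
chinese_room = {
    "squiggle squiggle": {"goes-with": "snuggle duggle", "reply": "squoggle squoggle"},
}

def respond(network, token):
    """The same lookup procedure, whichever network it runs over."""
    return network[token]["reply"]

print(respond(english, "hamburger"))               # sounds tasty
print(respond(chinese_room, "squiggle squiggle"))  # squoggle squoggle: same structure, opaque tokens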
With that counterintuitive notion, we can move on to the second interpretation of Searle’s response: how, in fact, does the person in the Chinese room know that a Chinese hamburger squiggle even refers to anything concrete at all?  Putting oneself in the shoes of the man in the Chinese room, it seems quite obvious that a certain sense of what is “real” is lost—everything in the Chinese subsystem is mediated through strange symbols.
That sense of what is “real” may be just that—a sense. René Descartes noted the possibility that all of one’s senses are an illusion, and Searle himself remarks that intentional states are not a matter of the outside world, but rather private mental phenomena: “It is the operation of the brain and not the impact of the outside world that matters for the content of our intentional states, at least in one important sense of the word ‘content’” (452). As many modern philosophers are all too aware, we do not know for a fact that our words really refer to anything outside of arrangements of related concepts in our minds; the fact that Searle acknowledges this suggests that it is not what he meant by “the agent has no way of knowing that,” but it is a topic worth raising anyway, and it sheds light on why the Chinese subsystem seems different from the English one. We act according to the inputs we are given, and produce outputs based on our memory and tendency to pursue certain goals, just like the man in the Chinese room.
Why, then, does the English subsystem seem so different from the Chinese subsystem in the Chinese Room experiment? The subject actually does not know as much in the English subsystem as he does in the Chinese subsystem. In the Chinese subsystem, the alienation of Searle’s responses from their referents is made explicit; he is only acting by relating formal symbols to one another, such as the squiggle for “hamburger” with the squoggle for “ketchup,” and as a result the absurdity of his formal operations is brought to the fore. In the English subsystem, Searle is faced with a feeling that he ‘knows what he is talking about.’ He is still only manipulating formal packets of quantitative sense-data (the probability of concurrence of taste-value K and taste-value H, perhaps), but his intimate familiarity with those packets produces in him the illusion of a special connection to the outside universe.
Programs Writing Programs
Now, it’s possible that one can never know for sure whether what it’s like to be a robot is similar to what it’s like to be a person; we can never verify beyond doubt that a computer feels “thoughts, feelings, and the rest of the forms of intentionality” (450) in the same way we do, just as we can never verify beyond doubt that even John R. Searle has such feelings. The fact that a thermostat probably lacks a mental life like our own, however, does not entail that it has nothing resembling mentality at all. There is no reason to believe that functional organization is not, in fact, the property that gives the brain its “causal powers” to produce consciousness and intention. This leads to seemingly ridiculous results, such as the possibility of a water-pump computer gaining consciousness, but I would dare say that this is no harder to believe than the suggestion that intentionality is “likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena” (424). This is weird stuff.
Neither the human nor the computer actually knows whether its internal tokens refer to anything external. Both ascribe semantic meaning to tokens gathered from reality by associating them with each other according to rule-bound functional relations. The question of whether their mental states are similar remains open. Strong A.I. may still be wrong when it comes to consciousness, but it also may be right; when it comes to semantics, however, a denial of such power to artificial intelligence would also put our own abilities in question, as Bob’s example demonstrates. By virtue of being instantiated in the real world and being given input, a computer ascribes meaning to that input (by a fixed relation between interconnected symbols with their own hidden causal connections to an external reality) and produces an output, which also means something relative to the input and the program’s internal states. This is, for our purposes, indistinguishable from a brain taking sense-data from the real world, relating it to other packets of sense-data, and producing an output which means something relative to the input and the person’s internal states. A computer, like a person, ascribes meaning to its inputs and outputs by virtue of the fact that it exists and is consistently causally connected with the real world—it may be challenging to learn the language in which either one processes its symbols, but there is a language in both cases nonetheless, semantic reference included. Most importantly, all of these similarities apply regardless of what material the machine is made of; whether gray matter or water pipes, an instantiated program produces equally semantically meaningful output, whether it “means to” or not.

Works Cited
Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3 (1980): 417-457. Print.
