The Chinese Room Argument: The Language of Thought
The Language of Thought If the mind has representational states, then there is some format the representations are in. One idea is that the format is a language that is a lot like a computer language for an electronic computer or a natural, spoken human language: the language of thought (sometimes: “Mentalese”).
The Language of Thought The idea would be that when you think “dogs hate cats,” there are discrete ‘words’ of the language of thought, DOGS, HATE, CATS. These are your ideas. The thought is a ‘sentence’ that is made out of those ideas: DOGS HATE CATS
Systematicity You can use those same ideas in different combinations: CATS HATE DOGS The LOT hypothesis thus predicts mental systematicity: that people who can think that cats hate dogs can think that dogs hate cats.
Systematicity Thought is systematic := For any thought T containing a concept (idea) C, and any concept C* of the same category as C: anyone who can think T(C) can also think T(C*), the thought that results from putting C* in C’s place. Categories: concepts that represent individuals (“names”), concepts that represent properties (“adjectives,” “intransitive verbs”), concepts that represent logical relations (“connectives”), etc.
Systematicity Sometimes Fodor just says: Thought is systematic := anyone who can think aRb can think bRa.
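Put schematically (the notation below is my rendering, not Fodor’s own), the long and short formulations come to the following:

```latex
% Systematicity, as defined above (notation is a rendering, not Fodor's):
% if a thinker can think a thought T containing concept C, then for any
% concept C* of the same category, they can think T with C* in C's place.
\forall T \;\forall C \in T \;\forall C^{*}\ \text{of the same category as}\ C:\quad
  \mathrm{CanThink}(T) \;\rightarrow\; \mathrm{CanThink}\bigl(T[C^{*}/C]\bigr)

% Fodor's short form is the special case of swapping two argument places:
\mathrm{CanThink}(aRb) \;\rightarrow\; \mathrm{CanThink}(bRa)
```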
The Argument from Systematicity • If the LOT hypothesis is true, then thought should be systematic. • It seems like thought is systematic. • Therefore, the best explanation of the systematicity of thought is that LOT is true.
Compositionality A representational system is compositional := what complex representations represent is determined completely by what their basic symbols represent and by the way those symbols are combined.
Basic Symbol A basic symbol is just a symbol that has no meaningful parts. Classic example: ‘cattle’ contains the part ‘cat,’ but that part has no meaning in the expression ‘cattle.’
FROM POLICE RUNS MICHAEL
Novel Utterance “Yesterday, on my way to the plastic cow hat factory, I witnessed on two separate occasions police selling cupcakes out of empty space shuttles that had been painted in red and blue stripes.”
Compositionality and Natural Language Many linguists think that the only way we can understand an infinite number of different sentences with different meanings is if those sentences are compositional. This way we can learn a finite number of meanings (for individual words) and use those to calculate the meanings for all the more complicated expressions (like sentences).
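A minimal sketch of that idea in code (the toy lexicon and the single combination rule are my own illustration, not anything from the lecture): a finite list of word meanings plus one rule for combining them fixes a meaning for every subject-verb-object sentence, including ones never encountered before.

```python
# Toy compositional meaning theory (illustrative only; the lexicon and the
# single combination rule are assumptions made up for this sketch).

# Finite lexicon: each basic symbol gets its meaning assigned once.
ENTITIES = {"fido": "Fido", "felix": "Felix"}
RELATIONS = {
    "hates": {("Fido", "Felix")},    # in our toy world, Fido hates Felix
    "chases": {("Felix", "Fido")},   # and Felix chases Fido
}

def meaning(sentence: str) -> bool:
    """Meaning of 'SUBJECT VERB OBJECT' is computed entirely from the
    meanings of its three words plus the one combination rule."""
    subj, verb, obj = sentence.lower().split()
    return (ENTITIES[subj], ENTITIES[obj]) in RELATIONS[verb]

print(meaning("fido hates felix"))   # True
print(meaning("felix hates fido"))   # False, but still interpretable
print(meaning("felix chases fido"))  # True -- novel combinations get meanings
                                     # without adding anything to the lexicon
```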
Productivity A representational system is productive := that system contains an infinite number of representations with an infinite number of distinct meanings.
The Argument from Productivity • Thought appears to be productive. We can think a potential infinitude of different things. There will be no point at which humans have “thought all the thoughts.” • If thought occurs in a language, we can use a compositional meaning theory to assign a meaning to each thought on the basis of the meanings of its simple parts (concepts).
The Argument from Productivity Therefore, the best explanation for the productivity of thought is that thought involves a language-like representational medium, and has a compositional semantics. LOT is true.
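To see where the infinity comes from, here is one standard illustration (the embedding rule is my example, not the lecture’s): a single recursive rule applied to a finite vocabulary already generates unboundedly many sentences, each with a distinct meaning.

```python
# Productivity from finite means (illustrative sketch): one recursive
# embedding rule yields a different sentence, with a different meaning,
# for every depth of embedding.

def embed(sentence: str, depth: int) -> str:
    """Wrap a sentence in `depth` layers of 'Mary thinks that ...'."""
    for _ in range(depth):
        sentence = "Mary thinks that " + sentence
    return sentence

for n in range(3):
    print(embed("dogs hate cats", n))
# dogs hate cats
# Mary thinks that dogs hate cats
# Mary thinks that Mary thinks that dogs hate cats
# ...and so on, with no upper bound.
```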
The Turing Test Turing didn’t just discover the theory of computation; he also proposed a test for deciding whether a machine can think: if a human interrogator, conversing by typed messages, cannot reliably tell the machine’s answers from a person’s, the machine passes.
Chatterbots ELIZA (Joseph Weizenbaum, 1966) http://nlp-addiction.com/eliza/
Simon & Newell The heuristic search hypothesis says: “The solutions to problems are represented as symbol structures. A physical symbol system exercises its intelligence in problem solving by search--that is, by generating and progressively modifying symbol structures until it produces a solution structure.” (Computer Science as Empirical Enquiry, 1976)
Searle • Professor of Philosophy at UC Berkeley • Jean Nicod Prize (2000) • National Humanities Medal (2004) • Mind and Brain Prize (2006)
Searle (in the thought experiment) • Doesn’t know any Chinese. • Has never heard of China. • Has never seen a Chinese character. • Doesn’t even know that there are languages other than English.
Searle’s New Job Searle takes a job. He’s told that he works for a company that makes funny squiggles for decorations. Currently, they need to update their squiggles, so Searle’s job is to receive “input” squiggles, and update them to the new squiggles.
The Room From Outside To onlookers, the room’s written replies look like those of a fluent Chinese speaker: “This guy is so smart!”
What’s Going On? • Searle is “running the program” of a real Chinese speaker’s mind. • The states on the blackboard correspond to different states that speaker could be in: tired, hungry, in a hurry, bored… • Each volume contains what that speaker would say, given the state he’s in, in response to any question.
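A sketch of the structure being described, under the assumption that the “volumes” amount to a giant lookup table keyed by blackboard state and input (the state names, questions, and replies below are placeholders I’ve invented): the procedure matches squiggles by shape and never consults what they mean.

```python
# The room as pure symbol manipulation (illustrative sketch; the entries are
# invented placeholders standing in for the enormous real tables the thought
# experiment imagines). The glosses in the comments are, of course,
# unavailable to the man running the procedure.

VOLUMES = {
    "neutral": {"你好吗？": ("很好，谢谢。", "neutral"),   # "How are you?" -> "Fine, thanks."
                "你饿了吗？": ("有一点。", "hungry")},      # "Are you hungry?" -> "A little."
    "hungry":  {"你好吗？": ("饿死了！", "hungry")},        # "How are you?" -> "Starving!"
}

def room(state: str, squiggles: str) -> tuple[str, str]:
    """Searle's procedure: open the volume for the current blackboard state,
    match the input squiggles, copy out the listed reply, update the state."""
    reply, new_state = VOLUMES[state].get(squiggles, ("……", state))
    return reply, new_state

state = "neutral"
for question in ["你好吗？", "你饿了吗？", "你好吗？"]:
    reply, state = room(state, question)
    print(question, "->", reply)   # from outside, the replies look fluent
```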
The Argument • According to the computational theory of mind (CTM), all the mechanisms underlying human cognitive abilities and functions are computational. • So the cognitive ability to understand Chinese is a computational process realized by a program running in the brain. • Therefore, someone like Searle in his room could realize this same program and thus understand Chinese.
The Argument BUT, obviously, Searle in his room does not understand Chinese. He doesn’t know what any of the characters mean, or even that they have meanings. Therefore, the computational theory of mind is false.
The Systems Reply One standard reply to the Chinese room argument is the “Systems Reply.” This reply concedes that Searle doesn’t understand Chinese, but maintains that the entire room, with Searle as its CPU, does understand Chinese.
Searle’s Response Searle argues that in theory, he could just memorize all the rules, and get rid of the rest of the system. Now the entire system = Searle, but Searle still does not understand Chinese.
Understanding and Action One thing that supports Searle’s response is the fact that if you hold up a sign saying “you’re going to get hit by that bus!” (in Chinese), Searle can write down an appropriate Chinese response (“Ahhh!”), but what he won’t do is jump out of the way.
The Robot Reply The robot reply says that in order for the system to understand Chinese, it has to appropriately control behavior. If told he’s going to get hit by a bus, Searle has to jump out of the way. If told his mother is a dog, he has to get angry. If told a funny joke, he has to laugh.
The Robot Reply So, on the robot reply, mere computers can never understand a language, only computers controlling robot bodies (in an appropriate manner) can understand a language. If you build a computer-controlled robot that behaved exactly like a native Chinese speaker, then it would in fact understand Chinese.
Last Word Fodor counters Searle’s response as follows: whether any computer’s internal states actually represent the outside world depends on how those states are connected with action and experience. Searle has shown one way of connecting them that is not the right one: a lone man in a room. But he has not proven that no such connections exist.