How Cognition Could Be Computing: Semiotic Systems, Computers, & the Mind
William J. Rapaport
Department of Computer Science & Engineering, Department of Philosophy, Department of Linguistics, and Center for Cognitive Science
rapaport@buffalo.edu
http://www.cse.buffalo.edu/~rapaport
Summary
• Computationalism = cognition is computable.
• Mental processes can be the result of algorithmic procedures…
• …that can be affected by emotions/attitudes/individual histories.
• Computers that implement these (cognitive) procedures really exhibit those mental processes.
• They are “semiotic” (= sign-using) systems.
• They really think.
• “Syntactic semantics” explains how all this is possible.
What Is “Computationalism”?
• “Computationalism” =? cognition is computation
  • Hobbes, McCulloch/Pitts, Putnam, Fodor, Pylyshyn, …
  • interesting, worth exploring, possibly true
• BUT: not what “computational”/“computable” usually mean!
• What should “computationalism” be?
  • Must preserve the crucial insight: cognition is explainable via the mathematical theory of computation
“Computable”
• A task / goal / field of study G is computable iff there is a formal algorithm (or set of algorithms) that accomplishes G (see the sketch below).
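As a concrete illustration (mine, not from the slides): take G = deciding whether a natural number is prime. G is computable because a formal, finite, step-by-step procedure accomplishes it:

```python
def is_prime(n: int) -> bool:
    """Decide primality by trial division: a formal, finite, step-by-step
    procedure, so the task 'primality testing' is computable."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every input yields a definite answer in finitely many steps:
assert is_prime(13) and not is_prime(15)
```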
The Proper Treatment of Computationalism
• Computationalism ≠ cognition is computation
• Computationalism = cognition is computable
  • i.e., there are algorithms that compute cognitive functions
• Working assumption of computational cognitive science: all cognition is computable
• Basic research question of computational cognitive science: how much of cognition is computable? …
Proper Treatment of Computationalism
• Implementational implication (multiple realization):
  • If cognition is computable, then anything that implements cognitive computations would be cognitive (would really think)
  • even if humans don’t do it that way!
• Turing: the brain might be analog, but a digital computer can still pass the TT
• Piccinini: neural spike trains are not representable as digit strings; not computational / the brain does not compute
• BUT: the cognitive functions whose O/P they produce are computable
  • ∴ human cognition is computable but not computed
II. Syntactic Semantics as a theory underlying computationalism
• Cognition is internal
  • Cognitive agents have direct access only to internal representatives of external objects
• Semantics is syntactic
  • Words, meanings, & semantic relations between them are all syntactic items
• Understanding is recursive
  • Recursive case: we understand one thing in terms of another that must already be understood
  • Base case: we understand something in terms of itself (syntactic understanding)
Syntactic Semantics
• Internalism: cognitive agents have direct access only to internal representatives of external objects
  • A cognitive agent understands the world by “pushing the world into the mind” (Jackendoff 2002)
  • “Output of sensory transducers is the only contact the cognitive system ever has with the environment” (Pylyshyn 1985)
• Both words & their meanings (including external objects) are represented internally in a single LOT
  • Humans: biological neural network
  • Computers: artificial neural network, or symbolic knowledge-representation & reasoning system
Syntactic Semantics: Internalism ⇒ Syntacticism
• Words, meanings, & semantic relations between them are all syntactic
  • syntax ⊋ grammar
• syntax = study of relations among members of a single set
  • a set of signs / uninterpreted marks / neuron firings / …
• semantics = study of relations between members of two sets
  • a set of signs / marks / neuron firings / … & a set of (external) meanings / … (with its own syntax!)
• “Pushing” meanings into the same set as the symbols for them allows semantics to be done syntactically
  • turns semantic relations between 2 sets (internal signs, external meanings) into relations among the marks of a single (internal) LOT / syntax
  • e.g., truth tables & formal semantics are both syntactic
  • e.g., neuron firings represent both signs & external meanings
• Symbol-manipulating computers can do semantics by doing syntax (see the sketch after the diagram below)
[Diagram: “Syntax” = relations among members of a single syntactic domain (SYN DOM); “Semantics” = relations between a syntactic domain (SYN DOM) and a semantic domain (SEM DOM); “Syntactic semantics” = the semantic domain is absorbed into the syntactic domain, so semantic relations become relations within a single domain.]
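A hedged toy sketch (my illustration, not Rapaport’s actual SNePS-style formalism): once both words and their meanings are nodes in a single internal network, the “semantic” relation between a word and its meaning is just another arc among the marks of one set, manipulable purely syntactically. The node and arc labels are hypothetical:

```python
# One internal "language of thought": every node -- word or meaning -- is a
# mark in the same set; semantic links are just arcs among those marks.
nodes = {"word:dog", "concept:DOG", "concept:ANIMAL"}
arcs = {
    ("word:dog", "expresses", "concept:DOG"),   # former word-world relation...
    ("concept:DOG", "isa", "concept:ANIMAL"),   # ...now an ordinary internal arc
}

def related(source: str, label: str) -> set:
    """Pure mark manipulation: follow arcs by matching labels."""
    return {t for (s, l, t) in arcs if s == source and l == label}

# "Semantics" done syntactically: interpreting 'dog' is just arc-following.
print(related("word:dog", "expresses"))   # {'concept:DOG'}
print(related("concept:DOG", "isa"))      # {'concept:ANIMAL'}
```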
Syntactic Semantics
• Understanding must be understood recursively (see the sketch below):
• Recursive cases: we understand a syntactic domain (SYN-1) indirectly, by interpreting it in terms of a semantic domain (SEM-1)
  • e.g., understanding relevance logic in terms of the Routley-Meyer ternary relation on points
  • but SEM-1 must be antecedently understood
  • SEM-1 can be understood by considering it as a syntactic domain SYN-2 interpreted in terms of yet another semantic domain
    • e.g., understanding the RM ternary relation in terms of situation semantics
  • which also must be antecedently understood, etc.
• Base case: a domain that is understood directly (i.e., not “antecedently”), in terms of itself (in terms of relations among its symbols)
  • i.e., syntactically & holistically
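A schematic sketch of the recursion (my illustration; the chain of domains mirrors the slide’s relevance-logic example, and the data structure is hypothetical): each domain is either understood directly, via relations among its own symbols (base case), or via an interpretation into a further domain that must itself already be understood (recursive case).

```python
# Hypothetical chain of domains: each maps to the domain it is understood
# in terms of; None marks a domain understood directly (syntactically).
interpreted_in = {
    "relevance logic": "Routley-Meyer ternary relation",
    "Routley-Meyer ternary relation": "situation semantics",
    "situation semantics": None,   # base case: understood in its own terms
}

def understand(domain: str) -> str:
    target = interpreted_in[domain]
    if target is None:                              # base case: syntactic, holistic
        return f"{domain} (understood directly)"
    return f"{domain} via {understand(target)}"     # recursive case

print(understand("relevance logic"))
```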
III. Rapaport’s Thesis
• Syntax suffices for semantic cognition; cognition is computable; & computers are capable of thinking.
James H. Fetzer’s Thesis
• It doesn’t, it isn’t, & they aren’t.
Fetzer’s Thesis
• Computers differ from cognitive agents in 3 ways:
  • statically (symbolically)
  • dynamically (algorithmically)
  • affectively (emotionally)
• Simulation is not the real thing
Fetzer’s Static Difference
ARGUMENT 1: Computers are mark-manipulating systems; minds are not.
• Premise 1: Computers manipulate marks on the basis of their sizes, shapes, and relative locations.
• Premise 2: (a) These sizes, shapes, and relative locations exert causal influence upon computers, but (b) do not stand for anything for those systems.
• Premise 3: Minds operate by utilizing signs that stand for other things, in some respect or other, for them as sign-using (or “semiotic”) systems.
• Conclusion 1: Computers are not semiotic (or sign-using) systems.
• Conclusion 2: Computers are not the possessors of minds.
(Fetzer’s Figure 10: The Static Difference)
Fetzer ⊢ Computers Are Not Semiotic Systems
• In a “semiotic system” (e.g., a mind):
  • something (S) is a sign of something (x) for somebody (z)
  • x “grounds” sign S
  • x “is an interpretant w.r.t. a context” to sign-user z
  • S is in a “causation” relation with z
Fetzer ⊢ Computers Are Not Semiotic Systems
• In a computer (I/O) system:
  • input i (playing the role of sign S) is in a “causation” relation with computer c (playing the role of sign-user z)
  • output o (playing the role of thing x) is in an “interpretant” relation with computer c
• BUT: no “grounding” relation between i & o
Fetzer ⊢ Computers Are Not Semiotic Systems
• Computers only have causal relationships; no mediation between I/P & O/P (?!)
• But semiotic systems require such mediation
  • Peirce: the interpretant is “mediately determined by” the sign
  • [the “interpretant” is really the sign-user’s mental concept of the thing x (!!)]
• ∴ Computers are not semiotic systems
• But minds are.
• ∴ Minds are not computers & computers can’t be minds.
Three Arguments against the Static Difference
• Incardona ⊢ Computers are semiotic systems!
  • X is a semiotic system iff X carries out a process that mediates between a sign & its interpretant
  • Semiotic systems interpret signs
  • Algorithms describe processes that mediate between I/Ps & O/Ps
  • An algorithm’s O/P is an interpretation of its I/P
  • Algorithms ground the I/O relation
  • Computers are algorithm machines
  • ∴ Computers are semiotic systems
Three Arguments against the Static Difference
• Argument that computers are semiotic systems, from embedding in the world:
• Fetzer’s (counter?)example: “A red light at an intersection stands for applying the brakes and coming to a complete halt, only proceeding when the light turns green, for those who know ‘the rules of the road’.”
• Can such a red light stand for applying the brakes, etc., for a computer?
  • It could, if the computer “knows the rules of the road”
  • But a computer can “know” those rules…
    • if it has those rules stored in a knowledge base
    • and if it uses those rules to drive a vehicle
  • cf. the 2007 DARPA Urban Challenge
  • Parisien & Thagard 2008, “Robosemantics: How Stanley Represents the World”, Minds & Machines 18
Three Arguments against the Static Difference
• Goldfain ⊢ A computer’s marks stand for something for it
• Does a calculator that computes GCDs understand them?
  • Fetzer & Rapaport: No
• Could a computer that computes GCDs understand them?
  • Fetzer: No
  • Goldfain & Rapaport: Yes, it could…
    • as long as it had enough background / contextual / supporting info
    • a computer with a full-blown theory of math, at the level of an algebra student learning GCDs, could understand GCDs as well as the student (see the sketch below)
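A minimal sketch (the division of labor is my illustration, not Goldfain’s actual system): the bare calculator just runs Euclid’s algorithm; what would make its marks stand for something for it is the surrounding background knowledge it can draw on and use. The `background` entries are hypothetical stand-ins for a full-blown theory of math:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: all that a bare calculator computes."""
    while b:
        a, b = b, a % b
    return a

# Hypothetical background knowledge an 'understanding' system could draw on,
# at the level of an algebra student learning GCDs:
background = {
    "definition": "gcd(a, b) is the largest integer dividing both a and b",
    "use": "gcd is used to reduce fractions to lowest terms",
    "check": lambda a, b, g: a % g == 0 and b % g == 0,
}

g = gcd(12, 18)                         # the calculator's answer: 6
assert background["check"](12, 18, g)   # the student-level system can verify it
```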
Summary: No “Static Differences”
• Both computers & minds manipulate marks
• The marks can “stand for something” for both computers & minds
• Computers (and minds) are “semiotic systems”
• Computers can possess minds
Fetzer’s Dynamic Difference
ARGUMENT 2: Computers are governed by algorithms, but minds are not.
• Premise 1: Computers are governed by programs, which are causal models of algorithms.
• Premise 2: Algorithms are effective decision procedures for arriving at definite solutions to problems in a finite number of steps.
• Premise 3: Most human thought processes, including dreams, daydreams, and ordinary thinking, are not procedures for arriving at solutions to problems in a finite number of steps.
• Conclusion 1: Most human thought processes are not governed by programs as causal models of algorithms.
• Conclusion 2: Minds are not computers.
(Fetzer’s Figure 11: The Dynamic Difference)
The Dynamic Difference
• Premises 1 & 2: the definition of ‘algorithm’ is OK
• But algorithms may be the wrong entity
  • may need a more general notion of “procedure” (Shapiro)
  • like an algorithm, but (see the sketch below):
    • need not halt
    • need not yield “correct” output
    • can access an external KB (Turing “oracle” machine)
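A sketch of a Shapiro-style “procedure” (my illustration; the names and the stream scenario are hypothetical): unlike a textbook algorithm, this loop need not halt, does not certify correctness, and defers part of each decision to an external knowledge source, here a function parameter standing in for a Turing-style oracle or external KB.

```python
from typing import Callable, Iterator

def monitor(readings: Iterator[float],
            oracle: Callable[[float], bool]) -> Iterator[str]:
    """A 'procedure', not an algorithm: it runs for as long as input
    arrives (need not halt) and defers part of each decision to an
    external knowledge source (an 'oracle')."""
    for x in readings:                 # potentially endless input stream
        yield "alert" if oracle(x) else "ok"

# Stub oracle standing in for an external KB the procedure merely consults:
alarms = monitor(iter([0.2, 0.9, 0.4]), oracle=lambda x: x > 0.5)
print(list(alarms))                    # ['ok', 'alert', 'ok']
```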
The Dynamic Difference
• Premise 3: Most human thinking is not algorithmic
  • Dreams are not algorithms
  • Ordinary stream-of-consciousness thinking is not “algorithmic”
• BUT: some human thought processes may indeed not be algorithms
  • consistent with the “proper” treatment of computationalism
• The real issue is: could there be algorithms/procedures that produce these (or other mental states or processes)?
  • If dreams are our interpretations of random neuron firings during sleep, as if they were due to external causes…
  • …then: if non-dream neuron firings are computable (& there’s every reason to think they are), then so are dreams
  • The stream of consciousness might be computable, e.g., via spreading activation in a semantic network (see the sketch below)
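A toy spreading-activation sketch (my illustration of the mechanism the slide names; the network and weights are hypothetical association strengths): activation starts at one concept and decays as it spreads along links, yielding an associative, stream-of-consciousness-like drift from topic to topic.

```python
# Toy semantic network; weights are hypothetical association strengths.
links = {
    "coffee": [("morning", 0.8), ("cup", 0.6)],
    "morning": [("sunrise", 0.7)],
    "cup": [("kitchen", 0.5)],
    "sunrise": [], "kitchen": [],
}

def spread(start: str, activation: float = 1.0, threshold: float = 0.3) -> dict:
    """Spread decaying activation outward from one concept."""
    active = {start: activation}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in links[node]:
            a = active[node] * weight
            if a > threshold and a > active.get(neighbor, 0.0):
                active[neighbor] = a
                frontier.append(neighbor)
    return active

# 'kitchen' (activation 0.3) falls below threshold and drops out of the stream:
print(spread("coffee"))  # {'coffee': 1.0, 'morning': 0.8, 'cup': 0.6, 'sunrise': 0.56}
```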
The Dynamic Difference
• Whether a mental state/process is computable is at least an empirical question
• Must avoid the Hubert Dreyfus fallacy:
  • one philosopher’s idea of a non-computable process is another computer scientist’s research project
  • what no one has yet written a program for is not thereby necessarily non-computable
• In fact: Mueller, Erik T. (1990), Daydreaming in Humans & Machines: A Computer Model of the Stream of Thought (Ablex)
• Cf. Edelman, Shimon (2008), Computing the Mind (Oxford)
• The burden of proof is on Fetzer!
The Dynamic Difference
• Dynamic Conclusion 2: Are minds computers?
  • Maybe, maybe not
• I prefer to say (with Shimon Edelman, et al.): the (human) mind is a virtual machine, computationally implemented (in the nervous system)
Summary: No “Dynamic Difference”
• All (human) thought processes are/might be describable by algorithms/procedures
• = computationalism properly treated
Fetzer’s Affective Difference
ARGUMENT 3: Mental thought transitions are affected by emotions, attitudes, and histories, but computers are not.
• Premise 1: Computers are governed by programs, which are causal models of algorithms.
• Premise 2: Algorithms are effective decision procedures, which are not affected by emotions, attitudes, or histories.
• Premise 3: Mental thought transitions are affected by values of variables that do not affect computers.
• Conclusion 1: The processes controlling mental thought transitions are fundamentally different from those that control computer procedures.
• Conclusion 2: Minds are not computers.
(Fetzer’s Figure 12: The Affective Difference)
Contra Affective Premises 2 & 3
• Programs can be based on (idiosyncratic) emotions, attitudes, & histories
• The Rapaport-Ehrlich contextual vocabulary acquisition program learns a meaning for an unfamiliar word from:
  • the word’s textual context…
  • …integrated with the reader’s idiosyncratic “denotations”, “connotations”, emotions, attitudes, histories, & prior beliefs (see the sketch below)
• Sloman, Picard, Thagard: developing computational theories of affect, emotion, etc.
• Emotions, attitudes, & histories can affect computers that model them.
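A heavily simplified sketch in the spirit of, but not reproducing, the Rapaport-Ehrlich CVA idea (the word, clues, and belief sets are all hypothetical): a meaning hypothesis for an unfamiliar word is computed from its textual context merged with the reader’s idiosyncratic prior beliefs, so two readers with different histories derive different meanings from the same text.

```python
def hypothesize_meaning(word: str, context_clues: set, reader_beliefs: dict) -> dict:
    """Merge textual context with the reader's idiosyncratic prior beliefs;
    different histories yield different meaning hypotheses."""
    hypothesis = set(context_clues)
    for clue in context_clues:
        hypothesis |= reader_beliefs.get(clue, set())
    return {word: hypothesis}

clues = {"worn by knights", "metal"}
reader_a = {"worn by knights": {"protective"}, "metal": {"heavy"}}
reader_b = {"worn by knights": {"ceremonial"}}

# Same text, different readers, different hypothesized meanings:
print(hypothesize_meaning("hauberk", clues, reader_a))
print(hypothesize_meaning("hauberk", clues, reader_b))
```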
Summary: No “Affective Differences”
• Processes controlling mental thought transitions are not fundamentally different from those controlling algorithms/procedures.
• Algorithms can take emotions/attitudes/histories into account.
• Both computers & minds can be affected by emotions/attitudes/histories.
The Matter of Simulation
ARGUMENT 4: Digital machines can nevertheless simulate thought processes and other forms of human behavior.
• Premise 1: Computer programmers and those who design the systems that they control can increase their performance capabilities, making them better and better simulations.
• Premise 2: Their performance capabilities may be closer and closer approximations to the performance capabilities of human beings without turning them into thinking things.
• Premise 3: Indeed, the static, dynamic, and affective differences that distinguish computer performance from human performance preclude them from being thinking things.
• Conclusion: Although the performance capabilities of digital machines can become better and better approximations of human behavior, they are still not thinking things.
(Fetzer’s Figure 15: The Matter of Simulation)
Argument from Simulation
• Agreed: a computer that “simulates” some process P is not necessarily “really” doing P
• But what is “really doing P” vs. “simulating P”?
• What is the “scope” of a simulation?
  • Computer simulations of hurricanes don’t get real people really wet
    • real people are outside the scope of the simulation
  • BUT: a computer simulation of a hurricane could get simulated people simulatedly wet
  • A computer simulation of the daily operations of a bank is not thereby the daily operations of a (real) bank
    • BUT: I can do my banking online
    • simulations can be used as if they were real
Argument from Simulation
• Some simulations of Xs are real Xs:
  • a scale model of a scale model of X is a scale model of X
  • Xeroxed/faxed/PDF copies of documents are those documents
• A computer that simulates an “informational process” is thereby actually doing that informational process
  • because a computer simulation of information is information…
Argument from Simulation
• A computer simulation of a picture is a picture
  • digital photography
• A computer simulation of language is language
  • computers really do parse sentences (Woods)
  • IBM’s Watson really answers questions
• A computer simulation of math is math
  • “A simulation of a computation and the computation itself are equivalent: try to simulate the addition of 2 and 3, and the result will be just as good as if you ‘actually’ carried out the addition—that is the nature of numbers” (Edelman)
• A computer simulation of reasoning is reasoning
  • automated theorem proving, computational logic, …
Argument from Simulation
• A computer simulation of cognition is cognition
  • “if the mind is a computational entity, a simulation of the relevant computations would constitute its fully functional replica” (Edelman)
  • cf. the “implementational implication”
Summary: Simulation Can Be(come) the Real Thing
• Close approximation to human thought processes can turn computers into thinking things
  • actually? only asymptotically? merely conventionally?
• Turing said…
“the use of words and general educated opinion will alter so much that one will be able to speak of machines thinking without expecting to be contradicted.” (Turing 1950)
• “general educated opinion” changes when we abstract & generalize
• “the use of words” changes when reference shifts from a word’s initial / narrow application to a more abstract / general phenomenon
  • cf. “fly”, “compute”, “algorithm”
  • ditto for “cognition” / “think”
Summary
• Computers are “semiotic (sign-using) systems”.
• Computationalism properly treated = cognition is computable…
  • …not necessarily computational.
  • Any non-computable residue will be negligible.
• Mental processes are describable (governable) by algorithmic procedures…
  • …that can be affected by emotions/attitudes/individual histories.
• Computers that implement these cognitive procedures really exhibit those cognitive behaviors.
• They really think. Computers can possess minds.
• “Syntactic semantics” explains how all this is possible.
Rapaport, William J. (2012), “Semiotic Systems, Computers, and the Mind: How Cognition Could Be Computing”, International Journal of Signs and Semiotic Systems 2(1) (January-June): 32–71.