Chinese Room

The Chinese Room argument is a thought experiment and associated arguments designed by John Searle (Searle 1980) to show that a symbol processing machine like a computer can never be properly described as having a "mind" or "understanding", regardless of how intelligently it may behave.


Searle asks his audience to imagine that many years from now, people have constructed a computer that behaves as if it understands Chinese. The computer takes Chinese characters as input and, following a program, produces other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion that proponents of artificial intelligence would like to draw is that the computer understands Chinese, just as the person does.


Now, Searle asks the audience to suppose that he is in a room in which he receives Chinese characters, consults a book containing an English version of the computer program, and processes the Chinese characters according to the instructions in the book. Searle notes that he does not understand a word of Chinese. He simply manipulates what to him are meaningless squiggles, using the book and whatever other equipment is provided in the room, such as paper, pencils, erasers, and filing cabinets. After manipulating the symbols, Searle will produce the answer in Chinese. Since the computer passes the Turing test, so does Searle running its program by hand: "Nobody just looking at my answers can tell that I don't speak a word of Chinese," Searle writes.[1]
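
To make the operator's situation concrete, here is a deliberately crude sketch (ours, not Searle's) of purely formal rule-following in Python. A program that could actually pass the Turing test would need far more than a lookup table, but the operator's epistemic position is the same: shapes in, shapes out, with the meanings never consulted.

    # Toy sketch of the room's procedure (illustrative only; not Searle's rule book).
    # The operator matches input shapes and copies out output shapes; nothing in
    # the procedure requires knowing what any string means.

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # hypothetical entries
        "你会说中文吗？": "当然会。",
    }

    def room(input_symbols: str) -> str:
        """Purely syntactic lookup: no translation or understanding occurs."""
        return RULE_BOOK.get(input_symbols, "请再说一遍。")  # default: ask to repeat

    print(room("你好吗？"))  # emits a fluent-looking reply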


Searle argues that his lack of understanding goes to show that computers do not understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is. They don't understand what they're "saying", just as he doesn't. Since they do not have conscious mental states like "understanding", they cannot properly be said to have minds.


History

Searle's argument originally appeared in his paper "Minds, Brains, and Programs", published in the journal Behavioral and Brain Sciences in 1980.[2] It eventually became the journal's "most influential target article",[3] and a considerable literature has grown up around it. Most of the discussion consists of attempts to interpret and refute it: as editor Stevan Harnad notes, "the overwhelming majority still think that the Chinese Room Argument is dead wrong."[4] Pat Hayes quipped that the field of cognitive science should be defined as "the ongoing research program of showing Searle's Chinese Room Argument to be false."[3]


Searle's targets: "strong AI" and computationalism

Although the Chinese Room argument was originally presented to refute the claims of artificial intelligence researchers, philosophers have come to see it as a part of the philosophy of mind—a challenge to functionalism and the computational theory of mind,[5] and related to such questions as the mind-body problem,[6] the problem of other minds,[7] the symbol grounding problem and the hard problem of consciousness.[8]


AI founder Herbert Simon announced in 1955 that "there are now in the world machines that think, that learn and create"[9] and claimed they had "solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind."[10] John Haugeland summarizes the philosophical position of early AI researchers as follows:

"AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves."[11]

Statements like these assume a philosophical position that Searle calls "strong AI":

  • "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[12]

Searle also ascribes these positions to proponents of strong AI:

  • AI systems can be used to explain the mind.[13]
  • The brain is irrelevant to understanding the mind.[14]
  • The Turing test is definitive.[15]

Stevan Harnad argues that these positions can be reformulated as "recognizable tenets of computationalism, a position (unlike 'strong AI') that is actually held by many thinkers, and hence one worth refuting."[16] He characterizes the key components of strong AI as "mental states are computational states" (which is why computers can have mental states, and why computers can help explain the mind), "computational states are implementation-independent" (which is how the brain is irrelevant), and, since the implementation is not important, the only empirical data that matters is how the system functions (which is why the Turing test is definitive). This last point is a version of functionalism.[17]


Searle's argument centers on the question of whether computers can be programmed to have mental states like understanding (that is, mental states with what philosophers call "intentionality") and thus have a "mind" in the same way people do. Although Searle only addresses "mind", "mental states", "intentionality" and "understanding", David Chalmers has argued that "it is fairly clear that consciousness is at the root of the matter".[18] Searle disagrees and maintains that intentionality is independent of consciousness.


Searle's argument does not limit how intelligent machines can behave. (Searle's "strong AI" should not be confused with strong AI, a term used by futurists to describe artificial intelligence that rivals human intelligence.) The Chinese room argument does not address this issue directly, and leaves open the possibility that a machine could be built that acts intelligently, but doesn't have a mind or intentionality in the same way brains do.[19] Since the primary mission of AI research is only to create useful systems that act intelligently, Searle's arguments are not considered an issue for AI research. As Stuart Russell and Peter Norvig write, "most AI researchers ... don't care about the strong AI hypothesis."[20]


Replies

The replies to Searle's argument can be classified by what they claim to show.[21]

  • Those that identify who it is who speaks Chinese.
  • Those that demonstrate how meaningless symbols can become meaningful.
  • Those that suggest that the Chinese room should be redesigned more along the lines of a brain.
  • Those that demonstrate ways that Searle's argument is misleading.

Some of the arguments (robot and brain simulation, for example) fall into multiple categories.


System and virtual mind replies: finding the mind

These two replies attempt to answer the question: since the man in the room doesn't speak Chinese, where is the "mind" that does?


These replies address the key ontological issues of mind vs. body and simulation vs. reality.


Systems reply.[22] The "systems reply" argues that it is the whole system that understands Chinese, consisting of the room, the book, the man, the paper, the pencil and the filing cabinets. While Searle can only understand English, the complete system can understand Chinese. The system doesn't understand English, just as Searle doesn't understand Chinese. The man is part of the system, just as the hippocampus is a part of the brain. The fact that the man understands nothing is irrelevant, and is no more surprising than the fact that the hippocampus understands nothing by itself.


Searle's response is to consider what happens if the man memorizes the rules and keeps track of everything in his head. Then the only component of the system is the man himself. Since the man still doesn't understand Chinese and since Searle believes that it is obvious that there is nothing else there, he concludes that nothing understands Chinese, and the fact that the man appears to understand Chinese proves nothing.[23] Since his critics insist that there is something else there, Searle accuses them of dualism, at least in the limited sense that the Chinese mind does not seem connected to the brain the same way a normal mind is.[24]


Virtual mind reply.[25] A more precise response is that there is a Chinese-speaking mind in Searle's room, but that it is virtual. A fundamental property of computing machinery is that one machine can "implement" another: any (Turing complete) computer can do a step-by-step simulation of any other machine.[26] In this way, a machine can be two machines at once: for example, it can be a Macintosh and a word processor at the same time. A virtual machine depends on the hardware (if you turn off the Macintosh, you turn off the word processor as well), yet is distinct from the hardware. (This is how the position resists dualism, the idea that the mind is a separate "substance": there can be two machines in the same place, both made of the same substance, if one of them is virtual.) A virtual machine is also "implementation independent" in that it doesn't matter what sort of hardware it runs on: a PC, a Macintosh, a supercomputer, a brain or Searle in his Chinese room.[27] Cole extends this argument to show that a program could be written that implements two minds at once, for example one speaking Chinese and the other Korean. While there is only one system and only one man in the room, there may be an unlimited number of "virtual minds."[28]
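
The implementation-independence point can be made concrete with a toy interpreter. The three-instruction language below is invented for illustration; the only point is that run() performs a step-by-step simulation whose result is the same whatever hardware (or person) executes it.

    # Sketch of one machine implementing another (illustrative; our invented
    # three-instruction language). The interpreted program is a virtual machine:
    # it depends on run() to execute, but its behavior is fixed by the program
    # alone, not by whatever hardware run() happens to be executing on.

    def run(program, memory):
        """Interpret a list of (op, arg) instructions against a memory dict."""
        pc = 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "inc":                 # increment a named register
                memory[arg] = memory.get(arg, 0) + 1
                pc += 1
            elif op == "jz":                # jump to a target if a register is zero
                reg, target = arg
                pc = target if memory.get(reg, 0) == 0 else pc + 1
            else:                           # "halt" (or any unknown op) stops execution
                break
        return memory

    # The same program gives the same result on a PC, a Mac, or worked by hand.
    print(run([("inc", "x"), ("inc", "x"), ("halt", None)], {}))  # {'x': 2}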


Searle would respond that such a mind is only a simulation. He writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."[29] Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't really a calculator', because the physical attributes of the device do not matter."[30] The question is, is the human mind like the pocket calculator, essentially composed of information? Or is it like the rainstorm, which can't be duplicated using digital information alone? (The issue of simulation is also discussed in the article synthetic intelligence.)


What they do and don't prove. These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle can't argue that (1) the man doesn't understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.[31]


However, the replies, by themselves, do not prove that strong AI is true, either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing Test. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese."[32] Without additional evidence, both Searle and his critics are left with the intuitions they had at the start: Searle can't imagine that a simulated mind can "understand" while his critics can.


Robot and semantics replies: finding the meaning

As far as the man in the room is concerned, the symbols he writes are just meaningless "squiggles." But if the Chinese room really "understands" what it's saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize.


These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.


Robot reply.[33] Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and the things they represent. Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[34]
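
The "causal connection" Moravec has in mind can be sketched as follows. The sensor function is a hypothetical stand-in for whatever perception system the robot has, and the grounding table is likewise invented for illustration.

    # Sketch of the robot reply: a symbol is emitted only because a sensor event
    # occurred, so each token is causally linked to the thing it stands for.
    # read_camera() is hypothetical, standing in for a real perception pipeline.

    def read_camera() -> list[str]:
        """Hypothetical sensor: returns labels for objects currently in view."""
        return ["dog"]  # stubbed detection for the sketch

    SYMBOL_FOR = {"dog": "狗", "tree": "树"}  # illustrative grounding table

    def perceive() -> list[str]:
        # Every emitted symbol traces back to a physical detection event.
        return [SYMBOL_FOR[label] for label in read_camera() if label in SYMBOL_FOR]

    print(perceive())  # ['狗']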


Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he was receiving came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes."[35] (See Mary's Room for a similar thought experiment.)


Derived meaning.[36] Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols he manipulates are already meaningful; they're just not meaningful to him.


Searle complains that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, according to Searle, has no understanding of its own.[37]


Commonsense knowledge / contextualist reply.[38] Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.


Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.[39]


What they do and don't prove. To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."[40]


However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle; what is important is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.


Brain simulation and connectionist replies: redesigning the room

These arguments are all versions of the systems reply that identify a particular kind of system as being important. They try to outline what kind of a system would be able to pass the Turing test and give rise to conscious awareness in a machine. (Note that the "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.)


Brain simulator reply.[41] Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.
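
As an illustration of what "simulating every neuron in fine detail" involves, here is a single update step of a leaky integrate-and-fire neuron, a textbook model chosen here for illustration; nothing in the reply depends on this particular model.

    # One step of a standard leaky integrate-and-fire neuron (illustrative model).
    # Potentials are in volts, time in seconds, current in amperes.

    def lif_step(v, input_current, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_threshold=-0.050, v_reset=-0.065, resistance=1e7):
        """Advance the membrane potential one time step; report any spike."""
        v = v + (-(v - v_rest) + resistance * input_current) / tau * dt
        if v >= v_threshold:
            return v_reset, True  # spike, then reset
        return v, False

    v = -0.065
    for step in range(100):
        v, fired = lif_step(v, input_current=2e-9)
        if fired:
            print(f"spike at step {step}")  # fires after roughly 28 ms
            break

The brain simulator reply imagines on the order of 10^11 of these update loops, wired together with realistic connectivity, running in place of the rule book.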


Searle replies that such a simulation will not have reproduced the important features of the brain — its causal and intentional states. Searle is adamant that "human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains."[42] His position, that (only) "brains cause minds", is called "biological naturalism" (as opposed to alternatives like behaviorism, functionalism, identity theory or dualism).[43]


Two variations on the brain simulator reply are:

Chinese nation.[44] What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.
Brain replacement scenario.[45] In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.[46]

Connectionist replies.[47] Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.


Combination reply.[48] This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body would surely be able to think.


What they do and don't prove. Arguments such as these (and the robot and commonsense knowledge replies above) recommend that Searle's room be redesigned. They can be interpreted in three ways:

  1. The room as Searle describes it can't pass the Turing test. However, if some improvements are made to the design of the room or the program, a room can be constructed that would both pass the test and have a "mind", "understanding" and "consciousness".[49]
  2. The room can pass the Turing test, but it would not have a mind. However (as with the first case) with some improvements, a room can be constructed that would.[49]
  3. The room does, in fact, have a mind, but it's difficult to see—Searle's description is correct, but misleading. Redesigning the room more realistically will make this more obvious.

Searle's replies all point out that, however the program is written or however it is connected to the world, it is still being simulated by a simple step-by-step Turing complete machine (or machines). Every one of these machines is still, at the ground level, just like Searle in the room: it understands nothing and doesn't speak Chinese.


Searle also argues that, if features like a robot body or a connectionist architecture are required, then strong AI (as he understands it) has been abandoned.[50] Either (1) Searle's room can't pass the Turing test, because formal symbol manipulation is not enough,[51] or (2) Searle's room could pass the Turing test, but the Turing test is not sufficient to determine if the room has a "mind." Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument. The brain arguments also suggest that computation can't provide an explanation of the human mind (another aspect of what Searle thinks of as "strong AI"). They assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."[52]


In the third case, these arguments are being used as "appeals to intuition" (which are discussed in more detail in the next section). By making the program more realistic, they help AI researchers to visualize how the program might work. Searle's intuition, however, is never shaken. He writes: "I can have any formal program you like, but I still understand nothing."[53]


In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's "blockhead" argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". Any program can be rewritten (or "refactored") into this form, even a brain simulation.[54] It is hard for most to imagine that such a program would give rise to a "mind" or have "understanding". In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of our conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims.
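
A minimal version of Block's construction (with placeholder states and strings of our own) makes the point vivid: everything the system "remembers" between turns is the single number X carried from one rule to the next.

    # Sketch of a "blockhead" lookup-table conversationalist. Each rule reads
    # "if the user writes S in state X, reply with P and go to state X'".
    # States and strings are invented placeholders, not Block's own example.

    TABLE = {
        (0, "hello"): ("hi there", 1),
        (1, "how are you?"): ("fine, thanks", 2),
        (2, "what did I just ask?"): ("how I was", 3),
    }

    def blockhead(state: int, user_input: str) -> tuple[str, int]:
        """Reply and advance; the entire 'mental state' is one integer."""
        return TABLE.get((state, user_input), ("pardon?", state))

    state = 0
    for line in ["hello", "how are you?", "what did I just ask?"]:
        reply, state = blockhead(state, line)
        print(reply)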


Speed, complexity and other minds: appeals to intuition

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies.


The central point of these replies is that Searle's description of the Chinese room is profoundly misleading. Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think."[55] Daniel Dennett describes the Chinese room argument as an "intuition pump"[56] and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."[57]


Speed and complexity replies.[58] The speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second.[59] Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
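
The arithmetic behind this reply is easy to check. The 10^11 figure is the article's estimate; the assumption that a person can execute one rule per second by hand is ours, purely for illustration.

    # Back-of-envelope for the speed reply (assumed hand rate: 1 rule/second).
    brain_ops_per_second = 1e11
    hand_ops_per_second = 1.0
    seconds_per_year = 3.15e7

    years_per_simulated_second = (brain_ops_per_second / hand_ops_per_second
                                  / seconds_per_year)
    print(f"{years_per_simulated_second:,.0f} years")  # about 3,175 years

On these assumptions, simulating one second of thought takes on the order of three thousand years, so a reply that takes a speaker a minute corresponds to hundreds of thousands of years of pencil work in the room.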


An especially vivid version of the speed and complexity reply is from Paul and Patricia Churchland. They propose this analogous thought experiment:

Churchland's luminous room.[60] Suppose a philosopher finds it inconceivable that light is caused by waves of electromagnetism. He could go into a dark room and wave a magnet up and down. He would see no light, of course, and he could claim that he had proved light is not a magnetic wave and that he has refuted Maxwell's equations. The problem is that he would have to wave the magnet up and down something like 450 trillion times a second in order to see anything.

Several of the replies above address the issue of complexity. The connectionist reply emphasizes that a working artificial system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge," as Daniel Dennett explains.[61]


Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"[62]


Other minds reply.[63] Searle's argument is just a version of the problem of other minds, applied to machines. Since it's difficult to decide if people are "actually" thinking, we shouldn't be surprised that it's difficult to answer the same question about machines.


The most radical view is that the Chinese room argument actually proves that humans don't have minds, at least not in the sense that Searle insists that we do. Searle argues that there are "causal properties" in our neurons that give rise to the mind. What if these properties don't exist? How could we tell? Perhaps each neuron in the brain is just like Searle, following his rules, utterly unable to give rise to what Searle calls "understanding." Searle's argument suggests that the human mind is epiphenomenal: that it "casts no shadow."[64] To make this point clear, Daniel Dennett suggests this version of the "other minds" reply:

Dennett's reply from natural selection.[65] Suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. (This sort of animal is called a "zombie" in thought experiments in the philosophy of mind.) This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. Therefore, if Searle is right, it's most likely that human beings (as we see them today) are actually "zombies," who nevertheless insist they are conscious. This suggests it's unlikely that Searle's "causal properties" would have ever evolved in the first place. Nature has no incentive to create them.


Formal arguments

In 1984 Searle produced a more formal version of the argument of which the Chinese Room forms a part. He listed four premises:

  1. Brains cause minds.
  2. Syntax is not sufficient for semantics.
  3. Computer programs are entirely defined by their formal, or syntactical, structure.
  4. Minds have mental contents; specifically, they have semantic contents.

The second premise is supposedly supported by the Chinese Room argument, since Searle holds that the room follows only formal syntactical rules, and does not “understand” Chinese. Searle posits that these lead directly to four conclusions:

  1. No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.
  2. The way that brain functions cause minds cannot be solely in virtue of running a computer program.
  3. Anything else that caused minds would have to have causal powers at least equivalent to those of the brain.
  4. The procedures of a computer program would not by themselves be sufficient to grant an artifact possession of mental states equivalent to those of a human; the artifact would require the capabilities and powers of a brain.

Searle describes this version as "excessively crude." There has been considerable debate about whether this argument is indeed valid. These discussions center on the various ways in which the premises can be parsed. One can read premise 3 as saying that computer programs have syntactic but not semantic content, and so premises 2, 3 and 4 validly lead to conclusion 1. This leads to debate as to the origin of the semantic content of a computer program.
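
On that reading, the derivation can be set out schematically (our notation, not Searle's):

    \begin{align*}
    &\text{(P3)} && \forall x\,\bigl(\mathrm{Program}(x) \rightarrow \mathrm{SyntaxOnly}(x)\bigr)\\
    &\text{(P2)} && \forall x\,\bigl(\mathrm{SyntaxOnly}(x) \rightarrow \neg\,\mathrm{Semantics}(x)\bigr)\\
    &\text{(P4)} && \forall x\,\bigl(\mathrm{Mind}(x) \rightarrow \mathrm{Semantics}(x)\bigr)\\
    &\text{(C1)} && \forall x\,\bigl(\mathrm{Program}(x) \rightarrow \neg\,\mathrm{Mind}(x)\bigr)
    \end{align*}

C1 follows by chaining P3 with P2 and taking the contrapositive of P4. The disputed step is whether P3 really licenses reading programs as having only syntactic properties, which is exactly the debate over the origin of a program's semantic content.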


Notes

  1. ^ Searle 1980, pp. 2-3
  2. ^ Searle 1980
  3. ^ a b (Harnad 2001, p. 1) Harnad edited the journal BBS during the years the Chinese Room argument was introduced.
  4. ^ Harnad 2001, p. 2
  5. ^ Harnad (2005) writes that Searle's argument is against the thesis that "has since come to be called "computationalism," according to which cognition is just computation, hence mental states are just computational states". Cole (2004) writes "the argument also has broad implications for functionalist and computational theories of meaning and of mind".
  6. ^ See the "Systems reply" below
  7. ^ See "Other minds reply" below.
  8. ^ The relationship between Searle's argument and consciousness is detailed in Chalmers 1996
  9. ^ Quoted in Russell & Norvig 2003, p. 21. Simon, along with Allen Newell and Cliff Shaw, had just completed the first true AI program, the Logic Theorist.
  10. ^ Quoted in Crevier 1993, p. 46 and Russell & Norvig 2003, p. 17.
  11. ^ Haugeland 1986, p. 2. (Italics his)
  12. ^ This version is from Searle (1998), also quoted in Dennett 1991, p. 435 and at AI Topics. His original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis.". An equivalent definition is given at Oxford University Press Dictionary of Psychology (quoted in "High Beam Encyclopedia")
  13. ^ For example, Searle writes "Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also (1) that the machine can literally be said to understand the story and provide the answers to questions, and (2) that what the machine and its program do explains the human ability to understand the story and answer questions about it." (Searle 1980, p. 2)
  14. ^ Searle writes, "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works." (Searle 1980, p. 8) The phrasing of this position is due to Harnad (2001).
  15. ^ Searle writes, "One of the points at issue is the adequacy of the Turing test." (Searle 1980, p. 6) The phrasing of this position is due to Harnad (2001).
  16. ^ (Harnad 2001, p. 3). Computationalism is associated with Jerry Fodor and Hilary Putnam. (Horst 2005, p. 1) Harnad also cites Allen Newell and Zenon Pylyshyn.
  17. ^ Harnad 2001, pp. 3-5
  18. ^ Chalmers 1996, p. 322, quoted in Larry Hauser's annotated bibliography
  19. ^ Cole (2004, p. 14) attributes to AI researchers Simon and Eisenstadt this view: "whereas Searle refutes "logical strong AI", the thesis that a program that passes the Turing Test will necessarily understand, Searle's argument does not impugn "Empirical Strong AI" — the thesis that it is possible to program a computer that convincingly satisfies ordinary criteria of understanding."
  20. ^ (Russell & Norvig 2003, p. 947)
  21. ^ Cole (2004, pp. 5-6). He combines the middle two categories.
  22. ^ Searle 1980, pp. 5-6, Cole 2004, pp. 6-7, Hauser 2006, pp. 2-3, Russell & Norvig 2003, p. 959, Dennett 1991, p. 439, Fearn 2007, p. 44, Crevier 1993, p. 269. Among those who hold to this position (according to Cole (2004, p. 6)) are Ned Block, Jack Copeland, Daniel Dennett, Jerry Fodor, John Haugeland, Ray Kurzweil and Georges Rey
  23. ^ Searle 1980, p. 6
  24. ^ Searle 1980, p. 13
  25. ^ Cole (2004, pp. 7-9) ascribes this position to Marvin Minsky, Tim Maudlin, David Chalmers, and David Cole.
  26. ^ This is the point of the universal Turing machine and the Church-Turing thesis: what makes a system Turing complete is its ability to do a step-by-step simulation of any other machine.
  27. ^ The terminology "implementation independent" is due to Harnad (2001, p. 4).
  28. ^ Cole 2004, p. 8
  29. ^ Searle 1980, p. 12
  30. ^ Fearn 2007, p. 47
  31. ^ Cole (2004, p. 21) writes "From the intuition that in the CR thought experiment he would not understand Chinese by running a program, Searle infers that there is no understanding created by running a program. Clearly, whether that inference is valid or not turns on a metaphysical question about the identity of persons and minds. If the person understanding is not identical with the room operator, then the inference is unsound."
  32. ^ Searle 1980, p. 6
  33. ^ Searle 1980, p. 7, Cole 2004, pp. 9-11, Hauser 2006, p. 3, Fearn 2007, p. 44. Cole (2004, p. 9) ascribes this position to Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan Harnad, Hans Moravec and Georges Rey
  34. ^ Quoted in Crevier 1993, p. 272. Cole (2004, p. 18) calls this the "externalist" account of meaning.
  35. ^ Searle 1980, p. 7
  36. ^ Hauser 2006, p. 11, Cole 2004, p. 19. This argument is supported by Daniel Dennett and others.
  37. ^ Searle distinguishes between "intrinsic" intentionality and "derived" intentionality. "Intrinsic" intentionality is the kind that involves "conscious understanding" like you would have in a human mind. Daniel Dennett doesn't agree that there is a distinction. Cole (2004, p. 19) writes "derived intentionality is all there is, according to Dennett."
  38. ^ Cole 2004, p. 18 (where he calls this the "internalist" approach to meaning.) Proponents of this position include Roger Schank, Doug Lenat, Marvin Minsky and (with reservations) Daniel Dennett, who writes "The fact is that any program [that passed a Turing test] would have to be an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge." (Dennett 1997, p. 438)
  39. ^ Dreyfus 1979. See "the epistemological assumption".
  40. ^ Searle 1984. He also writes "Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them" Searle 1989, p. 45 quoted in Cole 2004, p. 16.
  41. ^ Searle 1980, pp. 7-8, Cole 2004, pp. 12-13, Hauser 2006, pp. 3-4, Churchland & Churchland 1990. Cole (2004, p. 12) ascribes this position to Paul Churchland, Patricia Churchland and Ray Kurzweil.
  42. ^ Searle 1980, p. 13
  43. ^ Hauser 2006, p. 8
  44. ^ Cole 2004, p. 4, Hauser 2006, p. 11. Early versions of this argument were put forward in 1974 by Lawrence Davis and in 1978 by Ned Block. Block's version used walky talkies and was called the "Chinese Gym". Churchland & Churchland (1990) described this scenario as well.
  45. ^ Russell & Norvig 2003, pp. 956-8, Cole 2004, p. 20, Moravec 1988, p. ? CHECK, Kurzweil 2005, p. 262 CHECK, Crevier 1993, pp. 271 and 279 CHECK. An early version of this argument was put forward by Clark Glymour in the mid-70s and was touched on by Zenon Pylyshyn in 1980. Moravec (1988) presented a vivid version of it, and it is now associated with Ray Kurzweil's version of transhumanism.
  46. ^ Searle predicts that, while going through the brain prosthesis, "you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely out of your control, 'I see a red object in front of me.' ... [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same." Searle 1992 quoted in Russell & Norvig 2003, p. 957.
  47. ^ Cole (2004, pp. 12 & 17) ascribes this position to Andy Clark and Ray Kurzweil. Hauser (2006, p. 7) associates this position with Paul and Patricia Churchland.
  48. ^ Searle 1980, pp. 8-9, Hauser,
  49. ^ a b This is how Cole (2004, p. 6) characterizes some of these arguments.
  50. ^ Searle (1980, p. 7) writes that the robot reply "tacitly concedes that cognition is not solely a matter of formal symbol manipulation." Harnad (2001, p. 14) makes the same point, writing: "Now just as it is no refutation (but rather an affirmation) of the CRA to deny that [the Turing test] is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details do matter after all, and that the computer's is the 'right' kind of implementation, whereas Searle's is the 'wrong' kind."
  51. ^ Note that Searle-in-the-room is a Turing complete machine
  52. ^ Searle 1980, p. 8
  53. ^ Searle 1980, p. 3
  54. ^ That is, any program running on a machine with a finite amount of memory.
  55. ^ Quoted in Cole 2004, p. 13.
  56. ^ Dennett 1991, pp. 437 & 440
  57. ^ Dennett 1991, p. 438
  58. ^ Cole 2004, pp. 14-15, Crevier 1993, pp. 269-270, Pinker 1997, p. 95. Cole (2004, p. 14) ascribes this "speed" position to Daniel Dennett, Tim Maudlin, David Chalmers, Steven Pinker, Paul Churchland, Patricia Churchland and others. Dennett (1991, p. 438) points out the complexity of world knowledge.
  59. ^ Crevier 1993, p. 269
  60. ^ Churchland & Churchland 1990, Cole 2004, p. 12, Crevier 1993, p. 270, Fearn 2007, pp. 45-46, Pinker 1997, p. 94
  61. ^ (Dennett 1997, p. 438)
  62. ^ Harnad 2001, p. 7 and Tim Maudlin (Cole 2004, p. 14) both criticize these replies, which are versions of strong emergentism (what Daniel Dennett derides as "Woo woo West Coast emergence" (Crevier 1993, p. 275)). Harnad ascribes this view to Paul Churchland and Patricia Churchland. Kurzweil (2005) also makes this kind of argument.
  63. ^ Searle 1980, Cole 2004, p. 13, Hauser 2006, pp. 4-5. Turing (1950) makes this reply to what he calls "The Argument from Consciousness." Cole (2004, pp. 12-13) ascribes this position to Daniel Dennett, Ray Kurzweil and Hans Moravec.
  64. ^ Russell & Norvig 2003, p. 957
  65. ^ Cole 2004, p. 22, Crevier 1993, p. 271, Harnad 2004, p. 4

