
Searle argued that the Turing test is fallible, in that a machine without intelligence could pass such a test. He proposed the Chinese Room as a way of illustrating his view that a machine will never, as a matter of logic, come to possess a mind merely by running a program. Searle suggests that we envisage ourselves as a monolingual (speaking only one language) English speaker, locked inside a room with a large batch of Chinese writing together with a second batch of Chinese script. We are also presented with a set of rules in English which allow us to connect the initial set of writings with the second set of script.

The set of rules allows you to identify the first and second sets of symbols (the syntax) purely by their form. Furthermore, we are presented with a third set of Chinese symbols and additional English instructions which make it feasible for you to correlate particular items from the third batch with the preceding two. Searle suggests that your responses to these questions (as the people outside the room call the third batch) become so good that you are impossible to differentiate from a native Chinese speaker; yet you are merely behaving as a computer.

Searle argues that whilst in the room and delivering correct answers, he still understands nothing. He cannot speak Chinese, yet he is able to produce the correct answers without any understanding of the Chinese language. The machine is not producing intuitive thought; it is providing a programmed answer.

The Systems Reply argues that we are encouraged to focus on the wrong agent: the individual in the room. On this reply, the man in the room does not understand Chinese as a single entity, but the system in which he operates (the room as a whole) does. However, an evident objection to such a claim is that the system (the room) has no more way of connecting meaning to the Chinese symbols than the individual man did in the first instance. Even if the individual were to internalize (memorise) the entire set of instructions and be removed from the system (the room), how would the system compute the answers, if all the computational ability is within the man?

The Robot Reply asks us to further imagine the computer inside a robot, producing a representation of walking and perceiving; according to this reply (associated here with Harnad), the robot would have understanding and other mental states. Yet Searle's defence is that the system still simply follows a computational set of rules installed by the programmer and produces answers based upon those rules. There is no spontaneous thought or understanding of the Chinese symbols; the system merely matches input against what is already programmed into it.

The Brain Simulator Reply proposes that the computer (the man in the room) now simulates the neurons firing at the synapses of a native Chinese speaker. It is argued that we would then have to accept that the machine understood the stories; if we did not, we would have to assume that native Chinese speakers also did not understand the stories, since at a neuronal level there would be no difference. Searle nonetheless maintains his position. He asks us to imagine a man in the room using water pipes and valves to represent the biological process of neuronal firing at the synapse. The input (the English instructions) now informs the man which valves to turn on and off, and the system thus produces an answer (a particular configuration of flowing pipes at the end of the system).

Again, Searle argues that neither the man nor the pipes actually understand Chinese. Yes, they have an answer, and yes, the answer is undoubtedly correct, but the elements which produced the answer (the man and the pipes) still do not understand what the answer is; they have no semantic representation for the output. Here, the representation of the neurons is simply that: a representation, and one which is unable to account for the higher functioning processes of the brain and the semantic understanding therein.

According to Strong AI, these computers really play chess intelligently, make clever moves, or understand language. But weak AI makes no claim that computers actually understand or are intelligent. The Chinese Room argument is not directed at weak AI, nor does it purport to show that no machine can think—Searle says that brains are machines, and brains think.

The argument is directed at the view that formal computations on symbols can produce thought. We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows: (1) if Strong AI is true, then there is a program for Chinese such that any computing system that runs the program thereby comes to understand Chinese; (2) a person could run such a program for Chinese without thereby coming to understand Chinese; (3) therefore Strong AI is false. A computing system is any system, human or otherwise, that can run a program. The second premise is supported by the Chinese Room thought experiment. The conclusion of this narrow argument is that running a program cannot endow the system with language understanding.

Searle's wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation). That and related issues are discussed in the section The Larger Philosophical Issues. Criticisms of the narrow Chinese Room argument against Strong AI have often followed three main lines, which can be distinguished by how much they concede.

(1) Some critics object to the inference from the claim that the man in the room does not understand Chinese to the conclusion that no understanding has been created: there might be understanding by a larger, or different, entity. These replies hold that the output of the room reflects understanding of Chinese, but the understanding is not that of the room's operator. Thus Searle's claim that he doesn't understand Chinese while running the room is conceded, but his claim that there is no understanding, and that computationalism is false, is denied. (2) Other critics concede that the room as described does not understand Chinese, but hold that a variation on the computer system could understand. (3) Finally, some critics hold that the man in the original Chinese Room scenario might understand Chinese, despite Searle's denials, or that the scenario is impossible.

For example, critics have argued that our intuitions in such cases are unreliable. Others (e.g., Sprevak) object to the assumption that any system (e.g., Searle in the room) can run any computer program. A further objection is that we should be willing to attribute understanding in the Chinese Room on the basis of the overt behavior, just as we do with other humans (and some animals), and as we would do with extra-terrestrial aliens (or burning bushes or angels) that spoke our language.

In addition to these responses specifically to the Chinese Room scenario and the narrow argument to be discussed here, some critics also independently argue against Searle's larger claim, and hold that one can get semantics (that is, meaning) from syntactic symbol manipulation, including the sort that takes place inside a digital computer, a question discussed in the section below on Syntax and Semantics. In the original BBS article, Searle identified and discussed several responses to the argument that he had come across in giving the argument in talks at various places.

As a result, these early responses have received the most attention in subsequent discussion. The Systems Reply, which Searle says was originally associated with Yale, concedes that the man in the room does not understand Chinese. But, the reply continues, the man is but a part, a central processing unit (CPU), in a larger system. The larger system includes the huge database, the memory (scratchpads) containing intermediate states, and the instructions—the complete system that is required for answering the Chinese questions.

So the Systems Reply is that while the man running the program does not understand Chinese, the system as a whole does. Rey says the person in the room is just the CPU of the system. Kurzweil says that the human being is just an implementer and of no significance (presumably meaning that the properties of the implementer are not necessarily those of the system). Margaret Boden raises levels considerations: she points out that the room operator is a conscious agent, while the CPU in a computer is not—the Chinese Room scenario asks us to take the perspective of the implementer, and not surprisingly fails to see the larger picture.

Searle's response to the Systems Reply is simple: in principle, the man can internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. The man would now be the entire system, yet he still would not understand Chinese.

For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax. See below the section on Syntax and Semantics. But there is no entailment from this to the claim that the simulation as a whole does not come to understand Chinese. Copeland denies that connectionism implies that a room of people can simulate the brain. According to Haugeland, his failure to understand Chinese is irrelevant: he is just the implementer. The larger system implemented would understand—there is a level-of-description fallacy. Shaffer examines modal aspects of the logic of the CRA and argues that familiar versions of the System Reply are question-begging.

But, Shaffer claims, a modalized version of the Systems Reply succeeds, because there are possible worlds in which understanding is an emergent property of complex syntax manipulation. Nute is a reply to Shaffer. Stevan Harnad has defended Searle's argument against Systems Reply critics in two papers. The underlying question is whether a computer can understand: more specifically, if a computer program simulates or imitates activities of ours that seem to require understanding (such as communicating in language), can the program itself be said to understand in so doing?

The Virtual Mind Reply concedes, as does the Systems Reply, that the operator of the Chinese Room does not understand Chinese merely by running the paper machine. However, the Virtual Mind Reply holds that what is important is whether understanding is created, not whether the room operator is the agent that understands. Unlike the Systems Reply, the Virtual Mind Reply (VMR) holds that a running system may create new, virtual, entities that are distinct from both the system as a whole and from sub-systems such as the CPU or operator.

In particular, a running system might create a distinct agent that understands Chinese. This virtual agent would be distinct from both the room operator and the entire system. The psychological traits, including linguistic abilities, of any mind created by artificial intelligence will depend entirely upon the program and the Chinese database, and will not be identical with the psychological traits and abilities of a CPU or the operator of a paper machine, such as Searle in the Chinese Room scenario.

Familiar models of virtual agents are characters in computer or video games, and personal digital assistants such as Apple's Siri and Microsoft's Cortana. These characters have various abilities and personalities, and they are not identical with the system hardware or program that creates them. A single running system might control two distinct agents, or physical robots, simultaneously, one of which converses only in Chinese and one of which converses only in English, and which otherwise manifest very different personalities, memories, and cognitive abilities.

Thus the VM reply asks us to distinguish between minds and their realizing systems.


Minsky, and Sloman and Croucher, suggested a Virtual Mind reply when the Chinese Room argument first appeared. Tim Maudlin's discussion revolves around his imaginary Olympia machine, a system of buckets that transfers water, implementing a Turing machine. Maudlin's main target is the computationalists' claim that such a machine could have phenomenal consciousness. However, in the course of his discussion, Maudlin considers the Chinese Room argument. Maudlin (citing Minsky, and Sloman and Croucher) points out a Virtual Mind reply: the agent that understands could be distinct from the physical system. Penrose is a critic of this strategy, and Stevan Harnad scornfully dismisses such heroic resorts to metaphysics.

Perlis pressed a virtual minds argument derived, he says, from Maudlin. Cole develops the reply and argues as follows: Searle's argument requires that the agent of understanding be the computer itself or, in the Chinese Room parallel, the person in the room. However, Searle's failure to understand Chinese in the room does not show that there is no understanding being created. Searle is not the author of the answers, and his beliefs and desires, memories and personality traits (apart, perhaps, from his industriousness!) are not reflected in the answers.

Hence if there is understanding of Chinese created by running the program, the mind understanding the Chinese would not be the computer, nor, in the Chinese Room, would the person understanding Chinese be the room operator. The person understanding the Chinese would be a distinct person from the room operator, with beliefs and desires bestowed by the program and its database. Hence Searle's failure to understand Chinese while operating the room does not show that understanding is not being created.

Cole offers an additional argument that the mind doing the understanding is neither the mind of the room operator nor the system consisting of the operator and the program: running a suitably structured computer program might produce answers to questions submitted in Chinese and also answers to questions submitted in Korean. Yet the Chinese answers might apparently display completely different knowledge and memories, beliefs and desires than the answers to the Korean questions—along with a denial that the Chinese answerer knows any Korean, and vice versa.

Thus the behavioral evidence would be that there were two non-identical minds (one understanding Chinese only, and one understanding Korean only). Since these might have mutually exclusive psychological properties, they cannot be identical, and ipso facto, cannot be identical with the mind of the implementer in the room. Analogously, a video game might include a character with one set of cognitive abilities (smart, understands Chinese) as well as another character with an incompatible set (stupid, English monoglot). These inconsistent cognitive traits cannot be traits of the XBOX system that realizes them.

The implication seems to be that minds generally are more abstract than the systems that realize them (see Mind and Body in the Larger Philosophical Issues section). In short, the Virtual Mind argument is that since the evidence Searle provides that there is no understanding of Chinese is simply that he wouldn't understand Chinese in the room, the Chinese Room argument cannot refute a differently formulated, equally strong AI claim asserting the possibility of creating understanding using a programmed digital computer.

Maudlin says that Searle has not adequately responded to this criticism. Penrose is generally sympathetic to the points Searle raises with the Chinese Room argument, and has argued against the Virtual Mind reply. Christian Kaernbach reports that he subjected the virtual mind theory to an empirical test, with negative results.


The Robot Reply concedes Searle is right about the Chinese Room scenario: it shows that a computer trapped in a computer room cannot understand language, or know what words mean. The Robot reply is responsive to the problem of knowing the meaning of the Chinese word for hamburger—Searle's example of something the room operator would not know. It seems reasonable to hold that we know what a hamburger is because we have seen one, and perhaps even made one, or tasted one, or at least heard people talk about hamburgers and understood what they are by relating them to things we do know by seeing, making, and tasting.

Given this is how one might come to know what hamburgers are, the Robot Reply suggests that we put a digital computer in a robot body, with sensors, such as video cameras and microphones, and add effectors, such as wheels to move around with, and arms with which to manipulate things in the world. Such a robot—a computer with a body—could do what a child does, learn by seeing and doing. The Robot Reply holds that such a digital computer in a robot body, freed from the room, could attach meanings to symbols and actually understand natural language.

Proponents of the Robot Reply can agree with Searle that syntax and internal connections alone are insufficient for semantics, while holding that suitable causal connections with the world can provide content to the internal symbols. Searle does not think this reply to the Chinese Room argument is any stronger than the Systems Reply: all the sensors do is provide additional input to the computer—and it will be just syntactic input. We can see this by making a parallel change to the Chinese Room scenario.

Suppose the man in the Chinese Room receives, in addition to the Chinese characters slipped under the door, a stream of binary digits that appear, say, on a ticker tape in a corner of the room. The instruction books are augmented to use the numerals from the tape as input, along with the Chinese characters. Unbeknownst to the man in the room, the symbols on the tape are the digitized output of a video camera and possibly other sensors. Searle argues that additional syntactic inputs will do nothing to allow the man to associate meanings with the Chinese characters.

It is just more work for the man in the room. Jerry Fodor, Hilary Putnam, and David Lewis were principal architects of the computational theory of mind that Searle's wider argument attacks. On Fodor's view, a computer might have propositional attitudes if it has the right causal connections to the world—but those are not connections mediated by a man sitting in the head of the robot. We don't know what the right causal connections are. Since this time, Fodor has written extensively on what the connections must be between a brain state and the world for the state to have intentional (representational) properties, while most recently emphasizing that computationalism has limits because the computations are intrinsically local and so cannot account for abductive reasoning.

He claims that precisely because the man in the Chinese room sets out to implement the steps in the computer program, he is not implementing the steps in the computer program. He offers no argument for this extraordinary claim. In a paper, Georges Rey advocated a combination of the system and robot reply, after noting that the original Turing Test is insufficient as a test of intelligence and understanding, and that the isolated system Searle describes in the room is certainly not functionally equivalent to a real Chinese speaker sensing and acting in the world.

Nor is it committed to a conversation-manual model of understanding natural language. Rather, CRTT is concerned with intentionality, natural and artificial (the representations in the system are semantically evaluable—they are true or false, hence have aboutness). To explain the behavior of such a system we would need to use the same attributions needed to explain the behavior of a normal Chinese speaker.

Rapaport presses an analogy between Helen Keller and the Chinese Room. But Searle's assumption, none the less, seems to me correct … the proper response to Searle's argument is: sure, Searle-in-the-room, or the room alone, cannot understand Chinese. But of course, this concedes that thinking cannot be simply symbol manipulation. Margaret Boden also argues that Searle mistakenly supposes programs are pure syntax. Where does the capacity to comprehend Chinese begin and the rest of our mental competence leave off?

And he thinks this counts against symbolic accounts of mentality, such as Jerry Fodor's, and, one suspects, the approach of Roger Schank that was Searle's original target. Harnad (Other Internet Resources) argues that the CRA shows that even with a robot with symbols grounded in the external world, there is still something missing: feeling, such as the feeling of understanding. Consider a computer that operates in quite a different manner than the usual AI program with scripts and operations on sentence-like strings of symbols. The Brain Simulator Reply asks us to suppose instead that the program simulates the actual sequence of nerve firings that occur in the brain of a native Chinese language speaker when that person understands Chinese—every nerve, every firing.

Since the computer then works the very same way as the brain of a native Chinese speaker, processing information in just the same way, it will understand Chinese. Paul and Patricia Churchland have set out a reply along these lines, discussed below. In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker's brain.

The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese. Note, however, that the basis for this claim is no longer simply that Searle himself wouldn't understand Chinese—it seems clear that now he is just facilitating the causal operation of the system, and so we rely on our Leibnizian intuition that water-works don't understand (see also Maudlin). Searle concludes that a simulation of brain activity is not the real thing.

However, following Pylyshyn, Cole and Foelber, and Chalmers, we might wonder about hybrid systems. These cyborgization thought experiments can be linked to the Chinese Room. Suppose Otto has a neural disease that causes one of the neurons in his brain to fail, but surgeons install a tiny remotely controlled artificial neuron, a synron, alongside his disabled neuron.

Tiny wires connect the artificial neuron to the synapses on the cell-body of his disabled neuron. When his artificial neuron is stimulated by neurons that synapse on his disabled neuron, a light goes on in the Chinese Room. Searle then manipulates some valves and switches in accord with a program. That, via the radio link, causes Otto's artificial neuron to release neuro-transmitters from its tiny artificial vesicles.

If Searle's programmed activity causes Otto's artificial neuron to behave just as his disabled natural neuron once did, the behavior of the rest of his nervous system will be unchanged. Alas, Otto's disease progresses; more neurons are replaced by synrons controlled by Searle. Ex hypothesi, the rest of the world will not notice the difference; will Otto? Since the normal input to the brain is from sense organs, it is natural to suppose that most advocates of the Brain Simulator Reply have in mind a combination of brain simulation, the Robot Reply, and the Systems Reply.

Some (e.g., Rey) argue it is reasonable to attribute intentionality to such a system as a whole. Searle agrees that it would be reasonable to attribute understanding to such an android system—but only as long as you don't know how it works. As soon as you know the truth—that it is a computer, uncomprehendingly manipulating symbols on the basis of syntax, not meaning—you would cease to attribute intentionality to it.

One assumes this would be true even if it were one's spouse, with whom one had built a life-long relationship, that was revealed to hide a silicon secret. Science fiction stories, including episodes of Rod Serling's television series The Twilight Zone , have been based on such possibilities; Steven Pinker mentions one episode in which the android's secret was known from the start, but the protagonist developed a romantic relationship with the android. On its tenth anniversary the Chinese Room argument was featured in the general science periodical Scientific American.

Leading the opposition to Searle's lead article in that issue were philosophers Paul and Patricia Churchland. The Churchlands agree with Searle that the Chinese Room does not understand Chinese, but hold that the argument itself exploits our ignorance of cognitive and semantic phenomena. The Churchlands advocate a view of the brain as a connectionist system, a vector transformer, not a system manipulating symbols according to structure-sensitive rules.

The system in the Chinese Room uses the wrong computational strategies. Andy Clark takes up these issues in his book Microcognition. Searle thinks his argument would apply to any computational model, while Clark, like the Churchlands, holds that Searle is wrong about connectionist models. Clark's interest is thus in the brain-simulator reply. The brain thinks in virtue of its physical properties; what physical properties of the brain are important? On Clark's view, that question does not show computationalism or functionalism to be false: it depends on what level you take the functional units to be.

Clark cites William Lycan approvingly contra Block's absent qualia objection—yes, there can be absent qualia, if the functional units are made large. But that does not constitute a refutation of functionalism generally. So Clark's views are not unlike the Churchlands', conceding that Searle is right about Schank and symbolic-level processing systems, but holding that he is mistaken about connectionist systems.

Similarly, Ray Kurzweil argues that Searle's argument could be turned around to show that human brains cannot understand—the brain succeeds by manipulating neurotransmitter concentrations and other mechanisms that are in themselves meaningless. Certainly, on Kurzweil's view, it would be correct to say that such a system (one that converses convincingly in Chinese) knows Chinese, and we can't say that it is not conscious any more than we can say that about any other process. We can't know the subjective experience of another entity. How do we know that other people understand? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.

Critics hold that if the evidence we have that humans understand is the same as the evidence we might have that a visiting extra-terrestrial alien understands, which is the same as the evidence that a robot understands, then the presuppositions we may make in the case of our own species are not relevant, for presuppositions are sometimes false. For similar reasons, Turing, in proposing the Turing Test, was specifically worried about our presuppositions and chauvinism. If the reasons for the presuppositions regarding humans are pragmatic, in that they enable us to predict the behavior of humans and to interact effectively with them, perhaps the presupposition could apply equally to computers (similar considerations are pressed by Dennett in his discussions of what he calls the Intentional Stance).

Searle raises the question of just what we are attributing in attributing understanding to other minds, saying that it is more than complex behavioral dispositions. For Searle the additional factor seems to be certain states of consciousness, as is seen in his summary of the CRA conclusions. We attribute limited understanding of language to toddlers, dogs, and other animals, but it is not clear that we are ipso facto attributing unseen states of subjective consciousness—what do we know of the hidden states of exotic creatures?

Ludwig Wittgenstein (in the Private Language Argument) and his followers pressed similar points. The underlying problem of epiphenomenality is one familiar from inverted spectrum problems—it is difficult to see what subjective consciousness adds if it is not itself functionally important.

In the 30 years since the CRA there has been philosophical interest in zombies — creatures that look like and behave just as normal humans, including linguistic behavior, yet have no subjective consciousness. A difficulty for claiming that subjective states of consciousness are crucial for understanding meaning will arise in these cases of absent qualia: we can't tell the difference between zombies and non-zombies, and so on Searle's account we can't tell the difference between those that really understand English and those that don't.

But then there appears to be a distinction without a difference. In any case, Searle's short reply to the Other Minds Reply may be too short. Descartes argued famously that speech was sufficient for attributing minds and consciousness to others, and argued infamously that it was necessary.

Turing was in effect endorsing Descartes' sufficiency condition, at least for intelligence, while substituting written for oral linguistic behavior. Since most of us use dialog as a sufficient condition for attributing understanding, Searle's argument raises the question of whether speech really is a sufficient condition for attributing understanding, even to humans in all states (sleep-walking? stroke?).

Further, if being con-specific is key on Searle's account, a natural question arises as to what circumstances would justify us in attributing understanding or consciousness to extra-terrestrial aliens who do not share our biology. Offending ETs by withholding attributions of understanding until after a post-mortem may be risky. Hans Moravec, director of the Robotics laboratory at Carnegie Mellon University and author of Robot: Mere Machine to Transcendent Mind, argues that Searle's position merely reflects intuitions from traditional philosophy of mind that are out of step with the new cognitive science.

Moravec endorses a version of the Other Minds reply. Moravec goes on to note that one of the things we attribute to others is the ability to make attributions of intentionality, and then we make such attributions to ourselves. It is such self-representation that is at the heart of consciousness. These capacities appear to be implementation independent, and hence possible for aliens and suitably programmed computers.

Presumably the reason that Searle thinks we can disregard the evidence in the case of robots and computers is that we know that their processing is syntactic, and this fact trumps all other considerations. Indeed, Searle believes this is the larger point that the Chinese Room merely illustrates. This larger point is addressed in the Syntax and Semantics section below. Similarly Margaret Boden points out that we can't trust our untutored intuitions about how mind depends on matter; developments in science may change our intuitions.

Indeed, elimination of bias in our intuitions was what motivated Turing to propose the Turing Test, a test that was blind to the physical character of the system replying to questions. Some of Searle's critics in effect argue that he has merely pushed the reliance on intuition back, into the room.

Critics argue that our intuitions regarding both intelligence and understanding may be unreliable, and perhaps incompatible even with current science. Pinker ends his discussion by citing a science fiction story in which Aliens, anatomically quite unlike humans, cannot believe that humans think when they discover that our heads are filled with meat. The Aliens' intuitions are unreliable—presumably ours may be so as well. Other critics are also concerned with how our understanding of understanding bears on the Chinese Room argument. They cite W. Quine's Word and Object as showing that there is always empirical uncertainty in attributing understanding to humans.

On this objection, the Chinese Room is a Clever Hans trick (Clever Hans was a horse who appeared to clomp out the answers to simple arithmetic questions, but it was discovered that Hans could detect unconscious cues from his trainer). Similarly, the man in the room doesn't understand Chinese, and could be exposed by watching him closely. Simon and Eisenstadt do not explain just how this would be done, or how it would affect the argument. Paul Thagard proposes that for every thought experiment in philosophy there is an equal and opposite thought experiment. Thagard holds that intuitions are unreliable, that the CRA is an example, and that in fact the CRA has now been refuted by the technology of autonomous robotic cars.

Dennett has elaborated on concerns about our intuitions regarding intelligence. The operator of the Chinese Room may eventually produce appropriate answers to Chinese questions. But slow thinkers are stupid, not intelligent—and in the wild, they may well end up dead.

One significant black art of machine-learning methods is feature engineering; the other is choosing the right parameters. These black arts require significant human expertise and experience, which can be quite difficult to obtain without significant apprenticeship (Domingos). Another, bigger issue is that the task of feature engineering is just knowledge representation in a new skin.

Such methods are based mostly on what are now termed deep neural networks, which are simply neural networks with two or more hidden layers. The general form of learning in which one learns from the raw sensory data without much hand-based feature engineering now has its own term: deep learning; a general yet concise definition is offered by Bengio et al. Though the idea has been around for decades, recent innovations leading to more efficient learning techniques have made the approach more feasible (Bengio et al.).
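
To make "two or more hidden layers" concrete, here is a minimal sketch in Python/NumPy (the layer sizes and the input are invented purely for illustration; this is not any particular published architecture) of a feedforward network with two hidden layers operating on raw input:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Invented sizes: 784 raw inputs (e.g., pixel values), two hidden layers, 10 output classes.
sizes = [784, 128, 64, 10]
weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass: two ReLU hidden layers, then a softmax over the output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                  # hidden layers transform the raw input
    logits = h @ weights[-1] + biases[-1]
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                   # class probabilities

x = rng.random(784)                          # a stand-in for raw sensory data
print(forward(x).round(3))                   # 10 probabilities summing to 1
```

In a real system the weights would be fitted to data by gradient-based training rather than left at their random initial values; the sketch only shows the layered architecture the text describes.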

Deep-learning methods have recently produced state-of-the-art results in image recognition (given an image containing various objects, label the objects from a given set of labels), speech recognition (from audio input, generate a textual representation), and the analysis of data from particle accelerators (LeCun et al.). Despite impressive results in tasks such as these, minor and major issues remain unresolved. A minor issue is that significant human expertise is still needed to choose an architecture and set up the right parameters for the architecture; a major issue is the existence of so-called adversarial inputs, which are indistinguishable from normal inputs to humans but are computed in a special manner that makes a neural network regard them as different from similar inputs in the training data.
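
The flavor of an adversarial input can be shown even for a toy linear classifier (all numbers below are invented; real adversarial examples are computed against trained deep networks, as in the work of Szegedy et al.): a tiny, per-feature nudge in the direction the classifier is most sensitive to flips the predicted label while leaving the input essentially unchanged to a human eye.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "classifier": score = w . x, predicted class = sign(score).
w = rng.normal(size=1000)
x = rng.normal(size=1000)
x = x - ((w @ x) / (w @ w) + 1e-4) * w      # place x just on the negative side of the boundary

def classify(v):
    return 1 if w @ v > 0 else -1

eps = 0.01                                   # imperceptibly small per-feature change
x_adv = x + eps * np.sign(w)                 # nudge every feature slightly "uphill"

print(classify(x), classify(x_adv))          # -1 then 1: the predicted label flips
print(float(np.max(np.abs(x_adv - x))))      # 0.01: no single feature changed noticeably
```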

The existence of such adversarial inputs, which remain stable across training data, has raised doubts about how well performance on benchmarks can translate into performance in real-world systems with sensory noise (Szegedy et al.).

Interestingly enough, it is Eugene Charniak himself who can be safely considered one of the leading proponents of an explicit, premeditated turn away from logic to statistical techniques.

His area of specialization is natural language processing, and whereas his introductory textbook gave an accurate sense of his approach to parsing at the time (as we have seen, write computer programs that, given English text as input, ultimately infer meaning expressed in FOL), this approach was later abandoned in favor of purely statistical approaches (Charniak).

Just as in the case of FOL, in probability theory we are concerned with declarative statements, or propositions, to which degrees of belief are applied; we can thus say that both logicist and probabilistic approaches are symbolic in nature.

Both approaches also agree that statements can either be true or false in the world. In building agents, a simplistic logic-based approach requires agents to know the truth-value of all possible statements; a probabilistic approach, by contrast, lets the agent assign degrees of belief. More specifically, the fundamental notion in probability theory is the random variable, which can be conceived of as an aspect of the world whose status is initially unknown to the agent.

For example, in a particular murder investigation centered on whether or not Mr. Black committed the murder, the detective may be interested as well in whether or not the murder weapon—a particular knife, let us assume—belongs to Black. We say that an atomic event is an assignment of particular values from the appropriate domains to all the variables composing the (idealized) world. Note that atomic events have some obvious properties. For example, they are mutually exclusive, exhaustive, and logically entail the truth or falsity of every proposition. Usually not obvious to beginning students is a fourth property, namely, that any proposition is logically equivalent to the disjunction of all atomic events that entail that proposition.
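
A minimal sketch of these definitions in Python, using two invented Boolean random variables for the murder example (whether Black committed the murder, and whether the knife belongs to Black); it simply enumerates the atomic events and checks the properties just listed:

```python
from itertools import product

variables = ["BlackCommittedMurder", "KnifeBelongsToBlack"]

# An atomic event assigns a value from its domain to every variable.
atomic_events = [dict(zip(variables, values))
                 for values in product([True, False], repeat=len(variables))]

print(len(atomic_events))   # 4: jointly exhaustive, and no two can hold at once

# Any proposition is logically equivalent to the disjunction of the atomic events entailing it.
proposition = lambda e: e["BlackCommittedMurder"] or e["KnifeBelongsToBlack"]
print([e for e in atomic_events if proposition(e)])   # the three entailing atomic events
```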

Prior probabilities correspond to a degree of belief accorded to a proposition in the complete absence of any other information. It is often convenient to have a notation allowing one to refer economically to the probabilities of all the possible values for a random variable. The full joint probability distribution covers the distribution for all the random variables used to describe a world. The final piece of the basic language of probability theory corresponds to conditional probabilities.

Andrei Kolmogorov showed how to construct probability theory from three axioms that make use of the machinery now introduced, viz., propositions and prior probabilities: every probability lies between 0 and 1; necessarily true propositions have probability 1 and necessarily false ones probability 0; and the probability of a disjunction is given by P(a or b) = P(a) + P(b) - P(a and b). These axioms are clearly at bottom logicist. The remainder of probability theory can be erected from this foundation; conditional probabilities, for example, are easily defined in terms of prior probabilities, as sketched below. We can thus say that logic is in some fundamental sense still being used to characterize the set of beliefs that a rational agent can have. But where does probabilistic inference enter the picture on this account, since traditional deduction is not used for inference in probability theory?
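
As a sketch of that definition (standard notation, not quoted from the text), the conditional probability of a proposition a given a proposition b, and the product rule that follows from it, can be written as:

\[
P(a \mid b) = \frac{P(a \wedge b)}{P(b)} \quad (P(b) > 0), \qquad P(a \wedge b) = P(a \mid b)\,P(b).
\]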

Probabilistic inference consists in computing, from observed evidence expressed in terms of probability theory, posterior probabilities of propositions of interest. For a good long while there have been algorithms for carrying out such computation; these algorithms precede the more recent resurgence of probabilistic techniques in AI. Chapter 13 of AIMA presents a number of them. Since the probability of a proposition is simply the sum of the probabilities of the atomic events that entail it, posterior probabilities can in principle be computed directly from the full joint distribution. Unfortunately, there were two serious problems infecting this original probabilistic approach: one, the processing in question needed to take place over paralyzingly large amounts of information (enumeration over the entire distribution is required).
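
Here is a minimal sketch of inference by enumeration in Python, over an invented full joint distribution for the two murder-example variables (the probabilities are made up purely for illustration): the posterior probability of guilt, given the evidence that the knife belongs to Black, is obtained by summing atomic-event probabilities.

```python
from itertools import product

variables = ["BlackCommittedMurder", "KnifeBelongsToBlack"]

# Invented full joint distribution: one probability per atomic event (they sum to 1).
joint = {
    (True,  True):  0.20,
    (True,  False): 0.05,
    (False, True):  0.15,
    (False, False): 0.60,
}

def prob(proposition):
    """P(proposition) = sum of the probabilities of the atomic events that entail it."""
    return sum(p for event, p in joint.items()
               if proposition(dict(zip(variables, event))))

evidence = lambda e: e["KnifeBelongsToBlack"]
guilt_and_evidence = lambda e: e["BlackCommittedMurder"] and e["KnifeBelongsToBlack"]

posterior = prob(guilt_and_evidence) / prob(evidence)
print(round(posterior, 3))   # 0.20 / 0.35 = 0.571
```

The first drawback noted above is already visible here: the joint table needs one entry per atomic event, so it grows exponentially with the number of variables.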

And two, the expressivity of the approach was merely propositional. (It was, by the way, the philosopher Hilary Putnam who pointed out that there was a price to pay in moving to the first-order level; the issue is not discussed herein.) Everything changed with the advent of a new formalism that marks the marriage of probabilism and graph theory: Bayesian networks (also called belief nets).
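
A minimal sketch of the idea behind Bayesian networks (a two-node network for the same invented murder example; the conditional probability tables are made up): each variable stores a distribution conditioned only on its parents, and the full joint distribution is recovered as the product of those local tables rather than stored whole.

```python
# Tiny Bayesian network:  BlackCommittedMurder  --->  KnifeBelongsToBlack
# All numbers are invented, purely for illustration.

p_murder = {True: 0.25, False: 0.75}                  # P(BlackCommittedMurder)

p_knife_given_murder = {                              # P(KnifeBelongsToBlack | BlackCommittedMurder)
    True:  {True: 0.80, False: 0.20},
    False: {True: 0.20, False: 0.80},
}

def joint(murder, knife):
    """Chain rule for the network: P(m, k) = P(m) * P(k | m)."""
    return p_murder[murder] * p_knife_given_murder[murder][knife]

# The same posterior as before, but the joint is now factored into local tables.
evidence = sum(joint(m, True) for m in (True, False))
print(round(joint(True, True) / evidence, 3))         # 0.571: P(murder | knife belongs to Black)
```

With many variables, the savings come from each variable conditioning only on its parents instead of on everything else.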

The pivotal text was due to Pearl. Before concluding this section, it is probably worth noting that, from the standpoint of philosophy, a situation such as the murder investigation we have exploited above would often be analyzed into arguments and strength factors, not into numbers to be crunched by purely arithmetical procedures. For example, in the epistemology of Roderick Chisholm, as presented in his Theory of Knowledge, Detective Holmes might classify a proposition like "Black committed the murder" in terms of its epistemic standing rather than assign it a numerical probability. Argument-based approaches to uncertain and defeasible reasoning are virtually non-existent in AI.

Such an argument-based approach would be Chisholmian in nature. It should also be noted that there have been well-established formalisms for dealing with probabilistic reasoning as an instance of logic-based reasoning. Formalisms marrying probability theory, induction, and deductive reasoning, placing them on an equal footing, have been on the rise, with Markov logic (Richardson and Domingos) being salient among these approaches.

Machine learning, in the sense given above, has been associated with probabilistic techniques. Probabilistic techniques have been associated with both the learning of functions (e.g., Naive Bayes classification) and the modeling of theoretical properties of learning algorithms.
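
As a minimal sketch of the first use (a Naive Bayes classifier on a tiny invented data set; the features and labels are made up): the learner estimates a prior over labels and per-label feature frequencies, and classifies by choosing the label with the highest resulting posterior score under the naive assumption that features are independent given the label.

```python
from collections import Counter, defaultdict

# Tiny invented training set: (features, label).
data = [
    ({"knife": True,  "alibi": False}, "guilty"),
    ({"knife": True,  "alibi": True},  "guilty"),
    ({"knife": False, "alibi": True},  "innocent"),
    ({"knife": False, "alibi": False}, "innocent"),
    ({"knife": False, "alibi": True},  "innocent"),
]

label_counts = Counter(label for _, label in data)
feature_counts = defaultdict(Counter)              # feature_counts[label][(feature, value)]
for features, label in data:
    for f, v in features.items():
        feature_counts[label][(f, v)] += 1

def scores(features):
    """Unnormalised P(label | features) under the naive independence assumption."""
    out = {}
    for label, n in label_counts.items():
        s = n / len(data)                          # prior P(label)
        for f, v in features.items():              # Laplace-smoothed likelihoods P(f=v | label)
            s *= (feature_counts[label][(f, v)] + 1) / (n + 2)
        out[label] = s
    return out

print(scores({"knife": True, "alibi": False}))     # "guilty" receives the higher score
```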

For example, a standard reformulation of supervised learning casts it as a Bayesian problem: we seek the hypothesis h that is most probable given the observed data d, and Bayes' theorem gives us P(h | d) = P(d | h) P(h) / P(d).

From at least its modern inception, AI has always been connected to gadgets, often ones produced by corporations, and it would be remiss of us not to say a few words about this phenomenon. While there have been a large number of commercial in-the-wild success stories for AI and its sister fields, such as optimization and decision-making, some applications are more visible and have been thoroughly battle-tested in the wild.

One of the most visible such domains (one in which AI has been strikingly successful) is information retrieval, incarnated as web search. Another recent success story is applied pattern recognition. Several corporations and research laboratories have begun testing autonomous vehicles on public roads, with even a handful of jurisdictions making self-driving cars legal to operate.

Computer games provide a robust test bed for AI techniques, as they can capture important parts that might be necessary to test an AI technique while abstracting or removing details that might be beyond the scope of core AI research, for example, designing better hardware or dealing with legal issues (Laird and VanLent). One subclass of games that has proven quite fruitful for commercial deployment of AI is real-time strategy games. Real-time strategy games are games in which players manage an army given limited resources. Real-time strategy games differ from turn-based strategy games in that players plan their actions simultaneously in real time and do not have to take turns playing.

Such games have a number of challenges that are tantalizingly within the grasp of the state-of-the-art. This makes such games an attractive venue in which to deploy simple AI agents. An overview of AI used in real-time strategy games can be found in Robertson and Watson. Some other ventures in AI, despite significant success, have been only chugging slowly and humbly along, quietly. For instance, AI-related methods have achieved triumphs in solving open problems in mathematics that have resisted any solution for decades.

Other related areas, such as natural language translation, still have a long way to go, but are good enough to let us use them under restricted conditions. Rule-based and statistical methods now have comparable but limited success in the wild. A deployed translation system at Ford, initially developed for translating manufacturing process instructions from English to other languages, started out as a rule-based system with Ford- and domain-specific vocabulary and language.

This system then evolved to incorporate statistical techniques along with rule-based techniques as it gained new uses beyond translating manuals, for example, lay users within Ford translating their own documents (Rychtyckyj and Plesco). The lack of success in the unrestricted general case has caused a small set of researchers to break away into what is now called artificial general intelligence (Goertzel and Pennachin). The stated goals of this movement include shifting the focus again to building artifacts that are generally intelligent and not just capable in one narrow domain.

Computer Ethics has been around for a long time. If one were to attempt to engineer a robot with a capacity for sophisticated ethical reasoning and decision-making, one would also be doing Philosophical AI, as that concept is characterized elsewhere in the present entry. There can be many different flavors of approaches toward Moral AI. Wallach and Allen provide a high-level overview of the different approaches. Moral reasoning is obviously needed in robots that have the capability for lethal action.

Arkin provides an introduction to how we can control and regulate machines that have the capacity for lethal behavior. Moral AI goes beyond obviously lethal situations, and we can have a spectrum of moral machines. Moor provides one such spectrum of possible moral agents. An example of a non-lethal but ethically charged machine would be a lying machine.

Clark uses a computational theory of mind (the ability to represent and reason about other agents) to build a lying machine that successfully persuades people into believing falsehoods. The most general framework for building machines that can reason ethically consists in endowing the machines with a moral code. This requires that the formal framework used for reasoning by the machine be expressive enough to receive such codes. The field of Moral AI, for now, is not concerned with the source or provenance of such codes.


The source could be humans, and the machine could receive the code directly (via explicit encoding) or indirectly (by reading). Another possibility is that the code is inferred by the machine from a more basic set of laws. We assume that the robot has access to some such code, and we then try to engineer the robot to follow that code under all circumstances while making sure that the moral code and its representation do not lead to unintended consequences.

Deontic logics are a class of formal logics that have been studied the most for this purpose. Abstractly, such logics are concerned mainly with what follows from a given moral code. Engineering then studies the match of a given deontic logic to a moral code (Bringsjord et al.). Deontic logic-based frameworks can also be used in a fashion that is analogous to moral self-reflection. Govindarajulu and Bringsjord present an approach, drawing from formal program verification, in which a deontic logic-based system could be used to verify that a robot acts in a certain ethically sanctioned manner under certain conditions.
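
The following is only a toy sketch of the general idea, not the formalism of Govindarajulu and Bringsjord and far weaker than a real deontic logic: a machine-readable code of prohibitions and conditional obligations (every action and condition name is invented), plus a check that a finite plan of actions satisfies the code.

```python
# Toy moral code; all names here are invented for illustration.
forbidden = {"deceive_patient", "use_lethal_force"}
obligatory_when = {
    "patient_in_danger": "alert_physician",   # if the condition holds, the action must appear
}

def plan_permissible(plan, facts):
    """A plan is permissible if it contains no forbidden action and
    discharges every obligation whose triggering condition is among the facts."""
    if any(action in forbidden for action in plan):
        return False
    return all(required in plan
               for condition, required in obligatory_when.items()
               if condition in facts)

print(plan_permissible(["triage", "alert_physician"], facts={"patient_in_danger"}))  # True
print(plan_permissible(["triage", "deceive_patient"], facts=set()))                  # False
```

Note that this toy check covers only a single finite plan; the appeal of the verification-based approach described next is precisely that it covers infinitely many situations.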

Since formal-verification approaches can be used to assert statements about an infinite number of situations and conditions, such approaches might be preferred to having the robot roam around in an ethically charged test environment and make a finite set of decisions that are then judged for their ethical correctness. More recently, Govindarajulu and Bringsjord use a deontic logic to present a computational model of the Doctrine of Double Effect, an ethical principle for moral dilemmas that has been studied empirically and analyzed extensively by philosophers.

While there has been substantial theoretical and philosophical work, the field of machine ethics is still in its infancy. There has been some embryonic work in building ethical machines. One recent example is Pereira and Saptawijaya, who use logic programming and base their work in machine ethics on the ethical theory known as contractualism, set out by Scanlon. And what about the future? Since artificial agents are bound to get smarter and smarter, and to have more and more autonomy and responsibility, robot ethics is almost certainly going to grow in importance.

This endeavor might not be a straightforward application of classical ethics. For example, experimental results suggest that humans hold robots to different ethical standards than they expect from humans under similar conditions (Malle et al.).

Turning to the philosophy of AI proper: for now it can be identified with the attempt to answer such questions as whether artificial agents created in AI can ever reach the full heights of human intelligence.

For example, one could engage, using the tools and techniques of philosophy, a paradox, work out a proposed solution, and then proceed to a step that is surely optional for philosophers: expressing the solution in terms that can be translated into a computer program that, when executed, allows an artificial agent to surmount concrete instances of the original paradox.

Daniel Dennett has famously claimed not just that there are parts of AI intimately bound up with philosophy, but that AI is philosophy (and psychology, at least of the cognitive sort). He has made a parallel claim about Artificial Life (Dennett). In short, Dennett holds that AI is the attempt to explain intelligence, not by studying the brain in the hopes of identifying components to which cognition can be reduced, and not by engineering small information-processing units from which one can build in bottom-up fashion to high-level cognitive processes, but rather by—and this is why he says the approach is top-down—designing and implementing abstract algorithms that capture cognition.

Dennett sees the potential flaw. Unfortunately, this is acutely problematic; and examination of the problems throws light on the nature of AI. So there is a philosophical claim, for sure. Philosophy of physics certainly entertains the proposition that the physical universe can be perfectly modeled in digital terms (in a series of cellular automata, e.g.). Information processing that exceeds what a Turing machine can do is known as hypercomputation, a term coined by philosopher Jack Copeland, who has himself defined such machines. The first machines capable of hypercomputation were trial-and-error machines, introduced in the same famous issue of the Journal of Symbolic Logic (Gold; Putnam). The Church-Turing thesis, however, has nothing to say about information processing that is more demanding than what a Turing machine can achieve.

Put another way, there is no counter-example to CTT to be automatically found in an information-processing device capable of feats beyond the reach of TMs. For all philosophy and psychology know, intelligence, even if tied to information processing, exceeds what is Turing-computational or Turing-mechanical. Therefore, contra Dennett, to consider AI as psychology or philosophy is to commit a serious error, precisely because so doing would box these fields into only a speck of the entire space of functions from the natural numbers (including tuples therefrom) to the natural numbers.

Only a tiny portion of the functions in this space are Turing-computable. AI is without question much, much narrower than this pair of fields. One could, of course, define a new field devoted to building information-processing artifacts beyond the Turing limit, but this new field, by definition, would not be AI. Our exploration of AIMA and other textbooks provides direct empirical confirmation of this. The best way to demonstrate this is to simply present such research and development, or at least a representative example thereof.

Given that the work in question has appeared in the pages of Artificial Intelligence, a first-rank journal devoted to that field and not to philosophy, this is undeniable. Many such papers do exist. But we must distinguish between writings designed to present the nature of AI, and its core methods and goals, versus writings designed to present progress on specific technical issues.

Writings in the latter category are more often than not quite narrow, but, as the example of Pollock shows, sometimes these specific issues are inextricably linked to philosophy. For example, for an entire book written within the confines of AI and computer science, but which is epistemic logic in action in many ways, suitable for use in seminars on that topic, see Fagin et al. What of writings in the former category? Writings in this category, while by definition in AI venues, not philosophy ones, are nonetheless philosophical.

Most textbooks include plenty of material that falls into this latter category, and hence they include discussion of the philosophical nature of AI. Recall that we earlier discussed proposed definitions of AI, and recall specifically that these proposals were couched in terms of the goals of the field. In TTT (the Total Turing Test), a machine must muster more than linguistic indistinguishability: it must pass for a human in all behaviors—throwing a baseball, eating, teaching a class, etc.

After all, what philosophical reason stands in the way of AI producing artifacts that appear to be animals or even humans? CRA is based on a thought-experiment in which Searle himself stars. The Chinese speakers send cards into the room through a slot; on these cards are written questions in Chinese.

Now, what is the argument based on this thought-experiment? And where does CRA stand today? Among AI practitioners, the philosopher who has offered the most formidable response out of AI itself is Rapaport, who argues that while AI systems are indeed syntactic, the right syntax can constitute semantics. Readers may wonder if there are philosophical debates that AI researchers engage in, in the course of working in their field (as opposed to when they might attend a philosophy conference). Surely, AI researchers have philosophical discussions amongst themselves, right?

Generally, one finds that AI researchers do discuss among themselves topics in philosophy of AI, and these topics are usually the very same ones that occupy philosophers of AI. However, the attitude reflected in the quote from Pollock immediately above is by far the dominant one. That is, in general, the attitude of AI researchers is that philosophizing is sometimes fun, but the upward march of AI engineering cannot be stopped, will not fail, and will eventually render such philosophizing otiose.

We will return to the issue of the future of AI in the final section of this entry. Four decades ago, J. R. Lucas argued that Gödel's incompleteness results show that the human mind cannot be mechanized. His argument has not proved to be compelling, but Lucas initiated a debate that has produced more formidable arguments. Readers will be given a decent sense of the argument by turning to an online paper in which Penrose, writing in response to critics, gives his version of it. Does this argument succeed? A firm answer to this question is not appropriate to seek in the present entry. Interested readers are encouraged to consult four full-scale treatments of the argument (Hayes et al.).

The genesis of the Dreyfusian attack was a belief that the critique of (if you will) symbol-based philosophy should apply as well to symbol-based AI.

Because machines, inevitably, will get smarter and smarter (regardless of just how smart they get), Philosophy of AI, pure and simple, is a growth industry.

Arguably, in the case of AI, we may also specifically know today that progress will be much slower than what most expect. As it turned out, the new century would arrive without a single machine able to converse at even the toddler level. Recall that when it comes to the building of machines capable of displaying human-level intelligence, Descartes, not Turing, seems today to be the better prophet. Nonetheless, astonishing though it may be, serious thinkers in the late 20th century have continued to issue incredibly optimistic predictions regarding the progress of AI.

These robots, so the story goes, will evolve to such lofty cognitive heights that we will stand to them as single-cell organisms stand to us today.


Moravec is by no means singularly Pollyannaish: many others in AI predict the same sensational future unfolding on about the same rapid schedule. McCarthy and Minsky gave firm, unhesitating affirmatives, and Solomonoff seemed to suggest that AI provided the one ray of hope in the face of the fact that our species seems bent on destroying itself.

Moore returned a firm, unambiguous negative, and declared that once his computer is smart enough to interact with him conversationally about mathematical problems, he might take this whole enterprise more seriously. It is left to the reader to judge the accuracy of such risky predictions as have been given by Moravec, McCarthy, and Minsky. For extensive, balanced analysis of S (the hypothesized technological "Singularity"), see Eden et al. Readers unfamiliar with the literature on S may be quite surprised to learn the degree to which, among learned folks, this hypothetical event is not only taken seriously, but has in fact become a target for extensive and frequent philosophizing [for a mordant tour of the recent thought in question, see Floridi].

What arguments support the belief that S is in our future? There are two main arguments at this point: the familiar hardware-based one [championed by Moravec, as noted above, and again more recently by Kurzweil]; and the—as far as we know—original argument given by mathematician I. J. Good. In addition, there is a recent and related doomsayer argument advanced by Bostrom, which seems to presuppose that S will occur.

The key process is presumably the creation of one class of machine by another. The argument certainly appears to be formally valid. Are its three premises true? Taking up such a question would fling us far beyond the scope of this entry. We point out only that the concept of one class of machines creating another, more powerful class of machines is not a transparent one, and neither Good nor Chalmers provides a rigorous account of the concept, which is ripe for philosophical analysis.
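For readers who want the shape of this reasoning in front of them, here is a rough sketch along the lines of Chalmers's well-known reconstruction; it is a paraphrase offered for orientation, not the verbatim formulation.

Premise 1. There will (absent defeaters) be AI, that is, machines of roughly human-level intelligence.
Premise 2. If there is AI, there will soon after (absent defeaters) be AI+, machines more intelligent than any human, since whatever engineering produced AI can be extended, by humans or by the machines themselves, to improve it.
Premise 3. If there is AI+, there will soon after (absent defeaters) be AI++, machines of vastly greater-than-human intelligence, since each new class of machines is better placed than its predecessor to design its successor.
Conclusion. Therefore, absent defeaters, there will be AI++.

The philosophical work, of course, lies in cashing out the defeaters and in making precise the notion, flagged above, of one class of machines creating a more powerful class.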

As to mathematical analysis, some exists, of course.


Many others gladly fill this gap with dark, dark pessimism. Chief among them is Bill Joy, who writes:

The 21st-century technologies — genetics, nanotechnology, and robotics (GNR) — are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups.

They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them. Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication. I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

Philosophers would be most interested in arguments for this view. Well, no small reason for the attention lavished on his paper is that, like Raymond Kurzweil, Joy relies heavily on an argument given by none other than the Unabomber, Theodore Kaczynski. The idea is that, assuming we succeed in building intelligent machines, we will have them do most, if not all, work for us. If we further allow the machines to make decisions for us (even if we retain oversight over the machines), we will eventually depend on them to the point where we must simply accept their decisions.

Having said that, the pattern of reasoning pushed by the Unabomber and his supporters certainly appears to be flatly invalid. So then, what about the reasoning of professional philosophers on the matter? Bostrom has recently painted an exceedingly dark picture of a possible future, and here, perhaps, the Good-Chalmers argument provides a basis. Searle, for his part, dismisses such worries on the ground that machines of this kind would lack any genuine understanding. The positively remarkable thing here, it seems to us, is that Searle appears to be unaware of the brute fact that most AI engineers are perfectly content to build machines on the basis of the AIMA view of AI we presented and explained above: the view according to which machines simply map percepts to actions.
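To make the AIMA picture concrete, here is a minimal, purely illustrative sketch of an agent as nothing more than a mapping from percepts to actions. The class name, the rule table, and the toy vacuum world below are hypothetical conveniences introduced for this entry, not drawn from any particular codebase.

class SimpleReflexAgent:
    """An agent that maps the current percept to an action via condition-action rules."""

    def __init__(self, rules, default_action="wait"):
        # rules: a dict mapping a hashable percept to an action label
        self.rules = rules
        self.default_action = default_action

    def act(self, percept):
        # Nothing resembling understanding is claimed here: the agent simply
        # looks up an action for the percept it has just received.
        return self.rules.get(percept, self.default_action)

# A toy two-square vacuum world, in the spirit of AIMA's opening chapters.
vacuum_agent = SimpleReflexAgent({
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
})

print(vacuum_agent.act(("A", "dirty")))   # -> suck
print(vacuum_agent.act(("B", "clean")))   # -> move_left

However sophisticated the hand-coded or learned mapping becomes, the engineering stance stays the same: specify the percepts, specify the actions, and build the function connecting them, without taking any stand on whether understanding is present.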

If an AI can play the game of chess and the game of Jeopardy!, it is reasonable to expect it to take on many other tasks as well, and there are some things we can safely say about tomorrow. Certainly, barring some cataclysmic event (nuclear or biological warfare, global economic depression, a meteorite smashing into Earth, etc.), machine intelligence will continue its advance. Since even some natural animals (mules, e.g.) can be put to work on jobs that humans would rather not do, it is safe to say that suitably programmed machines will be as well. In fact, many jobs currently done by humans will certainly be done by appropriately programmed artificial animals. Examples of such jobs include cleaners, mail carriers, clerical workers, military scouts, surgeons, and pilots. As to cleaners, probably a significant number of readers, at this very moment, have robots from iRobot cleaning the carpets in their homes.

It is hard to see how such jobs are inseparably bound up with the attributes often taken to be at the core of personhood — attributes that would be the most difficult for AI to replicate. Andy Clark has another prediction: Humans will gradually become, at least to an appreciable degree, cyborgs, courtesy of artificial limbs and sense organs, and implants. The main driver of this trend will be that while standalone AIs are often desirable, they are hard to engineer when the desired level of intelligence is high.

Another related prediction is that AI will play the role of a cognitive prosthesis for humans (Ford et al.). Even if the argument just discussed is formally invalid, it leaves us with a question, the cornerstone question about AI and the future: Will AI produce artificial creatures that replicate and exceed human cognition, as Kurzweil and Joy believe? Or is this merely an interesting supposition?

This is a question not just for scientists and engineers; it is also a question for philosophers. This is so for two reasons. One, research and development designed to validate an affirmative answer must include philosophy — for reasons rooted in earlier parts of the present entry. Two, philosophers might well be able to provide arguments that answer the cornerstone question now, definitively.

No doubt the future holds not only ever-smarter machines, but new arguments pro and con on the question of whether this progress can reach the human level that Descartes declared to be unreachable.

Thanks are due as well to the many first-rate human minds who have read earlier drafts of this entry and provided helpful feedback. We are also very grateful to the anonymous referees who provided us with meticulous reviews in our most recent reviewing round. Special acknowledgements are due to the SEP editors and, in particular, to Uri Nodelman for patiently working with us throughout and for providing technical and insightful editorial help.

Energy supplied by the dream of engineering a computer that can pass TT (the Turing Test), or by controversy surrounding claims that it has already been passed, is if anything stronger than ever, and the reader has only to do an internet search on the string "turing test passed" to find up-to-the-minute attempts at reaching this dream, and attempts, sometimes made by philosophers, to debunk claims that some such attempt has succeeded.

The passage in which Descartes gives his two tests for distinguishing machines from true men runs as follows:

The first is, that they could never use speech or other signs as we do when placing our thoughts on record for the benefit of others. ... But it never happens that it [such a machine] arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do. And the second difference is, that although machines can perform certain things as well as or perhaps better than any of us can do, they infallibly fall short in others, by which means we may discover that they did not act from knowledge, but only from the disposition of their organs.

For while reason is a universal instrument which can serve for all contingencies, these organs have need of some special adaptation for every particular action. From this it follows that it is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act. (Descartes, Discourse on the Method)

During the Jeopardy! contest, Watson had to answer questions that required not only command of simple factoids (Question 1), but also some amount of rudimentary reasoning, in the form of temporal and commonsense reasoning (Question 2). Question 1: The only two consecutive U. ...

The four-fold classification of approaches to AI can be summarized as follows.
Reasoning-Based: systems that think like humans; systems that think rationally.
Behavior-Based: systems that act like humans; systems that act rationally.