Artificial intelligence
Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a reigning world champion.

The modern definition of artificial intelligence (or AI) is "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[1] John McCarthy, who coined the term in 1956,[2] defines it as "the science and engineering of making intelligent machines."[3] Other names for the field have been proposed, such as computational intelligence,[4] synthetic intelligence[5] or computational rationality.[6]


The term artificial intelligence is also used to describe a property of machines or programs: the intelligence that the system demonstrates. Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[7] General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of AI research.[8]


AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization and logic.[9] AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.[10]


Perspectives on AI

The rise and fall of AI in public perception

Main articles: History of artificial intelligence and Timeline of artificial intelligence
See also: AI Winter

The notion of artificial intelligence dates back to classical antiquity, but it was not until the advent of the modern programmable digital computer that scientists began to seriously consider information processing as the key to building intelligent machines. The field was born at a conference on the campus of Dartmouth College in the summer of 1956.[11] Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing:[12] computers were solving word problems in algebra, proving logical theorems and speaking English.[13] By the middle 60s their research was heavily funded by DARPA,[14] and they were optimistic about the future of the new field:

  • 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do"[15]
  • 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[16]

These predictions, and many like them, would not come true. The researchers had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, DARPA cut off all undirected, exploratory research in AI. This was the first AI Winter.[17]


In the early 80s, the field was revived by the commercial success of expert systems and by 1985 the market for AI had reached more than a billion dollars.[18] Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow.[19] Minsky was right. Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.[20]


In the 90s AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.[21] The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[22]


The philosophy of AI

Main articles: Philosophy of artificial intelligence and Ethics of artificial intelligence

The strong AI vs. weak AI debate ("can a man-made artifact be conscious?") is still a hot topic among AI philosophers. It involves the philosophy of mind and the mind-body problem. Most notably, Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. In the opinion of many strong AI supporters, artificial consciousness is the holy grail of artificial intelligence. Edsger Dijkstra famously opined that the debate had little importance: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."


Epistemology, the study of knowledge, also makes contact with AI, as engineers find themselves debating similar questions to philosophers about how best to represent and use knowledge and information (e.g., semantic networks).


AI in myth and fiction

Main article: Artificial intelligence in fiction

Beings created by man have existed in mythology long before their currently imagined embodiment in electronics (and, to a lesser extent, biochemistry). Notable examples include the Golem and Frankenstein. These myths, and our modern science fiction stories, enable us to imagine that the fundamental problems of perception, knowledge representation, common sense reasoning, and learning have been solved, and let us consider the technology's impact on society. With artificial intelligence's theorized potential equal to or greater than our own, the imagined impact ranges from service (R2D2), cooperation (Lt. Commander Data), and human enhancement (Ghost in the Shell) to our domination (With Folded Hands) or extermination (the Terminator series, The Matrix series, the re-imagined Battlestar Galactica). Given the negative consequences, ranging from fear of losing one's job to an AI, to the clouding of our self-image, to the extreme of an AI apocalypse, it is not surprising that the Frankenstein complex is a common reaction. We demonstrate the same fear subconsciously in the Uncanny Valley hypothesis. See AI and Society in fiction for more.



With the capabilities of a human, a sentient AI can play any of the roles normally ascribed to humans in a narrative, such as protagonist (Bicentennial Man), antagonist (the Terminator, HAL 9000), faithful companion (R2D2) or comic relief (C3PO). See Sentient AI in fiction for more.



While most portrayals of AI in science fiction deal with sentient AIs, many imagined futures incorporate AI subsystems in their vision, such as self-navigating cars and speech recognition systems. See non-sentient AI in fiction for more.



The inevitability of the integration of AI into human society is also argued by some science and futurist writers, such as Kevin Warwick and Hans Moravec, and in the manga Ghost in the Shell.


The future of AI

Main article: Strong AI

Strong AI is a term used by futurists, science fiction writers and forward-looking researchers to describe artificial intelligence that matches or exceeds human intelligence. (For the strong AI hypothesis, see philosophy of artificial intelligence.)

AI research

Problems of AI

While there is no universally accepted definition of intelligence,[23] AI researchers have studied several traits that are considered essential.[7]


Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the process of conscious, step-by-step reasoning that human beings use when they solve puzzles, play board games, or make logical deductions.[24] These early methods often couldn't be applied to real world situations because they were unable to handle incomplete or imprecise information. By the late 80s and 90s, however, AI research had developed highly successful methods for dealing with uncertainty, employing concepts from probability and economics.[25]
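As a minimal illustration of this probabilistic turn, the sketch below applies Bayes' rule to a single noisy observation. The prior, the sensor model and the variable names are invented for the example and are not drawn from any particular AI system.

# Toy illustration of reasoning under uncertainty: a single Bayesian update.
# The prior and sensor probabilities below are made-up numbers for the example.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Return P(hypothesis | observation) via Bayes' rule."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1.0 - prior)
    return p_obs_given_h * prior / evidence

p_rain = 0.3            # prior belief that it is raining
p_wet_if_rain = 0.9     # sensor model: P(ground looks wet | rain)
p_wet_if_dry = 0.2      # false-positive rate: P(ground looks wet | no rain)
print(posterior(p_rain, p_wet_if_rain, p_wet_if_dry))   # about 0.66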


For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.[26]
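The scale of the problem can be made concrete with a short, purely illustrative calculation: the sketch below counts the positions a naive exhaustive search would examine in a game tree with a fixed branching factor (35 is a commonly quoted rough figure for chess), showing how the work grows exponentially with depth.

# Illustrative only: brute-force search work grows as branching_factor ** depth.

def nodes_in_full_tree(branching_factor, depth):
    """Total number of nodes in a complete game tree of the given depth."""
    return sum(branching_factor ** d for d in range(depth + 1))

for depth in (2, 4, 8, 16):
    # With a chess-like branching factor of roughly 35, the count quickly
    # becomes astronomical: the "combinatorial explosion" described above.
    print(depth, nodes_in_full_tree(35, depth))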


It is not clear, however, that conscious human reasoning is any more efficient when faced with a difficult abstract problem. Cognitive scientists have demonstrated that human beings solve most of their problems using unconscious reasoning, rather than the conscious, step-by-step deduction that early AI research was able to model.[27] For many problems, people seem to simply jump to the correct solution: they think "instinctively" and "unconsciously". These instincts seem to involve skills usually applied to other problems, such as motion and manipulation (our so-called "embodied" skills that allow us to deal with the physical world) or perception (for example, our skill at pattern matching). It is hoped that sub-symbolic methods, like computational intelligence and situated AI, will be able to model these instinctive skills. The problem of unconscious problem solving, which forms part of our commonsense reasoning, is largely unsolved.


Knowledge representation

Main articles: knowledge representation and commonsense knowledge

Another important measure of intelligence is how much an agent knows. Many of the problems machines are expected to solve will require extensive knowledge about the world. Knowledge representation[28] and knowledge engineering[29] are central to AI research. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[30] situations, events, states and time;[31] causes and effects;[32] knowledge about knowledge (what we know about what other people know);[33] and many other, less well researched domains. A complete representation of "what exists" is an ontology[34] (borrowing a word from traditional philosophy). Ontological engineering is the science of finding a general representation that can handle all of human knowledge.
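As a deliberately minimal illustration of representing objects, categories and relations, the sketch below stores knowledge as subject-relation-object triples and answers a query by following "is a" links, in the spirit of a semantic network. The facts and vocabulary are invented for the example; real knowledge bases are vastly larger and richer.

# A toy semantic network: knowledge stored as (subject, relation, object) triples.
# The facts below are invented for illustration.

facts = {
    ("Tweety", "is_a", "canary"),
    ("canary", "is_a", "bird"),
    ("bird", "has_property", "can_fly"),
}

def categories_of(entity):
    """Follow 'is_a' links transitively to collect the categories of an entity."""
    found, frontier = set(), {entity}
    while frontier:
        current = frontier.pop()
        for subject, relation, obj in facts:
            if subject == current and relation == "is_a" and obj not in found:
                found.add(obj)
                frontier.add(obj)
    return found

def has_property(entity, prop):
    """An entity inherits the properties of every category it belongs to."""
    holders = {entity} | categories_of(entity)
    return any((h, "has_property", prop) in facts for h in holders)

print(has_property("Tweety", "can_fly"))   # True, though not every bird flies
# (which is exactly where the qualification problem below begins)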


Among the most difficult problems in knowledge representation are:

  • Default reasoning and the qualification problem: Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about birds in general. John McCarthy identified this problem in 1969[35] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[36]
  • Unconscious knowledge: Much of what people know isn't represented as "facts" or "statements" that they could actually say out loud. It takes the form of intuitions or tendencies and is represented in the brain unconsciously and sub-symbolically. This unconscious knowledge informs, supports and provides a context for our conscious knowledge. As with the related problem of unconscious reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.
  • The breadth of common sense knowledge: The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge, such as Cyc, require enormous amounts of tedious step-by-step ontological engineering — they must be built, by hand, one complicated concept at a time.[37]


Planning

Intelligent agents must be able to set goals and achieve them.[38] They need a way to visualize the future: they must have a representation of the state of the world and be able to make predictions about how their actions will change it. There are several types of planning problems:

  • Classical planning problems assume that the agent is the only thing acting on the world, and that the agent can be certain what the consequences of its actions may be.[39] (A minimal planner sketch for this setting follows the list.) Partial order planning problems take into account the fact that sometimes it is not important which sub-goal the agent achieves first.[40]
  • If the environment is changing, or if the agent can't be sure of the results of its actions, it must periodically check if the world matches its predictions (conditional planning and execution monitoring) and it must change its plan as this becomes necessary (replanning and continuous planning).[41]
  • Some planning problems take into account the utility or "usefulness" of a given outcome. These problems can be analyzed using tools drawn from economics, such as decision theory or decision analysis[42] and information value theory.[43]
  • Multi-agent planning problems try to determine the best plan for a community of agents, using cooperation and competition to achieve a given goal.[44] These problems are related to emerging fields like evolutionary algorithms and swarm intelligence.
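A minimal sketch of the classical setting referenced in the first item above: states are sets of facts, actions have preconditions, add-lists and delete-lists, and a breadth-first forward search looks for an action sequence that reaches the goal. The two actions and their facts are invented for the example; real planners use far richer representations and heuristics.

# Toy classical planner: breadth-first forward search over sets of facts.
# Action format: name -> (preconditions, add-list, delete-list). Invented example.
from collections import deque

ACTIONS = {
    "pick_up":      ({"hand_empty", "block_on_table"},
                     {"holding_block"},
                     {"hand_empty", "block_on_table"}),
    "put_on_shelf": ({"holding_block"},
                     {"block_on_shelf", "hand_empty"},
                     {"holding_block"}),
}

def plan(initial, goal):
    """Return a list of action names that reaches the goal, or None."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:
                successor = frozenset((state - delete) | add)
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, steps + [name]))
    return None

print(plan({"hand_empty", "block_on_table"}, {"block_on_shelf"}))
# ['pick_up', 'put_on_shelf']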


Learning

Main article: machine learning

Machine learning, a broad subfield of artificial intelligence, is concerned with the design and development of algorithms and techniques that allow computers to learn. At a general level, there are two types of learning: inductive and deductive.
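To make the inductive case concrete, the sketch below "learns" from a handful of made-up labelled points simply by memorising them, then labels a new point by its nearest neighbour. The data and labels are invented for illustration; this is the simplest possible example of learning from examples, not a description of any particular system.

# Inductive learning in miniature: a 1-nearest-neighbour classifier.
# The training points and labels below are invented for illustration.

def nearest_neighbour(examples, query):
    """Return the label of the training example closest to the query point."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda example: squared_distance(example[0], query))
    return label

training = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
            ((5.0, 5.0), "ham"),  ((4.7, 5.3), "ham")]
print(nearest_neighbour(training, (4.9, 4.8)))   # 'ham'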

Natural language processing

Natural language processing[45] gives machines the ability to read and understand the languages human beings speak. The problem of natural language processing involves such subproblems as syntax and parsing,[46] semantics and disambiguation,[47] and discourse understanding (e.g., identifying the speech act, using coherence relations in the text, and deciphering the speaker's intentions or pragmatics).[48] Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet.
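To make "syntax and parsing" concrete, the sketch below parses sentences from a tiny invented grammar (S -> NP VP, NP -> Det Noun, VP -> Verb NP) with a recursive-descent parser. The lexicon and grammar are toy examples; real natural language parsing must handle enormous grammars, ambiguity and statistical disambiguation.

# Toy recursive-descent parser for an invented three-rule grammar:
#   S -> NP VP      NP -> Det Noun      VP -> Verb NP

LEXICON = {"the": "Det", "a": "Det", "dog": "Noun", "ball": "Noun", "chased": "Verb"}

def parse(sentence):
    """Return a nested parse tree for the sentence, or raise ValueError."""
    tags = [(LEXICON[word], word) for word in sentence.split()]
    tree, rest = parse_s(tags)
    if rest:
        raise ValueError("unparsed words remain: %r" % rest)
    return tree

def parse_s(tags):
    np, rest = parse_np(tags)
    vp, rest = parse_vp(rest)
    return ("S", np, vp), rest

def parse_np(tags):
    (tag1, word1), (tag2, word2) = tags[0], tags[1]
    if tag1 != "Det" or tag2 != "Noun":
        raise ValueError("expected a noun phrase")
    return ("NP", word1, word2), tags[2:]

def parse_vp(tags):
    tag, word = tags[0]
    if tag != "Verb":
        raise ValueError("expected a verb")
    np, rest = parse_np(tags[1:])
    return ("VP", word, np), rest

print(parse("the dog chased a ball"))
# ('S', ('NP', 'the', 'dog'), ('VP', 'chased', ('NP', 'a', 'ball')))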


Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.[49]


Perception

Main articles: machine perception, computer vision, and speech recognition

Machine perception concerns the building of machines that sense and interpret their environments; computer vision (machines that see) and speech recognition (converting a speech signal into a sequence of words) are two of its central problems.

Motion and manipulation

Main article: robotics


Social intelligence

Main article: affective computing

Emotion and social skills play two roles for an intelligent agent:[50]

  • It must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.)
  • For good human-computer interaction, an intelligent machine also needs to display emotions — at the very least it must appear polite and sensitive to the humans it interacts with. At best, it should appear to have normal emotions itself.


General intelligence

Main articles: strong AI and AI-complete

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[8] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.


Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, limited and specific task like machine translation is AI-complete. To translate accurately, a machine must be able to understand the text. It must be able to follow the author's argument, so it must have some ability to reason. It must have extensive world knowledge so that it knows what is being discussed — it must at least be familiar with all the same commonsense facts that the average human translator knows. Some of this knowledge is in the form of facts that can be explicitly represented, but some knowledge is unconscious and closely tied to the human body: for example, the machine may need to understand how an ocean makes one feel to accurately translate a specific metaphor in the text. It must also model the authors' goals, intentions, and emotional states to accurately reproduce them in a new language. In short, the machine is required to have a wide variety of human intellectual skills, including reasoning, commonsense knowledge and the intuitions that underlie motion and manipulation, perception, and social intelligence. Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.[51]


Approaches to AI

Artificial intelligence is a young science and is still a fragmented collection of subfields. At present, there is no established unifying theory that links the subfields into a coherent whole.


Cybernetics and brain simulation

In the 40s and 50s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton and the Ratio Club in England.[52]


Traditional symbolic AI

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".[53]

Cognitive simulation 
Economist Herbert Simon and Allen Newell studied human problem solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team performed psychological experiments to demonstrate the similarities between human problem solving and the programs (such as their "General Problem Solver") they were developing. This tradition, centered at Carnegie Mellon University,[54] would eventually culminate in the development of the Soar architecture in the middle 80s.[55]
Logical AI 
Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[56] His laboratory at Stanford (SAIL) focussed on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Work in logic led to the development of the programming language Prolog and the science of logic programming.[57]
"Scruffy" symbolic AI 
In contrast to the formal methods pursued at CMU, Stanford and Edinburgh, the researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad-hoc solutions -- they argued that there was no silver bullet, no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. An important realization was that AI required large amounts of commonsense knowledge, and that this had to be engineered one complicated concept at a time. This tradition, which Roger Schank named "scruffy AI",[58] still forms the basis of research into commonsense knowledge, such as Doug Lenat's Cyc.
Knowledge based AI 
When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems, the first truly successful form of AI software.[59]


Sub-symbolic AI

During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[60] By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[61]

Bottom-up, situated, behavior-based or nouvelle AI 
Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[62] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. These "bottom-up" approaches are known as behavior-based AI, situated AI or nouvelle AI.
Computational Intelligence 
Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the mid-1980s.[63] These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.[64]
The new neats 
In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Russell & Norvig (2003) describe this movement as nothing less than a "revolution" and "the victory of the neats."[65]


Intelligent agent paradigm

The "intelligent agent" paradigm became widely accepted during the 1990s.[66][67] Although earlier researchers had proposed modular "divide and conquer" approaches to AI,[68] the intelligent agent did not reach its modern form until Judea Pearl, Alan Newell and others brought concepts from decision theory and economics into the study of AI.[69] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete. Simple reflex agent Learning agent The terms agent and intelligent agent are ambiguous and have been used in two different, but related senses, which are often confused. ... Simple reflex agent Learning agent The terms agent and intelligent agent are ambiguous and have been used in two different, but related senses, which are often confused. ... Judea Pearl is a computer science professor at UCLA. He was one of the pioneers of Bayesian networks and the probabilistic approach to artificial intelligence. ... Allen Newell (March 19, 1927 - July 19, 1992) was a researcher in computer science and cognitive psychology at the RAND corporation. ... Decision theory is an area of study of discrete mathematics that models human decision-making in science, engineering and indeed all human social activities. ... Face-to-face trading interactions on the New York Stock Exchange trading floor. ... Face-to-face trading interactions on the New York Stock Exchange trading floor. ... In economics, an agent is an element of a model who solves an optimization problem. ... Computer science, or computing science, is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. ... Object-oriented programming (OOP) is a computer programming paradigm in which a software system is modeled as a set of objects that interact with each other. ... It has been suggested that this article or section be merged into Modularity (programming). ... Simple reflex agent Learning agent The terms agent and intelligent agent are ambiguous and have been used in two different, but related senses, which are often confused. ...


An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents would be rational, thinking human beings.[67]


The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on a single approach. An agent that solves a specific problem can use any approach that works — some agents are symbolic and logical, some are sub-symbolic neural networks and some are based on newer approaches (without forcing researchers to reject older approaches that have been proven to work). The paradigm also provides a common language to describe problems and share their solutions with each other, and with other fields that also use the concept of abstract agents, such as decision theory.
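A minimal sketch of the simplest kind of agent, a reflex agent that maps each percept directly to an action, might look like the following; the thermostat scenario and its thresholds are invented purely for illustration and are not taken from this article.

# Hypothetical simple reflex agent: a thermostat that perceives a temperature
# and acts to keep it near a target. The scenario is invented for illustration.

def thermostat_agent(percept, target=21.0):
    """Map a percept (the current temperature) directly to an action."""
    if percept < target - 0.5:
        return "heat_on"
    elif percept > target + 0.5:
        return "heat_off"
    return "no_op"

for temperature in [18.0, 20.9, 23.5]:
    print(temperature, "->", thermostat_agent(temperature))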


Integrating the approaches

An agent architecture or cognitive architecture allows researchers to build more versatile and intelligent systems out of interacting intelligent agents in a multi-agent system.[70] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration.


Tools of AI research

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.


Search

Main article: search algorithm

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[71]

  • Reasoning can be reduced to a form of search. For example, in game playing, the agent can search through a tree of possible moves and counter-moves to find a strategy that improves its position. (Tools for two-person games include minimax and alpha-beta pruning; a sketch of alpha-beta search appears after the list of search types below.)[72] Logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[73] Many other reasoning problems, such as constraint satisfaction[74] and dynamic programming,[75] are solved using a form of search.
  • Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal.[76] These sets of goals and subgoals can be represented with graphs (as in the graphplan algorithm),[77] or in a hierarchical task network.[78]

There are several types of search algorithms:

  • "Uninformed" or brute-force searches, such as breadth-first search, depth-first search and general state space search, explore the search space exhaustively and quickly become impractical as the space grows.
  • "Informed" or heuristic searches, such as best-first search and A*, use heuristic knowledge about the problem to guide the search toward the more promising regions of the search space.
  • Local search and optimization methods, such as hill climbing, simulated annealing and beam search, incrementally refine a candidate solution rather than exploring the space systematically.
  • Evolutionary searches, such as genetic algorithms, maintain a population of candidate solutions and refine it by analogy with natural selection: candidates are scored by a fitness function and varied by mutation.
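As a concrete illustration of adversarial search, the following minimal Python sketch implements minimax with alpha-beta pruning over a hypothetical Game interface; the names moves, result, is_terminal and utility are assumptions made for this sketch, not part of any particular system described here.

# Hypothetical illustration: minimax search with alpha-beta pruning over an
# abstract two-player game. The Game interface is assumed for this sketch.

def alphabeta(game, state, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Return the minimax value of `state`, pruning branches that cannot
    influence the final decision."""
    if depth == 0 or game.is_terminal(state):
        return game.utility(state)
    if maximizing:
        value = float("-inf")
        for move in game.moves(state):
            value = max(value, alphabeta(game, game.result(state, move),
                                         depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # the minimizing player will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for move in game.moves(state):
            value = min(value, alphabeta(game, game.result(state, move),
                                         depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:      # the maximizing player will avoid this branch
                break
        return value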

Logic

Logic[84] was introduced into AI research by John McCarthy in his 1958 Advice Taker proposal.[85] The most important technical development was J. Alan Robinson's discovery of the resolution and unification algorithm for logical deduction in 1963. This procedure is simple, complete and entirely algorithmic, and can easily be performed by digital computers.[86] However, a naive implementation of the algorithm quickly leads to a combinatorial explosion or an infinite loop, so sophisticated search methods are used to implement the inference engine that is at the core of a logical agent or logic programming system.[87]
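The following minimal Python sketch shows the idea behind one such inference engine, forward chaining over propositional Horn clauses; the rule and fact encoding is an assumption made for illustration, not the interface of any particular system.

# Hypothetical forward-chaining inference engine over propositional Horn
# clauses (rules of the form "if p and q then r").

def forward_chain(rules, facts):
    """rules: list of (premises, conclusion) pairs; facts: set of known atoms.
    Repeatedly applies rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["rain"], "wet_grass"),
    (["sprinkler"], "wet_grass"),
    (["wet_grass"], "slippery"),
]
print(forward_chain(rules, {"rain"}))   # derives wet_grass and then slippery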


Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning,[88] and inductive logic programming is a method for learning.[89]


There are several different forms of logic used in AI research.

  • Propositional logic (or sentential logic) is the logic of propositions, sentences or clauses that are either true or false; production systems that use forward chaining or backward chaining work with rules of this kind.
  • First-order logic (or first-order predicate calculus) adds quantifiers and predicates, and can express facts about objects, their properties and their relations to each other; restricted forms such as Horn clauses permit efficient inference.
  • Fuzzy logic, derived from fuzzy set theory, deals with reasoning that is approximate rather than exact.
  • Default logics, non-monotonic logics and circumscription are forms of logic designed to handle default assumptions and the qualification problem.
  • Other formalisms, such as description logics, the situation calculus, the event calculus and the fluent calculus, are used to represent particular kinds of knowledge, such as terminologies or dynamically changing worlds, while modal logics handle modalities such as possibility and necessity.

Stochastic methods

Main articles: Bayesian network, hidden Markov model, and Kalman filter

Starting in the late 1980s and early 1990s, Judea Pearl and others championed the use of stochastic or probabilistic methods in artificial intelligence.[93] Researchers have used principles from probability theory[94] to devise a number of powerful tools.


Bayesian networks[95] have been applied to a large number of problems, such as: uncertain reasoning (using the Bayesian inference algorithm),[96] learning (using the expectation-maximization algorithm),[97] and planning (using decision networks).[98]
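As an illustration of the simplest case, the following sketch computes a posterior probability by enumeration over a hypothetical two-node network (Rain -> WetGrass); the probabilities are invented purely for illustration.

# Hypothetical two-node Bayesian network with made-up probabilities, used
# only to illustrate inference by enumeration over the hidden variable.

p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_dry = 0.1

# P(Rain | WetGrass) by Bayes' rule.
p_wet = p_rain * p_wet_given_rain + (1 - p_rain) * p_wet_given_dry
p_rain_given_wet = (p_rain * p_wet_given_rain) / p_wet
print(round(p_rain_given_wet, 3))   # 0.692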


Probabilistic methods have been particularly successful at dealing with processes that occur over time. Several successful algorithms have been developed for filtering, prediction, smoothing and finding explanations for streams of data,[99] such as hidden Markov models,[100] Kalman filters[101] and dynamic Bayesian networks.[102] These tools are used for the problems of perception (such as pattern matching) and learning.
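A minimal one-dimensional Kalman filter illustrates the predict-update cycle these tools share; the noise parameters and the measurement sequence below are invented for illustration.

# Hypothetical one-dimensional Kalman filter: track a scalar state from noisy
# measurements. q is process noise, r is measurement noise.

def kalman_1d(measurements, q=0.01, r=0.5, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state is assumed constant, so only uncertainty grows.
        p = p + q
        # Update: blend prediction and measurement using the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

print(kalman_1d([1.1, 0.9, 1.2, 1.0]))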


Economic models

AI has been able to use tools drawn from economics, such as decision theory and decision analysis,[42] Bayesian decision networks,[98] information value theory,[43] Markov decision processes,[103] dynamic decision networks,[103] game theory and mechanism design.[104] These tools have been especially important for planning problems.
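The following sketch illustrates one of these tools, value iteration for a small Markov decision process; the states, actions, transition probabilities and rewards are invented for illustration.

# Hypothetical value iteration for a tiny Markov decision process.

states = ["poor", "rich"]
actions = ["save", "spend"]
# transition[s][a] = list of (probability, next_state, reward)
transition = {
    "poor": {"save":  [(0.7, "poor", 0.0), (0.3, "rich", 1.0)],
             "spend": [(1.0, "poor", 0.1)]},
    "rich": {"save":  [(1.0, "rich", 1.0)],
             "spend": [(0.5, "rich", 2.0), (0.5, "poor", 0.0)]},
}

def value_iteration(gamma=0.9, iterations=100):
    """Repeatedly apply the Bellman optimality update until values settle."""
    v = {s: 0.0 for s in states}
    for _ in range(iterations):
        v = {s: max(sum(p * (r + gamma * v[s2])
                        for p, s2, r in transition[s][a])
                    for a in actions)
             for s in states}
    return v

print(value_iteration())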


Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, so classification forms a central part of many AI systems.


Classifiers[105] are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.


When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are mainly statistical and machine learning approaches.


A wide range of classifiers are available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.


The most widely used classifiers are the neural network,[106] kernel methods such as the support vector machine,[107] k-nearest neighbor algorithm,[108] Gaussian mixture model,[109] naive Bayes classifier,[110] and decision tree.[111] The performance of these classifiers has been compared over a wide range of classification tasks[112] in order to find data characteristics that determine classifier performance.
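As an illustration, a minimal k-nearest neighbor classifier can be written in a few lines; the data set and the choice of k below are invented for this example.

# Hypothetical k-nearest-neighbor classifier on 2-D points.
from collections import Counter
import math

def knn_classify(query, data, k=3):
    """data: list of ((x, y), label) pairs. Returns the majority label among
    the k training points closest to `query` (Euclidean distance)."""
    neighbours = sorted(data, key=lambda d: math.dist(query, d[0]))[:k]
    labels = [label for _, label in neighbours]
    return Counter(labels).most_common(1)[0][0]

data = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(knn_classify((0.2, 0.1), data))   # 'A'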


Neural networks

Main articles: neural networks and connectionism
A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

Techniques and technologies in AI which have been directly derived from neuroscience include neural networks and connectionism.[106] The study of neural networks began with cybernetics researchers, working in the decade before the field of AI research was founded. In the 1960s Frank Rosenblatt developed an important early version, the perceptron.[113] Paul Werbos discovered the backpropagation algorithm in 1974,[114] which led to a renaissance in neural network research and connectionism in general in the mid-1980s. The Hopfield net, a form of attractor network, was first described by John Hopfield in 1982.
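The following sketch shows the classic perceptron learning rule applied to the logical AND function; the learning rate, epoch count and training data are invented for illustration.

# Hypothetical perceptron trained on the logical AND function.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) with targets 0 or 1.
    Returns weights and bias learned with the perceptron update rule."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            output = 1 if activation > 0 else 0
            error = target - output
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(and_data))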


Neural networks are applied to the problem of learning, using such techniques as Hebbian learning[115] and the relatively new field of Hierarchical Temporal Memory which simulates the architecture of the neocortex.[116]


Social and emergent models

Several algorithms for learning use tools from evolutionary computation, such as genetic algorithms[117] and swarm intelligence.[118]
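A minimal genetic algorithm, shown below on the toy "one-max" problem (evolving a bit string toward all ones), illustrates selection, crossover and mutation; the population size, mutation rate and other parameters are invented for illustration.

# Hypothetical genetic algorithm for the one-max problem.
import random

def one_max_ga(length=20, pop_size=30, generations=100, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)          # number of ones in the bit string
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Crossover and mutation produce the next generation.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, length - 1)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        population = children
    return max(population, key=fitness)

print(one_max_ga())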


Control theory

Main article: intelligent control

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[119]


Specialized languages

AI researchers have developed specialized languages for AI research. The first programming language developed for AI was IPL, developed by Allen Newell, Herbert Simon and J. C. Shaw,[120] and the two most important languages are Lisp[121] (developed by John McCarthy at MIT in 1958[122]) and Prolog,[123] a language based on logic programming (invented by French researchers Alain Colmerauer and Philippe Roussel, in collaboration with Robert Kowalski of the University of Edinburgh[124]). These two languages are used in the vast majority of AI applications. Two other important historical languages are the planning languages STRIPS (developed at Stanford) and Planner (developed at MIT), both in the 1960s.


Research challenges

A legged league game from RoboCup 2004 in Lisbon, Portugal.

The 800 million-Euro EUREKA Prometheus Project on driverless cars (1987-1995) showed that fast autonomous vehicles, notably those of Ernst Dickmanns and his team, can drive long distances (over 100 miles) in traffic, automatically recognizing and tracking other cars through computer vision, passing slower cars in the left lane. But the challenge of safe door-to-door autonomous driving in arbitrary environments will require additional research.


The DARPA Grand Challenge was a race for a $2 million prize in which cars had to drive themselves over a hundred miles of challenging desert terrain without any communication with humans, using GPS, computers and a sophisticated array of sensors. In 2005, the winning vehicles completed all 132 miles (212 km) of the course in just under seven hours. This was the first in a series of challenges aimed at a congressional mandate stating that by 2015 one-third of the operational ground combat vehicles of the US Armed Forces should be unmanned.[125] For November 2007, DARPA introduced the DARPA Urban Challenge, run over a sixty-mile course in an urban environment. DARPA secured prize money of $2 million for first place, $1 million for second and $500,000 for third.


A popular challenge amongst AI research groups is the RoboCup and FIRA annual international robot soccer competitions. Hiroaki Kitano has formulated the International RoboCup Federation challenge: "In 2050 a team of fully autonomous humanoid robot soccer players shall win the soccer game, comply with the official rule of the FIFA, against the winner of the most recent World Cup."[126]


A lesser known challenge to promote AI research is the annual Arimaa challenge match. The challenge offers a $10,000 prize until the year 2020 to develop a program that plays the board game Arimaa and defeats a group of selected human opponents.


In the post-dot-com boom era, some search engine websites use a simple form of AI to provide answers to questions entered by the visitor. Questions such as "What is the tallest building?" can be entered into the search engine's input form, and a list of answers will be returned. AskWiki is an example of this sort of search engine.


Applications of artificial intelligence

Business

Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001).[127] A medical clinic can use artificial intelligence systems to organize bed schedules, make staff rotations, and provide medical information. Many practical applications depend on artificial neural networks, which pattern their organization on the brain's neurons and have been found to excel at pattern recognition. Financial institutions have long used such systems to detect charges or claims outside of the norm, flagging these for human investigation. Neural networks are also being widely deployed in homeland security, speech and text recognition, medical diagnosis (such as in Concept Processing technology in EMR software), data mining, and e-mail spam filtering.


Robots have become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the leader in using and producing robots in the world. In 1995, 700,000 robots were in use worldwide, over 500,000 of which were from Japan.[128]


Toys and games

The 1990s saw some of the first attempts to mass-produce basic artificial intelligence aimed at the home, for education or leisure. This prospered greatly with the Digital Revolution, and helped introduce people, especially children, to a life of dealing with various types of AI, in the form of Tamagotchis and Giga Pets, the Internet (basic search engine interfaces are one simple example), and the first widely released robot, Furby. A year later an improved type of domestic robot was released in the form of AIBO, a robotic dog with intelligent features and autonomy.


List of applications

Typical problems to which AI methods are applied include pattern recognition (optical character recognition, handwriting recognition, speech recognition and facial recognition), artificial creativity, computer vision and image processing, virtual reality, diagnosis, game theory and strategic planning, game artificial intelligence and computer game bots, natural language processing (including machine translation and chatterbots), and non-linear control and robotics.

Other fields in which AI methods are implemented include automated reasoning, bio-inspired computing, data mining, knowledge representation and the semantic web, e-mail spam filtering, robotics (behavior-based, cognitive, developmental, epigenetic and evolutionary robotics), cybernetics, hybrid intelligent systems, intelligent agents and intelligent control.

Lists of researchers, projects and publications include the list of notable artificial intelligence projects and the list of important publications in computer science.

See also

Robotics Portal
Main list: List of basic artificial intelligence topics

  • AI effect
  • Artificial intelligence systems integration
  • Association for the Advancement of Artificial Intelligence (AAAI)
  • Cognitive science
  • Embodied agent
  • Fifth Generation Computer Systems project
  • Friendly artificial intelligence
  • Generative systems
  • German Research Center for Artificial Intelligence (DFKI)
  • Intelligent agent
  • International Joint Conference on Artificial Intelligence (IJCAI)
  • Loebner Prize
  • Nouvelle AI
  • Predictive analytics
  • Singularitarianism
  • Three Laws of Robotics
  • Transhumanism

Notes

  1. ^ Textbooks that define AI this way include Poole, Mackworth & Goebel 1998, p. 1, Nilsson 1998, and Russell & Norvig 2003, preface (who prefer the term "rational agent") and write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55)
  2. ^ Although there is some controversy on this point (see Crevier 1993, p. 50), McCarthy states unequivocally "I came up with the term" in a c|net interview. (See Getting Machines to Think Like Us.)
  3. ^ See John McCarthy, What is Artificial Intelligence?
  4. ^ Poole, Mackworth & Goebel 1998, p. 1
  5. ^ Poole, Mackworth & Goebel 1998, p. 1, Law 1994
  6. ^ Russell & Norvig 2003, p. 17
  7. ^ a b This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig 2003, Luger & Stubblefield 2004, Poole, Mackworth & Goebel 1998 and Nilsson 1998.
  8. ^ a b General intelligence (strong AI) is discussed by popular introductions to AI, such as: Kurzweil 1999, Kurzweil 2005, Hawkins & Blakeslee 2004
  9. ^ Russell & Norvig 2003, pp. 5-16
  10. ^ See AI Topics: applications
  11. ^ Crevier 1993, pp. 47-49, Russell & Norvig 2003, p. 17
  12. ^ Russell and Norvig write "it was astonishing whenever a computer did anything kind of smartish." Russell & Norvig 2003, p. 18
  13. ^ Crevier 1993, pp. 52-107, Moravec 1988, p. 9 and Russell & Norvig 2003, p. 18-21. The programs described are Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
  14. ^ Crevier 1993, pp. 64-65
  15. ^ Simon 1965, p. 96 quoted in Crevier 1993, p. 109
  16. ^ Minsky 1967, p. 2 quoted in Crevier 1993, p. 109
  17. ^ Crevier 1993, pp. 115-117, Russell & Norvig 2003, p. 22, NRC 1999 under "Shift to Applied Research Increases Investment." and also see Howe, J. "Artificial Intelligence at Edinburgh University : a Perspective"
  18. ^ Crevier 1993, pp. 161-162, 197-203 and Russell & Norvig 2003, p. 24
  19. ^ Crevier 1993, p. 203
  20. ^ Crevier 1993, pp. 209-210
  21. ^ Russell & Norvig 2003, p. 28, NRC 1999 under "Artificial Intelligence in the 90s"
  22. ^ Russell & Norvig 2003, pp. 25-26
  23. ^ "We cannot yet characterize in general what kinds of computational procedures we want to call intelligent." John McCarthy, Basic Questions
  24. ^ Problem solving, puzzle solving, game playing and deduction: Russell & Norvig 2003, chpt. 3-9, Poole et al. chpt. 2,3,7,9, Luger & Stubblefield 2004, chpt. 3,4,6,8, Nilsson 1998, chpt. 7-12.
  25. ^ Uncertain reasoning: Russell & Norvig 2003, pp. 452-644, Poole, Mackworth & Goebel 1998, pp. 345-395, Luger & Stubblefield 2004, pp. 333-381, Nilsson 1998, chpt. 19
  26. ^ Intractability and efficiency and the combinatorial explosion: Russell & Norvig 2003, pp. 9, 21-22
  27. ^ Several famous examples: Wason (1966) showed that people do poorly on completely abstract problems, but if the problem is restated to allow the use of intuitive social intelligence, performance dramatically improves. (See Wason selection task) Tversky, Slovic & Kahneman (1982) have shown that people are terrible at elementary problems that involve uncertain reasoning. (See list of cognitive biases for several examples). Lakoff & Núñez (2000) have controversially argued that even our skills at mathematics depend on knowledge and skills that come from "the body", i.e. sensorimotor and perceptual skills. (See Where Mathematics Comes From)
  28. ^ Knowledge representation: ACM 1998, I.2.4, Russell & Norvig 2003, pp. 320-363, Poole, Mackworth & Goebel 1998, pp. 23-46, 69-81, 169-196, 235-277, 281-298, 319-345, Luger & Stubblefield 2004, pp. 227-243, Nilsson 1998, chpt. 18
  29. ^ Knowledge engineering: Russell & Norvig 2003, pp. 260-266, Poole, Mackworth & Goebel 1998, pp. 199-233, Nilsson 1998, chpt. ~17.1-17.4
  30. ^ a b Representing categories and relations: Semantic networks, description logics, inheritance, including frames and scripts): Russell & Norvig 2003, pp. 349-354, Poole, Mackworth & Goebel 1998, pp. 174-177, Luger & Stubblefield 2004, pp. 248-258, Nilsson 1998, chpt. 18.3
  31. ^ a b Representing events and time: Situation calculus, event calculus, fluent calculus (including solving the frame problem): Russell & Norvig 2003, pp. 328-341, Poole, Mackworth & Goebel 1998, pp. 281-298, Nilsson 1998, chpt. 18.2
  32. ^ a b Causal calculus: Poole, Mackworth & Goebel 1998, pp. 335-337
  33. ^ a b Representing knowledge about knowledge: Belief calculus, modal logics: Russell & Norvig 2003, pp. 341-344, Poole, Mackworth & Goebel 1998, pp. 275-277
  34. ^ Ontology: Russell & Norvig 2003, pp. 320-328
  35. ^ McCarthy & Hayes 1969
  36. ^ a b Default reasoning and default logic, non-monotonic logics, circumscription, closed world assumption, abduction (Poole et al. place abduction under "default reasoning"; Luger et al. place this under "uncertain reasoning"): Russell & Norvig 2003, pp. 354-360, Poole, Mackworth & Goebel 1998, pp. 248-256, 323-335, Luger & Stubblefield 2004, pp. 335-363, Nilsson 1998, ~18.3.3
  37. ^ Crevier 1993, pp. 113-114, Moravec 1988, p. 13, Lenat 1989 (Introduction), Russell & Norvig 2003, p. 21
  38. ^ Planning: ACM 1998, ~I.2.8, Russell & Norvig 2003, pp. 375-459, Poole, Mackworth & Goebel 1998, pp. 281-316, Luger & Stubblefield 2004, pp. 314-329, Nilsson 1998, chpt. 10.1-2, 22
  39. ^ Classical planning: Russell & Norvig 2003, pp. 375-430, Poole, Mackworth & Goebel 1998, pp. 281-309, Luger & Stubblefield 2004, pp. 314-329, Nilsson 1998, chpt. 10.1-2, 22
  40. ^ Partial order planning: Russell & Norvig 2003, pp. 387-395, Poole, Mackworth & Goebel 1998, pp. 309-315, Nilsson 1998, chpt. 22.2
  41. ^ Planning and acting in non-deterministic domains: conditional planning, execution monitoring, replanning and continuous planning: Russell & Norvig 2003, pp. 430-449
  42. ^ a b Decision theory and decision analysis: Russell & Norvig 2003, pp. 584-597, Poole, Mackworth & Goebel 1998, pp. 381-394
  43. ^ a b Information value theory: Russell & Norvig 2003, pp. 600-604
  44. ^ Multi-agent planning and emergent behavior: Russell & Norvig 2003, pp. 449-455
  45. ^ Natural language processing: ACM 1998, I.2.7, Russell & Norvig 2003, pp. 790-831, Poole, Mackworth & Goebel 1998, pp. 91-104, Luger & Stubblefield 2004, pp. 591-632
  46. ^ Syntax and parsing: Russell & Norvig 2003, pp. 795-810, Luger & Stubblefield 2004, pp. 597-616
  47. ^ Semantics and disambiguation: Russell & Norvig 2003, pp. 810-821
  48. ^ Discourse understanding (coherence relations, speech acts, pragmatics): Russell & Norvig 2003, pp. 820-824
  49. ^ Applications of natural language processing, including information retrieval (or text mining) and machine translation: Russell & Norvig 2003, pp. 840-857, Luger & Stubblefield 2004, pp. 623-630
  50. ^ Minsky 2007, Picard 1997
  51. ^ Shapiro 1992, p. 9
  52. ^ Among the researchers who laid the foundations of cybernetics, information theory and neural networks were Claude Shannon, Norbert Wiener, Warren McCulloch, Walter Pitts, Donald Hebb, Donald MacKay, Alan Turing and John von Neumann. McCorduck 2004, pp. 51-107, Crevier 1993, pp. 27-32, Russell & Norvig 2003, pp. 15, 940, Moravec 1988, p. 3.
  53. ^ Haugeland 1985, pp. 112-117
  54. ^ Then called Carnegie Tech
  55. ^ Crevier 1993, pp. 52-54, 258-263, Nilsson 1998, p. 275
  56. ^ See Science at Google Books, and McCarthy's presentation at AI@50
  57. ^ Crevier 1993, pp. 193-196
  58. ^ Crevier 1993, pp. 163-176. Neats vs. scruffies: Crevier 1993, p. 168.
  59. ^ Crevier 1993, pp. 145-162
  60. ^ The most dramatic case of sub-symbolic AI being pushed into the background was the devastating critique of perceptrons by Marvin Minsky and Seymour Papert in 1969. See History of AI, AI winter, or Frank Rosenblatt. (Crevier 1993, pp. 102-105).
  61. ^ Nilsson (1998, p. 7) characterizes these newer approaches to AI as "sub-symbolic".
  62. ^ Brooks 1990 and Moravec 1988
  63. ^ Crevier 1993, pp. 214-215 and Russell & Norvig 2003, p. 25
  64. ^ See IEEE Computational Intelligence Society
  65. ^ Russell & Norvig 2003, pp. 25-26
  66. ^ "The whole-agent view is now widely accepted in the field" Russell & Norvig 2003, p. 55.
  67. ^ a b The intelligent agent paradigm is discussed in major AI textbooks, such as: Russell & Norvig 2003, pp. 27, 32-58, 968-972, Poole, Mackworth & Goebel 1998, pp. 7-21, Luger & Stubblefield 2004, pp. 235-240
  68. ^ For example, both John Doyle (Doyle 1983) and Marvin Minsky's popular classic The Society of Mind (Minsky 1986) used the word "agent" to describe modular AI systems.
  69. ^ Russell & Norvig 2003, pp. 27, 55
  70. ^ Agent architectures, hybrid intelligent systems, and multi-agent systems: ACM 1998, I.2.11, Russell & Norvig (2003, pp. 27, 932, 970-972) and Nilsson (1998, chpt. 25)
  71. ^ Search algorithms: Russell & Norvig 2003, pp. 59-189, Poole, Mackworth & Goebel 1998, pp. 113-163, Luger & Stubblefield 2004, pp. 79-164, 193-219, Nilsson 1998, chpt. 7-12
  72. ^ Adversarial search: Russell & Norvig 2003, pp. 161-185, Luger & Stubblefield 2004, pp. 150-157, Nilsson 1998, chpt. 12
  73. ^ a b Forward chaining, backward chaining, Horn clauses, production systems, blackboard systems and logical deduction as search: Russell & Norvig 2003, pp. 217-225, 280-294, Poole, Mackworth & Goebel 1998, pp. ~46-52, Luger & Stubblefield 2004, pp. 62-73, Nilsson 1998, chpt. 2.2, 5.4
  74. ^ Constraint satisfaction: Russell & Norvig 2003, pp. 137-156, Poole, Mackworth & Goebel 1998, pp. 147-163
  75. ^ Dynamic programming: Russell & Norvig 2003, p. 293, Poole, Mackworth & Goebel 1998, pp. 145-147, Nilsson 1998, p. 178
  76. ^ State space search and planning: Russell & Norvig 2003, pp. 382-387, Poole, Mackworth & Goebel 1998, pp. 298-305, Nilsson 1998, chpt. 10.1-2
  77. ^ Graphplan: Russell & Norvig 2003, pp. 395-402
  78. ^ Hierarchical task network: Russell & Norvig 2003, pp. 422-430
  79. ^ Naive searches: Russell & Norvig 2003, pp. 59-93, Poole, Mackworth & Goebel 1998, pp. 113-132, Luger & Stubblefield 2004, pp. 79-121, Nilsson 1998, chpt. 8
  80. ^ John McCarthy writes that "the combinatorial explosion problem has been recognized in AI from the beginning" in Review of Lighthill report
  81. ^ Heuristic or informed searches: Russell & Norvig 2003, pp. 94-109, Poole, Mackworth & Goebel 1998, pp. 132-147, Luger & Stubblefield 2004, pp. 133-150, Nilsson 1998, chpt. 9
  82. ^ Optimization searches: Russell & Norvig 2003, pp. 110-116,120-129, Poole, Mackworth & Goebel 1998, pp. 56-163, Luger & Stubblefield 2004, pp. 127-133
  83. ^ Genetic algorithms: Russell & Norvig 2003, pp. 116-119, Poole, Mackworth & Goebel 1998, p. 162, Luger & Stubblefield 2004, pp. 509-530, Nilsson 1998, chpt. 4.2
  84. ^ Logic: ACM 1998, ~I.2.3, Russell & Norvig 2003, pp. 194-310, Luger & Stubblefield 2004, pp. 35-77, Nilsson 1998, chpt. 13-16
  85. ^ McCorduck 2004, p. 51, Russell & Norvig 2003, pp. 19, 23
  86. ^ Resolution and unification are discussed in: Russell & Norvig 2003, pp. 213-217, 275-280, 295-306, Poole, Mackworth & Goebel 1998, pp. 56-58, Luger & Stubblefield 2004, pp. 554-575, Nilsson 1998, chpt. 14 & 16
  87. ^ Inference engine, inference and logic programming: Russell & Norvig 2003, pp. 213-224, 272-310, Poole, Mackworth & Goebel 1998, pp. 46-58, Luger & Stubblefield 2004, pp. 62-73, 194-219, 547-589, Nilsson 1998, chpt. 14 & 16
  88. ^ Satplan: Russell & Norvig 2003, pp. 402-407, Poole, Mackworth & Goebel 1998, pp. 300-301, Nilsson 1998, chpt. 21
  89. ^ Explanation based learning, relevance based learning, inductive logic programming, case based reasoning: Russell & Norvig 2003, pp. 678-710, Poole, Mackworth & Goebel 1998, pp. 414-416, Luger & Stubblefield 2004, pp. ~422-442, Nilsson 1998, chpt. 10.3, 17.5
  90. ^ Propositional logic: Russell & Norvig 2003, pp. 204-233, Luger & Stubblefield 2004, pp. 45-50, Nilsson 1998, chpt. 13
  91. ^ First order logic and features such as equality: ACM 1998, ~I.2.4, Russell & Norvig 2003, pp. 240-310, Poole, Mackworth & Goebel 1998, pp. 268-275, Luger & Stubblefield 2004, pp. 50-62, Nilsson 1998, chpt. 15
  92. ^ Fuzzy logic: Russell & Norvig 2003, pp. 526-527
  93. ^ Russell & Norvig 2003, pp. 25-26 (on Judea Pearl's contribution). Stochastic methods are described in all the major AI textbooks: ACM 1998, ~I.2.3, Russell & Norvig 2003, pp. 462-644, Poole, Mackworth & Goebel 1998, pp. 345-395, Luger & Stubblefield 2004, pp. 165-191, 333-381, Nilsson 1998, chpt. 19
  94. ^ Probability: Russell & Norvig 2003, pp. 462-489, Poole, Mackworth & Goebel 1998, pp. 346-366, Luger & Stubblefield 2004, pp. ~165-182, Nilsson 1998, chpt. 19.1
  95. ^ Bayesian networks: Russell & Norvig 2003, pp. 492-523, Poole, Mackworth & Goebel 1998, pp. 361-381, Luger & Stubblefield 2004, pp. ~182-190, ~363-379, Nilsson 1998, chpt. 19.3-4
  96. ^ Bayesian inference algorithm: Russell & Norvig 2003, pp. 504-519, Poole, Mackworth & Goebel 1998, pp. 361-381, Luger & Stubblefield 2004, pp. ~363-379, Nilsson 1998, chpt. 19.4 & 7
  97. ^ Bayesian learning and the expectation-maximization algorithm: Russell & Norvig 2003, pp. 712-724, Poole, Mackworth & Goebel 1998, pp. 424-433, Nilsson 1998, chpt. 20
  98. ^ a b Bayesian decision networks: Russell & Norvig 2003, pp. 597-600
  99. ^ Russell & Norvig 2003, pp. 537-581
  100. ^ Hidden Markov model: Russell & Norvig 2003, pp. 549-551
  101. ^ Kalman filter: Russell & Norvig 2003, pp. 551-557
  102. ^ Dynamic Bayesian network: Russell & Norvig 2003, pp. 551-557
  103. ^ a b Markov decision processes and dynamic decision networks: Russell & Norvig 2003, pp. 613-631
  104. ^ Game theory and mechanism design: Russell & Norvig 2003, pp. 631-643
  105. ^ Statistical learning methods and classifiers: Russell & Norvig 2003, pp. 712-754, Luger & Stubblefield 2004, pp. 453-541
  106. ^ a b Neural networks and connectionism: Russell & Norvig 2003, pp. 736-748, Poole, Mackworth & Goebel 1998, pp. 408-414, Luger & Stubblefield 2004, pp. 453-505, Nilsson 1998, chpt. 3
  107. ^ Kernel methods: Russell & Norvig 2003, pp. 749-752
  108. ^ K-nearest neighbor algorithm: Russell & Norvig 2003, pp. 733-736
  109. ^ Gaussian mixture model: Russell & Norvig 2003, pp. 725-727
  110. ^ Naive Bayes classifier: Russell & Norvig 2003, p. 718
  111. ^ Decision tree: Russell & Norvig 2003, pp. 653-664, Poole, Mackworth & Goebel 1998, pp. 403-408, Luger & Stubblefield 2004, pp. 408-417
  112. ^ van der Walt, Christiaan. Data characteristics that determine classifier performance.
  113. ^ Perceptrons: Russell & Norvig 2003, pp. 740-743, Luger & Stubblefield 2004, pp. 458-467
  114. ^ Backpropagation: Russell & Norvig 2003, pp. 744-748, Luger & Stubblefield 2004, pp. 467-474, Nilsson 1998, chpt. 3.3
  115. ^ Competitive learning, Hebbian coincidence learning, Hopfield networks and attractor networks: Luger & Stubblefield 2004, pp. 474-505.
  116. ^ Hawkins & Blakeslee 2004
  117. ^ Genetic algorithms for learning: Luger & Stubblefield 2004, pp. 509-530, Nilsson 1998, chpt. 4.2
  118. ^ Artificial life and society based learning: Luger & Stubblefield 2004, pp. 530-541
  119. ^ Control theory: ACM 1998, ~I.2.8, Russell & Norvig 2003, pp. 926-932
  120. ^ Crevier 1993, pp. 46-48
  121. ^ Lisp: Luger & Stubblefield 2004, pp. 723-821
  122. ^ Crevier 1993, pp. 59-62, Russell & Norvig 2003, p. 18
  123. ^ Prolog: Poole, Mackworth & Goebel 1998, pp. 477-491, Luger & Stubblefield 2004, pp. 641-676, 575-581
  124. ^ Crevier 1993, pp. 193-196
  125. ^ Congressional Mandate DARPA
  126. ^ "RoboCup 2003 Presents: Humanoid Robots Playing Soccer", press release, 2 June 2003
  127. ^ Robots beat humans in trading battle. BBC News, Business. The British Broadcasting Corporation (August 8, 2001). Retrieved on 2006-11-02.
  128. ^ "Robot," Microsoft® Encarta® Online Encyclopedia 2006


References

Major AI textbooks

  • Luger, George & Stubblefield, William (2004). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.).
  • Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis.
  • Poole, David; Mackworth, Alan & Goebel, Randy (1998). Computational Intelligence: A Logical Approach.
  • Russell, Stuart J. & Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.).

Other sources

  • ACM (1998). ACM Computing Classification System: Artificial intelligence.
  • Brooks, Rodney (1990). "Elephants Don't Play Chess". Robotics and Autonomous Systems 6: 3-15.
  • Crevier, Daniel (1993). AI: The Tumultuous History of the Search for Artificial Intelligence.
  • Doyle, John (1983). "What is Rational Psychology? Toward a Modern Mental Philosophy". AI Magazine.
  • Haugeland, John (1985). Artificial Intelligence: The Very Idea.
  • Hawkins, Jeff & Blakeslee, Sandra (2004). On Intelligence.
  • Kahneman, Daniel; Slovic, Paul & Tversky, Amos (eds.) (1982). Judgment Under Uncertainty: Heuristics and Biases.
  • Kurzweil, Ray (1999). The Age of Spiritual Machines.
  • Kurzweil, Ray (2005). The Singularity Is Near.
  • Lakoff, George & Núñez, Rafael (2000). Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being.
  • Law, Diane (1994). Searle, Subsymbolic Functionalism and Synthetic Intelligence.
  • Lenat, Douglas & Guha, R. V. (1989). Building Large Knowledge-Based Systems.
  • Lighthill, James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium, Science Research Council.
  • McCarthy, John & Hayes, Patrick (1969). "Some Philosophical Problems from the Standpoint of Artificial Intelligence". Machine Intelligence 4.
  • McCorduck, Pamela (2004). Machines Who Think (2nd ed.).
  • Minsky, Marvin (1967). Computation: Finite and Infinite Machines.
  • Minsky, Marvin (1986). The Society of Mind.
  • Minsky, Marvin (2007). The Emotion Machine.
  • Moravec, Hans (1988). Mind Children: The Future of Robot and Human Intelligence.
  • NRC (National Research Council) (1999). Funding a Revolution: Government Support for Computing Research.
  • Picard, Rosalind (1997). Affective Computing.
  • Shapiro, Stuart C. (ed.) (1992). Encyclopedia of Artificial Intelligence (2nd ed.).
  • Simon, Herbert A. (1965). The Shape of Automation for Men and Management.
  • Wason, P. C. (1966). "Reasoning". In Foss, B. M. (ed.), New Horizons in Psychology.

Further reading

  • R. Sun & L. Bookman (eds.), Computational Architectures: Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA, 1994.

External links

  • AI at the Open Directory Project
  • AI with Neural Networks
  • AI-Tools, the Open Source AI community homepage
  • Artificial Intelligence Directory, a directory of Web resources related to artificial intelligence
  • The Association for the Advancement of Artificial Intelligence
  • Freeview Video 'Machines with Minds' by the Vega Science Trust and the BBC/OU
  • Heuristics and artificial intelligence in finance and investment
  • John McCarthy's frequently asked questions about AI
  • Jonathan Edwards looks at AI (BBC audio)
  • Artificial Intelligence in the Computer science directory
  • Generation5 - Large artificial intelligence portal with articles and news.
  • Mindmakers.org, an online organization for people building large scale A.I. systems
  • Ray Kurzweil's website dedicated to AI, including predictions of future developments in AI
  • AI articles on the Accelerating Future blog
  • AI Genealogy Project
  • Artificial intelligence library and other useful links
  • International Journal of Computational Intelligence
  • International Journal of Intelligent Technology
  • AI definitions at Labor Law Talk
  • Virtual Humans Forum and Directory

