Connectionism

Connectionism is an approach in the fields of artificial intelligence, cognitive science, neuroscience, psychology and philosophy of mind. It models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many different forms of connectionism, but the most common forms use neural network models.


Basic principles

The central connectionist principle is that mental phenomena can be described by interconnected networks of simple units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses. Another model might make each unit in the network a word, and each connection an indication of semantic similarity.


Spreading activation

Most connectionist models include time; that is, the network changes over time. A closely related and extremely common aspect of connectionist models is activation. At any time, a unit in the network has an activation, a numerical value intended to represent some aspect of the unit. For example, if the units in the model are neurons, the activation could represent the probability that the neuron would generate an action potential spike. If the model is a spreading activation model, then over time a unit's activation spreads to all the other units connected to it. Spreading activation is always a feature of neural network models, and it is very common in connectionist models used by cognitive psychologists.
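As a toy illustration, spreading activation can be sketched as activation flowing from active units to their neighbours each time step. The graph, weights, and decay factor below are invented for the example, not taken from any particular model:

```python
# Toy spreading-activation sketch: activation flows along weighted edges
# each time step, attenuated by a decay factor, and is capped at 1.0.

def spread_activation(edges, activation, decay=0.5, steps=2):
    """Propagate activation along weighted edges for a number of steps."""
    for _ in range(steps):
        incoming = {unit: 0.0 for unit in activation}
        for (src, dst), weight in edges.items():
            incoming[dst] += activation[src] * weight * decay
        for unit in activation:
            activation[unit] = min(1.0, activation[unit] + incoming[unit])
    return activation

# A tiny semantic network: strongly related concepts share heavier edges.
edges = {("dog", "cat"): 0.8, ("dog", "bone"): 0.6, ("cat", "mouse"): 0.7}
activation = {"dog": 1.0, "cat": 0.0, "bone": 0.0, "mouse": 0.0}
result = spread_activation(edges, activation)
print(result["cat"] > result["mouse"] > 0.0)  # True: nearer units get more
```

After two steps, units directly connected to the source carry more activation than units two links away, which is the behaviour cognitive psychologists exploit in priming models.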


Neural networks

Main article: Neural networks

Neural networks are by far the dominant form of connectionist model today. Much research utilizing neural networks is carried out under the more general name "connectionist". These connectionist models adhere to two major principles regarding the mind:

  1. Any given mental state can be described as an N-dimensional vector of numeric activation values over neural units in a network.
  2. Memory is created by modifying the strength of the connections between neural units. The connection strengths, or "weights", are generally represented as an N×N matrix.
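These two principles can be sketched directly in code; the network size, activation values, and the outer-product update rule below are arbitrary illustrative choices:

```python
# Principle 1: a mental state as an N-dimensional vector of activations.
# Principle 2: memory as an N x N matrix of connection weights.

state = [0.2, 0.9, 0.1, 0.5]                  # activation of each of N=4 units
N = len(state)
weights = [[0.0] * N for _ in range(N)]       # initially no stored memory

# "Learning" modifies connection strengths; here a simple outer-product
# update stores the current pattern in the weight matrix.
for i in range(N):
    for j in range(N):
        weights[i][j] += state[i] * state[j]

# A subsequent state is each unit's weighted sum of the current activations.
next_state = [sum(weights[i][j] * state[j] for j in range(N)) for i in range(N)]
print(len(next_state))  # 4: still an N-dimensional state vector
```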

Though there is a large variety of neural network models, they very rarely stray from these two basic principles. Most of the variety comes from:

  • Interpretation of units—units can be interpreted as neurons or groups of neurons.
  • Definition of activation—activation can be defined in a variety of fashions. For example, in a Boltzmann machine, the activation is interpreted as the probability of generating an action potential spike, and it is determined via a logistic function on the sum of the inputs to a unit.
  • Learning algorithm—different networks modify their connections differently. Generally, any mathematically defined change in connection weights over time is referred to as the "learning algorithm".
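For instance, the Boltzmann-machine-style activation mentioned above, a logistic function applied to a unit's summed input, can be sketched as follows; the inputs and weights are illustrative values, not from any fitted model:

```python
import math

def logistic(x):
    """Logistic (sigmoid) function mapping any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def spike_probability(inputs, weights):
    """Probability of a spike: the logistic function applied to the
    weighted sum of a unit's inputs."""
    net = sum(i * w for i, w in zip(inputs, weights))
    return logistic(net)

# Illustrative inputs and weights for one unit.
p = spike_probability([1.0, 0.5, -0.3], [0.4, 0.2, 0.7])
print(0.0 < p < 1.0)  # True: the result is always a valid probability
```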

Connectionists are in agreement that recurrent neural networks (networks wherein connections of the network can form a directed cycle) are a better model of the brain than feedforward neural networks (networks with no directed cycles). Many recurrent connectionist models incorporate dynamical systems theory as well. Many researchers, such as the connectionist Paul Smolensky, have argued that connectionist models will evolve towards fully continuous, high-dimensional, non-linear, dynamic systems approaches.


Biological realism

The neural network branch of connectionism suggests that the study of mental activity is really the study of neural systems. This links connectionism to neuroscience, and models involve varying degrees of biological realism. Connectionist work in general need not be biologically realistic, but some neural network researchers try to model the biological aspects of natural neural systems very closely. Many authors also find the clear link between neural activity and cognition to be an appealing aspect of connectionism. However, this is also a source of criticism, as some view it as reductionism.


Learning

Connectionists generally stress the importance of learning in their models. As a result, many sophisticated learning procedures for neural networks have been developed by connectionists. Learning always involves modifying the connection weights, generally via mathematical formulas that determine the change in weights given sets of data consisting of activation vectors for some subset of the neural units.


By formalizing learning in such a way, connectionists have many tools at their disposal. A very common tactic in connectionist learning methods is to incorporate gradient descent over an error surface in a space defined by the weight matrix. All gradient descent learning in connectionist models involves changing each weight by the partial derivative of the error surface with respect to the weight. Backpropagation, first made popular in the 1980s, is probably the most commonly known connectionist gradient descent algorithm today.
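A minimal sketch of this weight update for a single linear unit, assuming squared error and an invented one-example data set, shows each weight moving against the partial derivative of the error:

```python
# Gradient descent on squared error E = (output - target)^2 for one
# linear unit. The data, learning rate, and sizes are illustrative.

def train_step(weights, inputs, target, lr=0.05):
    """One gradient-descent update: w_i <- w_i - lr * dE/dw_i."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = output - target
    # dE/dw_i = 2 * error * x_i
    return [w - lr * 2.0 * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):
    weights = train_step(weights, inputs=[1.0, 2.0], target=3.0)

output = weights[0] * 1.0 + weights[1] * 2.0
print(abs(output - 3.0) < 1e-6)  # True: the error shrinks toward zero
```

Backpropagation generalizes this same rule to hidden layers by propagating the error derivatives backwards through the network.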


History

Connectionism can be traced back to ideas more than a century old. However, connectionist ideas were little more than speculation until the mid-to-late 20th century. It was not until the 1980s that connectionism became a popular perspective amongst scientists.


Parallel distributed processing

Complex parallel distributed processing programs, such as PDP++ shown here, can result in powerful simulations

The prevailing connectionist approach today was originally known as Parallel Distributed Processing (PDP). PDP was a neural network approach that stressed the parallel nature of neural processing, and the distributed nature of neural representations.


PDP provided a general mathematical framework for researchers to operate in. The framework involved eight major aspects:

  • A set of processing units, represented by a set of integers.
  • An activation for each unit, represented by a vector of time-dependent functions.
  • An output function for each unit, represented by a vector of functions on the activations.
  • A pattern of connectivity among units, represented by a matrix of real numbers indicating connection strength.
  • A propagation rule spreading the activations via the connections, represented by a function on the output of the units.
  • An activation rule for combining inputs to a unit to determine its new activation, represented by a function on the current activation and propagation.
  • A learning rule for modifying connections based on experience, represented by a change in the weights based on any number of variables.
  • An environment which provides the system with experience, represented by sets of activation vectors for some subset of the units.

These eight aspects are now the foundation for almost all connectionist models.
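One update cycle of such a framework might be sketched as follows; every numeric choice (network size, weights, rules) is an illustrative assumption rather than a canonical PDP model:

```python
# Each of the eight PDP aspects mapped onto a tiny, hand-built network.

units = [0, 1, 2]                                  # 1. processing units
activation = [0.0, 0.0, 1.0]                       # 2. activations (time t)

def output(a):                                     # 3. output function
    return max(0.0, a)                             #    (threshold at zero)

connectivity = [[0.0, 0.5, 0.2],                   # 4. connectivity matrix:
                [0.5, 0.0, 0.8],                   #    connectivity[i][j] is
                [0.2, 0.8, 0.0]]                   #    the strength from j to i

def propagate(acts):                               # 5. propagation rule
    outs = [output(a) for a in acts]
    return [sum(connectivity[i][j] * outs[j] for j in units) for i in units]

def activate(current, net):                        # 6. activation rule: blend
    return [0.5 * c + 0.5 * n for c, n in zip(current, net)]

def learn(acts, rate=0.1):                         # 7. learning rule
    for i in units:                                #    (co-activation grows
        for j in units:                            #    the connection)
            connectivity[i][j] += rate * acts[i] * acts[j]

environment = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # 8. environment patterns

for pattern in environment:                        # one experience per pattern
    activation = activate(pattern, propagate(pattern))
    learn(activation)
print(len(activation))  # 3: still one activation per unit
```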


Much of the research that led to the development of PDP was done in the 1970s, but PDP became popular in the 1980s with the release of Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1 (Foundations) and Volume 2 (Psychological and Biological Models), by James L. McClelland, David E. Rumelhart, and the PDP Research Group. Although the books are now considered seminal connectionist works, the term "connectionism" was not used by the authors to describe their framework at that point. However, it is now common to fully equate PDP and connectionism.


Earlier work

PDP's direct roots were the perceptron theories of researchers such as Frank Rosenblatt from the 1950s and 1960s. However, perceptron models were made very unpopular by the 1969 release of the book Perceptrons, by Marvin Minsky and Seymour Papert. Minsky and Papert elegantly demonstrated the limits on the sorts of functions that perceptrons can calculate, showing that even simple functions like the exclusive disjunction (XOR) could not be handled properly. The PDP books overcame this earlier limitation by showing that multi-level, non-linear neural networks were far more robust and could be used for a vast array of functions.
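Minsky and Papert's point can be seen in miniature: no single linear threshold unit computes exclusive disjunction, but a two-layer network does. The weights below are one hand-picked solution, not a trained model:

```python
# A two-layer threshold network computing XOR, which no single
# perceptron (one linear threshold unit) can compute.

def step(x):
    """Linear threshold: fire (1) if the net input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    """Hidden units compute OR and NAND; the output unit ANDs them,
    which yields XOR."""
    h1 = step(a + b - 0.5)          # OR(a, b)
    h2 = step(-a - b + 1.5)         # NAND(a, b)
    return step(h1 + h2 - 1.5)      # AND(h1, h2)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```

The extra "hidden" layer is exactly the multi-level structure that the PDP books showed how to train.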


However, there were many researchers outside of the perceptron theorists who were advocating connectionist-style models prior to the 1980s.


In the 1940s and 1950s, researchers such as Warren McCulloch, Walter Pitts, Donald Hebb, and Karl Lashley were advocating connectionist-style theories. McCulloch and Pitts showed how first-order logic could be implemented by neural systems: their classic paper "A Logical Calculus of Ideas Immanent in Nervous Activity" (1943) was important in this development (they were influenced by the work of Nicolas Rashevsky in the 1930s). Hebb contributed greatly to speculations about neural functioning and proposed a learning principle, Hebbian learning, that is still in use today. Lashley argued for distributed representations as a result of his failure to find anything like a localized engram in years of lesion experiments.
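Hebbian learning, in its simplest form, strengthens a connection in proportion to the product of the two units' activations ("units that fire together wire together"); the learning rate and activation values below are illustrative:

```python
# Simplest Hebbian rule: delta_w = rate * pre * post.

def hebbian_update(weight, pre, post, rate=0.1):
    """Increase a connection weight by rate * pre * post."""
    return weight + rate * pre * post

w = 0.0
for _ in range(10):                 # repeated co-activation strengthens w
    w = hebbian_update(w, pre=1.0, post=0.8)
print(round(w, 2))  # 0.8
```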


Connectionism apart from PDP

Though PDP is the dominant form of connectionism, the work of several other theorists should also be classified as connectionist.


Many connectionist principles can be traced back to early work in psychology, such as that of William James, although it should be noted that psychological theories based on what was then known about the human brain were quite fashionable at the end of the 19th century. As early as 1869, the neurologist John Hughlings Jackson was arguing for multi-level, distributed systems. Following this lead, Herbert Spencer's Principles of Psychology, 3rd edition (1872), and Sigmund Freud's Project for a Scientific Psychology (composed 1895) propounded connectionist or proto-connectionist theories. These, however, tended to be speculative theories. By the early 20th century, Edward Thorndike was carrying out experiments on learning that posited a connectionist-type network.


In the 1950s, Friedrich Hayek posited the idea of spontaneous order in the brain arising out of decentralized networks of simple units, but Hayek's work was generally not cited in the PDP literature until recently.


Another form of connectionist model was the relational network framework developed by the linguist Sydney Lamb in the 1960s. Relational networks have only ever been used by linguists, and have never been unified with the PDP approach. As a result, relational networks are used by very few researchers today.


Connectionism vs. computationalism debate

As connectionism became increasingly popular in the late 1980s, there was a reaction to it by some researchers, including Jerry Fodor, Steven Pinker, and others. These theorists argued that connectionism, as it was being developed at that time, was in danger of obliterating what they saw as the progress being made in the fields of cognitive science and psychology by the classical approach of computationalism. Computationalism is a specific form of cognitivism which argues that mental activity is computational, i.e., that the mind is essentially a Turing machine. Some researchers argued that the trend in connectionism was a reversion towards associationism and the abandonment of the idea of a language of thought, something they felt was mistaken. In contrast, it was those very tendencies that made connectionism attractive to other researchers.


Connectionism and computationalism need not be at odds per se, but the debate as it was phrased in the late 1980s and early 1990s certainly led to opposition between the two approaches. However, throughout the debate some researchers have argued that connectionism and computationalism are fully compatible, though no consensus has been reached. The differences between the two approaches that are usually cited are the following:

  • Computationalists posit symbolic models that do not resemble underlying brain structure at all, whereas connectionists engage in "low-level" modeling, trying to ensure that their models resemble neurological structures.
  • Computationalists generally focus on the structure of explicit symbols (mental models) and syntactical rules for their internal manipulation, whereas connectionists focus on learning from environmental stimuli and storing this information in the form of connections between neurons.
  • Computationalists believe that internal mental activity consists of the manipulation of explicit symbols, whereas connectionists believe that the manipulation of explicit symbols is a poor model of mental activity.
  • Computationalists often posit domain-specific symbolic sub-systems designed to support learning in specific areas of cognition (e.g. language, intentionality, number), while connectionists posit one or a small set of very general learning mechanisms.

Though these differences do exist, they may not be necessary. For example, it is well known that connectionist models can actually implement symbol manipulation systems of the kind used in computationalist models. Hence the differences might be a matter of the personal choices that some connectionist researchers make rather than anything fundamental to connectionism.


The recent popularity of dynamical systems in philosophy of mind (due to the work of authors such as Tim van Gelder) has added a new perspective on the debate; some authors now argue that any split between connectionism and computationalism is really just a split between computationalism and dynamical systems, suggesting that the original debate was wholly misguided.


All of these views have led to considerable discussion on the issue amongst researchers, and it is likely that the debates will continue.


See also

  • Artificial neural network
  • Behaviorism
  • Biological neural network
  • Cybernetics
  • Eliminativism
  • Self-organizing map
  • System

References

  • Abdi, H. (1994). "A neural network primer." Journal of Biological Systems, 2, 247-281.
  • Abdi, H. (2003). "Neural networks." In M. Lewis-Beck, A. Bryman, & T. Futing (Eds.), Encyclopedia for Research Methods for the Social Sciences (pp. 792-795). Thousand Oaks, CA: Sage.
  • Abdi, H. (2001). "Linear algebra for neural networks." In N.J. Smelser & P.B. Baltes (Eds.), International Encyclopedia of the Social and Behavioral Sciences. Oxford, UK: Elsevier.
  • Abdi, H., Valentin, D., & Edelman, B.E. (1999). Neural Networks. Thousand Oaks, CA: Sage.
  • Rumelhart, D.E., McClelland, J.L., & the PDP Research Group (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. Cambridge, MA: MIT Press.
  • McClelland, J.L., Rumelhart, D.E., & the PDP Research Group (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models. Cambridge, MA: MIT Press.
  • Pinker, S., & Mehler, J. (1988). Connections and Symbols. Cambridge, MA: MIT Press.
  • Elman, J.L., Bates, E.A., Johnson, M.H., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press.
  • Marcus, G.F. (2001). The Algebraic Mind: Integrating Connectionism and Cognitive Science. Cambridge, MA: MIT Press.
