Perceptron

The perceptron is a type of artificial neural network invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt. It can be seen as the simplest kind of feedforward neural network: a linear classifier.


Definition

The perceptron is a kind of binary classifier that maps its input x (a real-valued vector) to an output value f(x) (a single real value) calculated as


f(x) = \langle w, x \rangle + b



where w is a vector of real-valued weights, \langle \cdot, \cdot \rangle is the dot product (which here computes a weighted sum), and b is the 'bias', a constant term that does not depend on any input value.


The sign of f(x) is used to classify x as either a positive or a negative instance, in the case of a binary classification problem. The bias can be thought of as offsetting the activation function, or as giving the output neuron a "base" level of activity. If b is negative, then the weighted combination of inputs must produce a positive value greater than |b| in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary.
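To make the definition concrete, here is a minimal sketch of the decision rule in Python. The function name predict and all numeric values below are made-up illustrative choices, not part of the original article.

    # Minimal sketch of the perceptron decision function f(x) = <w, x> + b.
    def predict(w, x, b):
        """Return 1 for a positive instance, 0 for a negative one."""
        f = sum(w_j * x_j for w_j, x_j in zip(w, x)) + b   # <w, x> + b
        return 1 if f > 0 else 0

    w = [0.5, -0.2, 0.1]     # example weight vector
    b = -0.3                 # bias: the weighted sum must exceed 0.3 to fire
    x = [1.0, 0.4, 2.0]      # example input
    print(predict(w, x, b))  # -> 1, since 0.5*1.0 - 0.2*0.4 + 0.1*2.0 - 0.3 = 0.32 > 0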


Since the inputs are fed directly to the output unit via the weighted connections, the perceptron can be considered the simplest kind of feed-forward neural network.


Learning algorithm

The learning algorithm is the same for every neuron, so everything that follows applies to a single neuron in isolation. We first define some variables:

  • x(j) denotes the j-th item in the input vector
  • w(j) denotes the j-th item in the weight vector
  • y denotes the output from the neuron
  • δ denotes the expected output
  • α is the learning rate, a constant with 0 < α < 1
Figure: the appropriate weights are applied to the inputs, and the weighted sum is passed to a function which produces the output y.

The weights are updated after each input according to the update rule below:

  • w(j)' = w(j) + α(δ − y)x(j)
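A minimal sketch of a single application of this update rule in Python; the helper name update and the numbers below are illustrative assumptions.

    # One application of the perceptron update rule w(j)' = w(j) + alpha*(delta - y)*x(j).
    def update(w, x, delta, y, alpha=0.1):
        """Return the updated weight vector after seeing one example."""
        return [w_j + alpha * (delta - y) * x_j for w_j, x_j in zip(w, x)]

    w = [0.0, 0.0]
    x = [1.0, -1.0]
    delta, y = 1, 0                  # desired output 1, actual output 0
    print(update(w, x, delta, y))    # -> [0.1, -0.1]; if y == delta the weights are unchanged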

Therefore, learning is modelled as the weight vector being updated after each iteration; an update takes place only if the output y differs from the desired output δ. Still considering a single neuron, but now incorporating multiple iterations, let us first define some more variables:

  • x_i denotes the input vector for the i-th iteration
  • w_i denotes the weight vector for the i-th iteration
  • y_i denotes the output for the i-th iteration
  • D_m = \{(x_1,y_1),\dots,(x_m,y_m)\} denotes a training set of m examples, one per iteration

On each iteration the weight vector is updated as follows:

  • For each pair (x_i, y_i) in the training set D_m = \{(x_1,y_1),\dots,(x_m,y_m)\}
  • Pass (x_i, y_i, w_i) to the update rule w(j)' = w(j) + α(δ − y)x(j)
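Putting the pieces together, here is a sketch of the whole training loop in Python. The bias is handled as an extra constant input of 1; the learning rate, epoch count and the AND-function training set are illustrative assumptions, not part of the original description.

    # Sketch of perceptron training over a data set D_m = {(x_1, d_1), ..., (x_m, d_m)},
    # where d_i is the desired output for input x_i.
    def train(D, alpha=0.1, epochs=10):
        n = len(D[0][0])
        w = [0.0] * (n + 1)                      # last entry plays the role of the bias b
        for _ in range(epochs):
            for x, delta in D:
                xb = list(x) + [1.0]             # append a constant input for the bias
                y = 1 if sum(wj * xj for wj, xj in zip(w, xb)) > 0 else 0
                w = [wj + alpha * (delta - y) * xj for wj, xj in zip(w, xb)]
        return w

    # Learning the (linearly separable) logical AND function:
    D = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    print(train(D))   # -> weights and bias (last entry) realising AND, e.g. [0.2, 0.1, -0.2]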

The training set D_m is said to be linearly separable if there exists a positive constant γ and a weight vector w such that y_i \cdot \left( \langle w, x_i \rangle + b \right) > \gamma for all i. Novikoff (1962) proved that the perceptron algorithm converges after a finite number of iterations if the data set is linearly separable, with the number of mistakes bounded by \left( \frac{2R}{\gamma} \right)^2, where R is the maximum norm of an input vector.


However, if the training set is not linearly separable, the above online algorithm is not guaranteed to converge.


Variants

The pocket algorithm with ratchet (Gallant, 1990) solves the stability problem of perceptron learning by keeping the best solution seen so far "in its pocket". The pocket algorithm then returns the solution in the pocket, rather than the last solution.
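The following is a simplified sketch of the pocket idea in Python, not Gallant's exact formulation: ordinary perceptron updates are performed, but the weight vector with the fewest training errors seen so far is kept "in the pocket" and returned at the end (the real algorithm's ratchet also tracks runs of consecutive correct classifications). The helper names, learning rate and XOR toy set are illustrative.

    # Count misclassified examples for a weight vector w (bias folded in as a trailing 1).
    def errors(w, D):
        return sum(1 for x, d in D
                   if (1 if sum(wj * xj for wj, xj in zip(w, x + [1.0])) > 0 else 0) != d)

    def pocket_train(D, alpha=0.1, epochs=50):
        n = len(D[0][0])
        w = [0.0] * (n + 1)
        pocket, pocket_err = list(w), errors(w, D)
        for _ in range(epochs):
            for x, delta in D:
                xb = list(x) + [1.0]
                y = 1 if sum(wj * xj for wj, xj in zip(w, xb)) > 0 else 0
                if y != delta:
                    w = [wj + alpha * (delta - y) * xj for wj, xj in zip(w, xb)]
                    e = errors(w, D)
                    if e < pocket_err:           # better than the pocket: replace it
                        pocket, pocket_err = list(w), e
        return pocket

    # XOR is not linearly separable: plain perceptron updates cycle forever,
    # but the pocket retains the best weights encountered (here, one error out of four).
    D = [([1, 1], 0), ([0, 1], 1), ([1, 0], 1), ([0, 0], 0)]
    w = pocket_train(D)
    print(w, "training errors:", errors(w, D))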


The α-perceptron further utilised a preprocessing layer of fixed random weights, with thresholded output units. This enabled the perceptron to classify analogue patterns, by projecting them into a binary space. In fact, for a projection space of sufficiently high dimension, patterns can become linearly separable.


As an example, consider the case of having to classify data into two classes: say, a small data set consisting of points drawn from two Gaussian distributions.

A linear classifier can only separate things with a hyperplane, so it is not possible to classify all the examples perfectly. On the other hand, we may project the data into a large number of dimensions. In this case a random matrix was used to project the data linearly to a 1000-dimensional space; then each resulting data point was transformed through the hyperbolic tangent function. A linear classifier can then separate the data. However, the data may still not be completely separable in this space, in which case the perceptron algorithm would not converge. In this example, stochastic steepest gradient descent was used to adapt the parameters.
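A sketch of this projection scheme with NumPy: a fixed random matrix maps the 2-dimensional inputs into a higher-dimensional space, the result is squashed with tanh, and a plain perceptron rule (used here instead of the stochastic gradient descent mentioned above) is trained on the projected features. The dimensions, random seed, cluster locations and learning rate are illustrative assumptions; the original example's data and figures are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two Gaussian clusters in 2-D (illustrative stand-in for the example data).
    X = np.vstack([rng.normal(loc=-1.0, scale=1.0, size=(50, 2)),
                   rng.normal(loc=+1.0, scale=1.0, size=(50, 2))])
    d = np.array([0] * 50 + [1] * 50)        # desired outputs

    # Fixed random projection to 1000 dimensions, then tanh squashing.
    P = rng.normal(size=(2, 1000))           # random projection matrix (never trained)
    Z = np.tanh(X @ P)                       # projected, nonlinearly transformed features

    # Plain perceptron on the projected features.
    w, b, alpha = np.zeros(Z.shape[1]), 0.0, 0.1
    for _ in range(20):
        for z, delta in zip(Z, d):
            y = 1 if z @ w + b > 0 else 0
            w += alpha * (delta - y) * z
            b += alpha * (delta - y)

    y_hat = (Z @ w + b > 0).astype(int)
    print("training accuracy:", (y_hat == d).mean())   # typically 1.0 in this high-dimensional space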


It should be kept in mind, however, that the best classifier is not necessarily that which classifies all the training data perfectly. Indeed, if we had the prior constraint that the data come from equi-variant Gaussian distributions, the linear separation in the input space is optimal.


Other training algorithms for linear classifiers are possible: see, e.g., support vector machine and logistic regression.


History

Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This led to the field of neural network research stagnating for many years, before it was recognised that a feedforward neural network with three or more layers (also called a multilayer perceptron) had far greater processing power than perceptrons with one layer (also called a single layer perceptron) or two. Single layer perceptrons are only capable of learning linearly separable patterns; in 1969 a famous monograph entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. They conjectured (incorrectly) that a similar result would hold for a perceptron with three or more layers. Three years later Stephen Grossberg published a series of papers introducing networks capable of modelling differential, contrast-enhancing and XOR functions (the papers were published in 1972 and 1973; see e.g. Grossberg, "Contour enhancement, short-term memory, and constancies in reverberating neural networks", Studies in Applied Mathematics, 52 (1973), 213-257). Nevertheless, the often-cited Minsky/Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s. The text was reprinted in 1987 as "Perceptrons - Expanded Edition", in which some errors in the original text are shown and corrected.


More recently, interest in the perceptron learning algorithm has increased again after Freund and Schapire (1998) presented a voted formulation of the original algorithm (attaining large margin) and suggested that one can apply the kernel trick to it. The kernel perceptron not only can handle nonlinearly separable data but can also go beyond vectors and classify instances having a relational representation (e.g. trees, graphs or sequences).
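As a sketch of the idea (not Freund and Schapire's exact voted formulation), a kernel perceptron keeps one coefficient per training example instead of an explicit weight vector and predicts through a kernel function; on a mistake the coefficient of the offending example is incremented. The Gaussian (RBF) kernel, the ±1 labels and the XOR toy data below are illustrative choices.

    import math

    def rbf(a, b, gamma=1.0):
        """Gaussian (RBF) kernel between two points given as tuples."""
        return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def kernel_perceptron(X, y, kernel=rbf, epochs=10):
        alpha = [0] * len(X)                 # one mistake counter per training example
        for _ in range(epochs):
            for i, (xi, yi) in enumerate(zip(X, y)):
                f = sum(a * yj * kernel(xj, xi) for a, xj, yj in zip(alpha, X, y))
                if yi * f <= 0:              # mistake (or on the boundary): update
                    alpha[i] += 1

        def predict(x):
            f = sum(a * yj * kernel(xj, x) for a, xj, yj in zip(alpha, X, y))
            return 1 if f > 0 else -1

        return predict

    # XOR is not linearly separable in the input space, but is handled through the kernel.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [-1, 1, 1, -1]
    predict = kernel_perceptron(X, y)
    print([predict(x) for x in X])           # -> [-1, 1, 1, -1]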


Running Time

First we consider a network with only one neuron. The running time of a prediction is dominated by the dot product, which is the rate-determining step. Counting arithmetic operations, the dot product of two n-component vectors takes O(n) operations, so a network with k such neurons has an overall running time of O(kn). If, in addition, each component is an n-bit number multiplied with schoolbook multiplication, each of the n multiplications costs O(n^2) bit operations, the dot product costs O(n^3) bit operations, and the overall running time for the network becomes O(kn^3).
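A small illustrative timing sketch (the sizes and the use of NumPy are assumptions; absolute times depend on the machine) showing that the per-prediction cost grows roughly linearly with the input dimension n:

    import numpy as np
    import timeit

    # The cost of one prediction is dominated by the dot product,
    # so it grows roughly linearly with the number of components n.
    for n in (1_000, 10_000, 100_000):
        w, x = np.random.randn(n), np.random.randn(n)
        t = timeit.timeit(lambda: float(np.dot(w, x)) > 0, number=1000)
        print(f"n={n:>7}: {t:.4f}s for 1000 predictions")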


See also

  • Artificial neuron
  • Artificial intelligence
  • Artificial neural network
  • Data mining
  • Linear classifier
  • Linear discriminant analysis
  • Logit
  • Logistic regression
  • Machine learning
  • Pattern recognition
  • Predictive analytics
  • Support vector machine

References

  • Abdi, H., Valentin, D., & Edelman, B. (1999). Neural Networks. Thousand Oaks: Sage.
  • Abdi, H. (1994). A neural network primer. Journal of Biological Systems, 2, 247-281.
  • Freund, Y., & Schapire, R. E. (1998). Large margin classification using the perceptron algorithm. In Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT '98). ACM Press.
  • Gallant, S. I. (1990). Perceptron-based learning algorithms. IEEE Transactions on Neural Networks, 1(2), 179-191.
  • Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386-408.
  • Minsky, M. L., & Papert, S. A. (1969). Perceptrons. Cambridge, MA: MIT Press.
  • Novikoff, A. B. (1962). On convergence proofs on perceptrons. Symposium on the Mathematical Theory of Automata, 12, 615-622. Polytechnic Institute of Brooklyn.
  • Widrow, B., & Lehr, M. A. (1990). 30 years of adaptive neural networks: Perceptron, Madaline, and backpropagation. Proceedings of the IEEE, 78(9), 1415-1442.





 
 
