Machine learning

As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn". At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets.
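As an illustration (not taken from the original article), the following minimal Python sketch shows the inductive flavour of learning: rather than being told a rule, the program infers a threshold rule from labeled examples. The data and the single-feature rule are assumptions made purely for the example.

```python
# Minimal sketch of inductive learning: infer a decision rule from labeled examples.
# The data and the threshold-rule hypothesis space are illustrative assumptions.

# (feature value, label) pairs, e.g. a measurement -> class "A" or "B"
examples = [(1.4, "A"), (1.3, "A"), (1.5, "A"), (4.7, "B"), (4.5, "B"), (5.1, "B")]

def learn_threshold(data):
    """Pick the threshold that misclassifies the fewest training examples."""
    candidates = sorted(x for x, _ in data)
    best_t, best_errors = None, len(data) + 1
    for i in range(len(candidates) - 1):
        t = (candidates[i] + candidates[i + 1]) / 2.0
        # rule under test: predict "B" whenever the feature exceeds t
        errors = sum(1 for x, y in data if (y == "B") != (x > t))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

threshold = learn_threshold(examples)
print(f"learned rule: predict 'B' when feature > {threshold:.2f}")
```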


The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. Hence, machine learning is closely related not only to data mining and statistics, but also to theoretical computer science.


Machine learning has a wide spectrum of applications including natural language processing, syntactic pattern recognition, search engines, medical diagnosis, bioinformatics and cheminformatics, detecting credit card fraud, stock market analysis, classifying DNA sequences, speech and handwriting recognition, object recognition in computer vision, game playing and robot locomotion.


Human interaction

Some machine learning systems attempt to eliminate the need for human intuition in the analysis of the data, while others adopt a collaborative approach between human and machine. Human intuition cannot be entirely eliminated since the designer of the system must specify how the data are to be represented and what mechanisms will be used to search for a characterization of the data. Machine learning can be viewed as an attempt to automate parts of the scientific method.


Some statistical machine learning researchers create methods within the framework of Bayesian statistics.
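In the Bayesian framework, probabilities are treated as degrees of belief that are revised by data. As an illustration only (the prior, the data, and the coin-bias setting are assumptions, not part of the original article), a minimal conjugate Beta-Binomial update in Python:

```python
# Minimal sketch of Bayesian updating: belief about a coin's heads-probability,
# expressed as a Beta distribution, is revised by observed data via Bayes' theorem.
# All numbers below are illustrative assumptions.

alpha, beta = 1.0, 1.0          # Beta(1, 1) prior: every bias equally plausible
heads, tails = 7, 3             # assumed observations

alpha_post = alpha + heads      # conjugate posterior after conditioning on the data
beta_post = beta + tails

posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior Beta({alpha_post:.0f}, {beta_post:.0f}), mean belief = {posterior_mean:.2f}")
```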


Algorithm types

Machine learning algorithms are organized into a taxonomy, based on the desired outcome of the algorithm. Common algorithm types include:

  • Supervised learning — in which the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate) the behavior of a function which maps a vector (x_1, x_2, ..., x_N) into one of several classes by looking at several input-output examples of the function (a minimal sketch of this formulation follows the list).
  • Unsupervised learning — in which the algorithm models a set of inputs; labeled examples are not available.
  • Semi-supervised learning — which combines both labeled and unlabeled examples to generate an appropriate function or classifier.
  • Reinforcement learning — in which the algorithm learns a policy of how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback that guides the learning algorithm.
  • Transduction — similar to supervised learning, but does not explicitly construct a function: instead, it tries to predict new outputs based on training inputs, training outputs, and test inputs that are available while training.
  • Learning to learn — in which the algorithm learns its own inductive bias based on previous experience.
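The sketch below illustrates the supervised classification formulation from the list above: input-output examples are used to approximate a function from feature vectors to class labels. The toy data and the use of scikit-learn (a library not mentioned in the article) are assumptions for the sake of the example.

```python
# Minimal sketch of supervised classification: approximate a mapping from
# feature vectors to class labels from input-output examples.
# Data and library choice are illustrative assumptions.
from sklearn.neighbors import KNeighborsClassifier

# Input-output examples: feature vectors (x_1, ..., x_N) and their class labels.
X_train = [[0.2, 0.1], [0.4, 0.3], [3.1, 2.9], [3.3, 3.0]]
y_train = ["small", "small", "large", "large"]

# The learner generates a function that maps inputs to desired outputs.
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

print(model.predict([[0.3, 0.2], [3.2, 3.1]]))  # -> ['small' 'large']
```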

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
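As an illustration of the kind of guarantee computational learning theory provides (a standard PAC-learning bound for a finite hypothesis class, quoted here as an example rather than taken from the original article): if a learner outputs any hypothesis from a finite class H that is consistent with

m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)

training examples, then with probability at least 1 - \delta the chosen hypothesis has true error at most \epsilon.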


Machine learning topics

This list represents the topics covered on a typical machine learning course.

Modelling conditional probability density functions: regression and classification
  • Artificial neural networks
  • Decision trees
  • Gene expression programming
  • Genetic algorithms
  • Genetic programming
  • Inductive logic programming
  • Kriging (Gaussian process regression)
  • Linear discriminant analysis
  • Nearest neighbor algorithm
  • Minimum message length
  • Perceptron
  • Quadratic classifiers
  • Radial basis function networks
  • Support vector machines
Algorithms for estimating model parameters
  • Dynamic programming
  • Expectation-maximization (EM) algorithm
Modelling probability density functions through generative models
  • Graphical models, including Bayesian networks and Markov random fields
  • Generative topographic map
Approximate inference techniques
  • Monte Carlo methods
  • Variable-order Markov models
  • Variable-order Bayesian networks
  • Belief propagation
Optimization
  • Most of the methods listed above either use optimization or are instances of optimization algorithms
Meta-learning (ensemble methods)
  • Boosting
  • Bootstrap aggregating (bagging)
  • Random forest
  • Weighted majority algorithm
Inductive transfer and learning to learn
  • Inductive transfer
  • Reinforcement learning
  • Temporal difference learning
  • Monte Carlo methods
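As a concrete instance of one of the methods listed above, the following is a minimal perceptron learner in Python. The training data, learning rate, and number of epochs are illustrative assumptions, not material from the original article.

```python
# Minimal perceptron sketch: learn weights for a linearly separable
# two-class problem by repeatedly correcting mistakes.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (feature_vector, label) pairs with labels in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:             # misclassified: nudge the weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Assumed toy data: class +1 in the upper-right region, class -1 lower-left.
data = [([2.0, 2.5], 1), ([2.5, 2.0], 1), ([0.5, 0.3], -1), ([0.2, 0.8], -1)]
w, b = train_perceptron(data)
print("weights:", w, "bias:", b)
```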

See also

  • Artificial intelligence
  • Autonomous robotics
  • Bioinformatics
  • Computational intelligence
  • Computer vision
  • Data mining
  • Inductive logic programming
  • Journal of Machine Learning Research
  • Important publications in machine learning
  • List of numerical analysis software
  • Machine Learning: Models, Technologies & Applications (MLMTA)
  • Neural Information Processing Systems (NIPS)
  • Neural network software
  • Variable-order Markov models
  • Variable-order Bayesian networks
  • Pattern recognition
  • Predictive analytics
  • Weka

Bibliography

  • Ethem Alpaydın (2004). Introduction to Machine Learning (Adaptive Computation and Machine Learning), MIT Press. ISBN 0-262-01211-1.
  • Christopher M. Bishop (2007). Pattern Recognition and Machine Learning, Springer. ISBN 0-387-31073-8.
  • Ryszard S. Michalski, Jaime G. Carbonell, Tom M. Mitchell (1983). Machine Learning: An Artificial Intelligence Approach, Tioga Publishing Company. ISBN 0-935382-05-4.
  • Ryszard S. Michalski, Jaime G. Carbonell, Tom M. Mitchell (1986). Machine Learning: An Artificial Intelligence Approach, Volume II, Morgan Kaufmann. ISBN 0-934613-00-1.
  • Yves Kodratoff, Ryszard S. Michalski (1990). Machine Learning: An Artificial Intelligence Approach, Volume III, Morgan Kaufmann. ISBN 1-55860-119-8.
  • Ryszard S. Michalski, George Tecuci (1994). Machine Learning: A Multistrategy Approach, Volume IV, Morgan Kaufmann. ISBN 1-55860-251-8.
  • Bhagat, P. M. (2005). Pattern Recognition in Industry, Elsevier. ISBN 0-08-044538-1.
  • Bishop, C. M. (1995). Neural Networks for Pattern Recognition, Oxford University Press. ISBN 0-19-853864-2.
  • Richard O. Duda, Peter E. Hart, David G. Stork (2001). Pattern Classification (2nd edition), Wiley, New York. ISBN 0-471-05669-3.
  • Huang T.-M., Kecman V., Kopriva I. (2006). Kernel Based Algorithms for Mining Huge Data Sets: Supervised, Semi-supervised, and Unsupervised Learning, Springer-Verlag, Berlin, Heidelberg, 260 pp., 96 illus. ISBN 3-540-31681-7.
  • Vojislav Kecman (2001). Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models, MIT Press, Cambridge, MA, 608 pp., 268 illus. ISBN 0-262-11255-8.
  • MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms, Cambridge University Press. ISBN 0-521-64298-1.
  • Mitchell, T. (1997). Machine Learning, McGraw Hill. ISBN 0-07-042807-7.
  • Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann. ISBN 0-12-088407-0.
  • Sholom Weiss and Casimir Kulikowski (1991). Computer Systems That Learn, Morgan Kaufmann. ISBN 1-55860-065-5.
  • Mierswa, Ingo; Wurst, Michael; Klinkenberg, Ralf; Scholz, Martin; Euler, Timm (2006). "YALE: Rapid Prototyping for Complex Data Mining Tasks", in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-06).
  • Trevor Hastie, Robert Tibshirani and Jerome Friedman (2001). The Elements of Statistical Learning, Springer. ISBN 0-387-95284-5.
  • Vladimir Vapnik (1998). Statistical Learning Theory, Wiley-Interscience. ISBN 0-471-03003-1.
