Browsing AI Memos (1959 - 2004) by Author "Jordan, Michael I."
Now showing items 1-14 of 14
Active Learning with Statistical Models
Cohn, David A.; Ghahramani, Zoubin; Jordan, Michael I. (1995-03-21) For many types of learners one can compute the statistically 'optimal' way to select data. We review how these techniques have been used with feedforward neural networks. We then show how the same principles may be ...
Computing Upper and Lower Bounds on Likelihoods in Intractable Networks
Jaakkola, Tommi S.; Jordan, Michael I. (1996-03-01) We present techniques for computing upper and lower bounds on the likelihoods of partial instantiations of variables in sigmoid and noisy-OR networks. The bounds determine confidence intervals for the desired likelihoods ...
Convergence Results for the EM Approach to Mixtures of Experts Architectures
Jordan, Michael I.; Xu, Lei (1993-11-01) The Expectation-Maximization (EM) algorithm is an iterative approach to maximum likelihood parameter estimation. Jordan and Jacobs (1993) recently proposed an EM algorithm for the mixture of experts architecture of ...
Estimating Dependency Structure as a Hidden Variable
Meila, Marina; Jordan, Michael I.; Morris, Quaid (1998-09-01) This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning ...
Estimating Dependency Structure as a Hidden Variable
Meila, Marina; Jordan, Michael I.; Morris, Quaid (1997-06-01) This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning ...
Factorial Hidden Markov Models
Ghahramani, Zoubin; Jordan, Michael I. (1996-02-09) We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum ...
Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks
Jaakkola, Tommi S.; Saul, Lawrence K.; Jordan, Michael I. (1996-02-09) Sigmoid type belief networks, a class of probabilistic neural networks, provide a natural framework for compactly representing probabilistic information in a variety of unsupervised and supervised learning problems. ...
Hierarchical Mixtures of Experts and the EM Algorithm
Jordan, Michael I.; Jacobs, Robert A. (1993-08-01) We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are ...
Learning Fine Motion by Markov Mixtures of Experts
Meila, Marina; Jordan, Michael I. (1995-11-01) Compliant control is a standard method for performing fine manipulation tasks, like grasping and assembly, but it requires estimation of the state of contact between the robot arm and the objects involved. Here we present ...
Learning from Incomplete Data
Ghahramani, Zoubin; Jordan, Michael I. (1995-01-24) Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the ...
Mean Field Theory for Sigmoid Belief Networks
Saul, Lawrence K.; Jaakkola, Tommi; Jordan, Michael I. (1996-08-01) We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it ...
Neural Networks
Jordan, Michael I.; Bishop, Christopher M. (1996-03-13) We present an overview of current research on artificial neural networks, emphasizing a statistical perspective. We view neural networks as parameterized graphs that make probabilistic assumptions about data, and view ...
On the Convergence of Stochastic Iterative Dynamic Programming Algorithms
Jaakkola, Tommi; Jordan, Michael I.; Singh, Satinder P. (1993-08-01) Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(lambda) algorithm of Sutton ...
Triangulation by Continuous Embedding
Meila, Marina; Jordan, Michael I. (1997-03-01) When triangulating a belief network we aim to obtain a junction tree of minimum state space. Searching for the optimal triangulation can be cast as a search over all the permutations of the network's variables. Our ...