one revised paper and NIPS slides by Buntine
Wray Buntine
wray at ptolemy-ethernet.arc.nasa.gov
Wed Dec 13 17:06:42 EST 1995
Dear Connectionists,
Please note the following two WWW resources: one is a forthcoming journal
paper, and the other is a set of slides from a NIPS'95 workshop presentation.
Also, please note my new address, email, and company. I am no longer at
Heuristicrats.
Wray Buntine
Thinkbank, Inc. +1 (510) 540-6080 [voice]
1678 Shattuck Avenue, Suite 320 +1 (510) 540-6627 [fax]
Berkeley, CA 94709 wray at Thinkbank.COM
============ Article
URL: http://www.thinkbank.com/wray/graphbib.ps.Z
(about 240Kb compressed)
TITLE: A guide to the literature on learning probabilistic
networks from data
AUTHOR: Wray Buntine, Thinkbank
JOURNAL: Accepted by IEEE Trans. on Knowledge and Data Eng.;
final draft submitted.
ABSTRACT: This literature review discusses different methods under the
general rubric of learning Bayesian networks from data, and includes some
overlapping work on more general probabilistic networks. Connections are
drawn between the statistical, neural network, and uncertainty communities,
and between the different methodological communities, such as Bayesian,
description length, and classical statistics. Basic concepts of learning
and of Bayesian networks are introduced, and methods are then reviewed. Methods
are discussed for learning parameters of a probabilistic network, for
learning the structure, and for learning hidden variables. The presentation
avoids formal definitions and theorems, as these are plentiful in the
literature, and instead illustrates key concepts with simplified examples.
KEYWORDS: Bayesian networks, graphical models, hidden variables,
learning, learning structure, probabilistic networks, knowledge discovery
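As a concrete illustration of the simplest case covered in the review,
learning the parameters of a discrete Bayesian network from fully
observed data, the following minimal C sketch (not from the paper; the
counts, prior, and variable names are illustrative) applies the
standard conjugate Dirichlet-multinomial update: add a pseudo-count
prior to the observed counts and normalize within each parent
configuration.

/* Illustrative sketch: posterior-mean parameter estimates for one
 * discrete node of a Bayesian network, given fully observed data.
 * The counts and the symmetric Dirichlet prior are made up. */
#include <stdio.h>

#define PARENT_CONFIGS 2   /* number of joint parent states */
#define NODE_STATES    3   /* number of states of the child node */

int main(void)
{
    /* observed counts n[j][k]: child in state k under parent config j */
    double n[PARENT_CONFIGS][NODE_STATES] = {
        { 8.0, 1.0, 1.0 },
        { 2.0, 5.0, 3.0 }
    };
    double alpha = 1.0;    /* symmetric Dirichlet pseudo-count */
    double theta;

    for (int j = 0; j < PARENT_CONFIGS; j++) {
        double total = 0.0;
        for (int k = 0; k < NODE_STATES; k++)
            total += n[j][k] + alpha;
        for (int k = 0; k < NODE_STATES; k++) {
            /* posterior mean estimate of P(child = k | parents = j) */
            theta = (n[j][k] + alpha) / total;
            printf("theta[%d][%d] = %.3f\n", j, k, theta);
        }
    }
    return 0;
}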
=========== Talk
URL: http://www.thinkbank.com/wray/refs.html
(and look under Talks for NIPS)
TITLE: Compiling Probabilistic Networks and Some Questions This Poses.
AUTHOR: Wray Buntine
WORKSHOP: NIPS'95 Workshop on Learning Graphical Models
ABSTRACT:
Probabilistic networks (and similar formalisms) provide a high-level
language that can be used as the input to a compiler that generates a
learning or inference algorithm. Example compilers are BUGS (which
takes a Bayes net with plates as input) by Gilks, Spiegelhalter, et
al., and MultiClass (which takes a dataflow graph as input) by Roy.
This talk has three parts: (1) an outline of the arguments for such
compilers for probabilistic networks, (2) an introduction to some
compilation techniques, and (3) the presentation of some theoretical
challenges that compilation poses.
High-level language compilers are usually justified as rapid
prototyping tools. In learning, rapid prototyping is needed for the
following reasons: good priors for complex networks are not obvious,
and experimentation can be required to understand them; and several
algorithms may suggest themselves, so that experimentation is
required for comparative evaluation. These and other justifications
will be described in the context of some current research on learning
probabilistic networks, and past research on learning classification
trees and feed-forward neural networks. Techniques for compilation
include the dataflow graph, automatic differentiation, Markov chain
Monte Carlo samplers of various kinds, and the generation of C code
for certain exact inference tasks. With this background, I will then
pose a number of research questions to the audience.
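To make the sampler side of this concrete, here is a minimal C sketch
(not from the talk; the target density and names are illustrative) of
the kind of building block such a compiler might emit: a random-walk
Metropolis sampler for a one-dimensional unnormalized density, where
target() stands in for whatever posterior a compiled model would
define.

/* Illustrative sketch: random-walk Metropolis sampling from an
 * unnormalized one-dimensional density. The target here is the
 * standard normal kernel, so the sample mean and variance should
 * approach 0 and 1. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* stand-in for a compiled model's unnormalized posterior density */
static double target(double x) { return exp(-0.5 * x * x); }

static double uniform01(void) { return rand() / (RAND_MAX + 1.0); }

int main(void)
{
    double x = 0.0;        /* current state of the chain */
    double step = 1.0;     /* random-walk proposal width */
    double sum = 0.0, sumsq = 0.0;
    int nsamples = 100000;

    srand(12345);          /* fixed seed for reproducibility */
    for (int i = 0; i < nsamples; i++) {
        /* propose x' uniformly in [x - step, x + step] */
        double xp = x + step * (2.0 * uniform01() - 1.0);
        /* accept with probability min(1, target(x')/target(x)) */
        if (uniform01() < target(xp) / target(x))
            x = xp;
        sum += x;
        sumsq += x * x;
    }
    printf("mean = %.3f (expect ~0)\n", sum / nsamples);
    printf("var  = %.3f (expect ~1)\n",
           sumsq / nsamples - (sum / nsamples) * (sum / nsamples));
    return 0;
}

A compiler could specialize target() and the proposal to the model's
structure; the chain update loop itself stays the same.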
===========