COLT '92 conference program

David Haussler haussler at cse.ucsc.edu
Fri May 29 15:37:38 EDT 1992


                                   COLT '92 

                   Workshop on Computational Learning Theory 

                      Sponsored by ACM SIGACT and SIGART 

                              July 27 - 29, 1992 

               University of Pittsburgh, Pittsburgh, Pennsylvania 


GENERAL INFORMATION

Registration & Reception: Sunday, 7:00 - 10:00 pm, 2M56-2P56 Forbes Quadrangle
Conference Banquet: Monday, 7:00 pm

The conference sessions will be held in the William Pitt Union.
        Late Registration, etc.: Kurtzman Room (during technical sessions)
        Lectures & Impromptu Talks: Ballroom
        Poster Sessions: Assembly Room

SCHEDULE OF TALKS

                        Sunday, July 26

RECEPTION: 7:00 - 10:00 pm

                        Monday, July 27

SESSION 1: 8:45 - 10:05 am 

 8:45 -  9:05   Learning boolean read-once formulas with arbitrary symmetric
        and constant fan-in gates,
        by Nader H. Bshouty, Thomas Hancock, and Lisa Hellerstein
 9:05 -  9:25   On-line Learning of Rectangles,
        by Zhixiang Chen and Wolfgang Maass
 9:25 -  9:45   Cryptographic lower bounds on learnability of AC^1 functions on
        the uniform distribution,
        by Michael Kharitonov
 9:45 -  9:55   Learning hierarchical rule sets,
        by Jyrki Kivinen, Heikki Mannila and Esko Ukkonen
 9:55 - 10:05   Random DFA's can be approximately learned from sparse uniform
        examples,
        by Kevin Lang

SESSION 2: 10:30 - 11:50 am 

10:30 - 10:50   An O(n^loglog n) Learning Algorithm for DNF,
        by Yishay Mansour
10:50 - 11:10   A technique for upper bounding the spectral norm with
        applications to learning,
        by Mihir Bellare
11:10 - 11:30   Exact learning of read-k disjoint DNF and not-so-disjoint DNF,
        by Howard Aizenstein and Leonard Pitt
11:30 - 11:40   Learning k-term DNF formulas with an incomplete membership
        oracle,
        by Sally A. Goldman and H. David Mathias
11:40 - 11:50   Learning DNF formulae under classes of probability
        distributions,
        by Michele Flammini, Alberto Marchetti-Spaccamela and Ludek Kucera


SESSION 3: 1:45 - 3:05 pm 

 1:45 -  2:05   Bellman strikes again -- the rate of growth of sample
        complexity with dimension for the nearest neighbor classifier, 
        by Santosh S. Venkatesh, Robert R. Snapp, and Demetri Psaltis
 2:05 -  2:25   A theory for memory-based learning,
        by Jyh-Han Lin and Jeffrey Scott Vitter
 2:25 -  2:45   Learnability of description logics,
        by William W. Cohen and Haym Hirsh
 2:45 -  2:55   PAC-learnability of determinate logic programs,
        by Sašo Džeroski, Stephen Muggleton and Stuart Russell
 2:55 -  3:05   Polynomial time inference of a subclass of context-free
        transformations,
        by Hiroki Arimura, Hiroki Ishizaka, and Takeshi Shinohara

SESSION 4: 3:30 - 4:40 pm 

 3:30 -  3:50   A training algorithm for optimal margin classifiers,
        by Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik
 3:50 -  4:10   The learning complexity of smooth functions of a single
        variable,
        by Don Kimber and Philip M. Long
 4:10 -  4:20   Absolute error bounds for learning linear functions online,
        by Ethan Bernstein
 4:20 -  4:30   Probably almost discriminative learning,
        by Kenji Yamanishi
 4:30 -  4:40   PAC Learning with generalized samples and an application to
        stochastic geometry,
        by S.R. Kulkarni, S.K. Mitter, J.N. Tsitsiklis and O. Zeitouni

POSTER SESSION #1 & IMPROMPTU TALKS: 5:00 - 6:30 pm 

BANQUET: 7:00 pm

                        Tuesday, July 28

SESSION 5: 8:45 -  10:05 am 

 8:45 -  9:05   Degrees of inferability,
        by P. Cholak, R. Downey, L. Fortnow, W. Gasarch, E. Kinber, M. Kummer,
        S. Kurtz, and T. Slaman
 9:05 -  9:25   On learning limiting programs,
        by John Case, Sanjay Jain, and Arun Sharma
 9:25 -  9:45   Breaking the probability 1/2 barrier in FIN-type learning,
        by Robert Daley, Bala Kalyanasundaram, and Mahendran Velauthapillai
 9:45 -  9:55   Case based learning in inductive inference,
        by Klaus P. Jantke
 9:55 - 10:05   Generalization versus classification,
        by Rolf Wiehagen and Carl Smith

SESSION 6: 10:30 - 11:50 am 

10:30 - 10:50   Learning switching concepts,
        by Avrim Blum and Prasad Chalasani
10:50 - 11:10   Learning with a slowly changing distribution,
        by Peter L. Bartlett
11:10 - 11:30   Dominating distributions and learnability,
        by Gyora M. Benedek and Alon Itai
11:30 - 11:40   Polynomial uniform convergence and polynomial-sample
        learnability,
        by Alberto Bertoni, Paola Campadelli, Anna Morpurgo, and Sandra Panizza
11:40 - 11:50   Learning functions by simultaneously estimating errors,
        by Kevin Buescher and P.R. Kumar


INVITED TALK: 1:45 - 2:45 pm: Reinforcement learning,
        by Andy Barto, University of Massachusetts

SESSION 7: 3:10 - 4:40 pm 

 3:10 -  3:30   On learning noisy threshold functions with finite precision
        weights,
        by R. Meir and J.F. Fontanari
 3:30 -  3:50   Query by committee,
        by H.S. Seung, M. Opper, and H. Sompolinsky
 3:50 -  4:00   A noise model on learning sets of strings,
        by Yasubumi Sakakibara and Rani Siromoney
 4:00 -  4:10   Language learning from stochastic input,
        by Shyam Kapur and Gianfranco Bilardi
 4:10 -  4:20   On exact specification by examples,
        by Martin Anthony, Graham Brightwell, Dave Cohen and John Shawe-Taylor
 4:20 -  4:30   A computational model of teaching,
        by Jeffrey Jackson and Andrew Tomkins
 4:30 -  4:40   Approximate testing and learnability,
        by Kathleen Romanik

IMPROMPTU TALKS: 5:00 - 6:00 pm  

BUSINESS MEETING: 8:00 pm 

POSTER SESSION #2: 9:00 - 10:30 pm 

                        Wednesday, July 29

SESSION 8: 8:45 -  9:45 am 

 8:45 -  9:05   Characterizations of learnability for classes of {0,...,n}-valued
        functions, 
        by Shai Ben-David, Nicolò Cesa-Bianchi and Philip M. Long
 9:05 -  9:25   Toward efficient agnostic learning,
        by Michael J. Kearns, Robert E. Schapire, and Linda Sellie
 9:25 -  9:45   Approximating Bayes decisions by additive estimations,
        by Svetlana Anoulova, Paul Fischer, Stefan Polt, and Hans Ulrich Simon

SESSION 9: 10:10 - 10:50 am 

10:10 - 10:30   On the role of procrastination for machine learning,
        by Rusins Freivalds and Carl Smith
10:30 - 10:50   Types of monotonic language learning and their
        characterization,
        by Steffen Lange and Thomas Zeugmann

SESSION 10: 11:10 - 11:50 am 

11:10 - 11:30   An improved boosting algorithm and its implications on learning
        complexity,
        by Yoav Freund
11:30 - 11:50   Some weak learning results,
        by David P. Helmbold and Manfred K. Warmuth

SESSION 11: 1:45 - 2:45 pm 

 1:45 -  2:05   Universal sequential learning and decision from individual data
        sequences,
        by Neri Merhav and Meir Feder
 2:05 -  2:25   Robust trainability of single neurons,
        by Klaus-U. Höffgen and Hans-U. Simon
 2:25 -  2:45   On the computational power of neural nets,
        by Hava T. Siegelmann and Eduardo D. Sontag


===============================================================================

ADDITIONAL INFORMATION

To receive complete information regarding conference registration and
accommodations, contact Betty Brannick:

        E-mail: brannick at cs.pitt.edu
        PHONE: (412) 624-8493
        FAX: (412) 624-8854

Please specify whether you want the information sent in PLAIN text or LATEX
format.

NOTE: Attendees must register by JUNE 19 to avoid the late registration fee.
