From nasmith at cs.cmu.edu Tue Jan 19 09:11:44 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 19 Jan 2010 09:11:44 -0500
Subject: [Intelligence Seminar] January 26: Yiannis Aloimonos, GHC 4303, 3:30, "Grammars of Human Activity"
Message-ID: <309df0211001190611o647e9688qa0dde4c12c80d045@mail.gmail.com>

Intelligence Seminar

January 26, 2010
3:30 pm
GHC 4303
Host: Fernando De la Torre
For meetings, contact Fernando De la Torre (ftorre at cs.cmu.edu)

Grammars of Human Activity
Yiannis Aloimonos, University of Maryland College Park

Abstract: One of the major goals of Cognitive Systems is to interpret human activity sensed by a variety of sensors. In order to develop useful technologies and a subsequent industry around cognitive interaction technology, we need to proceed in a principled manner. This talk suggests that human activity can be expressed in a language. This is a special language with its own phonemes, its own morphemes (words), and its own syntax, and it can be learned using machine learning techniques applied to gargantuan amounts of data collected by sensor networks. I will present two examples of grammatical frameworks that we have been developing over the past few years and their application to Health, Cognition and Social Signal Processing. I will also discuss the problem of language grounding and show recent results.

Bio: Yiannis Aloimonos is a Professor of Computational Vision and Intelligence in the Dept. of Computer Science at the University of Maryland, College Park and the Director of the Computer Vision Laboratory at the Institute for Advanced Computer Studies. He works on the fundamental aspects of geometry and statistics in the area of multiple-view vision (3D shape, segmentation, motion analysis). He is known for his work on Active Vision and his study of vision as a dynamic process. He has received several awards for his work (including the Marr Prize Honorable Mention at the 1st International Conference on Computer Vision, for his work on Active Vision, and the Presidential Young Investigator Award). His research has been supported over the years by the European Union (Cognitive Systems), NSF, NIH, ONR, DARPA, IBM, Honeywell, Dassault and Westinghouse. His current interests are in the integration of vision, action and cognition.

From nasmith at cs.cmu.edu Mon Jan 25 10:05:19 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 25 Jan 2010 10:05:19 -0500
Subject: [Intelligence Seminar] TOMORROW: Yiannis Aloimonos, GHC 4303, 3:30, "Grammars of Human Activity"
Message-ID: <309df0211001250705g4c82e8b5wa4f9f6bf4e45330@mail.gmail.com>

Intelligence Seminar

January 26, 2010
3:30 pm
GHC 4303
Host: Fernando De la Torre
For meetings, contact Fernando De la Torre (ftorre at cs.cmu.edu)

Grammars of Human Activity
Yiannis Aloimonos, University of Maryland College Park

Abstract: One of the major goals of Cognitive Systems is to interpret human activity sensed by a variety of sensors. In order to develop useful technologies and a subsequent industry around cognitive interaction technology, we need to proceed in a principled manner. This talk suggests that human activity can be expressed in a language. This is a special language with its own phonemes, its own morphemes (words), and its own syntax, and it can be learned using machine learning techniques applied to gargantuan amounts of data collected by sensor networks.
I will present two examples of grammatical frameworks that we have been developing over the past few years and their application to Health, Cognition and Social Signal Processing. I will also discuss the problem of language grounding and show recent results.

Bio: Yiannis Aloimonos is a Professor of Computational Vision and Intelligence in the Dept. of Computer Science at the University of Maryland, College Park and the Director of the Computer Vision Laboratory at the Institute for Advanced Computer Studies. He works on the fundamental aspects of geometry and statistics in the area of multiple-view vision (3D shape, segmentation, motion analysis). He is known for his work on Active Vision and his study of vision as a dynamic process. He has received several awards for his work (including the Marr Prize Honorable Mention at the 1st International Conference on Computer Vision, for his work on Active Vision, and the Presidential Young Investigator Award). His research has been supported over the years by the European Union (Cognitive Systems), NSF, NIH, ONR, DARPA, IBM, Honeywell, Dassault and Westinghouse. His current interests are in the integration of vision, action and cognition.

From nasmith at cs.cmu.edu Tue Feb 16 17:14:52 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 16 Feb 2010 17:14:52 -0500
Subject: [Intelligence Seminar] February 23: Alex Smola, GHC 4303, 3:30, "Fast and Sloppy - Scaling Up Linear Models"
Message-ID: <309df0211002161414m2530ff43s8e5efd0e538f2924@mail.gmail.com>

Intelligence Seminar

February 23, 2010
3:30 pm
GHC 4303
Host: Carlos Guestrin
For meetings, contact Michelle Martin (michelle324 at cs.cmu.edu).

Title: Fast and Sloppy - Scaling Up Linear Models
Alex Smola, Yahoo Research

Abstract: In this talk I present an overview of a range of methods designed to scale up linear models, both in terms of model complexity and in terms of their ability to process large amounts of data. The first aspect is addressed by hashing feature vectors for both prediction and matrix factorization. The second aspect can be dealt with by parallelizing stochastic gradient descent optimization procedures. I will present an algorithm suitable for multicore parallelism.

Bio: Dr. Alex Smola is Principal Researcher at Yahoo! Research, Santa Clara, and Adjunct Professor at the Australian National University. Prior to that, until 2008, he was Senior Principal Researcher and Program Leader of the Statistical Machine Learning Program at NICTA. He received his Diplom in Physics from the University of Technology in Munich and his Doctoral degree in Computer Science from the University of Technology in Berlin. He has worked at AT&T Research, the Fraunhofer Institute, the Australian National University, NICTA and Yahoo!. His research interests are nonparametric methods for estimation, in particular kernel methods and exponential families. This includes Support Vector Machines, Gaussian processes, and conditional random fields. He is currently working on large-scale methods for document analysis and representation, such as nonparametric Bayesian models. He has organized workshops at NIPS, EUROCOLT, ICML and 5 Machine Learning Summer Schools. Moreover, he served on the senior program committees of COLT, ICML, NIPS, and AAAI. He has written one book and edited 4 books.
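For readers curious about the two techniques named in the abstract above, here is a minimal sketch of feature hashing plus a logistic-loss SGD step. It is illustrative only: the hash function, dimensionality, loss, and learning rate are our assumptions, not the speaker's implementation.

# Minimal sketch of the feature-hashing idea ("the hashing trick") and a
# stochastic gradient step for a linear model. Illustrative assumptions
# throughout; not the speaker's code.
import hashlib
import math

def hash_features(tokens, n_bins=2**20):
    """Map a bag of string features into a sparse vector of signed counts."""
    x = {}
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        idx = h % n_bins
        sign = 1 if (h >> 64) % 2 == 0 else -1  # sign hash reduces collision bias
        x[idx] = x.get(idx, 0) + sign
    return x

def predict(w, x):
    """Margin of a linear model with weight list w on hashed features x."""
    return sum(w[i] * v for i, v in x.items())

def sgd_step(w, x, y, lr=0.1):
    """One logistic-loss stochastic gradient step (y in {0, 1}). Running many
    such steps concurrently without locks is one way to read the 'sloppy' in
    the talk's title (our assumption)."""
    p = 1.0 / (1.0 + math.exp(-predict(w, x)))
    g = p - y
    for i, v in x.items():
        w[i] -= lr * g * v

w = [0.0] * (2**20)
x = hash_features(["token=cat", "token=sat", "bigram=cat_sat"])
sgd_step(w, x, 1)
print(predict(w, x))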
From nasmith at cs.cmu.edu Mon Feb 22 10:21:31 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 22 Feb 2010 10:21:31 -0500
Subject: [Intelligence Seminar] TOMORROW: Alex Smola, GHC 4303, 3:30, "Fast and Sloppy - Scaling Up Linear Models"
Message-ID: <309df0211002220721k47671900jb342e7700ed927da@mail.gmail.com>

Intelligence Seminar

February 23, 2010
3:30 pm
GHC 4303
Host: Carlos Guestrin
For meetings, contact Michelle Martin (michelle324 at cs.cmu.edu).

Title: Fast and Sloppy - Scaling Up Linear Models
Alex Smola, Yahoo Research

Abstract: In this talk I present an overview of a range of methods designed to scale up linear models, both in terms of model complexity and in terms of their ability to process large amounts of data. The first aspect is addressed by hashing feature vectors for both prediction and matrix factorization. The second aspect can be dealt with by parallelizing stochastic gradient descent optimization procedures. I will present an algorithm suitable for multicore parallelism.

Bio: Dr. Alex Smola is Principal Researcher at Yahoo! Research, Santa Clara, and Adjunct Professor at the Australian National University. Prior to that, until 2008, he was Senior Principal Researcher and Program Leader of the Statistical Machine Learning Program at NICTA. He received his Diplom in Physics from the University of Technology in Munich and his Doctoral degree in Computer Science from the University of Technology in Berlin. He has worked at AT&T Research, the Fraunhofer Institute, the Australian National University, NICTA and Yahoo!. His research interests are nonparametric methods for estimation, in particular kernel methods and exponential families. This includes Support Vector Machines, Gaussian processes, and conditional random fields. He is currently working on large-scale methods for document analysis and representation, such as nonparametric Bayesian models. He has organized workshops at NIPS, EUROCOLT, ICML and 5 Machine Learning Summer Schools. Moreover, he served on the senior program committees of COLT, ICML, NIPS, and AAAI. He has written one book and edited 4 books.

From nasmith at cs.cmu.edu Wed Feb 24 08:51:48 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Wed, 24 Feb 2010 08:51:48 -0500
Subject: [Intelligence Seminar] March 2: Manuela Veloso, GHC 4303, 3:30, "Towards Autonomous Mobile Robots Coexisting with Humans in Indoor Environments"
Message-ID: <309df0211002240551h60f9cb99wac7157634ceaa4c8@mail.gmail.com>

Intelligence Seminar

March 2, 2010
3:30 pm
GHC 4303

Title: Towards Autonomous Mobile Robots Coexisting with Humans in Indoor Environments
Manuela M. Veloso, Carnegie Mellon University

We envision ubiquitous autonomous mobile robots that can help and coexist with humans. Such robots are still far from common, as our environments offer great challenges to robust robot perception, cognition, and action. We frame the envisioned robot and human coexistence as a symbiotic human-robot interaction, in which robots and humans have complementary limitations and expertise. I will present CoBot, our visitor-companion robot, which can provide guidance to visitors unfamiliar with the building while also identifying and overcoming its own limitations by asking for human help.
I will present CoBot's effective mobile robot indoor localization and navigation algorithms, which use a WiFi signature perceptual map combined with geometric constraints of the building. I will illustrate CoBot's performance with examples of a few autonomous hours-long past runs of the robot in Wean Hall and very recent runs in the new Gates Hillman Center. I will then discuss the opportunities and tradeoffs raised by the symbiotic human-robot interaction, and present illustrative studies. I conclude with the introduction of our newly arrived second CoBot robot and a presentation of our ongoing work towards having multiple robots and humans engaged in planning and coordination for a variety of tasks. This work is joint with my students Joydeep Biswas, Stephanie Rosenthal, and Nick Armstrong-Crews. The robots, omnidirectional four-wheeled platforms inspired by the platform he previously created for our small-size soccer robots, were expertly designed and constructed by Michael Licitra.

Bio: Manuela M. Veloso is Herbert A. Simon Professor of Computer Science at Carnegie Mellon University. She directs the CORAL research laboratory, for the study of agents that Collaborate, Observe, Reason, Act, and Learn (www.cs.cmu.edu/~coral). Professor Veloso is a Fellow of the Association for the Advancement of Artificial Intelligence, and the President of the RoboCup Federation. She recently received the 2009 ACM/SIGART Autonomous Agents Research Award for her contributions to agents in uncertain and dynamic environments, including distributed robot localization and world modeling, strategy selection in multiagent systems in the presence of adversaries, and robot learning from demonstration. Professor Veloso is the author of one book, "Planning by Analogical Reasoning," and editor of several other books. She is also an author of over 200 journal articles and conference papers.

From nasmith at cs.cmu.edu Mon Mar 1 09:35:33 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 1 Mar 2010 09:35:33 -0500
Subject: [Intelligence Seminar] TOMORROW: Manuela Veloso, GHC 4303, 3:30, "Towards Autonomous Mobile Robots Coexisting with Humans in Indoor Environments"
Message-ID: <309df0211003010635n274afc94g523c7d93fa55babe@mail.gmail.com>

Intelligence Seminar

March 2, 2010
3:30 pm
GHC 4303

Title: Towards Autonomous Mobile Robots Coexisting with Humans in Indoor Environments
Manuela M. Veloso, Carnegie Mellon University

We envision ubiquitous autonomous mobile robots that can help and coexist with humans. Such robots are still far from common, as our environments offer great challenges to robust robot perception, cognition, and action. We frame the envisioned robot and human coexistence as a symbiotic human-robot interaction, in which robots and humans have complementary limitations and expertise. I will present CoBot, our visitor-companion robot, which can provide guidance to visitors unfamiliar with the building while also identifying and overcoming its own limitations by asking for human help. I will present CoBot's effective mobile robot indoor localization and navigation algorithms, which use a WiFi signature perceptual map combined with geometric constraints of the building. I will illustrate CoBot's performance with examples of a few autonomous hours-long past runs of the robot in Wean Hall and very recent runs in the new Gates Hillman Center. I will then discuss the opportunities and tradeoffs raised by the symbiotic human-robot interaction, and present illustrative studies.
I conclude with the introduction of our newly arrived second CoBot robot and a presentation of our ongoing work towards having multiple robots and humans engaged in planning and coordination for a variety of tasks. This work is joint with my students Joydeep Biswas, Stephanie Rosenthal, and Nick Armstrong-Crews. The robots, omnidirectional four-wheeled platforms inspired by the platform he previously created for our small-size soccer robots, were expertly designed and constructed by Michael Licitra.

Bio: Manuela M. Veloso is Herbert A. Simon Professor of Computer Science at Carnegie Mellon University. She directs the CORAL research laboratory, for the study of agents that Collaborate, Observe, Reason, Act, and Learn (www.cs.cmu.edu/~coral). Professor Veloso is a Fellow of the Association for the Advancement of Artificial Intelligence, and the President of the RoboCup Federation. She recently received the 2009 ACM/SIGART Autonomous Agents Research Award for her contributions to agents in uncertain and dynamic environments, including distributed robot localization and world modeling, strategy selection in multiagent systems in the presence of adversaries, and robot learning from demonstration. Professor Veloso is the author of one book, "Planning by Analogical Reasoning," and editor of several other books. She is also an author of over 200 journal articles and conference papers.

From nasmith at cs.cmu.edu Tue Mar 9 07:12:54 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 9 Mar 2010 07:12:54 -0500
Subject: [Intelligence Seminar] March 16: Yael Niv, GHC 4303, 3:30, "Better Safe Than Sorry? Neural Prediction Errors Reveal a Risk-Sensitive Reinforcement Learning Process"
Message-ID: <309df0211003090412w599b383fr7e8fb482be291270@mail.gmail.com>

Intelligence Seminar

March 16, 2010
3:30 pm
GHC 4303

Title: Better Safe Than Sorry? Neural Prediction Errors Reveal a Risk-Sensitive Reinforcement Learning Process
Yael Niv, Princeton University

Which of these would you prefer: getting $10 with certainty or tossing a coin for a 50% chance to win $20? Whatever your answer, you probably were not indifferent between these two options. In general, human choice behavior is influenced not only by the expected reward value of options, but also by their variance, with people differing in the degree to which they are risk-averse or risk-seeking. Economic, psychological and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional reinforcement learning (RL) models of action selection rely on temporal difference methods that learn the mean value of an option, ignoring risk. We used fMRI to test this assumption by examining the neural correlates of reinforcement learning and asking whether they are indeed indifferent to risk. Our results show that reinforcement learning is modulated by experienced risk, and reveal a close coupling between the fluctuating, experience-based evaluations of risky options measured neurally, and fluctuations in behavioral choice. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice and neuroscientific models of learning.

Joint work with: Jeffrey A. Edlund, Peter Dayan, John P. O'Doherty
Bio: Yael Niv has been an assistant professor at the Princeton Neuroscience Institute (PNI) and the Psychology Department at Princeton University since September 2008. She was previously a postdoc at Princeton, and earned her PhD at The Hebrew University of Jerusalem (Israel) while conducting most of her research at the Gatsby Computational Neuroscience Unit (UCL, London). Her research focuses on normative computational models of learning and decision making, and on understanding the neural basis of simple day-to-day trial and error learning.

From nasmith at cs.cmu.edu Tue Mar 9 07:34:44 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 9 Mar 2010 07:34:44 -0500
Subject: [Intelligence Seminar] March 16: Yael Niv, GHC 4303, 3:30, "Better Safe Than Sorry? Neural Prediction Errors Reveal a Risk-Sensitive Reinforcement Learning Process"
In-Reply-To: <309df0211003090412w599b383fr7e8fb482be291270@mail.gmail.com>
References: <309df0211003090412w599b383fr7e8fb482be291270@mail.gmail.com>
Message-ID: <309df0211003090434i50b41d60i2f9dd076a655ff09@mail.gmail.com>

Addendum: please contact Sharon Cavlovich (sharonw at cs.cmu.edu) for meetings with Yael.

On Tue, Mar 9, 2010 at 7:12 AM, Noah A Smith wrote:
> Intelligence Seminar
>
> March 16, 2010
> 3:30 pm
> GHC 4303
>
> Title: Better Safe Than Sorry? Neural Prediction Errors Reveal a
> Risk-Sensitive Reinforcement Learning Process
> Yael Niv, Princeton University
>
> Which of these would you prefer: getting $10 with certainty or tossing
> a coin for a 50% chance to win $20? Whatever your answer, you probably
> were not indifferent between these two options. In general, human
> choice behavior is influenced not only by the expected reward value of
> options, but also by their variance, with people differing in the
> degree to which they are risk-averse or risk-seeking. Economic,
> psychological and neural aspects of this are well studied when
> information about risk is provided explicitly. However, we must
> normally learn about outcomes from experience, through trial and
> error. Traditional reinforcement learning (RL) models of action
> selection rely on temporal difference methods that learn the mean
> value of an option, ignoring risk. We used fMRI to test this
> assumption by examining the neural correlates of reinforcement
> learning and asking whether they are indeed indifferent to risk. Our
> results show that reinforcement learning is modulated by experienced
> risk, and reveal a close coupling between the fluctuating,
> experience-based evaluations of risky options measured neurally, and
> fluctuations in behavioral choice. This suggests that risk sensitivity
> is integral to human learning, illuminating economic models of choice
> and neuroscientific models of learning.
>
> Joint work with: Jeffrey A. Edlund, Peter Dayan, John P. O'Doherty
>
> Bio:
>
> Yael Niv has been an assistant professor at the Princeton Neuroscience
> Institute (PNI) and the Psychology Department at Princeton University
> since September 2008. She was previously a postdoc at Princeton, and
> earned her PhD at The Hebrew University of Jerusalem (Israel) while
> conducting most of her research at the Gatsby Computational
> Neuroscience Unit (UCL, London). Her research focuses on normative
> computational models of learning and decision making, and on
> understanding the neural basis of simple day-to-day trial and error
> learning.
>
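The risk-sensitive learning process described in the abstract above can be caricatured by a temporal-difference learner whose positive and negative prediction errors get different learning rates. The sketch below is a generic illustration under that assumption, not the exact model from the talk; the learning rates and reward distributions are invented.

# Toy risk-sensitive temporal-difference learning: positive and negative
# prediction errors are weighted differently, so the learned value of a
# risky option is distorted relative to its true mean.
import random

def learn_value(rewards, alpha_pos=0.1, alpha_neg=0.2, n_trials=5000):
    """Estimate the value of an option whose reward is drawn from `rewards`."""
    v = 0.0
    for _ in range(n_trials):
        r = random.choice(rewards)
        delta = r - v                      # prediction error
        alpha = alpha_pos if delta > 0 else alpha_neg
        v += alpha * delta                 # asymmetric update => risk sensitivity
    return v

safe = learn_value([10])       # $10 for sure: converges to 10
risky = learn_value([0, 20])   # coin flip for $20: settles near
                               # 20 * alpha_pos / (alpha_pos + alpha_neg) < 10
print(safe, risky)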
From nasmith at cs.cmu.edu Fri Mar 12 15:34:28 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Fri, 12 Mar 2010 15:34:28 -0500
Subject: [Intelligence Seminar] Talk of interest: Monday, 3/15, noon, GHC 6115: Guy Lebanon on "Value of Labels in Unsupervised, Supervised, and Semisupervised Learning"
Message-ID: <309df0211003121234t35580316i1af140b6775de3cb@mail.gmail.com>

Machine Learning Lunch (http://www.cs.cmu.edu/~learning/)

Speaker: Guy Lebanon
Venue: GHC 6115
Date: Monday, March 15
Time: 12:00 noon
Food will be provided

Abstract: I will describe two recent results regarding the value of labels in classification and structured prediction. The first result describes how margin-based classifiers (such as logistic regression and SVM) may be trained without any labels whatsoever. This paradoxical statement is true if the data dimensionality is high and the label marginal p(y) is known. The second result derives the benefit of increasing the number of labeled examples in semisupervised learning and of different labeling policies in structured prediction.

Bio: Guy Lebanon is an assistant professor of computing at the Georgia Institute of Technology. His main research area is statistical modelling and visualization of high dimensional discrete data such as text documents and partially ranked data. Additional research interests include privacy preservation in databases and social networks and the use of non-Euclidean geometry in machine learning. Prior to his current appointment at Georgia Tech, Dr. Lebanon was an assistant professor of statistics and electrical and computer engineering at Purdue University. He received his PhD in 2005 from Carnegie Mellon University and BA and MS degrees from the Technion - Israel Institute of Technology, all in computer science. Prof. Lebanon received the NSF CAREER Award, Purdue's Teaching for Tomorrow Award, and the 2004 LTI SRS Best Presentation Award, and is a Siebel scholar.

From nasmith at cs.cmu.edu Mon Mar 15 08:44:03 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 15 Mar 2010 07:44:03 -0500
Subject: [Intelligence Seminar] TOMORROW: Yael Niv, GHC 4303, 3:30, "Better Safe Than Sorry? Neural Prediction Errors Reveal a Risk-Sensitive Reinforcement Learning Process"
Message-ID: <309df0211003150544l750e2aeci65c1e7f7d517099c@mail.gmail.com>

Intelligence Seminar

March 16, 2010
3:30 pm
GHC 4303

Please contact Sharon Cavlovich (sharonw at cs.cmu.edu) for meetings with Yael.

Title: Better Safe Than Sorry? Neural Prediction Errors Reveal a Risk-Sensitive Reinforcement Learning Process
Yael Niv, Princeton University

Which of these would you prefer: getting $10 with certainty or tossing a coin for a 50% chance to win $20? Whatever your answer, you probably were not indifferent between these two options. In general, human choice behavior is influenced not only by the expected reward value of options, but also by their variance, with people differing in the degree to which they are risk-averse or risk-seeking. Economic, psychological and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional reinforcement learning (RL) models of action selection rely on temporal difference methods that learn the mean value of an option, ignoring risk. We used fMRI to test this assumption by examining the neural correlates of reinforcement learning and asking whether they are indeed indifferent to risk.
Our results show that reinforcement learning is modulated by experienced risk, and reveal a close coupling between the fluctuating, experience-based evaluations of risky options measured neurally, and fluctuations in behavioral choice. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice and neuroscientific models of learning.

Joint work with: Jeffrey A. Edlund, Peter Dayan, John P. O'Doherty

Bio: Yael Niv has been an assistant professor at the Princeton Neuroscience Institute (PNI) and the Psychology Department at Princeton University since September 2008. She was previously a postdoc at Princeton, and earned her PhD at The Hebrew University of Jerusalem (Israel) while conducting most of her research at the Gatsby Computational Neuroscience Unit (UCL, London). Her research focuses on normative computational models of learning and decision making, and on understanding the neural basis of simple day-to-day trial and error learning.

From nasmith at cs.cmu.edu Thu Mar 18 09:54:53 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Thu, 18 Mar 2010 09:54:53 -0400
Subject: [Intelligence Seminar] March 25: Alan Yuille, GHC 6115, 1:00, "Recursive Compositional Models for Computational Vision"
Message-ID: <309df0211003180654v3d103aaeh94fe88853c2cafe9@mail.gmail.com>

Intelligence Seminar, cosponsored by the Center for the Neural Basis of Cognition and the Machine Learning Department

Thursday, March 25, 2010 -- NOTE SPECIAL TIME AND LOCATION!
1:00 pm
GHC 6115
Host: Tai Sing Lee
Please contact Barbara Dorney (dorney at cnbc.cmu.edu) for meetings; Alan will be available from 1pm on 3/24 to 6pm on 3/26.

Title: Recursive Compositional Models for Computational Vision
Alan Yuille, Departments of Statistics, Computer Science and Psychology, UCLA

Recursive Compositional Models (RCMs) are a class of probability models designed to detect, recognize, parse, and segment visual objects and label visual scenes. They take into account the statistical and computational complexities of visual patterns. The key design principle is recursive compositionality. Visual patterns are represented by RCMs in a hierarchical form where complex structures are composed of more elementary structures. Probabilities are defined over these structures, exploiting properties of the hierarchy (e.g., long-range spatial relationships can be represented by local potentials). The compositional nature of this representation enables efficient learning and inference algorithms. Hence the overall architecture of RCMs provides a balance between statistical and computational complexity.

Joint work with L. Zhu and Y. Chen.

Bio: Alan received his B.A. in mathematics from the University of Cambridge in 1976, and completed his Ph.D. in theoretical physics at Cambridge in 1980. Following this, he held a postdoc position with the Physics Department, University of Texas at Austin, and the Institute for Theoretical Physics, Santa Barbara. He then joined the Artificial Intelligence Laboratory at MIT (1982-1986), and followed this with a faculty position in the Division of Applied Sciences at Harvard (1986-1995), rising to the position of associate professor. From 1995 to 2002 Alan worked as a senior scientist at the Smith-Kettlewell Eye Research Institute in San Francisco. In 2002 he accepted a position as full professor in the Department of Statistics at the University of California, Los Angeles. He has joint appointments in the Department of Computer Science and the Department of Psychology.
He has over two hundred peer-reviewed publications in vision, neural networks, and physics, and has co-authored two books: Data Fusion for Sensory Information Processing Systems (with J. J. Clark) and Two- and Three-Dimensional Patterns of the Face (with P. W. Hallinan, G. G. Gordon, P. J. Giblin and D. B. Mumford); he also co-edited the book Active Vision (with A. Blake).

From nasmith at cs.cmu.edu Tue Mar 23 12:07:52 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 23 Mar 2010 12:07:52 -0400
Subject: [Intelligence Seminar] March 30: Fei-Fei Li, GHC 4303, 3:30, "Story Telling in Images: Modeling Visual Hierarchies Within and Across Images"
Message-ID: <309df0211003230907j1b2604ddw7cd04e3d97f54119@mail.gmail.com>

Intelligence Seminar

Tuesday, March 30, 2010
3:30pm
GHC 4304
Host: Eric Xing
Please contact Michelle Martin (michelle324 at cs.cmu.edu) for meetings.

Title: Story Telling in Images: Modeling Visual Hierarchies Within and Across Images
Fei-Fei Li, Stanford University

The human visual system is extremely good at perceiving and understanding the meaning of the visual world. This includes object recognition, scene classification, image segmentation, motion analysis, activity and event understanding, and many more tasks. Pixels in images, and images in the visual world, are not organized in random ways. The human visual system processes information in a hierarchy of visual areas, most likely to achieve efficient and effective processing of the data. In a similar vein, we show that hierarchical representation of the pixel space can be an effective way of modeling increasingly complex visual scenes. We start with a quick review of two past projects in basic-level scene classification. Then we show that by putting together over-segmented image regions, objects (and tags) and scenes, we make progress on three fundamental visual recognition tasks (scene classification, object annotation and segmentation) in one coherent, probabilistic model. In an upcoming CVPR paper, we focus on using a hierarchical representation to discover important connectivity between parts of a human body and the object that interacts with the person (e.g., pitching a baseball). This hierarchical representation is very effective in providing mutual context for detecting objects and estimating human poses, both of which are extremely difficult tasks in cluttered visual scenes. And finally, in another upcoming CVPR paper, we show an automatic way of organizing a large number of photographs downloaded from Flickr into a semantically meaningful hierarchy. This hierarchy can serve as a useful knowledge structure for visual tasks such as scene classification and annotation.

Bio: Prof. Fei-Fei Li's main research interest is in vision, particularly high-level visual recognition. In computer vision, Fei-Fei's interests span from object and natural scene categorization to human activity categorization in both videos and still images. In human vision, she has studied the interaction of attention with natural scene and object recognition, and the decoding of human brain fMRI activity involved in natural scene categorization using pattern recognition algorithms. Fei-Fei graduated from Princeton University in 1999 with a physics degree. She received her PhD in electrical engineering from the California Institute of Technology in 2005.
From 2005 to August 2009, Fei-Fei was an assistant professor, first in the Electrical and Computer Engineering Department at the University of Illinois Urbana-Champaign and then in the Computer Science Department at Princeton University. She is currently an Assistant Professor in the Computer Science Department at Stanford University. Fei-Fei is a recipient of a Microsoft Research New Faculty award and an NSF CAREER award. (Fei-Fei publishes using the name L. Fei-Fei.)

From nasmith at cs.cmu.edu Wed Mar 24 09:28:48 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Wed, 24 Mar 2010 09:28:48 -0400
Subject: [Intelligence Seminar] TOMORROW: Alan Yuille, GHC 6115, 1:00, "Recursive Compositional Models for Computational Vision"
Message-ID: <309df0211003240628s25f5f855i8bd7322eba2cf4e7@mail.gmail.com>

Intelligence Seminar, cosponsored by the Center for the Neural Basis of Cognition and the Machine Learning Department

Thursday, March 25, 2010 -- NOTE SPECIAL TIME AND LOCATION!
1:00 pm
GHC 6115
Host: Tai Sing Lee
Please contact Barbara Dorney (dorney at cnbc.cmu.edu) for meetings; Alan will be available from 1pm on 3/24 to 6pm on 3/26.

Title: Recursive Compositional Models for Computational Vision
Alan Yuille, Departments of Statistics, Computer Science and Psychology, UCLA

Recursive Compositional Models (RCMs) are a class of probability models designed to detect, recognize, parse, and segment visual objects and label visual scenes. They take into account the statistical and computational complexities of visual patterns. The key design principle is recursive compositionality. Visual patterns are represented by RCMs in a hierarchical form where complex structures are composed of more elementary structures. Probabilities are defined over these structures, exploiting properties of the hierarchy (e.g., long-range spatial relationships can be represented by local potentials). The compositional nature of this representation enables efficient learning and inference algorithms. Hence the overall architecture of RCMs provides a balance between statistical and computational complexity.

Joint work with L. Zhu and Y. Chen.

Bio: Alan received his B.A. in mathematics from the University of Cambridge in 1976, and completed his Ph.D. in theoretical physics at Cambridge in 1980. Following this, he held a postdoc position with the Physics Department, University of Texas at Austin, and the Institute for Theoretical Physics, Santa Barbara. He then joined the Artificial Intelligence Laboratory at MIT (1982-1986), and followed this with a faculty position in the Division of Applied Sciences at Harvard (1986-1995), rising to the position of associate professor. From 1995 to 2002 Alan worked as a senior scientist at the Smith-Kettlewell Eye Research Institute in San Francisco. In 2002 he accepted a position as full professor in the Department of Statistics at the University of California, Los Angeles. He has joint appointments in the Department of Computer Science and the Department of Psychology. He has over two hundred peer-reviewed publications in vision, neural networks, and physics, and has co-authored two books: Data Fusion for Sensory Information Processing Systems (with J. J. Clark) and Two- and Three-Dimensional Patterns of the Face (with P. W. Hallinan, G. G. Gordon, P. J. Giblin and D. B. Mumford); he also co-edited the book Active Vision (with A. Blake).
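A toy sketch of the recursive-composition idea from the abstract above: children's score tables are combined through a local pairwise potential, so a long-range spatial relation is handled with purely local computation. The parts, scores and placements here are hypothetical; this illustrates the principle only, not the RCM probability model.

# Toy recursive compositionality: a composite part's score combines its
# children's {placement: score} tables with a *local* potential on relative
# placement, tabulated bottom-up instead of searched globally.

def unary_score(preferred, placement):
    # Stand-in for an appearance/data term (hypothetical).
    return -abs(placement - preferred)

def pair_potential(pa, pb):
    # Local potential encoding a long-range spatial relation
    # ("the right part sits about one unit right of the left part").
    return -abs((pb - pa) - 1.0)

def compose(left_table, right_table):
    """Combine two children's score tables into the best joint
    configuration under the local pairwise potential."""
    return max(
        (sl + sr + pair_potential(pa, pb), (pa, pb))
        for pa, sl in left_table.items()
        for pb, sr in right_table.items())

placements = [p / 10.0 for p in range(30)]
eye = {p: unary_score(0.5, p) for p in placements}   # hypothetical leaf parts
nose = {p: unary_score(1.5, p) for p in placements}
print(compose(eye, nose))  # best score and the (eye, nose) placements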
From nasmith at cs.cmu.edu Mon Mar 29 10:24:48 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 29 Mar 2010 10:24:48 -0400
Subject: [Intelligence Seminar] TOMORROW: Fei-Fei Li, GHC 4303, 3:30, "Story Telling in Images: Modeling Visual Hierarchies Within and Across Images"
Message-ID: <309df0211003290724l7fa1b906hf54ca7d65fbf0d1@mail.gmail.com>

Intelligence Seminar

Tuesday, March 30, 2010
3:30pm
GHC 4304
Host: Eric Xing
Please contact Michelle Martin (michelle324 at cs.cmu.edu) for meetings.

Title: Story Telling in Images: Modeling Visual Hierarchies Within and Across Images
Fei-Fei Li, Stanford University

The human visual system is extremely good at perceiving and understanding the meaning of the visual world. This includes object recognition, scene classification, image segmentation, motion analysis, activity and event understanding, and many more tasks. Pixels in images, and images in the visual world, are not organized in random ways. The human visual system processes information in a hierarchy of visual areas, most likely to achieve efficient and effective processing of the data. In a similar vein, we show that hierarchical representation of the pixel space can be an effective way of modeling increasingly complex visual scenes. We start with a quick review of two past projects in basic-level scene classification. Then we show that by putting together over-segmented image regions, objects (and tags) and scenes, we make progress on three fundamental visual recognition tasks (scene classification, object annotation and segmentation) in one coherent, probabilistic model. In an upcoming CVPR paper, we focus on using a hierarchical representation to discover important connectivity between parts of a human body and the object that interacts with the person (e.g., pitching a baseball). This hierarchical representation is very effective in providing mutual context for detecting objects and estimating human poses, both of which are extremely difficult tasks in cluttered visual scenes. And finally, in another upcoming CVPR paper, we show an automatic way of organizing a large number of photographs downloaded from Flickr into a semantically meaningful hierarchy. This hierarchy can serve as a useful knowledge structure for visual tasks such as scene classification and annotation.

Bio: Prof. Fei-Fei Li's main research interest is in vision, particularly high-level visual recognition. In computer vision, Fei-Fei's interests span from object and natural scene categorization to human activity categorization in both videos and still images. In human vision, she has studied the interaction of attention with natural scene and object recognition, and the decoding of human brain fMRI activity involved in natural scene categorization using pattern recognition algorithms. Fei-Fei graduated from Princeton University in 1999 with a physics degree. She received her PhD in electrical engineering from the California Institute of Technology in 2005. From 2005 to August 2009, Fei-Fei was an assistant professor, first in the Electrical and Computer Engineering Department at the University of Illinois Urbana-Champaign and then in the Computer Science Department at Princeton University. She is currently an Assistant Professor in the Computer Science Department at Stanford University. Fei-Fei is a recipient of a Microsoft Research New Faculty award and an NSF CAREER award. (Fei-Fei publishes using the name L. Fei-Fei.)
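A toy of the "one coherent, probabilistic model" idea from the abstract above: a two-level generative hierarchy from scene class to region-level object labels, with scene classification recovered as posterior inference over the hierarchy. Entirely illustrative; the classes, priors, and independence assumptions are ours, not the talk's.

# Toy two-level generative hierarchy: scene class -> region object labels.
# Invented classes and priors, for illustration only.
import random

P_SCENE = {"beach": 0.5, "street": 0.5}
SCENE_OBJECTS = {"beach": ["sand", "water", "sky"],
                 "street": ["car", "road", "sky"]}

def sample_image(n_regions=4):
    """Sample a scene, then an object label for each image region."""
    scene = random.choices(list(P_SCENE), weights=list(P_SCENE.values()))[0]
    regions = [random.choice(SCENE_OBJECTS[scene]) for _ in range(n_regions)]
    return scene, regions

def classify(regions):
    """Posterior over scene classes given region labels (uniform emissions)."""
    scores = {}
    for scene, prior in P_SCENE.items():
        p = prior
        for label in regions:
            vocab = SCENE_OBJECTS[scene]
            p *= (1.0 / len(vocab)) if label in vocab else 1e-6
        scores[scene] = p
    z = sum(scores.values())
    return {s: p / z for s, p in scores.items()}

scene, regions = sample_image()
print(scene, regions, classify(regions))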
From nasmith at cs.cmu.edu Tue Mar 30 11:00:49 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 30 Mar 2010 11:00:49 -0400
Subject: [Intelligence Seminar] April 6: Norman Sadeh, GHC 4303, 3:30, "User-Controllable Security and Privacy: Lessons from the Design and Deployment of a Family of Location Sharing Applications"
Message-ID: <309df0211003300800y1f74b5dfwe8aa17bcecaeaa22@mail.gmail.com>

Joint Intelligence and ISR Seminar

Tuesday, April 6, 2010
3:30pm
GHC 4303

Title: User-Controllable Security and Privacy: Lessons from the Design and Deployment of a Family of Location Sharing Applications
Norman Sadeh, Carnegie Mellon University

Abstract: Increasingly, users are expected to configure a variety of security and privacy policies on their own, whether it's the firewall on their home computer, their privacy preferences on Facebook, or access control policies at work. In practice, research shows that users often have great difficulty specifying such policies. This in turn can result in significant vulnerabilities. This presentation will provide an overview of novel user-controllable security and privacy technologies developed to empower users to more effectively and efficiently specify security and privacy policies. In particular, I will outline a new search-based methodology to design expressive privacy and security policies as well as user-oriented machine learning techniques that show promise in helping users refine their policies. Results from this research shed some light on why, despite all the hoopla, most location sharing applications available in the marketplace today have failed to gain much traction. I will attempt to conclude with a few thoughts on the role of AI in the context of usable security and privacy research, an emerging area that is intrinsically inter-disciplinary in nature.

Bio: Norman Sadeh is a Professor in the School of Computer Science at Carnegie Mellon University. His broad research interests include Web Security, Privacy and Commerce. He is co-Director of the School of Computer Science's PhD Program in Computation, Organizations and Society and directs the school's Mobile Commerce Lab and e-Supply Chain Management Lab. Norman has been on the faculty at Carnegie Mellon since 1991. In the late nineties, he also served as Chief Scientist of the European Union's $800M e-Work and e-Commerce program, which at the time included all European-level cyber security and online privacy research. He has authored over 160 scientific publications and co-founded two companies. Norman is also well known for his work in scheduling, constraint satisfaction and supply chain management, which resulted in the successful deployment and/or commercialization of several scheduling and supply chain management tools by companies such as IBM, Numetrix (eventually acquired by JD Edwards, PeopleSoft and Oracle), CACI, Ilog (now part of IBM) and others.

From nasmith at cs.cmu.edu Tue Mar 30 11:00:26 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 30 Mar 2010 11:00:26 -0400
Subject: [Intelligence Seminar] Correction: TODAY: Fei-Fei Li, GHC 4303, 3:30, "Story Telling in Images: Modeling Visual Hierarchies Within and Across Images"
Message-ID: <309df0211003300800v8a1556ft52e4fc43733b3967@mail.gmail.com>

Correction: today's talk is in GHC 4303, not GHC 4304. Apologies for the typo.
On Mon, Mar 29, 2010 at 10:24 AM, Noah A Smith wrote:
> Intelligence Seminar
>
> Tuesday, March 30, 2010
> 3:30pm
> GHC 4303
> Host: Eric Xing
> Please contact Michelle Martin (michelle324 at cs.cmu.edu) for meetings.
>
> Title: Story Telling in Images: Modeling Visual Hierarchies Within
> and Across Images
> Fei-Fei Li, Stanford University
>
> The human visual system is extremely good at perceiving and
> understanding the meaning of the visual world. This includes object
> recognition, scene classification, image segmentation, motion
> analysis, activity and event understanding, and many more tasks.
> Pixels in images, and images in the visual world, are not organized in
> random ways. The human visual system processes information in a
> hierarchy of visual areas, most likely to achieve efficient and
> effective processing of the data. In a similar vein, we show that
> hierarchical representation of the pixel space can be an effective way
> of modeling increasingly complex visual scenes. We start with a quick
> review of two past projects in basic-level scene classification. Then
> we show that by putting together over-segmented image regions, objects
> (and tags) and scenes, we make progress on three fundamental visual
> recognition tasks (scene classification, object annotation and
> segmentation) in one coherent, probabilistic model. In an upcoming
> CVPR paper, we focus on using a hierarchical representation to
> discover important connectivity between parts of a human body and the
> object that interacts with the person (e.g., pitching a baseball).
> This hierarchical representation is very effective in providing mutual
> context for detecting objects and estimating human poses, both of
> which are extremely difficult tasks in cluttered visual scenes. And
> finally, in another upcoming CVPR paper, we show an automatic way of
> organizing a large number of photographs downloaded from Flickr into a
> semantically meaningful hierarchy. This hierarchy can serve as a
> useful knowledge structure for visual tasks such as scene
> classification and annotation.
>
> Bio:
>
> Prof. Fei-Fei Li's main research interest is in vision, particularly
> high-level visual recognition. In computer vision, Fei-Fei's interests
> span from object and natural scene categorization to human activity
> categorization in both videos and still images. In human vision, she
> has studied the interaction of attention with natural scene and object
> recognition, and the decoding of human brain fMRI activity involved in
> natural scene categorization using pattern recognition algorithms.
> Fei-Fei graduated from Princeton University in 1999 with a physics
> degree. She received her PhD in electrical engineering from the
> California Institute of Technology in 2005. From 2005 to August 2009,
> Fei-Fei was an assistant professor, first in the Electrical and
> Computer Engineering Department at the University of Illinois
> Urbana-Champaign and then in the Computer Science Department at
> Princeton University. She is currently an Assistant Professor in the
> Computer Science Department at Stanford University. Fei-Fei is a
> recipient of a Microsoft Research New Faculty award and an NSF CAREER
> award. (Fei-Fei publishes using the name L. Fei-Fei.)
>
From nasmith at cs.cmu.edu Mon Apr 5 11:13:25 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 5 Apr 2010 11:13:25 -0400
Subject: [Intelligence Seminar] TOMORROW: Norman Sadeh, GHC 4303, 3:30, "User-Controllable Security and Privacy: Lessons from the Design and Deployment of a Family of Location Sharing Applications"
Message-ID:

Joint Intelligence and ISR Seminar

Tuesday, April 6, 2010
3:30pm
GHC 4303

Title: User-Controllable Security and Privacy: Lessons from the Design and Deployment of a Family of Location Sharing Applications
Norman Sadeh, Carnegie Mellon University

Abstract: Increasingly, users are expected to configure a variety of security and privacy policies on their own, whether it's the firewall on their home computer, their privacy preferences on Facebook, or access control policies at work. In practice, research shows that users often have great difficulty specifying such policies. This in turn can result in significant vulnerabilities. This presentation will provide an overview of novel user-controllable security and privacy technologies developed to empower users to more effectively and efficiently specify security and privacy policies. In particular, I will outline a new search-based methodology to design expressive privacy and security policies as well as user-oriented machine learning techniques that show promise in helping users refine their policies. Results from this research shed some light on why, despite all the hoopla, most location sharing applications available in the marketplace today have failed to gain much traction. I will attempt to conclude with a few thoughts on the role of AI in the context of usable security and privacy research, an emerging area that is intrinsically inter-disciplinary in nature.

Bio: Norman Sadeh is a Professor in the School of Computer Science at Carnegie Mellon University. His broad research interests include Web Security, Privacy and Commerce. He is co-Director of the School of Computer Science's PhD Program in Computation, Organizations and Society and directs the school's Mobile Commerce Lab and e-Supply Chain Management Lab. Norman has been on the faculty at Carnegie Mellon since 1991. In the late nineties, he also served as Chief Scientist of the European Union's $800M e-Work and e-Commerce program, which at the time included all European-level cyber security and online privacy research. He has authored over 160 scientific publications and co-founded two companies. Norman is also well known for his work in scheduling, constraint satisfaction and supply chain management, which resulted in the successful deployment and/or commercialization of several scheduling and supply chain management tools by companies such as IBM, Numetrix (eventually acquired by JD Edwards, PeopleSoft and Oracle), CACI, Ilog (now part of IBM) and others.

From nasmith at cs.cmu.edu Tue Apr 13 09:58:37 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 13 Apr 2010 09:58:37 -0400
Subject: [Intelligence Seminar] April 20: Oren Etzioni, GHC 4303, 3:30, "Machine Reading at Web Scale"
Message-ID:

Intelligence Seminar

Tuesday, April 20, 2010
3:30pm
GHC 4303

Title: Machine Reading at Web Scale
Oren Etzioni, University of Washington
Host: Tom Mitchell
Please contact Sharon Cavlovich (sharonw at cs.cmu.edu) to request a meeting.
Abstract: How can AI research utilize the Web? My talk describes an ambitious effort to address AI's "Knowledge Acquisition Bottleneck" via self-supervised understanding of text. Instead of delving into short, complex texts, we scale shallow understanding to billions of sentences. This scale-up has enabled us to investigate the "dark matter of the Web"---common-sense knowledge that is implicit in the Web corpus. I will illustrate our progress with an LDA-based method for identifying the selectional preferences of relations at an unprecedented scale.

Bio: Oren Etzioni received his Ph.D. from Carnegie Mellon University in January 1991, and joined the University of Washington's faculty in February 1991, where he is now the Washington Research Foundation Entrepreneurship Professor of Computer Science. Etzioni received a National Young Investigator Award in 1993, and was selected as an AAAI Fellow a decade later. In 2007, he received the Robert S. Engelmore Memorial Award. He is the founder and director of the University of Washington's Turing Center. Etzioni is the founder of Farecast, Inc., which was sold to Microsoft in 2008, and became the foundation for Bing Travel.

From nasmith at cs.cmu.edu Mon Apr 19 18:46:10 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 19 Apr 2010 18:46:10 -0400
Subject: [Intelligence Seminar] TOMORROW: Oren Etzioni, GHC 4401, 3:30, "Machine Reading at Web Scale"
Message-ID:

Intelligence Seminar

Tuesday, April 20, 2010
3:30pm
GHC 4401 (Note special room!)

Title: Machine Reading at Web Scale
Oren Etzioni, University of Washington
Host: Tom Mitchell
Please contact Sharon Cavlovich (sharonw at cs.cmu.edu) to request a meeting.

Abstract: How can AI research utilize the Web? My talk describes an ambitious effort to address AI's "Knowledge Acquisition Bottleneck" via self-supervised understanding of text. Instead of delving into short, complex texts, we scale shallow understanding to billions of sentences. This scale-up has enabled us to investigate the "dark matter of the Web"---common-sense knowledge that is implicit in the Web corpus. I will illustrate our progress with an LDA-based method for identifying the selectional preferences of relations at an unprecedented scale.

Bio: Oren Etzioni received his Ph.D. from Carnegie Mellon University in January 1991, and joined the University of Washington's faculty in February 1991, where he is now the Washington Research Foundation Entrepreneurship Professor of Computer Science. Etzioni received a National Young Investigator Award in 1993, and was selected as an AAAI Fellow a decade later. In 2007, he received the Robert S. Engelmore Memorial Award. He is the founder and director of the University of Washington's Turing Center. Etzioni is the founder of Farecast, Inc., which was sold to Microsoft in 2008, and became the foundation for Bing Travel.

From nasmith at cs.cmu.edu Tue May 4 10:31:39 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 4 May 2010 10:31:39 -0400
Subject: [Intelligence Seminar] May 11: John Laird, NSH 3305, 3:30, "A Task-Independent Approach to Diverse Forms of Prediction for Action Modeling"
Message-ID:

Intelligence Seminar

Tuesday, May 11, 2010
3:30pm
NSH 3305 (note special room!)
Title: A Task-Independent Approach to Diverse Forms of Prediction for Action Modeling
John Laird, University of Michigan
Host: Manuela Veloso
Please contact Heather Carney (hcarney at cs.cmu.edu) to request a meeting.

Abstract: Researchers in AI have long studied planning, where an agent internally simulates possible actions to determine which action leads to the best future situation. The knowledge used to predict results is called an action model. In general, action models are represented using rule-like data structures (think STRIPS) that describe the conditions under which the action can be executed, and the changes the action makes. Our hypothesis is that there are many forms of knowledge for action modeling, not just rule-like structures, and that these can be embedded within a unified task-independent framework where they are used opportunistically based on the agent's knowledge and the task demands. In this talk, I describe such a framework based on the Soar cognitive architecture, where different processes and sources of knowledge are available for prediction, including rules, episodic memory, semantic memory, mental imagery, and general problem solving. I present results from two simple domains.

Bio: John E. Laird is the John L. Tishman Professor of Engineering at the University of Michigan. He received his Ph.D. in Computer Science from Carnegie Mellon University in 1983, working with Allen Newell. From 1984 to 1986, he was a member of the research staff at Xerox Palo Alto Research Center. His research interests spring from a desire to understand the nature of the architecture underlying artificial and natural intelligence. He is one of the original developers of the Soar architecture and leads its continued evolution, including the recent development and integration of reinforcement learning, episodic memory, semantic memory, visual and spatial mental imagery, and appraisal-based emotion. He was a founder of Soar Technology, Inc., and he is a Fellow of AAAI, ACM, and the Cognitive Science Society.

From nasmith at cs.cmu.edu Mon May 10 12:22:39 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 10 May 2010 12:22:39 -0400
Subject: [Intelligence Seminar] TOMORROW: John Laird, NSH 3305, 3:30, "A Task-Independent Approach to Diverse Forms of Prediction for Action Modeling"
Message-ID:

Intelligence Seminar

Tuesday, May 11, 2010
3:30pm
NSH 3305 (note special room!)

Title: A Task-Independent Approach to Diverse Forms of Prediction for Action Modeling
John Laird, University of Michigan
Host: Manuela Veloso
Please contact Diana Hyde (dhyde at cmu.edu) to request a meeting.

Abstract: Researchers in AI have long studied planning, where an agent internally simulates possible actions to determine which action leads to the best future situation. The knowledge used to predict results is called an action model. In general, action models are represented using rule-like data structures (think STRIPS) that describe the conditions under which the action can be executed, and the changes the action makes. Our hypothesis is that there are many forms of knowledge for action modeling, not just rule-like structures, and that these can be embedded within a unified task-independent framework where they are used opportunistically based on the agent's knowledge and the task demands. In this talk, I describe such a framework based on the Soar cognitive architecture, where different processes and sources of knowledge are available for prediction, including rules, episodic memory, semantic memory, mental imagery, and general problem solving.
I present results from two simple domains.

Bio: John E. Laird is the John L. Tishman Professor of Engineering at the University of Michigan. He received his Ph.D. in Computer Science from Carnegie Mellon University in 1983, working with Allen Newell. From 1984 to 1986, he was a member of the research staff at Xerox Palo Alto Research Center. His research interests spring from a desire to understand the nature of the architecture underlying artificial and natural intelligence. He is one of the original developers of the Soar architecture and leads its continued evolution, including the recent development and integration of reinforcement learning, episodic memory, semantic memory, visual and spatial mental imagery, and appraisal-based emotion. He was a founder of Soar Technology, Inc., and he is a Fellow of AAAI, ACM, and the Cognitive Science Society.

From nasmith at cs.cmu.edu Wed Dec 1 20:05:57 2010
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Wed, 1 Dec 2010 20:05:57 -0500
Subject: [Intelligence Seminar] December 7, 3:30pm: Presentation by Craig Boutilier
Message-ID:

INTELLIGENCE SEMINAR

December 7 at 3:30pm, in GHC 4303

SPEAKER: CRAIG BOUTILIER (University of Toronto)
Host: Tuomas Sandholm
For meetings, contact Charlotte Yano (yano at cs.cmu.edu)

COMPUTATIONAL SOCIAL CHOICE: A DECISION-THEORETIC PERSPECTIVE

Social choice, an important topic of study for centuries, has recently been the subject of intense investigation and application within computer science. One reason for this is the increasing ease with which preference data from user populations can be derived, assessed, or estimated, and the variety of settings in which preference data can be aggregated for consensus recommendations. However, much work in computational social choice adopts existing social choice schemes rather uncritically. We adopt an explicit decision-theoretic perspective on computational social choice, in which an objective function is articulated for the task at hand. With this in place, one can develop new social choice rules suited to that objective, or analyze the performance of existing social choice rules relative to that criterion.

We illustrate the approach with two different models. The first is the "unavailable candidate model." In this model, a consensus choice must be selected from a set of candidates, but candidates may become unavailable after agents express their preferences. An aggregate ranking is used as a decision policy in the face of uncertain candidate availability. We define a principled aggregation method that minimizes expected voter dissatisfaction, provide exact and approximation algorithms for optimal rankings, and show experimentally that a simple greedy scheme can be extremely effective. We also describe strong connections to the plurality rule and the Kemeny consensus, showing specifically that Kemeny produces optimal rankings under certain conditions.

The second model is the "budgeted social choice" model. In this framework, a limited number of alternatives can be selected for a population of agents. This limit is determined by some form of budget. Our model is general, spanning the continuum from pure consensus decisions (i.e., standard social choice) to fully personalized recommendation. We show that standard rank aggregation rules are not appropriate for such tasks and that good solutions typically involve picking diverse alternatives tailored to different agent types.
The second model is the "budgeted social choice" model. In this framework, a limited number of alternatives, determined by some form of budget, can be selected for a population of agents. The model is general, spanning the continuum from pure consensus decisions (i.e., standard social choice) to fully personalized recommendation. We show that standard rank aggregation rules are not appropriate for such tasks and that good solutions typically involve picking diverse alternatives tailored to different agent types. In this way, the model bears a strong connection to both segmentation problems and multi-winner election schemes. The corresponding optimization problems are shown to be NP-complete, but we develop fast greedy algorithms with theoretical guarantees. Experimental results on real-world datasets demonstrate the effectiveness of these greedy algorithms.
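[Editorial note: one natural reading of the budgeted model, assumed here purely for illustration, is: choose k alternatives so that each voter is served by their favorite among the chosen set, under a utilitarian Borda-style satisfaction score. Under that reading the objective is monotone submodular, so the standard greedy below carries the usual (1 - 1/e) approximation guarantee; the paper's exact objective and guarantees may differ.]

    def greedy_budgeted_choice(profile, alternatives, k):
        # profile: one ranking per voter (list of alternatives, best first).
        # A voter's satisfaction is the Borda score of their favorite
        # chosen alternative: m-1 for their top choice, 0 for their last.
        m = len(alternatives)

        def total_satisfaction(chosen):
            return sum(max(m - 1 - ranking.index(c) for c in chosen)
                       for ranking in profile)

        chosen = []
        for _ in range(k):
            # Add the alternative with the largest marginal gain.
            best = max((a for a in alternatives if a not in chosen),
                       key=lambda a: total_satisfaction(chosen + [a]))
            chosen.append(best)
        return chosen

    profile = [["a", "b", "c", "d"], ["b", "d", "a", "c"], ["c", "d", "b", "a"]]
    print(greedy_budgeted_choice(profile, ["a", "b", "c", "d"], k=2))

On this toy profile the greedy picks {"b", "c"}, two alternatives tailored to different voter types rather than a single consensus winner, which is the abstract's point about diversity.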
BIO

Craig Boutilier is a Professor of Computer Science at the University of Toronto. He received his Ph.D. from the University of Toronto in 1992, and joined the faculty of the University of British Columbia in 1991 (where he remains an Adjunct Professor). He returned to Toronto in 1999, and served as Chair of the Department of Computer Science from 2004 to 2010. Boutilier was a consulting professor at Stanford University from 1998 to 2000, a visiting professor at Brown University in 1998, and a visiting professor at Carnegie Mellon University in 2008-2009. He served on the Technical Advisory Board of CombineNet, Inc. from 2001 to 2010. Boutilier has published over 170 refereed articles covering topics ranging from knowledge representation, belief revision, default reasoning, and philosophical logic to probabilistic reasoning, decision making under uncertainty, multiagent systems, and machine learning. His current research focuses on various aspects of decision making under uncertainty: preference elicitation, mechanism design, game theory and multiagent decision processes, economic models, social choice, computational advertising, Markov decision processes, and reinforcement learning. Boutilier served as Program Chair for both the 16th Conference on Uncertainty in Artificial Intelligence (UAI-2000) and the 21st International Joint Conference on Artificial Intelligence (IJCAI-2009). He will begin a two-year term as Associate Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR) in 2011 and serve as Editor-in-Chief for a two-year term beginning in 2013. Boutilier is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). He has been awarded the Izaak Walton Killam Research Fellowship and an IBM Faculty Award. He also received the Killam Teaching Award from the University of British Columbia in 1997.

From dhouston+ at cs.cmu.edu Wed Dec 8 08:50:23 2010
From: dhouston+ at cs.cmu.edu (Dana Houston)
Date: Wed, 08 Dec 2010 08:50:23 -0500
Subject: [Intelligence Seminar] OR Seminar on Friday, Dec 10th
Message-ID: <4CFF8D1F.6050901@cs.cmu.edu>

While this seminar is not part of the Intelligence Seminar series, it is on a related subject and may be of interest to the Intelligence Seminar audience.

Name: Gilles Pesant, Ecole Polytechnique de Montreal
Date: Friday, December 10, 2010
Time: 1:30 to 3:00 pm
Location: Room 151 Posner Hall

Title: Counting-Based Branching Heuristics in Constraint Programming

Abstract: Constraint Programming (CP) is a powerful technique for solving combinatorial problems. It applies sophisticated inference to reduce the search space, and a combination of variable- and value-selection heuristics to guide the exploration of that search space. As in Integer Programming, one states a model of the problem at hand in mathematical language and builds a search tree through problem decomposition. But there are important differences:
- CP works directly on discrete variables instead of relying mostly on a continuous relaxation of the model;
- the modeling language offers many high-level primitives representing common combinatorial substructures of a problem;
- each of these primitives (constraints) may have its own specific algorithm to help solve the problem;
- one does not branch on fractional variables but on indeterminate variables, which can currently take several possible values (variables are not necessarily fixed to a particular value at a node of the search tree);
- even though CP can solve optimization problems, it is primarily designed to handle feasibility problems.

Until recently, the only visible effect of the inference performed by the constraints had been on the domains of the variables, projecting a constraint's set of solutions onto each of the variables. Accordingly, most branching heuristics in CP rely on information at the level of individual variables, essentially looking at the cardinality of a variable's domain or at the number of constraints on it. Constraints play a central role in CP because they encapsulate powerful specialized inference algorithms but, above all, because they bring out the underlying structure of combinatorial problems. That exposed structure can also be exploited during search: information about the number of solutions to a constraint can help a search heuristic focus on critical parts of a problem or on promising solution fragments. This talk describes recent efforts to use solution-counting information about constraints to guide heuristic branching within a Constraint Programming framework, in order to find solutions to combinatorial problems in a more transparent yet efficient way. We present some of the counting algorithms developed for several of the most common combinatorial substructures and propose simple counting-based branching heuristics. Empirical evidence from several problem domains is given in support of such branching heuristics, compared against other state-of-the-art heuristics in CP.

The seminar has been posted at the following address: http://server1.tepper.cmu.edu/Seminars/seminar.asp?sort=1&short=Y

Please take the time to schedule a meeting with the speaker on an individual basis. At the seminar site, proceed to this seminar announcement. To add yourself for a meeting, click on the View/Edit Schedule link and then on the Edit Schedule link. Enter the name portion of your e-mail address (the @andrew.cmu.edu part is not needed) and click Update at the bottom of the page.
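[Editorial note: to make the counting-based branching idea from the abstract above concrete, here is a toy Python sketch, our own construction with hypothetical names rather than the talk's actual algorithms. A single constraint is represented extensionally as its set of allowed tuples; the solution density of assigning value v to variable x is the fraction of the constraint's remaining solutions consistent with that assignment, and the heuristic branches on the (variable, value) pair of maximum density. Real CP systems use specialized per-constraint counting algorithms instead of brute-force enumeration.]

    def solution_density(tuples, domains, var, val):
        # Fraction of the constraint's solutions (restricted to the
        # current domains) that assign `val` to variable index `var`.
        sols = [t for t in tuples
                if all(t[i] in domains[i] for i in range(len(domains)))]
        if not sols:
            return 0.0
        return sum(1 for t in sols if t[var] == val) / len(sols)

    def max_density_branch(tuples, domains):
        # Counting-based heuristic: branch on the (variable, value) pair
        # with the highest solution density among unfixed variables.
        best, best_d = None, -1.0
        for var, dom in enumerate(domains):
            if len(dom) <= 1:  # variable already fixed
                continue
            for val in dom:
                d = solution_density(tuples, domains, var, val)
                if d > best_d:
                    best, best_d = (var, val), d
        return best

    # Toy alldifferent constraint on 3 variables, given extensionally.
    tuples = [(a, b, c) for a in range(3) for b in range(3) for c in range(3)
              if len({a, b, c}) == 3]
    domains = [{0, 1, 2}, {1, 2}, {2}]
    print(max_density_branch(tuples, domains))

In this example the heuristic commits first to an assignment that appears in every remaining solution of the constraint (density 1.0), which is exactly the "promising solution fragments" intuition from the abstract.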