From nasmith at cs.cmu.edu Tue Jan 27 08:32:34 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 27 Jan 2009 08:32:34 -0500
Subject: [Intelligence Seminar] February 3: Craig Boutilier, Wean 5409, 3:30 - "Intelligent preference assessment: the next steps?"
Message-ID: <309df0210901270532s2e84612br2dc9c12390501909@mail.gmail.com>

Intelligence Seminar
February 3, 2009
3:30 pm
Wean 5409
For meetings, contact Marilyn Walgora (mwalgora at cs.cmu.edu).

Intelligent Preference Assessment: The Next Steps?
Craig Boutilier, Department of Computer Science, University of Toronto

Preference elicitation is generally required when making or recommending decisions on behalf of users whose utility function is not known with certainty. Full elicitation of user utility functions is infeasible in practice, leading to an emphasis on approaches that (a) attempt to make good recommendations with incomplete utility information; and (b) heuristically minimize the amount of user interaction needed to assess relevant aspects of a utility function. Current techniques are, however, limited in a number of ways: (i) they rely on specific forms of information for assessment; (ii) they require very stylized forms of interaction; and (iii) they are limited in the types of decision problems they can handle.

In this talk, I will outline several key research challenges in taking preference assessment to a point where wide user acceptance is possible. I will focus on three current techniques we're developing that will help move in the direction of greater user acceptance. Each tackles one of the weaknesses discussed above.

1. The first two techniques allow users to define "personalized" features over which they can express their preferences. Users provide (positive and negative) instances of a concept (or feature) over which they have preferences.
We relate this to models of concept learning, and discuss how the existence of utility functions allows decisions to be made with very incomplete knowledge of the target concept. I'll also discuss possible means of integrating data-intensive collaborative filtering approaches with explicit preference elicitation techniques, especially when tackling "subjective" features.

2. I'll discuss some of our recent work on applying explicit decision-theoretic models to more "conversational" critiquing approaches to recommender systems. We consider several semantics (with respect to user preferences) for unstructured user choices and show how these can be integrated into regret-based models.

3. Time permitting, I'll provide a sketch of some recent work on eliciting reward functions in Markov decision processes using the notion of minimax regret.

Bio: Craig Boutilier received his Ph.D. in Computer Science (1992) from the University of Toronto, Canada. He is Professor and Chair of the Department of Computer Science at the University of Toronto. He was previously an Associate Professor at the University of British Columbia, a consulting professor at Stanford University, and a visiting professor at Brown University. He has served on the Technical Advisory Board of CombineNet, Inc. since 2001. Dr. Boutilier's research interests span a wide range of topics, with a focus on decision making under uncertainty, including preference elicitation, mechanism design, game theory, Markov decision processes, and reinforcement learning. He is a Fellow of the American Association for Artificial Intelligence (AAAI) and the recipient of the Izaak Walton Killam Research Fellowship, an IBM Faculty Award, and the Killam Teaching Award. He has also served in a variety of conference organization and editorial positions, and is Program Chair of the upcoming Twenty-first International Joint Conference on Artificial Intelligence (IJCAI-09).
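As a toy illustration of the regret-based reasoning mentioned in items 2 and 3 (a minimal sketch of generic minimax-regret recommendation, not the speaker's actual algorithms; all names and numbers below are hypothetical): given a set of options and a set of utility functions still consistent with the partially elicited preferences, minimax regret recommends the option whose worst-case loss relative to the best alternative is smallest.

```python
# Generic minimax-regret recommendation (illustrative sketch only).
# `options` are available decisions; `scenarios` are utility functions still
# consistent with the incomplete preference information elicited so far.

def minimax_regret_choice(options, scenarios):
    """Return (option, max-regret) minimizing worst-case regret."""
    def max_regret(x):
        # Worst case, over consistent utilities u, of how much better
        # the best alternative y could have been than x.
        return max(max(u(y) for y in options) - u(x) for u in scenarios)
    return min(((x, max_regret(x)) for x in options), key=lambda t: t[1])

# Hypothetical example: three products under two plausible utility functions.
options = ["laptop_a", "laptop_b", "laptop_c"]
u1 = {"laptop_a": 10, "laptop_b": 7, "laptop_c": 4}.__getitem__
u2 = {"laptop_a": 2, "laptop_b": 6, "laptop_c": 9}.__getitem__
best, regret = minimax_regret_choice(options, [u1, u2])
# "laptop_b" is never the best option, but its guaranteed loss (3) is smallest.
```

Further elicitation queries shrink the scenario set, which can only lower the minimax regret; elicitation can stop once the regret is acceptably small.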
From nasmith at cs.cmu.edu Tue Feb 3 13:12:47 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 3 Feb 2009 13:12:47 -0500
Subject: [Intelligence Seminar] TODAY: Craig Boutilier, Wean 5409, 3:30 - "Intelligent preference assessment: the next steps?"
Message-ID: <309df0210902031012v56e4148alf9cd956fc8de8f5b@mail.gmail.com>

Intelligence Seminar
February 3, 2009
3:30 pm
Wean 5409
For meetings, contact Marilyn Walgora (mwalgora at cs.cmu.edu).

Intelligent Preference Assessment: The Next Steps?
Craig Boutilier, Department of Computer Science, University of Toronto
From nasmith at cs.cmu.edu Tue Feb 17 10:13:52 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 17 Feb 2009 10:13:52 -0500
Subject: [Intelligence Seminar] February 24: Kevin Leyton-Brown, Wean 5409, 3:30 - "Scaling Up Game Theory: Representation and Reasoning with Action Graph Games"
Message-ID: <309df0210902170713n7a065cb9s1d341e1741c4365d@mail.gmail.com>

Intelligence Seminar
February 24, 2009
3:30 pm
Wean 5409
For meetings, contact Michelle Martin (michelle324w at cs.cmu.edu).

Scaling Up Game Theory: Representation and Reasoning with Action Graph Games
Kevin Leyton-Brown
Computer Science Department, University of British Columbia

Abstract: Game theory is the mathematical study of interaction among independent, self-interested agents. It has wide applications, including the design of government auctions (e.g., for distressed securities), urban planning, and the analysis of internet traffic patterns. Interestingly, most work in game theory is analytic; it is less common to analyze a model's properties computationally. Key reasons for this are that game representation size tends to grow exponentially in the number of players--making all but the simplest games infeasible to write down--and that even when games can be represented, existing algorithms (e.g., for finding equilibria) tend to have worst-case performance exponential in the game's size.

This talk describes Action-Graph Games (AGGs), which make it possible to extend computational analysis to games that were previously far too large to consider. I will give an overview of our five-year effort developing AGGs, emphasizing the twin threads of representational compactness and computational tractability. More specifically, the first part of the talk will describe the core ideas of the AGG representation. AGGs are a fully expressive, graph-based representation that can compactly express both strict and context-specific independencies in players' utility functions.
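A back-of-the-envelope illustration of the compactness claim above (my own toy calculation, not an AGG implementation): in a symmetric game where a player's utility depends only on her own action and on how many of the other players chose each action, the utility table need only cover action-count configurations rather than all joint strategy profiles.

```python
from math import comb

def normal_form_entries(n_players, n_actions):
    # Explicit normal form: one utility value per player per joint profile.
    return n_players * n_actions ** n_players

def count_based_entries(n_players, n_actions):
    # Symmetric, count-based form: own action times the number of ways the
    # other n-1 players can be distributed over the actions (stars and bars).
    return n_actions * comb(n_players - 1 + n_actions - 1, n_actions - 1)

# 20 players, 5 actions: ~1.9e15 entries explicitly vs. 44,275 with counts.
explicit = normal_form_entries(20, 5)
compact = count_based_entries(20, 5)
```

AGGs go well beyond this symmetric special case, but the same idea of exploiting structure in utility functions drives the exponential savings.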
I will illustrate the representation by describing several practical examples of games that may be compactly represented as AGGs.

The second part of the talk will examine algorithmic considerations. First, I'll describe a dynamic programming algorithm for computing a player's expected utility under a given mixed-strategy profile, which is tractable for bounded-in-degree AGGs. This algorithm can be leveraged to provide an exponential speedup in the computation of best response, Nash equilibrium, and correlated equilibrium. Second, I'll describe a message-passing algorithm for computing pure-strategy Nash equilibria in symmetric AGGs, which is tractable for graphs with bounded treewidth; again, this implies an exponential speedup over the previous state of the art. Finally, I'll more briefly describe some current directions in our work on AGGs: the modeling, evaluation, and comparison of different advertising auction designs; the extension of AGGs to both temporal and stochastic settings; and the design of free software tools to make it easier for other researchers to use AGGs.

This talk is based on joint work with Albert Xin Jiang, David R.M. Thompson, and Navin A.R. Bhat.

Bio: Kevin Leyton-Brown is an assistant professor in computer science at the University of British Columbia. He received a B.Sc. from McMaster University (1998), and an M.Sc. and Ph.D. from Stanford University (2001; 2003). Much of his work is at the intersection of computer science and microeconomics, addressing computational problems in economic contexts and incentive issues in multi-agent systems. He also studies the application of machine learning to the design and analysis of algorithms for solving hard computational problems. He has co-written two books, Multiagent Systems and Essentials of Game Theory, and over forty peer-refereed technical articles. He is an associate editor of the Journal of Artificial Intelligence Research (JAIR) and a member of the editorial board of the Artificial Intelligence Journal (AIJ).
He has served as a consultant for Trading Dynamics Inc., Ariba Inc., and Cariocas Inc., and is currently scientific advisor to Worio Inc.

From nasmith at cs.cmu.edu Mon Feb 23 12:29:52 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 23 Feb 2009 12:29:52 -0500
Subject: [Intelligence Seminar] TOMORROW: Kevin Leyton-Brown, Wean 5409, 3:30 - "Scaling Up Game Theory: Representation and Reasoning with Action Graph Games"
Message-ID: <309df0210902230929o7a6d9717m35f62dfbf9027552@mail.gmail.com>

Intelligence Seminar
February 24, 2009
3:30 pm
Wean 5409
For meetings, contact Michelle Martin (michelle324w at cs.cmu.edu).

Scaling Up Game Theory: Representation and Reasoning with Action Graph Games
Kevin Leyton-Brown
Computer Science Department, University of British Columbia

From nasmith at cs.cmu.edu Tue Mar 10 21:50:42 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 10 Mar 2009 20:50:42 -0500
Subject: [Intelligence Seminar] March 17: David Blei, Wean 5409, 3:30 - "Supervised and Relational Topic Models"
Message-ID: <309df0210903101850t2dcf36e3u16d3589a37c32cc5@mail.gmail.com>

Intelligence Seminar
March 17, 2009
3:30 pm
Wean 5409
For meetings, contact Noah Smith (nasmith at cs.cmu.edu).

Supervised and relational topic models
David Blei
Princeton University

Abstract: A surge of recent research in machine learning and statistics has developed new techniques for finding patterns of words in document collections using hierarchical probabilistic models. These models are called "topic models" because the discovered word patterns often reflect the underlying topics that permeate the documents. Topic models also naturally apply to data such as images and biological sequences. In this talk I will review the basics of topic modeling, and discuss some recent extensions: supervised topic modeling and relational topic modeling. Supervised topic models allow us to use topics in a setting where we seek both exploratory and predictive power. Relational topic models---which are built on supervised topic models---consider documents interconnected in a graph. These models can be used to summarize a network of documents, predict links between them, and predict words within them. Joint work with Jonathan Chang and Jon McAuliffe.

Bio: David Blei is an assistant professor in the Computer Science department at Princeton University. He received his Ph.D. in 2004 from U.C.
Berkeley and was a postdoctoral researcher in the Department of Machine Learning at Carnegie Mellon University. His research interests include graphical models, approximate posterior inference, and nonparametric Bayesian statistics. He focuses on applications to information retrieval and natural language processing.

From nasmith at cs.cmu.edu Mon Mar 16 09:33:29 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 16 Mar 2009 09:33:29 -0400
Subject: [Intelligence Seminar] TOMORROW: David Blei, Wean 5409, 3:30 - "Supervised and Relational Topic Models"
Message-ID: <309df0210903160633s779a1443yff516549dd7be5c9@mail.gmail.com>

Intelligence Seminar
March 17, 2009
3:30 pm
Wean 5409
For meetings, contact Noah Smith (nasmith at cs.cmu.edu).

Supervised and relational topic models
David Blei
Princeton University

From nasmith at cs.cmu.edu Sat Apr 18 11:37:42 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Sat, 18 Apr 2009 11:37:42 -0400
Subject: [Intelligence Seminar] talk of interest: Ramesh Johari, 4/20 12pm, 1001 HBH
Message-ID: <309df0210904180837j7faa42f3rf3bba211ca9ce203@mail.gmail.com>

> Ramesh Johari - Assistant Professor at Stanford University, with a full-time
> appointment in the Department of Management Science and Engineering (MS&E),
> will present for the Faculty Research Seminar Series on Monday, April 20th,
> 2009, at 12:00 noon in room 1001 HBH at the Heinz College.
>
> Ramesh Johari also has courtesy appointments in the Departments of Computer
> Science (CS) and Electrical Engineering (EE). He is a member of the
> Operations Research group in MS&E, and the Information Systems Laboratory in
> EE. He is also a member of the advisory board of the Stanford Clean Slate
> Internet Program. He received an A.B. in Mathematics from Harvard (1998), a
> Certificate of Advanced Study in Mathematics from Cambridge (1999), and a
> Ph.D. in Electrical Engineering and Computer Science from MIT (2004).
>
> Title: The interaction of positive externalities and congestion effects in
> services
>
> Abstract: We study a system where a service is shared by many identical
> customers; the service is provided by a single resource. As expected, each
> customer in our model experiences congestion, a negative externality, from
> the others' usage of the shared resource.
> In addition, we assume each customer experiences a positive externality from
> others' usage; this is in contrast to prior literature that assumes a
> positive externality that depends only on the mere presence of other users.
> We consider two points of view in studying this model: the behavior of
> self-interested users who autonomously form a ``club'', and the behavior of
> a service manager. We first characterize the usage patterns of
> self-interested users, as well as the size of the club that self-interested
> users would form autonomously. We find that this club size is always smaller
> than that chosen by a service manager; however, somewhat surprisingly, usage
> in the autonomous club is always efficient. Next, we carry out an asymptotic
> analysis in the regime where the positive externality is increased without
> bound. We find that in this regime, the asymptotic behavior of the
> autonomous club can be quite different from that formed by a service
> manager: for example, the autonomous club may remain of finite size, even if
> the club formed by a service manager has infinitely many members. Joint work
> with Sunil Kumar (Stanford GSB).
>
> Please visit the link below to see his schedule and sign up to meet with
> him:
>
> http://www.cmu.edu/seminars

From nasmith at cs.cmu.edu Fri May 8 08:43:05 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Fri, 8 May 2009 08:43:05 -0400
Subject: [Intelligence Seminar] May 15: Dan Klein, NSH 1305, 2:00 - "Latent-Variable Models for Natural Language Processing"
Message-ID: <309df0210905080543w5e4e0f4bx3ce65cca38478802@mail.gmail.com>

Joint Intelligence/LTI Seminar
May 15, 2009 (note special time and place)
2:00 pm
NSH 1305
For meetings, contact Noah Smith (nasmith at cs.cmu.edu).

Latent-Variable Models for Natural Language Processing
Dan Klein
Computer Science Division, University of California, Berkeley

Abstract: Language is complex, but our labeled data sets generally aren't.
For example, treebanks specify coarse categories like noun phrases, but they say nothing about richer phenomena like agreement, case, definiteness, and so on. One solution is to use latent-variable methods to learn these underlying complexities automatically. In this talk, I will present several latent-variable models for natural language processing which take such an approach.

In the domain of syntactic parsing, I will describe a state-splitting approach which begins with an X-bar grammar and learns to iteratively refine grammar symbols. For example, noun phrases are split into subjects and objects, singular and plural, and so on. This splitting process in turn admits an efficient coarse-to-fine inference scheme, which reduces parsing times by orders of magnitude. Our method currently produces the best parsing accuracies in a variety of languages, in a fully language-general fashion. The same techniques can also be applied to acoustic modeling, where they induce latent phonological patterns.

In the domain of machine translation, we must often analyze sentences and their translations at the same time. In principle, analyzing two languages should be easier than analyzing one: it is well known that two predictors can work better when they must agree. However, ``agreement'' across languages is itself a complex, parameterized relation. I show that, for both parsing and entity recognition, bilingual models can be built from monolingual ones using latent-variable methods -- here, the latent variables are bilingual correspondences. The resulting bilingual models are substantially better than their decoupled monolingual versions, giving both error rate reductions in labeling tasks and BLEU score increases in machine translation.

Bio: Dan Klein is an assistant professor of computer science at the University of California, Berkeley (PhD Stanford, MSt Oxford, BA Cornell).
His research focuses on statistical natural language processing, including unsupervised learning methods, syntactic parsing, information extraction, and machine translation. Academic honors include a Marshall Fellowship, a Microsoft New Faculty Fellowship, the ACM Grace Murray Hopper award, and best paper awards at the ACL, NAACL, and EMNLP conferences.

From nasmith at cs.cmu.edu Thu May 14 14:41:13 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Thu, 14 May 2009 14:41:13 -0400
Subject: [Intelligence Seminar] TOMORROW: Dan Klein, NSH 1305, 2:00 - "Latent-Variable Models for Natural Language Processing"
Message-ID: <309df0210905141141t1473a47dgbd0c21cb7d8890fc@mail.gmail.com>

Joint Intelligence/LTI Seminar
May 15, 2009 (note special time and place)
2:00 pm
NSH 1305

Latent-Variable Models for Natural Language Processing
Dan Klein
Computer Science Division, University of California, Berkeley

From nasmith at cs.cmu.edu Thu Jun 18 16:00:50 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Thu, 18 Jun 2009 16:00:50 -0400
Subject: [Intelligence Seminar] June 23: Robert Thibadeau, NSH 1507, 3:30 - "Action Perception"
Message-ID: <309df0210906181300o1f37e929hf4674152b6bc9a7d@mail.gmail.com>

Intelligence Seminar
June 23, 2009 (note special place)
3:30 pm
NSH 1507
Host: Jaime Carbonell
For meetings, contact Michelle Pagnani (pagnani at cs.cmu.edu).

Action Perception
Robert Thibadeau
Seagate Research

Abstract: The human perception of actions has barely been studied, but this study of action perception promises to provide a wealth of interesting hypotheses regarding cognitive processing.
Action perception is distinct from motion perception in that the direct perception of causation is central to the percept. Among the interesting hypotheses is that what we know as thought and reasoning is where we perceive and plan actions. Another hypothesis is that what we know as logic and mathematics derives from our direct perceptions of causation in the actions we perceive and think about.

I will present a study that attempts to estimate the scale of computation needed to implement a system for visually perceiving meaningful actions and non-trivially producing an English narration of what is being visually perceived, as well as answering questions about what is visually perceived. The scale of the computation for learning could easily reach exaflops over distributed datasets (Hadoop or MapReduce style).

This study is partly based on my work (Thibadeau, 1986), and Doug Rohde's 2002 dissertation (http://tedlab.mit.edu:16080/~dr/Thesis/), as well as Simon and Rescher (1966; see summary below). The study includes an explicit proposal for extending Rohde's work to multimodal, multisensory processing.

(Simon and Rescher 1966; summary from Wikipedia, "Causality")

Derivation theories

The Nobel laureate Herbert Simon and the philosopher Nicholas Rescher claim that the asymmetry of the causal relation is unrelated to the asymmetry of any mode of implication that contraposes. Rather, a causal relation is not a relation between values of variables, but a function of one variable (the cause) onto another (the effect). So, given a system of equations, and a set of variables appearing in these equations, we can introduce an asymmetric relation among individual equations and variables that corresponds perfectly to our commonsense notion of a causal ordering.
The system of equations must have certain properties; most importantly, if some values are chosen arbitrarily, the remaining values will be determined uniquely through a path of serial discovery that is perfectly causal. They postulate that the inherent serialization of such a system of equations may correctly capture causation in all empirical fields, including physics and economics. From nasmith at cs.cmu.edu Fri Jun 19 10:10:09 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Fri, 19 Jun 2009 10:10:09 -0400 Subject: [Intelligence Seminar] Small correction: June 23: Robert Thibadeau, NSH 1507, 3:30 - "Action Perception" Message-ID: <309df0210906190710p6ec66141m52d2a8b9cb96e84d@mail.gmail.com> Correction: if you would like a meeting with Robert Thibadeau on Tuesday, June 23, please contact Dana Houston (dhouston at cs.cmu.edu), as Michelle Pagnani will be on vacation. Thanks, Noah
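The Simon-Rescher causal ordering described in the abstract above can be illustrated with a small sketch (my illustration, not from the talk): each equation determines one variable once the variables it depends on are known, so fixing the "arbitrary" (exogenous) values determines the rest along a serial, perfectly causal path. The example system is hypothetical.

```python
# Illustrative sketch of a Simon-Rescher causal ordering: solve a system of
# equations serially, where each equation determines one variable once the
# variables it depends on are already known.

def causal_order(equations):
    """equations: {var: (depends_on, fn)}; returns (solve order, values)."""
    values, order = {}, []
    while len(values) < len(equations):
        progressed = False
        for var, (deps, fn) in equations.items():
            if var not in values and all(d in values for d in deps):
                values[var] = fn(*(values[d] for d in deps))
                order.append(var)
                progressed = True
        if not progressed:
            raise ValueError("no causal ordering: system is not serially solvable")
    return order, values

# x is chosen "arbitrarily" (exogenous); y and z are then uniquely determined.
system = {
    "x": ((), lambda: 2),
    "y": (("x",), lambda x: x + 1),
    "z": (("x", "y"), lambda x, y: x * y),
}
order, values = causal_order(system)
print(order)   # the serial, causal order in which variables become determined
print(values)
```

The asymmetry Simon and Rescher emphasize shows up here as the direction of the solve: x can be varied freely, and y and z must follow, but not conversely.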
From nasmith at cs.cmu.edu Mon Jun 22 10:28:43 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Mon, 22 Jun 2009 10:28:43 -0400 Subject: [Intelligence Seminar] TOMORROW: Robert Thibadeau, NSH 1507, 3:30 - "Action Perception" Message-ID: <309df0210906220728s116e6cf1n70e981b73988cfd3@mail.gmail.com> Intelligence Seminar June 23, 2009 (note special place) 3:30 pm NSH 1507 Host: Jaime Carbonell For meetings, contact Dana Houston (dhouston at cs.cmu.edu). Action Perception Robert Thibadeau Seagate Research
From nasmith at cs.cmu.edu Mon Sep 21 17:15:38 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Mon, 21 Sep 2009 17:15:38 -0400 Subject: [Intelligence Seminar] September 28: Ashwin Ram, NSH 1507, 11 - "User-Generated AI for Interactive Digital Entertainment" Message-ID: <309df0210909211415k7bbef16cs5c667a356d472544@mail.gmail.com> Joint RI/Intelligence Seminar September 28, 2009 11 am NSH 1507 For meetings, contact Jessica Hodgins (jkh at cs.cmu.edu). User-Generated AI for Interactive Digital Entertainment Ashwin Ram, Georgia Tech Abstract: User-generated content is everywhere: photos, videos, news, blogs, art, music, and every other type of digital media on the Social Web. Games are no exception. From strategy games to immersive virtual worlds, game players are increasingly engaged in creating and sharing nearly all aspects of the gaming experience: maps, quests, artifacts, avatars, clothing, even games themselves.
Yet, there is one aspect of computer games that is not created and shared by game players: the AI. Building sophisticated personalities, behaviors, and strategies requires expertise in both AI and programming, and remains outside the purview of the end user. To understand why Game AI is hard, we need to understand how it works. AI can take digital entertainment beyond scripted interactions into the arena of truly interactive systems that are responsive, adaptive, and intelligent. I will discuss examples of AI techniques for character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager, for example). These types of AI enhance the player experience in different ways. The techniques are complicated and are usually implemented by expert game designers. I will argue that User-Generated AI is the next big frontier in the rapidly growing Social Gaming area. From Sims to Risk to World of Warcraft, end users want to create, modify, and share not only the appearance but the "minds" of their characters. I will present my recent research on intelligent technologies to assist Game AI authors, and show the first Web 2.0 application that allows average users to create AIs and challenge their friends to play them, without programming. I will conclude with some thoughts about the future of AI-based Interactive Digital Entertainment. Bio: Dr. Ashwin Ram is an Associate Professor and Director of the Cognitive Computing Lab in the College of Computing at Georgia Tech, an Associate Professor of Cognitive Science, and an Adjunct Professor in Psychology at Georgia Tech and in MathCS at Emory University. He received his PhD from Yale University in 1989, his MS from the University of Illinois in 1984, and his BTech from IIT Delhi in 1982. He has published two books and over 100 scientific articles in international forums.
He is a founder of Enkia Corporation, which develops AI software for social media applications; Inquus Corporation, which is building an online social learning network called OpenStudy; and Cobot Health Corporation, which is developing conversational agents for healthcare information access. From nasmith at cs.cmu.edu Fri Sep 25 14:23:47 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Fri, 25 Sep 2009 14:23:47 -0400 Subject: [Intelligence Seminar] Talk of interest: Friday, Oct. 2, 10:30: "Constrained conditional models: learning and inference in natural language processing, " University of Pittsburgh Message-ID: <309df0210909251123q39db110auc7bad9ae797ab58e@mail.gmail.com> University of Pittsburgh Department of Computer Science Distinguished Lecturer Series CONSTRAINED CONDITIONAL MODELS: LEARNING AND INFERENCE IN NATURAL LANGUAGE UNDERSTANDING DAN ROTH Professor, Department of Computer Science, University of Illinois at Urbana-Champaign Friday October 2, 2009 10:30 am - SENSQ 5317 Hosted by Diane J. Litman ABSTRACT Making decisions in natural language understanding tasks often involves assigning values to sets of interdependent variables where an expressive dependency structure among these can influence, or even dictate, what assignments are possible. Structured learning problems provide one such example, but we are interested in a broader setting where multiple models are involved, global inference over these is essential, but it may not be ideal, or possible, to learn them jointly. I will present work on Constrained Conditional Models (CCMs), a framework that augments probabilistic models with declarative constraints as a way to support decisions in an expressive output space while maintaining modularity and tractability of training.
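The constrained-inference idea just described can be sketched in miniature (my toy illustration, not Roth's actual formulation; the scores and labels are hypothetical): locally trained model scores propose each output variable independently, and a declarative constraint on the joint output space vetoes invalid combinations at inference time.

```python
# Toy sketch of the constrained-conditional-model idea: take per-token scores
# from some local model and pick the highest-scoring joint label sequence that
# satisfies a declarative constraint. Real CCMs solve the same objective with
# ILP or search; here the output space is tiny, so brute force suffices.
from itertools import product

# Hypothetical per-token scores (e.g., from independently trained classifiers).
scores = [
    {"B": 0.4, "I": 0.1, "O": 0.5},   # token 1: unconstrained best is "O"
    {"B": 0.2, "I": 0.7, "O": 0.1},   # token 2: unconstrained best is "I"
    {"B": 0.1, "I": 0.6, "O": 0.3},   # token 3: unconstrained best is "I"
]

def valid(labels):
    # Declarative constraint: "I" may only follow "B" or "I".
    return all(l != "I" or (i > 0 and labels[i - 1] in ("B", "I"))
               for i, l in enumerate(labels))

def constrained_argmax(scores):
    return max((lab for lab in product("BIO", repeat=len(scores)) if valid(lab)),
               key=lambda lab: sum(s[l] for s, l in zip(scores, lab)))

# The unconstrained argmax ("O", "I", "I") violates the constraint, so the
# decoder falls back to the best globally consistent assignment.
print(constrained_argmax(scores))
```

The point of the framework is exactly this separation: the local models stay modular and cheap to train, while global consistency is enforced declaratively at decision time.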
The focus will be on discussing training and inference paradigms for Constrained Conditional Models, with examples drawn from natural language understanding tasks such as semantic role labeling, information extraction, and transliteration. BIOGRAPHY OF SPEAKER Dan Roth is a Professor in the Department of Computer Science and the Beckman Institute at the University of Illinois at Urbana-Champaign and a Willet Faculty Scholar of the College of Engineering. He is the director of a DHS Center for Multimodal Information Access & Synthesis (MIAS) and also holds faculty positions in the Statistics and Linguistics Departments. Roth is a fellow of AAAI and has published broadly in machine learning, natural language processing, knowledge representation and reasoning, and learning theory, and has developed advanced machine-learning-based tools for natural language applications that are widely used by the research community, including an award-winning semantic parser. Prof. Roth has given keynote talks at major conferences, including AAAI, the Conference of the American Association for Artificial Intelligence; ICMLA, the International Conference on Machine Learning and Applications; EMNLP, the Conference on Empirical Methods in Natural Language Processing; and ECML & PKDD, the European Conference on Machine Learning and the Principles and Practice of Knowledge Discovery in Databases. He has also presented several tutorials at universities and conferences, including at ACL and the European ACL. Among his paper awards are the best paper award at IJCAI-99 and the 2001 AAAI Innovative Applications of AI Award. Roth was the program chair of CoNLL'02 and of ACL'03, and is or has been on the editorial board of several journals in his research areas; he is currently an associate editor for the Journal of Artificial Intelligence Research and the Machine Learning Journal. Prof.
Roth received his B.A. summa cum laude in Mathematics from the Technion, Israel, and his Ph.D. in Computer Science from Harvard University in 1995. From nasmith at cs.cmu.edu Sun Sep 27 13:41:09 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Sun, 27 Sep 2009 13:41:09 -0400 Subject: [Intelligence Seminar] TOMORROW (Monday): Ashwin Ram, NSH 1507, 11 - "User-Generated AI for Interactive Digital Entertainment" Message-ID: <309df0210909271041j267a24e0ka173de60194b1b44@mail.gmail.com> Joint RI/Intelligence Seminar September 28, 2009 11 am NSH 1507 For meetings, contact Jessica Hodgins (jkh at cs.cmu.edu). User-Generated AI for Interactive Digital Entertainment Ashwin Ram, Georgia Tech
From nasmith at cs.cmu.edu Tue Sep 29 08:25:00 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Tue, 29 Sep 2009 08:25:00 -0400 Subject: [Intelligence Seminar] CFP of interest: AAAI Spring Symposium on Artificial Intelligence for Development (AI-D) Message-ID: <309df0210909290525y31d5b3d7p4f6f4a0e0400a2e1@mail.gmail.com> From: Roni Rosenfeld Sent: Monday, September 28, 2009 7:55 PM Subject: AAAI Spring Symposium on Artificial Intelligence for Development (AI-D) This may be of interest to some. -Roni AAAI Spring Symposium on Artificial Intelligence for Development (AI-D) Call for Papers There has been great interest in information and communication technology for development (ICT-D) over the last several years.
The work is diverse and extends from information technologies that provide infrastructure for micropayments to techniques for monitoring and enhancing the cultivation of crops. While efforts in ICT-D have been interdisciplinary, ICT-D has largely overlooked opportunities for harnessing machine learning and reasoning to create new kinds of services, and to serve a role in analyses of data that may provide insights about socioeconomic development for disadvantaged populations. However, the unprecedented volume of data currently being generated in the developing world on human health, movement, communication, and financial transactions provides new opportunities for applying machine learning methods to development efforts. Our aim is to foster the creation of a subfield of ICT-D, which we refer to as artificial intelligence for development (AI-D), to harness these opportunities. To this end, we hope the AAAI Spring Symposium at Stanford will serve as a focal point to bring together a critical mass of researchers who are interested in applying AI research to development challenges. The goals of the symposium will be to (1) identify a core set of AI-D researchers, (2) explore key topics and representative projects in this realm, and (3) lay out an ontology of AI-D research challenges and opportunities. We are seeking original contributions in the form of both full papers and position papers on a wide range of related topics. For example, papers could address the potential for machine reasoning to make valuable off-line and real-time inferences from the large-scale mobile phone data sets currently being generated in the developing world. Such analytics could provide a better understanding of social relationships and information flows in disadvantaged societies, as well as guiding and monitoring ICT-D interventions and public policy, and giving insight into population responses to crises.
Other topics would include exploring how machine learning and inference could help us understand human mobility patterns, yielding, for example, real-time estimates of the progression of disease outbreaks and guiding public health interventions. Machine reasoning could also provide remote areas with medical support through automated diagnosis, along with guidance for the effective triaging of limited resources and human medical expertise. Additional potential topics include instant machine translation for better communication and coordination among people who speak different languages, user modeling for online tutoring, investment advisory tools, and simulation, modeling, and decision support for agricultural optimization. The AAAI Artificial Intelligence for Development Spring Symposium at Stanford will help define this new research area and identify the next steps to establishing a sustainable and vibrant AI-D research community. In conjunction with the Symposium, we are working to build a community of researchers with interests in AI-D, as well as to identify and make available case libraries of data (such as communication logs, financial transactions, and local market prices) for research. A site has been created for AI-D researchers at http://AI-D.org. Submissions: Interested participants should submit full papers (6 pages) and position papers (2 pages) in AAAI format to submissions at AI-D.org. Selected papers from the symposium will be published as an AAAI technical report.
Organizing Committee: Nathan Eagle, cochair (Santa Fe Institute) Eric Horvitz, cochair (Microsoft Research) Shawndra Hill, data cochair (Wharton) Ravi Jain, data cochair (Google) Saleema Amershi (University of Washington) Gaetano Boriello (University of Washington and Google) Neil Ferguson (Imperial, UK) Ashish Kapoor (Microsoft Research) John Quinn (Makerere University, Uganda) Roni Rosenfeld (Carnegie Mellon University) Kentaro Toyama (Microsoft Research) Peter Waiganjo Wagacha (University of Nairobi) Dates: * Submissions for the symposium are due on October 31, 2009 * Notification of acceptance will be given by November 27, 2009 * Camera-ready material must be received by January 22, 2010 * The symposium will be held at Stanford University on March 22-24, 2010 From nasmith at cs.cmu.edu Tue Sep 29 09:13:00 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Tue, 29 Sep 2009 09:13:00 -0400 Subject: [Intelligence Seminar] October 6: Michael Bowling, GHC 4303, 3:30 - "AI After Dark: Computers Playing Poker" Message-ID: <309df0210909290613v44f95889rc20009f9d997bda5@mail.gmail.com> Intelligence Seminar October 6, 2009 3:30 pm GHC 4303 Host: Tuomas Sandholm For meetings, contact Charlotte Yano (yano at cs.cmu.edu) AI After Dark: Computers Playing Poker Michael Bowling, University of Alberta Abstract: The game of poker presents a serious challenge for artificial intelligence. The game is essentially about dealing with many forms of uncertainty: unobservable opponent cards, undetermined future cards, and unknown opponent strategies. Coping with these uncertainties is critical to playing at a high level. In July 2008, the University of Alberta's poker playing program, Polaris, became the first to defeat top professional players at any variant of poker in a meaningful competition. In this talk, I'll tell the story of this match interleaved with the science that enabled Polaris's accomplishment.
Bio: Michael Bowling is an associate professor at the University of Alberta. He received his Ph.D. in 2003 from Carnegie Mellon University in the area of artificial intelligence. His research focuses on machine learning, game theory, and robotics, and he is particularly fascinated by the problem of how computers can learn to play games through experience. From nasmith at cs.cmu.edu Thu Oct 1 10:27:02 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Thu, 1 Oct 2009 10:27:02 -0400 Subject: [Intelligence Seminar] Talk of interest: TOMORROW, 10:30: "Constrained conditional models: learning and inference in natural language processing, " University of Pittsburgh Message-ID: <309df0210910010727k47b8987fq27f99feeac10753f@mail.gmail.com> University of Pittsburgh Department of Computer Science Distinguished Lecturer Series CONSTRAINED CONDITIONAL MODELS: LEARNING AND INFERENCE IN NATURAL LANGUAGE UNDERSTANDING DAN ROTH Professor, Department of Computer Science, University of Illinois at Urbana-Champaign Friday October 2, 2009 10:30 am - SENSQ 5317 Hosted by Diane J. Litman From nasmith at cs.cmu.edu Mon Oct 5 07:17:06 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Mon, 5 Oct 2009 07:17:06 -0400 Subject: [Intelligence Seminar] Talk of interest: Thursday October 8, 10:30, David Parkes In-Reply-To: <309df0210910050415u789d49e9ie985b8911ec567b2@mail.gmail.com> References: <309df0210910050415u789d49e9ie985b8911ec567b2@mail.gmail.com> Message-ID: <309df0210910050417m4b1c4810xf8401980d907646e@mail.gmail.com> From Tuomas Sandholm: David Parkes will be giving a talk in Tuomas' course, Foundations of Electronic Marketplaces, on Thursday, Oct 8th, 10:30-11:50 am, GHC 4211. The title is "Dynamic Knapsack Auctions through Computational Ironing". The paper that he will present can be found at http://www.cs.cmu.edu/~iseminar/BonnDMD.PDF Anyone is welcome. From nasmith at cs.cmu.edu Mon Oct 5 07:12:39 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Mon, 5 Oct 2009 07:12:39 -0400 Subject: [Intelligence Seminar] TOMORROW: Michael Bowling, GHC 4303, 3:30 - "AI After Dark: Computers Playing Poker" Message-ID: <309df0210910050412v28aa662ekeddad937d4994c6b@mail.gmail.com> Intelligence Seminar October 6, 2009 3:30 pm GHC 4303 Host: Tuomas Sandholm For meetings, contact Charlotte Yano (yano at cs.cmu.edu) AI After Dark: Computers Playing Poker Michael Bowling, University of Alberta
From nasmith at cs.cmu.edu Tue Oct 13 13:33:04 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Tue, 13 Oct 2009 13:33:04 -0400 Subject: [Intelligence Seminar] October 20: Donald Burke, GHC 4303, 3:30 - "Modeling Pandemic Influenza: Computation and Simulation of Epidemics and Other Dynamic Public Health Processes" Message-ID: <309df0210910131033l425d0f61q30c3ec2f7f697044@mail.gmail.com> Intelligence Seminar October 20, 2009 3:30 pm GHC 4303 Host: Roni Rosenfeld For meetings, contact Roni Rosenfeld (roni at cs.cmu.edu) Modeling Pandemic Influenza: Computation and Simulation of Epidemics and Other Dynamic Public Health Processes Donald S. Burke, University of Pittsburgh Abstract: Public health is "the science and art of preventing disease, prolonging life and promoting health through the organised efforts of society." In this seminar I will review my own studies of epidemic infectious diseases such as smallpox, dengue, and influenza using computational approaches [e.g., Nature 2004 427: 344-7; PNAS 2005 102: 15259-64; Nature 2006 442: 448-52; J R Soc Interface 2007 4: 755-62]. For studies of pandemic influenza, we created a large agent-based simulation of the USA social structure, then seeded this dynamic human social substrate with a transmissible agent with the characteristics of influenza, and used the model to evaluate various vaccine, drug, and social-distancing control interventions. Our models have been highly influential in the development of USA national influenza pandemic strategies.
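The agent-based approach described in the abstract above can be conveyed with a deliberately tiny sketch (my toy, far simpler than the talk's USA-scale models; the contact network and parameters are hypothetical): agents on a contact network, one seeded infection, fixed per-contact transmission probability and infectious period.

```python
# Minimal agent-based epidemic sketch (toy illustration): seed one infection on
# a contact network and let it spread with per-contact transmission until no
# agent remains infectious. States: S (susceptible), I (infectious), R (recovered).
import random

def simulate(contacts, seed, p_transmit=0.5, days_infectious=3, rng=None):
    rng = rng or random.Random(0)               # seeded for reproducibility
    state = {a: "S" for a in contacts}
    state[seed] = "I"
    clock = {seed: 0}                           # days spent infectious
    while any(s == "I" for s in state.values()):
        for agent, s in list(state.items()):    # snapshot: synchronous day update
            if s != "I":
                continue
            for neighbor in contacts[agent]:    # infectious contacts today
                if state[neighbor] == "S" and rng.random() < p_transmit:
                    state[neighbor] = "I"
                    clock[neighbor] = 0
            clock[agent] += 1
            if clock[agent] >= days_infectious:
                state[agent] = "R"              # recover after the infectious period
    return sum(s == "R" for s in state.values())  # final epidemic size

# A 4-person contact network; varying p_transmit or the network is the toy
# analogue of evaluating vaccination or social-distancing interventions.
contacts = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(simulate(contacts, seed=0))
```

Interventions are modeled by editing exactly these inputs: vaccination removes agents from the susceptible pool, and social distancing deletes edges from the contact network.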
In this seminar I will also discuss ongoing work on our modeling projects supported by the Bill and Melinda Gates Foundation, the NIH, and the CDC, and opportunities for experts in computation and simulation to help solve public health problems. Bio: Donald S. Burke, M.D., is Dean of the Graduate School of Public Health, Director of the Center for Vaccine Research, and Associate Vice Chancellor for Global Health at the University of Pittsburgh. He is also the first occupant of the UPMC-Jonas Salk Chair in Global Health. A native of Cleveland, Ohio, Dr. Burke received his B.A. from Western Reserve University and his M.D. from Harvard Medical School. He trained in medicine at Boston City and Massachusetts General Hospitals and in infectious diseases at the Walter Reed Army Medical Center. Throughout his professional life he has studied prevention and control of infectious diseases of global concern, including HIV/AIDS, influenza, dengue, and emerging infectious diseases. He has lived six years in Thailand, worked extensively in Cameroon, and conducted field epidemiology and vaccine studies in numerous other developing countries. He has approached epidemic control with methods ranging from the bench to the field. He now leads a trans-disciplinary team that develops computational models and simulations of epidemic infectious diseases and uses these simulations to evaluate prevention and control strategies. Dr. Burke has been at the University of Pittsburgh for three years. In October 2009 he was elected to membership in the Institute of Medicine of the National Academies.
From nasmith at cs.cmu.edu Mon Oct 19 07:32:38 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Mon, 19 Oct 2009 07:32:38 -0400 Subject: [Intelligence Seminar] TOMORROW: Donald Burke, GHC 4303, 3:30 - "Modeling Pandemic Influenza: Computation and Simulation of Epidemics and Other Dynamic Public Health Processes" Message-ID: <309df0210910190432q3cdbf323l3862c22ec2bbfeb2@mail.gmail.com> Intelligence Seminar October 20, 2009 3:30 pm GHC 4303 Host: Roni Rosenfeld For meetings, contact Roni Rosenfeld (roni at cs.cmu.edu) Modeling Pandemic Influenza: Computation and Simulation of Epidemics and Other Dynamic Public Health Processes Donald S. Burke, University of Pittsburgh From nasmith at cs.cmu.edu Tue Oct 27 08:49:23 2009 From: nasmith at cs.cmu.edu (Noah A Smith) Date: Tue, 27 Oct 2009 08:49:23 -0400 Subject: [Intelligence Seminar] November 3: Holger Hoos, GHC 4303, 3:30 - "Taming the Complexity Monster" Message-ID: <309df0210910270549v2a24e85w17ef11ed519f22ea@mail.gmail.com> Intelligence Seminar November 3, 2009 3:30 pm GHC 4303 Host: Tuomas Sandholm For meetings, contact Charlotte Yano (yano at cs.cmu.edu) Taming the Complexity Monster Holger Hoos, University of British Columbia Abstract: We live in interesting times - as individuals, as members of various communities and organisations, and as inhabitants of planet Earth, we face many challenges, ranging from climate change to resource limitations, from market risks and uncertainties to complex diseases.
To some extent, these challenges arise from the complexity of the systems we are dealing with and of the problems that arise from understanding, modelling and controlling these systems. As computing scientists and IT professionals, we have a lot to contribute: solving complex problems by means of computer systems, software and algorithms is an important part of what our field is about. In this talk, I will focus on one particular type of complexity that has been of central interest in many areas within computing science and its applications, namely computational complexity, and in particular, NP-hardness. I will investigate the question of to what extent NP-hard problems are as formidable as is often thought, and I will present an overview of research that fearlessly, and perhaps sometimes foolishly, attempts to deal with these problems in a rather pragmatic way. I will also argue that the area of empirical algorithmics holds the key to solving computationally challenging problems more effectively than many would think possible, while at the same time producing interesting scientific insights. The problems I will cover include SAT and TSP, two classical and very prominent NP-hard problems; in particular, I will present empirical scaling results for the best-performing complete TSP solver currently known and discuss recent improvements in the state of the art in solving SAT-encoded software verification problems. I will also briefly discuss new results in the areas of timetabling, protein structure prediction and analysis of financial market data.

Bio: Holger H. Hoos is an Associate Professor in the Computer Science Department of the University of British Columbia (Canada). His main research areas span empirical algorithmics, artificial intelligence, bioinformatics and computer music.
He is a co-author of the book "Stochastic Local Search: Foundations and Applications", and his research has been published in numerous book chapters, journals, and at major conferences in artificial intelligence, operations research, molecular biology and computer music. Holger is a Faculty Associate of the Peter Wall Institute for Advanced Studies and currently serves as President of the Canadian Artificial Intelligence Association (CAIAC). (For further information, see Holger's web page at http://www.cs.ubc.ca/~hoos.)

From nasmith at cs.cmu.edu Mon Nov 2 08:44:06 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 2 Nov 2009 08:44:06 -0500
Subject: [Intelligence Seminar] TOMORROW: Holger Hoos, GHC 4303, 3:30 - "Taming the Complexity Monster"
Message-ID: <309df0210911020544j23e6e995iede5c30485f55cc0@mail.gmail.com>

Intelligence Seminar
November 3, 2009
3:30 pm
GHC 4303
Host: Tuomas Sandholm
For meetings, contact Charlotte Yano (yano at cs.cmu.edu)

Taming the Complexity Monster
Holger Hoos, University of British Columbia
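The empirical scaling analysis mentioned in the Hoos abstract (measure solver runtime on growing instances and fit a scaling model) can be sketched in a few lines. The brute-force solver below is only a stand-in for illustration, not the state-of-the-art complete TSP solver the talk refers to, and the instances are synthetic:

```python
import itertools
import math
import random
import time

def brute_force_tsp(dist):
    """Exact TSP by enumerating all tours starting at city 0 (O(n!))."""
    n = len(dist)
    best = float("inf")
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        best = min(best, sum(dist[a][b] for a, b in zip(tour, tour[1:])))
    return best

def random_instance(n, seed=0):
    """Random Euclidean instance: n points in the unit square."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    return [[math.dist(p, q) for q in pts] for p in pts]

# Time the solver on growing instances and fit log(time) ~ a*n + b;
# a positive slope a indicates exponential scaling roughly like e^(a*n).
sizes, log_times = [], []
for n in range(5, 10):
    inst = random_instance(n, seed=n)
    t0 = time.perf_counter()
    brute_force_tsp(inst)
    elapsed = time.perf_counter() - t0
    sizes.append(n)
    log_times.append(math.log(max(elapsed, 1e-9)))

# Least-squares slope of log-time against instance size.
m = len(sizes)
xbar, ybar = sum(sizes) / m, sum(log_times) / m
slope = sum((x - xbar) * (y - ybar) for x, y in zip(sizes, log_times)) / \
        sum((x - xbar) ** 2 for x in sizes)
print(f"estimated scaling: time ~ exp({slope:.2f} * n)")
```

In practice this kind of analysis is run over many instances per size with careful statistics; the point here is only the shape of the methodology.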
From nasmith at cs.cmu.edu Tue Nov 3 08:58:13 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Tue, 3 Nov 2009 08:58:13 -0500
Subject: [Intelligence Seminar] November 10: Carla Brodley, GHC 4303, 3:30, "Challenges in the Practical Application of Machine Learning"
Message-ID: <309df0210911030558h1eed936cw5929fb0cc3a2136f@mail.gmail.com>

Intelligence Seminar
November 10, 2009
3:30 pm
GHC 4303
Host: Manuela Veloso
For meetings, contact Dana Houston (dhouston at cs.cmu.edu)

Challenges in the Practical Application of Machine Learning
Carla E. Brodley, Tufts University

Abstract: In this talk I will discuss the factors that impact the successful application of supervised machine learning. Driven by several interdisciplinary collaborations, we are addressing the problem of what to do when your initial accuracy is lower than is acceptable to your domain experts. Low accuracy can be due to three factors: noise in the class labels, insufficient training data, and whether the features describing each training example are able to discriminate the classes. In this talk, I will discuss research efforts at Tufts addressing the latter two factors. The first project introduces a new problem which we have named active class selection (ACS). ACS arises when one can ask the question: given the ability to collect n additional training instances, how should they be distributed with respect to class? The second project examines how one might assess that the class distinctions are not supported by the features and how constraint-based clustering can be used to uncover the true class structure of the data. These two issues and their solutions will be explored in the context of three applications. The first is to create a global map of the land cover of the Earth's surface from remotely sensed (satellite) data. The second is to build a classifier based on data collected from an "artificial nose" to discriminate vapors.
The "nose" is a collection of sensors that have different reactions to different vapors. The third is to classify HRCT images of the lung.

Bio: Carla E. Brodley is a professor in the Department of Computer Science at Tufts University. She received her PhD in computer science from the University of Massachusetts Amherst in 1994. From 1994 to 2004, she was on the faculty of the School of Electrical Engineering at Purdue University. Professor Brodley's research interests include machine learning, knowledge discovery in databases, and computer security. She has worked in the areas of anomaly detection, active learning, classifier formation, unsupervised learning, and applications of machine learning to remote sensing, computer security, digital libraries, astrophysics, content-based image retrieval of medical images, computational biology, saliva diagnostics, evidence-based medicine and chemistry. She was a member of the DSSG in 2004-2005. In 2001 she served as program co-chair for the International Conference on Machine Learning (ICML), and in 2004 she served as the general chair for ICML. Currently she is an associate editor of JMLR and Machine Learning, and she is on the editorial board of DMKD. She is a member of the AAAI Council and is co-chair of the Computing Research Association's Committee on the Status of Women in Computing Research (CRA-W).

From nasmith at cs.cmu.edu Mon Nov 9 08:40:11 2009
From: nasmith at cs.cmu.edu (Noah A Smith)
Date: Mon, 9 Nov 2009 08:40:11 -0500
Subject: [Intelligence Seminar] TOMORROW: Carla Brodley, GHC 4303, 3:30, "Challenges in the Practical Application of Machine Learning"
Message-ID: <309df0210911090540p655c25cfqd9b80c053645be25@mail.gmail.com>

Intelligence Seminar
November 10, 2009
3:30 pm
GHC 4303
Host: Manuela Veloso
For meetings, contact Dana Houston (dhouston at cs.cmu.edu)

Challenges in the Practical Application of Machine Learning
Carla E.
Brodley, Tufts University
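The active class selection question posed in Brodley's abstract above (given a budget of n additional training instances, how should they be distributed across classes?) can be illustrated with a toy heuristic. Allocating the budget in proportion to each class's current error rate is one plausible rule chosen here for illustration; it is not the method from her talk:

```python
def allocate_budget(per_class_error, budget):
    """Split `budget` new labeled instances across classes, proportionally
    to each class's current error rate (illustrative heuristic only).

    per_class_error: dict mapping class label -> estimated error rate.
    Returns a dict mapping class label -> number of instances to collect.
    """
    total = sum(per_class_error.values())
    # With no error signal at all, fall back to a uniform target allocation.
    raw = {c: (budget * e / total) if total > 0 else budget / len(per_class_error)
           for c, e in per_class_error.items()}
    alloc = {c: int(r) for c, r in raw.items()}
    # Largest-remainder rounding so the allocation sums exactly to the budget.
    leftover = budget - sum(alloc.values())
    for c in sorted(raw, key=lambda c: raw[c] - alloc[c], reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

print(allocate_budget({"healthy": 0.05, "disease": 0.25}, 100))
# → {'healthy': 17, 'disease': 83}
```

Most of the budget goes to the harder class, which is the intuition behind asking the ACS question in the first place; the published work studies such allocation strategies far more carefully.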