From aayushb at cs.cmu.edu Sat Jan 18 10:22:04 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Sat, 18 Jan 2020 10:22:04 -0500 Subject: [AI Seminar] AI Seminar on Jan 21 (NSH 3305) -- Yichong Xu -- Efficient Learning from Diverse Sources of Information Message-ID: Yichong Xu will be giving a seminar on "Efficient Learning from Diverse Sources of Information" from *12:00 - 01:00 PM* in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Efficient Learning from Diverse Sources of Information *Abstract:* Although machine learning has witnessed rapid progress in the last decade, many current learning algorithms are very inefficient in terms of the amount of data they use and the time needed to train a model. On the other hand, humans excel at many learning tasks with very limited data. Why are machines so inefficient, and why can humans learn so well? The key to the answer lies in the fact that humans can learn from diverse sources of information and are able to apply past knowledge in new domains. In this talk, I will study learning from diverse sources of information to make ML algorithms more efficient. In the first part, I will talk about how to incorporate diverse forms of questions into the learning process. In particular, I will look at the problem of utilizing preference information for learning a regression function and show an interesting connection to nearest neighbors and isotonic regression. In the second part, I will talk about multitask and transfer learning from different domains for natural language understanding. I will explain a sample-reweighting scheme that uses language models to automatically weight external-domain samples according to how helpful they are for the target task. *Bio*: Yichong Xu is a Ph.D. student in the Machine Learning Department at Carnegie Mellon University. He works on machine learning, especially on interactive learning problems, with his advisors Profs. Artur Dubrawski and Aarti Singh. To learn more about the seminar series, please visit the website. -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Jan 20 10:02:46 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 20 Jan 2020 10:02:46 -0500 Subject: [AI Seminar] AI Seminar sponsored by Fortive on Jan 21 (NSH 3305) -- Yichong Xu -- Efficient Learning from Diverse Sources of Information Message-ID: Yichong Xu will be giving a seminar on "Efficient Learning from Diverse Sources of Information" from *12:00 - 01:00 PM* in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Efficient Learning from Diverse Sources of Information *Abstract:* Although machine learning has witnessed rapid progress in the last decade, many current learning algorithms are very inefficient in terms of the amount of data they use and the time needed to train a model. On the other hand, humans excel at many learning tasks with very limited data. Why are machines so inefficient, and why can humans learn so well? The key to the answer lies in the fact that humans can learn from diverse sources of information and are able to apply past knowledge in new domains. In this talk, I will study learning from diverse sources of information to make ML algorithms more efficient.
In the first part, I will talk about how to incorporate diverse forms of questions into the learning process. In particular, I will look at the problem of utilizing preference information for learning a regression function and show an interesting connection to nearest neighbors and isotonic regression. In the second part, I will talk about multitask and transfer learning from different domains for natural language understanding. I will explain a sample-reweighting scheme that uses language models to automatically weight external-domain samples according to how helpful they are for the target task. *Bio*: Yichong Xu is a Ph.D. student in the Machine Learning Department at Carnegie Mellon University. He works on machine learning, especially on interactive learning problems, with his advisors Profs. Artur Dubrawski and Aarti Singh. To learn more about the seminar series, please visit the website. -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Jan 21 14:54:21 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 21 Jan 2020 14:54:21 -0500 Subject: [AI Seminar] Opportunity to talk at CMU AI Seminar Series, Spring 2020 Message-ID: Dear All: We have an exciting lineup of speakers scheduled for the AI seminar series this semester. We do, however, have a few open slots -- http://www.cs.cmu.edu/~aiseminar/. Please let us know if you are interested in presenting your latest-and-greatest work, or know someone who may be interested in giving a talk! Looking forward to seeing you at the AI seminar series. Thanks, Aayush -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Sun Jan 26 19:29:02 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Sun, 26 Jan 2020 19:29:02 -0500 Subject: [AI Seminar] AI Seminar sponsored by Fortive on Jan 28 (NSH 3305) -- Han Zhao -- Costs and Benefits of Invariant Representation Learning Message-ID: Han Zhao will be giving a seminar on "Costs and Benefits of Invariant Representation Learning" from *12:00 - 01:00 PM* in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Costs and Benefits of Invariant Representation Learning *Abstract: *The success of supervised machine learning in recent years crucially hinges on the availability of large-scale and unbiased data. However, it is often time-consuming and expensive to collect such data. Recent advances in deep learning focus on learning invariant representations that have found abundant applications in both domain adaptation and algorithmic fairness. However, it is not clear what price we have to pay in terms of task utility for such universal representations. In this talk, I will discuss my recent work on understanding and learning invariant representations. In the first part, I will focus on understanding the costs of existing invariant representations by characterizing a fundamental tradeoff between invariance and utility. In particular, I will use domain adaptation as an example to both theoretically and empirically show such a tradeoff in achieving a small joint generalization error. This result also implies that when the base rates differ, any fair algorithm has to make a large error on at least one of the groups.
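(A small numerical illustration of the kind of lower bound referred to above, under the assumption that "fair" means demographic parity, i.e., the classifier's positive-prediction rate is equal across the two groups. The numbers, and the exact form of the bound, are illustrative rather than taken from the talk.)

# Illustrative sketch: when a classifier is constrained to satisfy
# demographic parity, its group-wise error rates cannot both be small
# if the base rates P(Y=1 | group) are far apart. Results of this kind
# take the form err_0 + err_1 >= |p0 - p1|; the precise statement in
# the talk may differ.
p0 = 0.2  # assumed base rate of the positive label in group 0
p1 = 0.8  # assumed base rate of the positive label in group 1
lower_bound = abs(p0 - p1)
print(f"err_0 + err_1 >= {lower_bound:.2f} for any demographic-parity classifier")
# With these base rates, at least one group must suffer an error of 0.3 or more.

The further apart the base rates, the larger this unavoidable error, which is the "cost" side of invariant representations discussed above.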
In the second part of the talk, I will focus on designing learning algorithms to escape the existing tradeoff and to utilize the benefits of invariant representations. I will show how the algorithm can be used to ensure equalized treatment of individuals between groups, and what additional problem structure permits efficient domain adaptation through learning invariant representations. *Bio*: Han Zhao is a final-year PhD student in the Machine Learning Department at Carnegie Mellon University. At CMU, he works with Prof. Geoff Gordon. Before coming to CMU, he obtained his BEng degree from the Computer Science Department at Tsinghua University and his MMath from the University of Waterloo. He has a broad interest in both the theoretical and applied sides of machine learning. In particular, he works on invariant representation learning, probabilistic reasoning with Sum-Product Networks, transfer and multitask learning, and computational social choice. More details are here: https://www.cs.cmu.edu/~hzhao1/ To learn more about the seminar series, please visit the website. -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Jan 29 22:15:28 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 29 Jan 2020 22:15:28 -0500 Subject: [AI Seminar] AI Seminar sponsored by Fortive on Feb 04 (NSH 3305) -- Ashiqur R. KhudaBukhsh -- Hope Speech and Help Speech: Surfacing Positivity Amidst Hate Message-ID: Ashiqur R. KhudaBukhsh will be giving a seminar on "Hope Speech and Help Speech: Surfacing Positivity Amidst Hate" from *12:00 - 01:00 PM* in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Hope Speech and Help Speech: Surfacing Positivity Amidst Hate *Abstract: *Tackling online attacks targeting certain individuals, groups of people, or communities is a major modern-day web challenge. Research efforts in hate speech detection thus far have largely focused on identifying and subsequently filtering out negative content that specifically targets such communities. However, this "blocking the hate" approach alone may not suffice in certain scenarios. We focus on two important cases where amplifying the positives is equally important: the refugee crisis in the era of the ubiquitous internet, and heated online discussions during heightened political tension between nuclear adversaries. In the context of the Rohingya refugee crisis and the India-Pakistan conflict triggered by the Pulwama terror attack, we describe two lines of work, help speech and hope speech, which share the theme of surfacing positivity amidst hate. *Bio*: Ashique KhudaBukhsh is currently a Project Scientist at the School of Computer Science, Carnegie Mellon University (CMU). Prior to this role, he was a postdoc mentored by Prof. Jaime Carbonell at CMU. His PhD thesis (Computer Science Department, Carnegie Mellon University, also advised by Prof. Jaime Carbonell) focused on referral networks, an emerging area at the intersection of Active Learning and Game Theory. His Master's thesis at the University of British Columbia (UBC), advised by Prof. Kevin Leyton-Brown and Prof. Holger H. Hoos, focused on automated algorithm design for hard combinatorial problems. More details are here: https://www.cs.cmu.edu/~akhudabu/ To learn more about the seminar series, please visit the website.
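(As a toy illustration of the "surfacing positivity" task described in the abstract above -- not the method from the talk -- one could rank comments by the score of a simple text classifier trained on a handful of labeled examples. The data, labels, and model below are made up purely for illustration.)

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set: 1 = hopeful/helpful comment, 0 = not.
train_texts = ["we should welcome and help the refugees",
               "send them all away",
               "peace between the two countries is possible",
               "there is no point in talking to them"]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Rank unseen comments so that the most hopeful ones surface first.
new_comments = ["let us raise funds to help them", "this conflict will never end"]
scores = model.predict_proba(new_comments)[:, 1]
for text, score in sorted(zip(new_comments, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")

A real system would of course need carefully curated multilingual data and human evaluation; the point here is only the ranking-for-surfacing framing, as opposed to filtering-for-removal.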
-- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Sun Feb 2 12:17:35 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Sun, 2 Feb 2020 12:17:35 -0500 Subject: [AI Seminar] AI Seminar sponsored by Fortive on Feb 11 (NSH 3305) -- Aude Oliva -- Incorporating Insights from Cognitive Science into AI Message-ID: Aude Oliva will be giving a seminar on "Incorporating Insights from Cognitive Science into AI" from *12:00 - 01:00 PM* on Feb 11 in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Incorporating Insights from Cognitive Science into AI *Abstract: *This talk will cover milestones in visual cognition and human neuroscience research which can inform the ongoing development of artificial intelligence systems. Building on cross-cutting advances in human and machine perception research, I will describe how we can design computational theories and models that are closer to how humans register scenes and events, focus their attention, and remember important information. *Bio*: Aude Oliva is the executive director of the MIT-IBM Watson AI Lab and The MIT Quest for Intelligence. She is also a principal research scientist at the Computer Science and Artificial Intelligence Laboratory. She formerly served as an expert to the National Science Foundation, Directorate of Computer and Information Science and Engineering. Her research interests span computer vision, cognitive science, and human neuroscience. She was honored with the National Science Foundation CAREER Award, a Guggenheim Fellowship, and the Vannevar Bush Faculty Fellowship. She earned a MS and PhD in cognitive science from the Institut National Polytechnique de Grenoble, France. To learn more about the seminar series, please visit the website . *Please send an email to me for a one-on-one meeting with her on Feb 11.* -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Feb 10 08:37:00 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 10 Feb 2020 08:37:00 -0500 Subject: [AI Seminar] AI Seminar sponsored by Fortive on Feb 11 (NSH 3305) -- Aude Oliva -- Incorporating Insights from Cognitive Science into AI Message-ID: Aude Oliva will be giving a seminar on "Incorporating Insights from Cognitive Science into AI" from *12:00 - 01:00 PM* on Feb 11 in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Incorporating Insights from Cognitive Science into AI *Abstract: *This talk will cover milestones in visual cognition and human neuroscience research which can inform the ongoing development of artificial intelligence systems. Building on cross-cutting advances in human and machine perception research, I will describe how we can design computational theories and models that are closer to how humans register scenes and events, focus their attention, and remember important information. *Bio*: Aude Oliva is the executive director of the MIT-IBM Watson AI Lab and The MIT Quest for Intelligence. She is also a principal research scientist at the Computer Science and Artificial Intelligence Laboratory. She formerly served as an expert to the National Science Foundation, Directorate of Computer and Information Science and Engineering. 
Her research interests span computer vision, cognitive science, and human neuroscience. She was honored with the National Science Foundation CAREER Award, a Guggenheim Fellowship, and the Vannevar Bush Faculty Fellowship. She earned a MS and PhD in cognitive science from the Institut National Polytechnique de Grenoble, France. To learn more about the seminar series, please visit the website . -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Sun Feb 16 09:15:31 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Sun, 16 Feb 2020 09:15:31 -0500 Subject: [AI Seminar] AI Seminar sponsored by Fortive on Feb 18 (NSH 3305) -- Ben Lengerich -- Interaction Effects: Helpful or Hurtful? Message-ID: Ben Lengerich will be giving a seminar on "Interaction Effects: Helpful or Hurtful?" from *12:00 - 01:00 PM* on Feb 18 in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Interaction Effects: Helpful or Hurtful? *Abstract: *The large representational capacity of deep learning models is often viewed as a positive attribute which allows us to learn interactions of many input variables. However, large model classes can also present challenges for estimation. In this talk, we take special interest in learning interaction effects. First, we define interaction effects through the statistical framework of the functional ANOVA. By giving care to this definition, we encounter several surprising findings about the nature of interaction effects (e.g. all interaction effects look like XOR). Next, we find that traditional machine learning models (such as tree-based models) gain almost all of their predictive power from low-order interaction effects. Turning to deep models, we find that fully-connected networks tend to estimate a large amount of spurious interaction effects. Finally, we present a view of Dropout as a regularizer against interaction effects. *Bio*: Ben Lengerich is a Ph.D. student in the CS Department at Carnegie Mellon University, advised by Prof. Eric Xing. To learn more about the seminar series, please visit the website . -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Feb 24 07:23:28 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 24 Feb 2020 07:23:28 -0500 Subject: [AI Seminar] AI Seminar sponsored by Fortive on Feb 25 (NSH 3305) -- Devendra Chaplot -- Learning to Explore using Active Neural SLAM Message-ID: Devendra Chaplot will be giving a seminar on "Learning to Explore using Active Neural SLAM" from *12:00 - 01:00 PM* on Feb 25 in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Learning to Explore using Active Neural SLAM *Abstract: *In this talk, I will present a modular and hierarchical approach to learn policies for exploring 3D environments, called `Active Neural SLAM'. Our approach leverages the strengths of both classical and learning-based navigation methods, by using analytical path planners with learned SLAM module, and global and local policies. 
The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies). Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies. Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our approach over past learning and geometry-based approaches. The proposed model can also be easily transferred to the PointGoal task and was the winning entry of CVPR 2019 Habitat Navigation Challenge. *Bio*: Devendra Singh Chaplot is a Ph.D. student in the Machine Learning Department at Carnegie Mellon University working with Prof. Ruslan Salakhutdinov. His research interests lie at the intersection of Machine Learning, Computer Vision and Robotics. He has led the design of several AI systems which won the CVPR-2019 Habitat Navigation Challenge and the Visual-Doom AI Competition 2017. Chaplot is a recipient of Facebook Fellowship Award and his research has received Best Paper and Best Demo awards at leading AI conferences. His research has also been featured in several popular media outlets such as MIT Technology Review, TechCrunch, Engadget, Popular Science, Kotaku, and Daily Mail. Before joining CMU, Chaplot received his Bachelor's degree in Computer Science from IIT Bombay. To learn more about the seminar series, please visit the website . -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Sun Mar 1 08:36:47 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Sun, 1 Mar 2020 08:36:47 -0500 Subject: [AI Seminar] AI Seminar sponsored by Fortive on March 03 (NSH 3305) -- Dravyansh Sharma -- Learning piecewise Lipschitz functions in changing environments Message-ID: Dravyansh Sharma will be giving a seminar on "Learning piecewise Lipschitz functions in changing environments" from *12:00 - 01:00 PM* on March 03 in Newell Simon Hall (NSH) 3305. CMU AI Seminar is sponsored by Fortive. Lunch will be served. Following are the details of the talk: *Title: *Learning piecewise Lipschitz functions in changing environments *Abstract: *When using machine learning algorithms, often one needs to tune hyperparameters optimized for a given data instance. For many problems, including clustering, a small tweak to the parameters can cause a cascade of changes in the algorithm's behavior, so the algorithm's performance is a discontinuous function of the parameters. Optimization in the presence of sharp (non-Lipschitz), unpredictable (w.r.t. time and amount) changes is a challenging and largely unexplored problem of great significance. We consider the class of piecewise Lipschitz functions, which is the most general online setting considered in the literature for the problem. To capture changing environments, we look at the 'shifting regret', which allows for a finite number of environment shifts at unknown times. We provide a shifting regret bound for well-dispersed functions, where dispersion roughly quantifies the rate at which discontinuities appear in the utility functions in expectation. Our near-tight lower bounds further show how dispersion is necessary and sufficient for low regret. 
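(To make the online setting above concrete, here is a toy sketch: a full-information exponentially weighted forecaster over a discretized one-dimensional parameter, with a fixed-share mixing step so it can track a comparator that shifts partway through. This is not the algorithm from the talk -- which works over continuous domains and relies on dispersion for its guarantees -- it only illustrates the piecewise-constant, shifting-utility setup.)

import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 101)     # candidate hyperparameter values
w = np.ones_like(grid) / grid.size    # weights over the grid
eta, alpha = 2.0, 0.01                # learning rate, fixed-share mixing rate

def utility(t, rho):
    """Toy piecewise-constant utility whose good region shifts at round 50."""
    good = (rho < 0.3) if t < 50 else (rho > 0.7)
    return np.where(good, 1.0, 0.0)

total = 0.0
for t in range(100):
    rho = rng.choice(grid, p=w)                 # play a sampled parameter
    total += float(utility(t, np.array([rho]))[0])
    w = w * np.exp(eta * utility(t, grid))      # full-information update
    w /= w.sum()
    w = (1 - alpha) * w + alpha / grid.size     # fixed-share step for shifts
print(f"average utility over 100 rounds: {total / 100:.2f}")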
We empirically demonstrate a key application of our algorithms to online clustering problems on popular benchmarks. Joint work with Nina Balcan and Travis Dick. *Bio*: Dravy is a graduate student at CMU advised by Nina Balcan . He is interested in designing algorithms for machine learning with strong and provable performance guarantees. Previously he has worked with the Speech team at Google and completed his undergraduate studies at IIT Delhi. The work presented in the present talk has been accepted for publication at AISTATS 2020 , Palermo, and was awarded the first prize in poster competition at YinzOR 2019 , Pittsburgh. To learn more about the seminar series, please visit the website . -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Thu Mar 26 16:18:49 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Thu, 26 Mar 2020 16:18:49 -0400 Subject: [AI Seminar] Online AI Seminar on March 31 (Zoom) -- Emma Brunskill -- Learning from Limited Samples to Robustly Make Good Decisions. AI seminar is sponsored by Fortive. Message-ID: Emma Brunskill (Stanford University) will be giving an online seminar on "Learning from Limited Samples to Robustly Make Good Decisions" from *12:00 - 01:00 PM* on March 31. Zoom Link: *https://cmu.zoom.us/j/262225154 * CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Learning from Limited Samples to Robustly Make Good Decisions *Abstract: *There is increasing excitement and impressive empirical successes using reinforcement learning-- where agents can learn through experience to make decisions. Yet people are remarkably able to do so much faster and for much more complicated objectives. Creating algorithms that can mimic such performance is an essential part of achieving artificial intelligence. Equally importantly, it will help us to use reinforcement learning to assist people in the numerous societal challenges, including education and healthcare, where humans plus AI may be able to do far better than either alone. In this talk, I will discuss our progress on some of the technical challenges that arise in this pursuit, including sample efficiency, counterfactual reasoning, robustness, and applications to health and education. *Bio*: Emma Brunskill is an assistant professor in the Computer Science Department at Stanford University where she leads the AI for Human Impact group. She was previously an assistant professor at Carnegie Mellon University in the Computer Science department. She is the recipient of multiple early faculty career awards (National Science Foundation, Office of Naval Research, Microsoft Research) and her group has received several best research paper nominations (CHI, EDMx3) and awards (UAI, RLDM). To learn more about the seminar series, please visit the website . -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Mar 30 20:58:02 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 30 Mar 2020 20:58:02 -0400 Subject: [AI Seminar] Online AI Seminar on March 31 (Zoom) -- Emma Brunskill -- Learning from Limited Samples to Robustly Make Good Decisions. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: reminder: this is tomorrow at noon. 
On Thu, Mar 26, 2020 at 4:18 PM Aayush Bansal wrote: > Emma Brunskill (Stanford University) will be giving an online seminar on > "Learning from Limited Samples to Robustly Make Good Decisions" from *12:00 > - 01:00 PM* on March 31. > > Zoom Link: *https://cmu.zoom.us/j/262225154 > * > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Learning from Limited Samples to Robustly Make Good Decisions > > *Abstract: *There is increasing excitement and impressive empirical > successes using reinforcement learning-- where agents can learn through > experience to make decisions. Yet people are remarkably able to do so much > faster and for much more complicated objectives. Creating algorithms that > can mimic such performance is an essential part of achieving artificial > intelligence. Equally importantly, it will help us to use reinforcement > learning to assist people in the numerous societal challenges, including > education and healthcare, where humans plus AI may be able to do far better > than either alone. In this talk, I will discuss our progress on some of the > technical challenges that arise in this pursuit, including sample > efficiency, counterfactual reasoning, robustness, and applications to > health and education. > > *Bio*: Emma Brunskill is an assistant professor in the Computer Science > Department at Stanford University where she leads the AI for Human Impact > group. She was previously an assistant professor at Carnegie Mellon > University in the Computer Science department. She is the recipient of > multiple early faculty career awards (National Science Foundation, Office > of Naval Research, Microsoft Research) and her group has received several > best research paper nominations (CHI, EDMx3) and awards (UAI, RLDM). > > To learn more about the seminar series, please visit the website > . > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Mar 31 13:28:55 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 31 Mar 2020 13:28:55 -0400 Subject: [AI Seminar] Online AI Seminar on April 07 (Zoom) -- Cynthia Rudin -- Do Simpler Models Exist and How Can We Find Them?. AI seminar is sponsored by Fortive. Message-ID: Cynthia Rudin (Duke University) will be giving an online seminar on "Do Simpler Models Exist and How Can We Find Them?" from *12:00 - 01:00 PM* on April 07. Zoom Link: *https://cmu.zoom.us/j/262225154 * CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Do Simpler Models Exist and How Can We Find Them? *Abstract: *While the trend in machine learning has tended towards more complex hypothesis spaces, it is not clear that this extra complexity is always necessary or helpful for many domains. In particular, models and their predictions are often made easier to understand by adding interpretability constraints. These constraints shrink the hypothesis space; that is, they make the model simpler. Statistical learning theory suggests that generalization may be improved as a result as well. However, adding extra constraints can make optimization (exponentially) harder. For instance, it is much easier in practice to create an accurate neural network than an accurate and sparse decision tree. 
We address the following question: Can we show that a simple-but-accurate machine learning model might exist for our problem, before actually finding it? If the answer is promising, it would then be worthwhile to solve the harder constrained optimization problem to find such a model. In this talk, I present an easy calculation to check for the possibility of a simpler model. This calculation indicates that simpler-but-accurate models do exist in practice more often than you might think. Time-permitting, I will then briefly overview our progress towards the challenging problem of finding optimal sparse decision trees. *Bio*: Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. Her degrees are from the University at Buffalo and Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named as one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She has served on committees for INFORMS, the National Academies, the American Statistical Association, DARPA, the NIJ, and AAAI. She is a fellow of both the American Statistical Association and Institute of Mathematical Statistics. She is a Thomas Langford Lecturer at Duke University for 2019-2020. *Cynthia is available for one-on-one (virtual) meetings on April 07. Please send an email to me if you would like to schedule a meeting with her.* To learn more about the seminar series, please visit the website. -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Apr 6 17:23:43 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 6 Apr 2020 17:23:43 -0400 Subject: [AI Seminar] Online AI Seminar on April 07 (Zoom) -- Cynthia Rudin -- Do Simpler Models Exist and How Can We Find Them?. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: reminder.. this is tomorrow noon. On Tue, Mar 31, 2020 at 1:28 PM Aayush Bansal wrote: > Cynthia Rudin (Duke University) will be giving an online seminar on "Do > Simpler Models Exist and How Can We Find Them?" from *12:00 - 01:00 PM* on > April 07. > > Zoom Link: *https://cmu.zoom.us/j/262225154 > * > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Do Simpler Models Exist and How Can We Find Them? > > *Abstract: *While the trend in machine learning has tended towards more > complex hypothesis spaces, it is not clear that this extra complexity is > always necessary or helpful for many domains. In particular, models and > their predictions are often made easier to understand by adding > interpretability constraints. These constraints shrink the hypothesis > space; that is, they make the model simpler. Statistical learning theory > suggests that generalization may be improved as a result as well. However, > adding extra constraints can make optimization (exponentially) harder. For > instance, it is much easier in practice to create an accurate neural > network than an accurate and sparse decision tree. We address the following > question: Can we show that a simple-but-accurate machine learning model > might exist for our problem, before actually finding it? 
If the answer is > promising, it would then be worthwhile to solve the harder constrained > optimization problem to find such a model. In this talk, I present an easy > calculation to check for the possibility of a simpler model. This > calculation indicates that simpler-but-accurate models do exist in practice > more often than you might think. Time-permitting, I will then briefly > overview our progress towards the challenging problem of finding optimal > sparse decision trees. > > *Bio*: Cynthia Rudin is a professor of computer science, electrical and > computer engineering, and statistical science at Duke University. > Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. Her > degrees are from the University at Buffalo and Princeton University. She is > a three-time winner of the INFORMS Innovative Applications in Analytics > Award, was named as one of the "Top 40 Under 40" by Poets and Quants in > 2015, and was named by Businessinsider.com as one of the 12 most impressive > professors at MIT in 2015. She has served on committees for INFORMS, the > National Academies, the American Statistical Association, DARPA, the NIJ, > and AAAI. She is a fellow of both the American Statistical Association and > Institute of Mathematical Statistics. She is a Thomas Langford Lecturer at > Duke University for 2019-2020. > > *Cynthia is available for one-on-one (virtual) meetings on April 07. > Please send an email to me if you would like to schedule a meeting with > her.* > > > To learn more about the seminar series, please visit the website. > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Apr 7 22:39:14 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 7 Apr 2020 22:39:14 -0400 Subject: [AI Seminar] Online AI Seminar on April 14 (Zoom) -- Thomas G. Dietterich -- Advances in Anomaly Detection. AI seminar is sponsored by Fortive. Message-ID: Thomas G. Dietterich (Oregon State University) will be giving an online seminar on "Advances in Anomaly Detection" from *12:00 - 01:00 PM* on April 14. This is a joint seminar with the Institute of Software Research (ISR). Zoom Link: *https://cmu.zoom.us/j/778021232 * (NOTE: different zoom link for this talk) CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Advances in Anomaly Detection *Abstract: *Anomaly detection is important for data cleaning, cyber security, and robust AI systems. This talk will review recent work in our group on (a) benchmarking existing algorithms, (b) developing a theoretical understanding of their behavior, (c) explaining anomaly "alarms" to a data analyst, and (d) interactively re-ranking candidate anomalies in response to analyst feedback. Then the talk will describe two applications: (a) detecting and diagnosing sensor failures in weather networks and (b) open category detection in supervised learning. *Bio*: Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University, where he joined the faculty in 1985. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 190 refereed publications and two books. 
His research is motivated by challenging real world problems with a special focus on robust artificial intelligence and sustainable development. He is best known for his work on ensemble methods in machine learning including the development of error-correcting output coding. Dietterich has also invented the MAXQ method for hierarchical reinforcement learning. He has a long record of service to the AI/ML community including President of AAAI, President of the IMLS, and Executive Editor of Machine Learning. He currently serves as one of the moderators for the arXiv machine learning category. He tweets as @tdietterich. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Apr 13 12:10:21 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 13 Apr 2020 12:10:21 -0400 Subject: [AI Seminar] Online AI Seminar on April 14 (Zoom) -- Thomas G. Dietterich -- Advances in Anomaly Detection. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder -- this is tomorrow at noon.. NOTE -- we will be using a different Zoom link for this talk than our usual meetings -- *https://cmu.zoom.us/j/778021232 * On Tue, Apr 7, 2020 at 10:39 PM Aayush Bansal wrote: > Thomas G. Dietterich (Oregon State University) will be giving an online > seminar on "Advances in Anomaly Detection" from *12:00 - 01:00 PM* on > April 14. This is a joint seminar with the Institute of Software Research > (ISR). > > Zoom Link: *https://cmu.zoom.us/j/778021232 > * > (NOTE: different zoom link for this talk) > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Advances in Anomaly Detection > > *Abstract: *Anomaly detection is important for data cleaning, cyber > security, and robust AI systems. This talk will review recent work in our > group on (a) benchmarking existing algorithms, (b) developing a theoretical > understanding of their behavior, (c) explaining anomaly "alarms" to a data > analyst, and (d) interactively re-ranking candidate anomalies in response > to analyst feedback. Then the talk will describe two applications: (a) > detecting and diagnosing sensor failures in weather networks and (b) open > category detection in supervised learning. > > *Bio*: Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois > 1979; PhD Stanford University 1984) is Professor Emeritus in the School of > Electrical Engineering and Computer Science at Oregon State University, > where he joined the faculty in 1985. Dietterich is one of the pioneers of > the field of Machine Learning and has authored more than 190 refereed > publications and two books. His research is motivated by challenging real > world problems with a special focus on robust artificial intelligence and > sustainable development. He is best known for his work on ensemble methods > in machine learning including the development of error-correcting output > coding. Dietterich has also invented the MAXQ method for hierarchical > reinforcement learning. He has a long record of service to the AI/ML > community including President of AAAI, President of the IMLS, and Executive > Editor of Machine Learning. He currently serves as one of the moderators > for the arXiv machine learning category. He tweets as @tdietterich. 
> > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Thu Apr 16 07:55:09 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Thu, 16 Apr 2020 07:55:09 -0400 Subject: [AI Seminar] Online AI Seminar on April 21 (Zoom) -- Animesh Garg -- Generalizable Autonomy in Robotics. AI seminar is sponsored by Fortive. Message-ID: Animesh Garg (University of Toronto) will be giving an online seminar on "Generalizable Autonomy in Robotics" from *12:00 - 01:00 PM* on April 21. Zoom Link: *https://cmu.zoom.us/j/262225154 * CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Generalizable Autonomy in Robotics *Abstract: *Data-driven methods in Robotics circumvent hand-tuned feature engineering, albeit lack guarantees and often incur a massive computational expense. My research aims to bridge this gap and enable generalizable imitation for robot autonomy. We need to build systems that can capture semantic task structures that promote sample efficiency and can generalize to new task instances across visual, dynamical or semantic variations. And this involves designing algorithms that unify learning with perception, control, and planning. In this talk, I will show how inductive biases and priors help with Generalizable Autonomy. First I will talk about the choice of action representations in RL and imitation from ensembles of suboptimal supervisors. Then I will talk about latent variable models in self-supervised learning. Finally, I will talk about meta-learning for multi-task learning and data gathering in robotics. *Bio*: Animesh Garg is a CIFAR AI Chair Assistant Professor at the University of Toronto and Vector Institute. He is also a Senior Research Scientist at Nvidia. His research interests focus on the intersection of Learning and Perception in Robot Manipulation. He works on efficient generalization in large scale imitation learning. Animesh works on the applications of robot manipulation in surgery and manufacturing as well as personal robotics. Previously, Animesh received his Ph.D. from the University of California, Berkeley and a postdoc at Stanford AI Labs. His work has won multiple best paper awards and nominations including ICRA 2019, ICRA 2015 and IROS 2019, among others and has also featured in press outlets such as New York Times, BBC, and Wired. *Animesh is available for one-on-one (virtual) meetings on April 21. Please send me an email if you would like to schedule a meeting with him.* To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Apr 20 15:31:06 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 20 Apr 2020 15:31:06 -0400 Subject: [AI Seminar] Online AI Seminar on April 21 (Zoom) -- Animesh Garg -- Generalizable Autonomy in Robotics. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. this is tomorrow at noon. 
Zoom Link: *https://cmu.zoom.us/j/262225154 * On Thu, Apr 16, 2020 at 7:55 AM Aayush Bansal wrote: > Animesh Garg (University of Toronto) will be giving an online seminar on "Generalizable > Autonomy in Robotics" from *12:00 - 01:00 PM* on April 21. > > Zoom Link: *https://cmu.zoom.us/j/262225154 > * > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Generalizable Autonomy in Robotics > > *Abstract: *Data-driven methods in Robotics circumvent hand-tuned feature > engineering, albeit lack guarantees and often incur a massive computational > expense. My research aims to bridge this gap and enable generalizable > imitation for robot autonomy. We need to build systems that can capture > semantic task structures that promote sample efficiency and can generalize > to new task instances across visual, dynamical or semantic variations. And > this involves designing algorithms that unify learning with perception, > control, and planning. In this talk, I will show how inductive biases and > priors help with Generalizable Autonomy. First I will talk about the choice > of action representations in RL and imitation from ensembles of suboptimal > supervisors. Then I will talk about latent variable models in > self-supervised learning. Finally, I will talk about meta-learning for > multi-task learning and data gathering in robotics. > > *Bio*: Animesh Garg is a CIFAR AI Chair Assistant Professor at the > University of Toronto and Vector Institute. He is also a Senior Research > Scientist at Nvidia. His research interests focus on the intersection of > Learning and Perception in Robot Manipulation. He works on efficient > generalization in large scale imitation learning. Animesh works on the > applications of robot manipulation in surgery and manufacturing as well as > personal robotics. Previously, Animesh received his Ph.D. from the > University of California, Berkeley and a postdoc at Stanford AI Labs. His > work has won multiple best paper awards and nominations including ICRA > 2019, ICRA 2015 and IROS 2019, among others and has also featured in press > outlets such as New York Times, BBC, and Wired. > > *Animesh is available for one-on-one (virtual) meetings on April 21. > Please send me an email if you would like to schedule a meeting with him.* > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Apr 21 13:56:58 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 21 Apr 2020 13:56:58 -0400 Subject: [AI Seminar] Online AI Seminar on April 28 (Zoom) -- Hyun Soo Park -- Scaling Up Behavioral Imaging Using Many Cameras. AI seminar is sponsored by Fortive. Message-ID: Hyun Soo Park (University of Minnesota) will be giving an online seminar on "Scaling Up Behavioral Imaging Using Many Cameras" from *12:00 - 01:00 PM* on April 28. Zoom Link: *https://cmu.zoom.us/j/262225154 * CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Scaling Up Behavioral Imaging Using Many Cameras *Abstract: *Nonverbal behavioral signals such as gaze direction, facial expression, and body gesture have been ingrained into our interpersonal communications, which often appear at microscopic scale. 
Despite their omnipresence in all aspects of social interactions, existing AI systems are nearly blinded to them. In this talk, I will walk through our effort towards enabling 3D behavioral imaging---a computational model that allows precise measurements of microscopic social signals from numerous multiview cameras. A key challenge is that social interactions inherently induce self-occlusion, which fundamentally limits accurate 3D reconstruction from the image streams. I will argue that associating semantic meaning with geometry, e.g., holistic finger pose, provides a strong cue to predict the missing data. To learn such visual semantics, I will introduce a new large-scale human behavior dataset called HUMBI scanned by 107 HD cameras at the Minnesota State Fair. In the second part of the talk, I will discuss a computational approach to measure free-ranging behaviors of monkeys for neuroscience studies. Unlike humans, these monkeys are challenging due to the lack of annotation data. To address this, I will introduce a semi-supervised learning framework that leverages multiview geometry and tracking to reconstruct their motion in 3D. *Bio*: Hyun Soo Park is an Assistant Professor at the Department of Computer Science and Engineering, the University of Minnesota (UMN). He is interested in computer vision approaches for behavioral imaging. He has received NSF's CRII and CAREER awards. Prior to the UMN, he was a Postdoctoral Fellow in the GRASP Lab at the University of Pennsylvania. He earned his Ph.D. from Carnegie Mellon University. *Hyun Soo is available for one-on-one (virtual) meetings on April 28. Please send me an email if you would like to schedule a meeting with him.* To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Apr 27 17:55:44 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 27 Apr 2020 17:55:44 -0400 Subject: [AI Seminar] Online AI Seminar on April 28 (Zoom) -- Hyun Soo Park -- Scaling Up Behavioral Imaging Using Many Cameras. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. this is tomorrow at noon. On Tue, Apr 21, 2020 at 1:56 PM Aayush Bansal wrote: > Hyun Soo Park (University of Minnesota) will be giving an online seminar > on "Scaling Up Behavioral Imaging Using Many Cameras" from *12:00 - 01:00 > PM* on April 28. > > Zoom Link: *https://cmu.zoom.us/j/262225154 > * > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Scaling Up Behavioral Imaging Using Many Cameras > > *Abstract: *Nonverbal behavioral signals such as gaze direction, facial > expression, and body gesture have been ingrained into our interpersonal > communications, which often appear at microscopic scale. Despite their > omnipresence in all aspects of social interactions, existing AI systems are > nearly blinded to them. In this talk, I will walk through our effort > towards enabling 3D behavioral imaging---a computational model that allows > precise measurements of microscopic social signals from numerous multiview > cameras. A key challenge is that social interactions inherently induce > self-occlusion, which fundamentally limits accurate 3D reconstruction from > the image streams. I will argue that associating semantic meaning with > geometry, e.g., holistic finger pose, provides a strong cue to predict the > missing data.
To learn such visual semantics, I will introduce a new > large-scale human behavior dataset called HUMBI scanned by 107 HD cameras > at Minnesota State Fair. In the second part of the talk, I will discuss a > computational approach to measure free-ranging behaviors of monkeys for > neuroscience study. Unlike humans, these monkeys are challenging due to > lack of annotation data. To address this, I will introduce a > semi-supervised learning framework that leverages multiview geometry and > tracking to reconstruct their motion in 3D. > > *Bio*: Hyun Soo Park is an Assistant Professor at the Department of > Computer Science and Engineering, the University of Minnesota (UMN). He is > interested in computer vision approaches for behavioral imaging. He has > recieved NSF's CRII and CAREER awards. Prior to the UMN, he was a > Postdoctoral Fellow in GRASP Lab at University of Pennsylvania. He earned > his Ph.D. from Carnegie Mellon University. > > *Hyun Soo is available for one-on-one (virtual) meetings on April 28. > Please send me an email if you would like to schedule a meeting with him.* > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Apr 28 17:17:50 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 28 Apr 2020 17:17:50 -0400 Subject: [AI Seminar] Online AI Seminar on May 05 (Zoom) -- Carl Vondrick -- Learning from Unlabeled Video. AI seminar is sponsored by Fortive. Message-ID: Carl Vondrick (Columbia University) will be giving an online seminar on "Learning from Unlabeled Video" from *12:00 - 01:00 PM* on May 05. Zoom Link: *https://cmu.zoom.us/j/262225154 * CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Learning from Unlabeled Video *Abstract: *Unlabeled video is available at massive scales and divulges the realistic complexity of everyday visual dynamics. In this talk, I will discuss our research to capitalize on unlabeled video to train computer vision systems without a human teacher. By creating learning algorithms that use incidental and structural clues naturally available in video, our research shows how to train computers to track objects, recognize human action, and anticipate future outcomes. Visualizations and experiments show that, although the models are not trained with ground-truth labels, rich perceptual representations emerge, which can be transferred across visual analysis tasks. We believe self-supervised learning is a promising approach to train machines to perceive their surroundings. *Bio*: Carl Vondrick is an Assistant Professor of Computer Science at Columbia University. Previously, he was a Research Scientist at Google. He obtained his PhD from MIT. For more details, see his homepage . To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon May 4 19:17:32 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 4 May 2020 19:17:32 -0400 Subject: [AI Seminar] Online AI Seminar on May 05 (Zoom) -- Carl Vondrick -- Learning from Unlabeled Video. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. 
this is tomorrow at noon. On Tue, Apr 28, 2020 at 5:17 PM Aayush Bansal wrote: > Carl Vondrick (Columbia University) will be giving an online seminar on "Learning > from Unlabeled Video" from *12:00 - 01:00 PM* on May 05. > > Zoom Link: *https://cmu.zoom.us/j/262225154 > * > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Learning from Unlabeled Video > > *Abstract: *Unlabeled video is available at massive scales and divulges > the realistic complexity of everyday visual dynamics. In this talk, I will > discuss our research to capitalize on unlabeled video to train computer > vision systems without a human teacher. By creating learning algorithms > that use incidental and structural clues naturally available in video, our > research shows how to train computers to track objects, recognize human > action, and anticipate future outcomes. Visualizations and experiments show > that, although the models are not trained with ground-truth labels, rich > perceptual representations emerge, which can be transferred across visual > analysis tasks. We believe self-supervised learning is a promising approach > to train machines to perceive their surroundings. > > *Bio*: Carl Vondrick is an Assistant Professor of Computer Science at > Columbia University. Previously, he was a Research Scientist at Google. He > obtained his PhD from MIT. For more details, see his homepage > . > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed May 6 17:30:15 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 6 May 2020 17:30:15 -0400 Subject: [AI Seminar] Online AI Seminar on May 12 (Zoom) -- Aayush Bansal -- Computational Studio: A computational machinery to enhance social communication. AI seminar is sponsored by Fortive. Message-ID: I will be giving an online seminar on "Computational Studio: A computational machinery to enhance social communication" from *12:00 - 01:00 PM* on May 12. Zoom Link: *https://cmu.zoom.us/j/262225154 * CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Computational Studio: A computational machinery to enhance social communication *Abstract: *Licklider and Taylor (1968) envisioned computational machinery that could enable better communication between humans than face-to-face interaction. In the last fifty years, we have used computing to develop various means of communication, such as mail, messaging, phone calls, video conversation, and virtual reality. These are, however, a proxy of face-to-face communication that aims at encoding words, expressions, emotions, and body language at the source and decoding them reliably at the destination. The true revolution of personal computing has not begun yet because we have not been able to tap the real potential of computing for social communication. A computational machinery that can understand and create a four-dimensional audio-visual world can enable humans to describe their imagination and share it with others. In this talk, I will introduce the Computational Studio: an environment that allows non-specialists to construct and creatively edit the 4D audio-visual world from sparse audio and video samples. 
The Computational Studio aims to enable everyone to relive old memories through a form of virtual time travel, to automatically create new experiences, and share them with others using everyday computational devices. There are three essential components of the Computational Studio: (1) how can we capture 4D audio-visual world?; (2) how can we synthesize the audio-visual world using examples?; and (3) how can we interactively create and edit the audio-visual world? The first part of this talk introduces the work on capturing and browsing in-the-wild 4D audio-visual world in a self-supervised manner and efforts on building a multi-agent capture system. The applications of this work apply to social communication and to digitizing intangible cultural heritage, capturing tribal dances and wildlife in the natural environment, and understanding the social behavior of human beings. In the second part, I will talk about the example-based audio-visual synthesis in an unsupervised manner. Example-based audio-visual synthesis allows us to express ourselves easily. Finally, I will talk about the interactive visual synthesis that allows us to manually create and edit visual experiences. Here I will also stress the importance of thinking about a human user and computational devices when designing content creation applications. The Computational Studio is a first step towards unlocking the full degree of creative imagination, which is currently limited to the human mind by the limits of the individual's expressivity and skill. It has the potential to change the way we audio-visually communicate with others. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon May 11 13:58:39 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 11 May 2020 13:58:39 -0400 Subject: [AI Seminar] Online AI Seminar on May 12 (Zoom) -- Aayush Bansal -- Computational Studio: A computational machinery to enhance social communication. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. this is tomorrow at noon. On Wed, May 6, 2020 at 5:30 PM Aayush Bansal wrote: > I will be giving an online seminar on "Computational Studio: A > computational machinery to enhance social communication" from *12:00 - > 01:00 PM* on May 12. > > Zoom Link: *https://cmu.zoom.us/j/262225154 > * > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Computational Studio: A computational machinery to enhance > social communication > > *Abstract: *Licklider and Taylor (1968) envisioned computational > machinery that could enable better communication between humans than > face-to-face interaction. In the last fifty years, we have used computing > to develop various means of communication, such as mail, messaging, phone > calls, video conversation, and virtual reality. These are, however, a proxy > of face-to-face communication that aims at encoding words, expressions, > emotions, and body language at the source and decoding them reliably at the > destination. The true revolution of personal computing has not begun yet > because we have not been able to tap the real potential of computing for > social communication. 
A computational machinery that can understand and > create a four-dimensional audio-visual world can enable humans to describe > their imagination and share it with others. In this talk, I will introduce > the Computational Studio: an environment that allows non-specialists to > construct and creatively edit the 4D audio-visual world from sparse audio > and video samples. The Computational Studio aims to enable everyone to > relive old memories through a form of virtual time travel, to automatically > create new experiences, and share them with others using everyday > computational devices. > > There are three essential components of the Computational Studio: (1) how > can we capture 4D audio-visual world?; (2) how can we synthesize the > audio-visual world using examples?; and (3) how can we interactively create > and edit the audio-visual world? The first part of this talk introduces the > work on capturing and browsing in-the-wild 4D audio-visual world in a > self-supervised manner and efforts on building a multi-agent capture > system. The applications of this work apply to social communication and to > digitizing intangible cultural heritage, capturing tribal dances and > wildlife in the natural environment, and understanding the social behavior > of human beings. In the second part, I will talk about the example-based > audio-visual synthesis in an unsupervised manner. Example-based > audio-visual synthesis allows us to express ourselves easily. Finally, I > will talk about the interactive visual synthesis that allows us to manually > create and edit visual experiences. Here I will also stress the importance > of thinking about a human user and computational devices when designing > content creation applications. > > The Computational Studio is a first step towards unlocking the full degree > of creative imagination, which is currently limited to the human mind by > the limits of the individual's expressivity and skill. It has the potential > to change the way we audio-visually communicate with others. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed May 13 21:53:39 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 13 May 2020 21:53:39 -0400 Subject: [AI Seminar] Online AI Seminar on May 19 (Zoom) -- Laura Leal-Taixe -- Multi-Object Tracking: Towards end-to-end learning and data privacy. AI seminar is sponsored by Fortive. Message-ID: Laura Leal-Taixe (Technical University of Munich, TUM) will be giving an online seminar on "Multi-Object Tracking: Towards end-to-end learning and data privacy" from *12:00 - 01:00 PM* on May 19. Zoom Link: *https://cmu.zoom.us/j/262225154 * CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Multi-Object Tracking: Towards end-to-end learning and data privacy *Abstract: *In this talk, I will go over two of our recent works in multi-object tracking that aim to push the field towards the end-to-end learning paradigm. In the first one, we study the power of using the regression head of the detector for tracking, hence converting a detector into a tracktor. Even if this approach yields state-of-the-art results, it has obvious shortcomings which we analyze and aim to overcome in the second work. 
Towards this end, we leverage the graphical model formulation of multi-object tracking in order to cast the tracking problem as an end-to-end learning problem. I would also like to talk about a brand new aspect of multi-object tracking that we are working on, one which we foresee will have a strong impact in the way the field is applied to society, namely, data privacy. *Bio*: Prof. Dr. Laura Leal-Taixe is a tenure-track professor (W2) at the Technical University of Munich, leading the Dynamic Vision and Learning group. Before that, she spent two years as a postdoctoral researcher at ETH Zurich, Switzerland, and a year as a senior postdoctoral researcher in the Computer Vision Group at the Technical University in Munich. She obtained her PhD from the Leibniz University of Hannover in Germany, spending a year as a visiting scholar at the University of Michigan, Ann Arbor, USA. She pursued B.Sc. and M.Sc. in Telecommunications Engineering at the Technical University of Catalonia (UPC) in her native city of Barcelona. She went to Boston, USA to do her Masters Thesis at Northeastern University with a fellowship from the Vodafone foundation. She is a recipient of the Sofja Kovalevskaja Award of 1.65 million euros for her project socialMaps. *Laura is available for one-on-one (virtual) meetings on May 19. Please send me an email if you would like to schedule a meeting with her.* To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon May 18 12:46:12 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 18 May 2020 12:46:12 -0400 Subject: [AI Seminar] Online AI Seminar on May 19 (Zoom) -- Laura Leal-Taixe -- Multi-Object Tracking: Towards end-to-end learning and data privacy. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. this is tomorrow at noon. On Wed, May 13, 2020 at 9:53 PM Aayush Bansal wrote: > Laura Leal-Taixe (Technical University of Munich, TUM) will be giving an > online seminar on "Multi-Object Tracking: Towards end-to-end learning and > data privacy" from *12:00 - 01:00 PM* on May 19. > > Zoom Link: *https://cmu.zoom.us/j/262225154 > * > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Multi-Object Tracking: Towards end-to-end learning and data > privacy > > *Abstract: *In this talk, I will go over two of our recent works in > multi-object tracking that aim to push the field towards the end-to-end > learning paradigm. In the first one, we study the power of using the > regression head of the detector for tracking, hence converting a detector > into a tracktor. Even if this approach yields state-of-the-art results, it > has obvious shortcomings which we analyze and aim to overcome in the second > work. Towards this end, we leverage the graphical model formulation of > multi-object tracking in order to cast the tracking problem as an > end-to-end learning problem. I would also like to talk about a brand new > aspect of multi-object tracking that we are working on, one which we > foresee will have a strong impact in the way the field is applied to > society, namely, data privacy. > > *Bio*: Prof. Dr. Laura Leal-Taixe is a tenure-track professor (W2) at the > Technical University of Munich, leading the Dynamic Vision and Learning > group. 
Before that, she spent two years as a postdoctoral researcher at ETH > Zurich, Switzerland, and a year as a senior postdoctoral researcher in the > Computer Vision Group at the Technical University in Munich. She obtained > her PhD from the Leibniz University of Hannover in Germany, spending a year > as a visiting scholar at the University of Michigan, Ann Arbor, USA. She > pursued B.Sc. and M.Sc. in Telecommunications Engineering at the Technical > University of Catalonia (UPC) in her native city of Barcelona. She went to > Boston, USA to do her Masters Thesis at Northeastern University with a > fellowship from the Vodafone foundation. She is a recipient of the Sofja > Kovalevskaja Award of 1.65 million euros for her project socialMaps. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed May 20 17:07:58 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 20 May 2020 17:07:58 -0400 Subject: [AI Seminar] Online AI Seminar on May 26 (Zoom) -- Thodoris Lykouris -- Corruption robust exploration in episodic reinforcement learning. AI seminar is sponsored by Fortive. Message-ID: Thodoris Lykouris (Microsoft Research, NYC) will be giving an online seminar on "Corruption robust exploration in episodic reinforcement learning" from *12:00 - 01:00 PM* on May 26. Zoom Link: *https://cmu.zoom.us/j/262225154 * CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Corruption robust exploration in episodic reinforcement learning *Abstract: *We initiate the study of multi-stage episodic reinforcement learning under adversarial corruptions in both the rewards and the transition probabilities of the underlying system, extending recent results for the special case of stochastic bandits. We provide a framework which modifies the aggressive exploration enjoyed by existing reinforcement learning approaches based on "optimism in the face of uncertainty", by complementing them with principles from "action elimination". Importantly, our framework circumvents the major challenges posed by naively applying action elimination in the RL setting, as formalized by a lower bound we demonstrate. Our framework yields efficient algorithms which (a) attain near-optimal regret in the absence of corruptions and (b) adapt to unknown levels of corruption, enjoying regret guarantees which degrade gracefully in the total corruption encountered. To showcase the generality of our approach, we derive results for both tabular settings (where states and actions are finite) as well as linear-function-approximation settings (where the dynamics and rewards admit a linear underlying representation). Notably, our work provides the first sublinear regret guarantee which accommodates any deviation from purely i.i.d. transitions in the bandit-feedback model for episodic reinforcement learning. *Bio*: Thodoris Lykouris is a postdoctoral researcher in the machine learning group of Microsoft Research NYC. His research focus is on online decision-making, spanning the disciplines of machine learning, theoretical computer science, operations research, and economics. He completed his Ph.D. in 2019 from Cornell University, where he was advised by Eva Tardos. During his Ph.D. years, his research has been generously supported by a Google Ph.D.
Fellowship and a Cornell University Fellowship. He was also a finalist in the INFORMS Nicholson and Applied Probability Society best student paper competitions. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon May 25 20:54:45 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 25 May 2020 20:54:45 -0400 Subject: [AI Seminar] Online AI Seminar on May 26 (Zoom) -- Thodoris Lykouris -- Corruption robust exploration in episodic reinforcement learning. AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. this is tomorrow at noon. It is the last seminar of this semester. Happy Summer! On Wed, May 20, 2020 at 5:07 PM Aayush Bansal wrote: > Thodoris Lykouris (Microsoft Research, NYC) will be giving an online > seminar on "Corruption robust exploration in episodic reinforcement > learning" from *12:00 - 01:00 PM* on May 26. > > Zoom Link: *https://cmu.zoom.us/j/262225154 > * > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Corruption robust exploration in episodic reinforcement learning > > *Abstract: *We initiate the study of multi-stage episodic reinforcement > learning under adversarial corruptions in both the rewards and the > transition probabilities of the underlying system extending recent results > for the special case of stochastic bandits. We provide a framework which > modifies the aggressive exploration enjoyed by existing reinforcement > learning approaches based on "optimism in the face of uncertainty", by > complementing them with principles from "action elimination". Importantly, > our framework circumvents the major challenges posed by naively applying > action elimination in the RL setting, as formalized by a lower bound we > demonstrate. Our framework yields efficient algorithms which (a) attain > near-optimal regret in the absence of corruptions and (b) adapt to unknown > levels corruption, enjoying regret guarantees which degrade gracefully in > the total corruption encountered. To showcase the generality of our > approach, we derive results for both tabular settings (where states and > actions are finite) as well as linear-function-approximation settings > (where the dynamics and rewards admit a linear underlying representation). > Notably, our work provides the first sublinear regret guarantee which > accommodates any deviation from purely i.i.d. transitions in the > bandit-feedback model for episodic reinforcement learning. > > *Bio*: Thodoris Lykouris is a postdoctoral researcher in the machine > learning group of Microsoft Research NYC. His research focus is on online > decision-making spanning across the disciplines of machine learning, > theoretical computer science, operations research, and economics. He > completed his Ph.D. in 2019 from Cornell University where he was advised by > Eva Tardos. During his Ph.D. years, his research has been generously > supported by a Google Ph.D. Fellowship and a Cornell University Fellowship. > He was also a finalist in the INFORMS Nicholson and Applied Probability > Society best student paper competitions. 
> > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Aug 4 13:29:38 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 4 Aug 2020 13:29:38 -0400 Subject: [AI Seminar] AI Seminar Series resumes from Sep 01 Message-ID: Hi All: Hope you are doing well and having a good summer break! The virtual AI Seminar series will resume from Sep 01 with the start of the Fall-2020 semester. Here is the schedule -- http://www.cs.cmu.edu/~aiseminar/ We will be sending weekly announcements for each talk. Hoping to see you all at the seminar series! Aayush -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Fri Aug 28 17:42:20 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Fri, 28 Aug 2020 17:42:20 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 01 (Zoom) -- Timnit Gebru -- Computer vision: who is harmed and who benefits? -- AI seminar is sponsored by Fortive. Message-ID: Timnit Gebru (Google) will be giving an online seminar on "Computer vision: who is harmed and who benefits?" from *12:00 - 01:00 PM* on Sep 01. *Zoom Link*: https://cmu.zoom.us/j/92984236320?pwd=dFRhNUVNQXA0b2dKYWlqYjRFSUVWQT09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Computer vision: who is harmed and who benefits? *Abstract: *Computer vision has ceased to be a purely academic endeavor. From law enforcement to border control to employment, healthcare diagnostics, and assigning trust scores, computer vision systems have started to be used in all aspects of society. This last year has also seen a rise in public discourse regarding the use of computer-vision based technology by companies such as Google, Microsoft, Amazon, and IBM. In research, there exists work that purports to determine a person's sexuality from their social network profile images, or claims to classify "violent individuals" from drone footage. On the other hand, recent works have shown that commercial gender classification systems have high disparities in error rates by skin-type and gender, the existence of the gender bias contained in current image captioning based works, and biases in the widely used CelebA dataset and proposes adversarial learning-based methods to mitigate its effects. Policymakers and other legislators have cited some of these seminal works in their calls to investigate the unregulated usage of computer vision systems. In this talk, I will highlight research on uncovering and mitigating issues of unfair bias and historical discrimination that trained machine learning models learn to mimic and propagate. *Bio*: Timnit Gebru is an Eritrean American computer scientist and the technical co-lead of the Ethical Artificial Intelligence Team at Google. She works on algorithmic bias and data mining. She is an advocate for diversity in technology and is the co-founder of *Black in AI*, a community of black researchers working in artificial intelligence.
Prior to this, she did a postdoc at Microsoft Research, New York City in the FATE (Fairness Transparency Accountability and Ethics in AI) group, where she studied algorithmic bias and the ethical implications underlying any data mining project (see this New York Times article for an example). She received her Ph.D. from the Stanford Artificial Intelligence Laboratory, studying computer vision under Prof. Fei-Fei Li. Her thesis pertains to data mining large scale publicly available images to gain sociological insight and working on computer vision problems that arise as a result. The Economist , The New York Times , and others have covered part of this work. Prior to joining Fei-Fei's lab, she worked at Apple designing circuits and signal processing algorithms for various Apple products including the first iPad. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Aug 31 18:46:50 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 31 Aug 2020 18:46:50 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 01 (Zoom) -- Timnit Gebru -- Computer vision: who is harmed and who benefits? -- AI seminar is sponsored by Fortive. Message-ID: Reminder... Timnit Gebru (Google) will be giving an online seminar on "Computer vision: who is harmed and who benefits?" from *12:00 - 01:00 PM* on Sep 01. *Zoom Link*: https://cmu.zoom.us/j/92984236320?pwd=dFRhNUVNQXA0b2dKYWlqYjRFSUVWQT09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Computer vision: who is harmed and who benefits? *Abstract: *Computer vision has ceased to be a purely academic endeavor. From law enforcement to border control to employment, healthcare diagnostics, and assigning trust scores, computer vision systems have started to be used in all aspects of society. This last year has also seen a rise in public discourse regarding the use of computer-vision based technology by companies such as Google, Microsoft, Amazon, and IBM. In research, there exists work that purports to determine a person's sexuality from their social network profile images, or claims to classify "violent individuals" from drone footage. On the other hand, recent works have shown that commercial gender classification systems have high disparities in error rates by skin-type and gender, the existence of the gender bias contained in current image captioning based works, and biases in the widely used CelebA dataset and proposes adversarial learning-based methods to mitigate its effects. Policymakers and other legislators have cited some of these seminal works in their calls to investigate the unregulated usage of computer vision systems. In this talk, I will highlight research on uncovering and mitigating issues of unfair bias and historical discrimination that trained machine learning models learn to mimic and propagate. *Bio*: Timnit Gebru is a senior research scientist at Google co-leading the Ethical Artificial Intelligence research team. Her work focuses on mitigating the potential negative impacts of machine learning based systems. Timnit is also the co-founder of *Black in AI*, a non profit supporting Black researchers and practitioners in artificial intelligence.
Prior to this, she did a postdoc at Microsoft Research, New York City in the FATE (Fairness Transparency Accountability and Ethics in AI) group, where she studied algorithmic bias and the ethical implications underlying any data mining project (see this New York Times article for an example). She received her Ph.D. from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her thesis pertains to data mining large scale publicly available images to gain sociological insight and working on computer vision problems that arise as a result. The Economist , The New York Times , and others have covered part of this work. Prior to joining Fei-Fei's lab, she worked at Apple designing circuits and signal processing algorithms for various Apple products including the first iPad. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Sep 2 09:53:14 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 2 Sep 2020 09:53:14 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 08 (Zoom) -- Ankur Handa -- DexPilot: Vision-Based Teleoperation of Dexterous Robotic Hand-Arm System -- AI seminar is sponsored by Fortive. Message-ID: Ankur Handa (Nvidia) will be giving an online seminar on "DexPilot: Vision-Based Teleoperation of Dexterous Robotic Hand-Arm System" from *12:00 - 01:00 PM* on Sep 08. *Zoom Link*: https://cmu.zoom.us/j/93311828075?pwd=RTlCcW0wRHNMNHk1RDJMWUJQWDVNdz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *DexPilot: Vision-Based Teleoperation of Dexterous Robotic Hand-Arm System. *Abstract: *Teleoperation offers the possibility of imparting robotic systems with sophisticated reasoning skills, intuition, and creativity to perform tasks. However, current teleoperation solutions for high degree-of-actuation (DoA), multi-fingered robots are generally cost-prohibitive, while low-cost offerings usually provide reduced degrees of control. Herein, a low-cost, vision-based teleoperation system, DexPilot, was developed that allows for complete control over the full 23 DoA robotic system by merely observing the bare human hand. DexPilot enables operators to carry out a variety of complex manipulation tasks that go beyond simple pick-and-place operations. This allows for the collection of high dimensional, multi-modality, state-action data that can be leveraged in the future to learn sensorimotor policies for challenging manipulation tasks. The system performance was measured through speed and reliability metrics across two human demonstrators on a variety of tasks. Here is the link to the project . *Bio*: Ankur Handa is currently a Research Scientist at NVIDIA Seattle Robotics group led by Dieter Fox. Prior to that he was a Research Scientist at OpenAI and before that he was a Dyson Fellow at Imperial College London. He finished his PhD with Prof. Andrew Davison at Imperial College London and did a two year post-doc with Prof. Roberto Cipolla at the University of Cambridge. His papers have won Best Industry Paper Award at BMVC, 2014 and have been Best Manipulation Paper Award Finalist and Best Student Paper Award Finalist at ICRA 2019. 
To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Sep 7 07:33:49 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 7 Sep 2020 07:33:49 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 08 (Zoom) -- Ankur Handa -- DexPilot: Vision-Based Teleoperation of Dexterous Robotic Hand-Arm System -- AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. this is tomorrow at noon. On Wed, Sep 2, 2020 at 9:53 AM Aayush Bansal wrote: > Ankur Handa (Nvidia) will be giving an online seminar on "DexPilot: > Vision-Based Teleoperation of Dexterous Robotic Hand-Arm System" from *12:00 > - 01:00 PM* on Sep 08. > > *Zoom Link*: > https://cmu.zoom.us/j/93311828075?pwd=RTlCcW0wRHNMNHk1RDJMWUJQWDVNdz09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *DexPilot: Vision-Based Teleoperation of Dexterous Robotic > Hand-Arm System. > > *Abstract: *Teleoperation offers the possibility of imparting robotic > systems with sophisticated reasoning skills, intuition, and creativity to > perform tasks. However, current teleoperation solutions for high > degree-of-actuation (DoA), multi-fingered robots are generally > cost-prohibitive, while low-cost offerings usually provide reduced degrees > of control. Herein, a low-cost, vision-based teleoperation system, > DexPilot, was developed that allows for complete control over the full 23 > DoA robotic system by merely observing the bare human hand. DexPilot > enables operators to carry out a variety of complex manipulation tasks that > go beyond simple pick-and-place operations. This allows for the collection > of high dimensional, multi-modality, state-action data that can be > leveraged in the future to learn sensorimotor policies for challenging > manipulation tasks. The system performance was measured through speed and > reliability metrics across two human demonstrators on a variety of tasks. > Here is the link to the project . > > > *Bio*: Ankur Handa is currently a Research Scientist at NVIDIA Seattle > Robotics group led by Dieter Fox. Prior to that he was a Research Scientist > at OpenAI and before that he was a Dyson Fellow at Imperial College London. > He finished his PhD with Prof. Andrew Davison at Imperial College London > and did a two year post-doc with Prof. Roberto Cipolla at the University of > Cambridge. His papers have won Best Industry Paper Award at BMVC, 2014 and > have been Best Manipulation Paper Award Finalist and Best Student Paper > Award Finalist at ICRA 2019. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Sep 8 13:21:14 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 8 Sep 2020 13:21:14 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 15 (Zoom) -- Moshe Bar -- Overarching States of Mind -- AI seminar is sponsored by Fortive. Message-ID: Moshe Bar (Bar-Ilan University) will be giving an online seminar on "Overarching States of Mind" from *12:00 - 01:00 PM* on Sep 15. *Zoom Link*: https://cmu.zoom.us/j/97696348550?pwd=cWdiSUZST2wyd0lza3QxQllNY2d3UT09 CMU AI Seminar is sponsored by Fortive. 
Following are the details of the talk: *Title: *Overarching States of Mind *Abstract: *Implicitly, we think of our brain and mind as fixed: always with the same inclinations, biases, strengths, and weaknesses. But the human mind is dynamic and seamlessly changing between different states depending on circumstances. We propose that these states of mind are holistic in that they exert all-encompassing and coordinated effects simultaneously on our perception, cognition, thought, affect, and action. Given the apparent breadth of their reach, being able to explain how states of mind operate is essential. We provide a framework for the concept of states of mind (SoM). From this framework, we derive several unique hypotheses and propose an underlying mechanism whereby SoM is determined by the balance between top-down and bottom-up cortical processing. This novel framework opens new directions for understanding the human mind and bears widespread implications for mental health. *Bio*: Moshe Bar is a neuroscientist, director of the Gonda Multidisciplinary Brain Research Center at Bar-Ilan University. He is the head of the Cognitive Neuroscience Laboratory at the Gonda Multidisciplinary Brain Research Center. Prof. Bar assumed the position of the Gonda Multidisciplinary Brain Research Center director following 17 years in the US, where he had served as an associate professor at Harvard University and Massachusetts General Hospital and had led the Cognitive Neuroscience Laboratory at the Athinoula A. Martinos Center for Biomedical Imaging. Prof. Bar has made significant contributions to the field of cognition; ideas and findings that have challenged dominant paradigms in areas of exceptional diversity: from the flow of information in the cortex during visual recognition to the importance of mental simulations for planning and foresight in the brain, and from the effect of form on aesthetic preferences to a clinical theory on mood and depression. Prof. Bar uses methods from cognitive psychology, psychophysics, human brain imaging, computational neuroscience, and psychiatry to explore predictions and contextual processing in the brain, and their role in facilitating visual recognition. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Sep 14 13:45:56 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 14 Sep 2020 13:45:56 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 15 (Zoom) -- Moshe Bar -- Overarching States of Mind -- AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. this is tomorrow at noon! On Tue, Sep 8, 2020 at 1:21 PM Aayush Bansal wrote: > Moshe Bar (Bar-Ilan University) will be giving an online seminar on "Overarching > States of Mind" from *12:00 - 01:00 PM* on Sep 15. > > *Zoom Link*: > https://cmu.zoom.us/j/97696348550?pwd=cWdiSUZST2wyd0lza3QxQllNY2d3UT09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Overarching States of Mind > > *Abstract: *Implicitly, we think of our brain and mind as fixed: always > with the same inclinations, biases, strengths, and weaknesses. But the > human mind is dynamic and seamlessly changing between different states > depending on circumstances. 
We propose that these states of mind are > holistic in that they exert all-encompassing and coordinated effects > simultaneously on our perception, cognition, thought, affect, and action. > Given the apparent breadth of their reach, being able to explain how states > of mind operate is essential. We provide a framework for the concept of > states of mind (SoM). From this framework, we derive several unique > hypotheses and propose an underlying mechanism whereby SoM is determined by > the balance between top-down and bottom-up cortical processing. This novel > framework opens new directions for understanding the human mind and bears > widespread implications for mental health. > > > *Bio*: Moshe Bar is a neuroscientist, director of the Gonda > Multidisciplinary Brain Research Center at Bar-Ilan University. He is the > head of the Cognitive Neuroscience Laboratory at the Gonda > Multidisciplinary Brain Research Center. Prof. Bar assumed the position of > the Gonda Multidisciplinary Brain Research Center director following 17 > years in the US, where he had served as an associate professor at Harvard > University and Massachusetts General Hospital and had led the Cognitive > Neuroscience Laboratory at the Athinoula A. Martinos Center for Biomedical > Imaging. > > Prof. Bar has made significant contributions to the field of cognition; > ideas and findings that have challenged dominant paradigms in areas of > exceptional diversity: from the flow of information in the cortex during > visual recognition to the importance of mental simulations for planning and > foresight in the brain, and from the effect of form on aesthetic > preferences to a clinical theory on mood and depression. Prof. Bar uses > methods from cognitive psychology, psychophysics, human brain imaging, > computational neuroscience, and psychiatry to explore predictions and > contextual processing in the brain, and their role in facilitating visual > recognition. > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Sep 16 19:55:24 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 16 Sep 2020 19:55:24 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 22 (Zoom) -- Anima Anandkumar -- Bridging the gap between artificial and human intelligence: Feedback and Compositionality -- AI seminar is sponsored by Fortive. Message-ID: Anima Anandkumar (Caltech/Nvidia) will be giving an online seminar on "Bridging the gap between artificial and human intelligence: Feedback and Compositionality" from *12:00 - 01:00 PM EDT* on Sep 22. *Zoom Link*: https://cmu.zoom.us/j/91778147295?pwd=U25LazBKaU5Wc2tteHYxSE9CbVhydz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Bridging the gap between artificial and human intelligence: Feedback and Compositionality *Abstract: *Deep learning has yielded impressive performance over the last few years. However, it is no match to human perception and reasoning. Recurrent feedback in the human brain is shown to be critical for robust perception, and is able to correct the potential errors using an internal generative model of the world. Inspired by this, we augment any existing neural network with feedback (NN-F) in a Bayes-consistent manner. 
We demonstrate inherent robustness in NN-F that is far superior to standard neural networks. Compositionality is another important hallmark of human intelligence. Humans are able to compose concepts to reason about entirely new scenarios. We have created a new dataset for few-shot learning, inspired by the Bongard challenge. We show that all existing meta learning methods severely fall short of human performance. We argue that neuro-symbolic reasoning is critical for tackling such few-shot learning challenges and showcase some success stories. *Bio*: Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors such as Alfred. P. Sloan Fellowship, NSF Career Award, Young investigator awards from DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focus is on unsupervised AI, optimization, and tensor methods. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Sep 21 09:23:25 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 21 Sep 2020 09:23:25 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 22 (Zoom) -- Anima Anandkumar -- Bridging the gap between artificial and human intelligence: Feedback and Compositionality -- AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder.. this is tomorrow at noon. On Wed, Sep 16, 2020 at 7:55 PM Aayush Bansal wrote: > Anima Anandkumar (Caltech/Nvidia) will be giving an online seminar on "Bridging > the gap between artificial and human intelligence: Feedback and > Compositionality" from *12:00 - 01:00 PM EDT* on Sep 22. > > *Zoom Link*: > https://cmu.zoom.us/j/91778147295?pwd=U25LazBKaU5Wc2tteHYxSE9CbVhydz09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Bridging the gap between artificial and human intelligence: > Feedback and Compositionality > > *Abstract: *Deep learning has yielded impressive performance over the > last few years. However, it is no match to human perception and reasoning. > Recurrent feedback in the human brain is shown to be critical for robust > perception, and is able to correct the potential errors using an internal > generative model of the world. Inspired by this, we augment any existing > neural network with feedback (NN-F) in a Bayes-consistent manner. We > demonstrate inherent robustness in NN-F that is far superior to standard > neural networks. > > > Compositionality is another important hallmark of human intelligence. > Humans are able to compose concepts to reason about entirely new > scenarios. We have created a new dataset for few-shot learning, inspired > by the Bongard challenge. We show that all existing meta learning methods > severely fall short of human performance. We argue that neuro-symbolic > reasoning is critical for tackling such few-shot learning challenges and > showcase some success stories. > > > *Bio*: Anima Anandkumar is a Bren Professor at Caltech and Director of ML > Research at NVIDIA. She was previously a Principal Scientist at Amazon Web > Services. 
She has received several honors such as Alfred. P. Sloan > Fellowship, NSF Career Award, Young investigator awards from DoD, and > Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is > part of the World Economic Forum's Expert Network. She is passionate about > designing principled AI algorithms and applying them in interdisciplinary > applications. Her research focus is on unsupervised AI, optimization, and > tensor methods. > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Sep 22 16:35:34 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 22 Sep 2020 16:35:34 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 29 (Zoom) -- Victor Lempitsky -- Deep Generative Models for Avatars and Landscapes -- AI seminar is sponsored by Fortive. Message-ID: Victor Lempitsky (Samsung) will be giving an online seminar on "Deep Generative Models for Avatars and Landscapes" from *12:00 noon - 01:00 PM ET* on Sep 29. *Zoom Link*: https://cmu.zoom.us/j/99758211058?pwd=VElWMWdEVFJPdjY5R2xiWlBFdjFoZz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Deep Generative Models for Avatars and Landscapes *Abstract: *Deep Generative Models, in particular those based on adversarial learning, have achieved remarkable results at synthesizing and editing 2D photographs. In this talk, I will discuss how a very successful model of this class (StyleGAN) can be extended to two new domains. First, I will discuss an extension that allows to model realistic landscape timelapse videos. After training, the system can synthesize new landscape videos. The resulting model can also reenact static landscape photographs, and we show that such reenactments outperform previous approaches to this task. The second extension allows to model the full-body appearance of humans. Here, we combine StyleGAN with a modern deformable body model (SMPL-X) and a neural rendering approach into a system that can synthesize 3D full-body avatars from scratch or create such avatars from one or a few photographs. *Bio*: Victor Lempitsky leads the Samsung AI Center in Moscow as well as the Vision, Learning, Telepresence (VIOLET) Lab at this center. He is also an associate professor at the Skolkovo Institute of Science and Technology (Skoltech). In the past, Victor was a researcher at Yandex, at the Visual Geometry Group (VGG) of Oxford University, and at the Computer Vision group of Microsoft Research Cambridge. He has a Ph.D. ("kandidat nauk") degree from Moscow State University (2007). Victor's research interests are in various aspects of computer vision and deep learning, in particular, generative deep learning. He has served as an area chair for top computer vision and machine learning conferences (CVPR, ICCV, ECCV, ICLR, NeurIPS) on multiple occasions. His recent work on neural head avatars was recognized as the most-discussed research publication of 2019 by Altmetric Top 100 rating. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aayushb at cs.cmu.edu Mon Sep 28 11:45:59 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 28 Sep 2020 11:45:59 -0400 Subject: [AI Seminar] Online AI Seminar on Sep 29 (Zoom) -- Victor Lempitsky -- Deep Generative Models for Avatars and Landscapes -- AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon. On Tue, Sep 22, 2020 at 4:35 PM Aayush Bansal wrote: > Victor Lempitsky (Samsung) will be giving an online seminar on "Deep > Generative Models for Avatars and Landscapes" from *12:00 noon - 01:00 PM > ET* on Sep 29. > > *Zoom Link*: > https://cmu.zoom.us/j/99758211058?pwd=VElWMWdEVFJPdjY5R2xiWlBFdjFoZz09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Deep Generative Models for Avatars and Landscapes > > *Abstract: *Deep Generative Models, in particular those based on > adversarial learning, have achieved remarkable results at synthesizing and > editing 2D photographs. In this talk, I will discuss how a very successful > model of this class (StyleGAN) can be extended to two new domains. First, I > will discuss an extension that allows to model realistic landscape > timelapse videos. After training, the system can synthesize new landscape > videos. The resulting model can also reenact static landscape photographs, > and we show that such reenactments outperform previous approaches to this > task. The second extension allows to model the full-body appearance of > humans. Here, we combine StyleGAN with a modern deformable body model > (SMPL-X) and a neural rendering approach into a system that can synthesize > 3D full-body avatars from scratch or create such avatars from one or a few > photographs. > > > > *Bio*: Victor Lempitsky leads the Samsung AI Center in Moscow as well as > the Vision, Learning, Telepresence (VIOLET) Lab at this center. He is also > an associate professor at the Skolkovo Institute of Science and Technology > (Skoltech). In the past, Victor was a researcher at Yandex, at the Visual > Geometry Group (VGG) of Oxford University, and at the Computer Vision group > of Microsoft Research Cambridge. He has a Ph.D. ("kandidat nauk") degree > from Moscow State University (2007). Victor's research interests are in > various aspects of computer vision and deep learning, in particular, > generative deep learning. He has served as an area chair for top computer > vision and machine learning conferences (CVPR, ICCV, ECCV, ICLR, NeurIPS) > on multiple occasions. His recent work on neural head avatars was > recognized as the most-discussed research publication of 2019 by Altmetric > Top 100 rating. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Sep 30 08:04:07 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 30 Sep 2020 08:04:07 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 06 (Zoom) -- Christian Schunn -- Transforming frenemies to collaborators: automated and peer feedback -- AI seminar is sponsored by Fortive. Message-ID: Christian Schunn (University of Pittsburgh) will be giving an online seminar on "Transforming frenemies to collaborators: automated and peer feedback" from *11:30 A.M - 12:30 PM ET* on* Oct 06* (*please note change of time*). 
*Zoom Link*: https://cmu.zoom.us/j/94514271581?pwd=cEpIN2ZiUGM2S2xrWG95aW1PTGZOdz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Transforming frenemies to collaborators: automated and peer feedback *Abstract: *As NLP and AI have made major advances, there has been a rise of AI systems that seek to provide useful feedback to learners on complex, open-ended tasks such as design projects, extended writing, and presentations. In some circles, such systems are seen as competitive alternatives to traditional teacher feedback or increasingly-popular peer feedback systems. However, such automated feedback systems seem to be struggling to be successful without time-consuming customization to each specific task given to students. I will present the latest findings on why and when peer feedback works, along with some experiences regarding the relatively narrow role automated systems can currently play on their own. The goal of the talk is to suggest new frontiers in AI-augmented peer feedback systems. *Bio*: Christian Schunn is Co-director of the Institute for Learning, Senior Scientist at the Learning Research and Development Center and a Professor of Intelligent Systems, Psychology, and Learning Sciences and Policy at the University of Pittsburgh, and Chief Learning Scientist for Peerceptiv, a local company offering a peer assessment system used throughout the world. He obtained his Ph.D. at CMU in 1995. He directs research and design projects in writing, science, mathematics, technology, and engineering education. He has served on two National Academy of Engineering committees, and he is a Fellow of AAAS, APA, APS, and the International Society for Design & Development in Education. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Oct 5 10:40:25 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 5 Oct 2020 10:40:25 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 06 (Zoom) -- Christian Schunn -- Transforming frenemies to collaborators: automated and peer feedback -- AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder...this is tomorrow from *11:30 A.M - 12:30 PM ET* (please note change of time). On Wed, Sep 30, 2020 at 8:04 AM Aayush Bansal wrote: > Christian Schunn (University of Pittsburgh) will be giving an online > seminar on "Transforming frenemies to collaborators: automated and peer > feedback" from *11:30 A.M - 12:30 PM ET* on* Oct 06* (*please note change > of time*). > > *Zoom Link*: > https://cmu.zoom.us/j/94514271581?pwd=cEpIN2ZiUGM2S2xrWG95aW1PTGZOdz09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Transforming frenemies to collaborators: automated and peer > feedback > > *Abstract: *As NLP and AI have made major advances, there has been a rise > of AI systems that seek to provide useful feedback to learners on complex, > open-ended tasks such as design projects, extended writing, and > presentations. In some circles, such systems are seen as competitive > alternatives to traditional teacher feedback or increasingly-popular peer > feedback systems. However, such automated feedback systems seem to be > struggling to be successful without time-consuming customization to each > specific task given to students. 
I will present the latest findings on why > and when peer feedback works, along with some experiences regarding the > relatively narrow role automated systems can currently play on their own. > The goal of the talk is to suggest new frontiers in AI-augmented peer > feedback systems. > > > *Bio*: Christian Schunn is Co-director of the Institute for Learning, > Senior Scientist at the Learning Research and Development Center and a > Professor of Intelligent Systems, Psychology, and Learning Sciences and > Policy at the University of Pittsburgh, and Chief Learning Scientist for > Peerceptiv, a local company offering a peer assessment system used > throughout the world. He obtained his Ph.D. at CMU in 1995. He directs > research and design projects in writing, science, mathematics, technology, > and engineering education. He has served on two National Academy of > Engineering committees, and he is a Fellow of AAAS, APA, APS, > and the International Society for Design & Development in Education. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Oct 6 11:34:19 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 6 Oct 2020 11:34:19 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 06 (Zoom) -- Christian Schunn -- Transforming frenemies to collaborators: automated and peer feedback -- AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: starting now.. On Mon, Oct 5, 2020 at 10:40 AM Aayush Bansal wrote: > Reminder...this is tomorrow from *11:30 A.M - 12:30 PM ET* (please note > change of time). > > > > On Wed, Sep 30, 2020 at 8:04 AM Aayush Bansal wrote: > >> Christian Schunn (University of Pittsburgh) will be giving an online >> seminar on "Transforming frenemies to collaborators: automated and peer >> feedback" from *11:30 A.M - 12:30 PM ET* on* Oct 06* (*please note >> change of time*). >> >> *Zoom Link*: >> https://cmu.zoom.us/j/94514271581?pwd=cEpIN2ZiUGM2S2xrWG95aW1PTGZOdz09 >> >> CMU AI Seminar is sponsored by Fortive. >> >> Following are the details of the talk: >> >> *Title: *Transforming frenemies to collaborators: automated and peer >> feedback >> >> *Abstract: *As NLP and AI have made major advances, there has been a >> rise of AI systems that seek to provide useful feedback to learners on >> complex, open-ended tasks such as design projects, extended writing, and >> presentations. In some circles, such systems are seen as competitive >> alternatives to traditional teacher feedback or increasingly-popular peer >> feedback systems. However, such automated feedback systems seem to be >> struggling to be successful without time-consuming customization to each >> specific task given to students. I will present the latest findings on why >> and when peer feedback works, along with some experiences regarding the >> relatively narrow role automated systems can currently play on their own. >> The goal of the talk is to suggest new frontiers in AI-augmented peer >> feedback systems. 
>> >> >> *Bio*: Christian Schunn is Co-director of the Institute for Learning, >> Senior Scientist at the Learning Research and Development Center and a >> Professor of Intelligent Systems, Psychology, and Learning Sciences and >> Policy at the University of Pittsburgh, and Chief Learning Scientist for >> Peerceptiv, a local company offering a peer assessment system used >> throughout the world. He obtained his Ph.D. at CMU in 1995. He directs >> research and design projects in writing, science, mathematics, technology, >> and engineering education. He has served on two National Academy of >> Engineering committees, and he is a Fellow of AAAS, APA, APS, >> and the International Society for Design & Development in Education. >> >> >> To learn more about the seminar series, please visit the website: >> http://www.cs.cmu.edu/~aiseminar/ >> >> -- >> Aayush Bansal >> http://www.cs.cmu.edu/~aayushb/ >> >> >> > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Oct 6 15:00:00 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 6 Oct 2020 15:00:00 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 13 (Zoom) -- Hyowon Gweon -- Learning from others, helping others learn: Cognitive foundations of distinctively human social learning -- AI seminar is sponsored by Fortive. Message-ID: Hyowon Gweon (Stanford University) will be giving an online seminar on "Learning from others, helping others learn: Cognitive foundations of distinctively human social learning" from 12:00 noon - 01:00 PM ET on Oct 13. *Zoom Link*: https://cmu.zoom.us/j/94110891366?pwd=clRvTHNrS3o3Tld5RUQ5UVN0dHNRdz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Learning from others, helping others learn: Cognitive foundations of distinctively human social learning *Abstract: *Learning does not occur in isolation. From parent-child interactions to formal classroom environments, humans explore, learn, and communicate in rich, diverse social contexts. Rather than simply observing and copying their conspecifics, humans engage in a range of epistemic practices that actively recruit those around them. What makes human social learning so distinctive, powerful, and smart? In this talk, I will present a series of studies that reveal the remarkably sophisticated inferential abilities that young children show not only in how they learn from others but also in how they help others learn. Children interact with others as learners and as teachers to learn and communicate about the world, about others, and even about the self. The results collectively paint a picture of human social learning that is far more than copying and imitation: It is active, bidirectional, and cooperative. I will end by discussing new efforts to understand what motivates humans to engage in these interactions, and implications for building better machines that learn from and interact with humans. *Bio*: Hyowon (Hyo) Gweon (she/her) is an Associate Professor in the Department of Psychology and Director of Graduate Studies for the Symbolic Systems Program at Stanford University. Hyo received her Ph.D. in Cognitive Science (2012) from MIT, where she continued as a post-doc before joining Stanford in 2014. Hyo is broadly interested in how humans learn from others and help others learn. 
Taking an interdisciplinary approach that combines developmental, computational, and neuroimaging methods, her research aims to explain the cognitive underpinnings of distinctively human learning, communication, and prosocial behaviors. She has been named as a Richard E. Guggenhime Faculty Scholar (2020) and a David Huntington Dean's Faculty Scholar (2019), and received the APS Janet Spence Award for Transformative Early Career Contributions (2020), Jacobs Early Career Fellowship (2020), James S. McDonnell Scholar Award for Human Cognition (2018), APA Dissertation Award (2014), and Marr Prize (best student paper, Cognitive Science Society 2010). To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Oct 12 14:12:15 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 12 Oct 2020 14:12:15 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 13 (Zoom) -- Hyowon Gweon -- Learning from others, helping others learn: Cognitive foundations of distinctively human social learning -- AI seminar is sponsored by Fortive. In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon (ET). On Tue, Oct 6, 2020 at 3:00 PM Aayush Bansal wrote: > Hyowon Gweon (Stanford University) will be giving an online seminar on "Learning > from others, helping others learn: Cognitive foundations of distinctively > human social learning" from 12:00 noon - 01:00 PM ET on Oct 13. > > *Zoom Link*: > https://cmu.zoom.us/j/94110891366?pwd=clRvTHNrS3o3Tld5RUQ5UVN0dHNRdz09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Learning from others, helping others learn: Cognitive > foundations of distinctively human social learning > > *Abstract: *Learning does not occur in isolation. From parent-child > interactions to formal classroom environments, humans explore, learn, and > communicate in rich, diverse social contexts. Rather than simply observing > and copying their conspecifics, humans engage in a range of epistemic > practices that actively recruit those around them. What makes human social > learning so distinctive, powerful, and smart? > > In this talk, I will present a series of studies that reveal the > remarkably sophisticated inferential abilities that young children show not > only in how they learn from others but also in how they help others learn. > Children interact with others as learners and as teachers to learn and > communicate about the world, about others, and even about the self. The > results collectively paint a picture of human social learning that is far > more than copying and imitation: It is active, bidirectional, and > cooperative. I will end by discussing new efforts to understand what > motivates humans to engage in these interactions, and implications for > building better machines that learn from and interact with humans. > > > *Bio*: Hyowon (Hyo) Gweon (she/her) is an Associate Professor in the > Department of Psychology and Director of Graduate Studies for the Symbolic > Systems Program at Stanford University. Hyo received her Ph.D. in Cognitive > Science (2012) from MIT, where she continued as a post-doc before joining > Stanford in 2014. Hyo is broadly interested in how humans learn from others > and help others learn. 
Taking an interdisciplinary approach that combines > developmental, computational, and neuroimaging methods, her research aims > to explain the cognitive underpinnings of distinctively human learning, > communication, and prosocial behaviors. She has been named as a Richard E. > Guggenhime Faculty Scholar (2020) and a David Huntington Dean's Faculty > Scholar (2019), and received the APS Janet Spence Award for Transformative > Early Career Contributions (2020), Jacobs Early Career Fellowship (2020), > James S. McDonnell Scholar Award for Human Cognition (2018), APA > Dissertation Award (2014), and Marr Prize (best student paper, Cognitive > Science Society 2010). > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Oct 14 10:32:39 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 14 Oct 2020 10:32:39 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 20 (Zoom) -- Tali Dekel -- Learning to Retime People in Videos -- AI seminar is sponsored by Fortive Message-ID: Tali Dekel (Google/Weizmann Institute) will be giving an online seminar on "Learning to Retime People in Videos" from 12:00 noon - 01:00 PM ET on Oct 20. *Zoom Link*: https://cmu.zoom.us/j/98676579132?pwd=QldaWFlVLzdseS9IWE5JdkJKWDBWQT09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Learning to Retime People in Videos *Abstract: *By changing the speed of frames, or the speed of objects, we can enhance the way we perceive events or actions in videos. In this talk, I will present two of my recent works on retiming videos, and more specifically, manipulating the timings of people's actions. 1) "SpeedNet" (CVPR 2020 oral): a method for adaptively speeding up videos based on their content, allowing us to gracefully watch videos faster while avoiding jerky and unnatural motions. 2) "Layered Neural Rendering for Retiming People" (SIGGRAPH Asia): a method for speeding up, slowing down, or entirely freezing certain people in videos, while automatically re-rendering properly all the scene elements that are related to those people, like shadows, reflections, and loose clothing. Both methods are based on novel deep neural networks that learn concepts of natural motion and scene decomposition just by observing ordinary videos, without requiring any manual labels. I'll show adaptively sped-up videos of sports, of boring family events (that all of us want to watch faster), and I'll demonstrate various retiming effects of people dancing, groups running, and kids jumping on trampolines. *Bio*: Tali Dekel is a Senior Research Scientist at Google, Cambridge MA, developing algorithms at the intersection of computer vision, computer graphics, and machine learning. She also just joined the Mathematics and Computer Science Department at the Weizmann Institute, Israel, as a faculty member (Assistant Professor). Before Google, she was a Postdoctoral Associate at the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT. Tali completed her Ph.D. studies at the school of electrical engineering, Tel-Aviv University, Israel. Her research interests include computational photography, image/video synthesis, geometry and 3D reconstruction. 
Her awards and honors include the National Postdoctoral Award for Advancing Women in Science (2014), the Rothschild Postdoctoral Fellowship (2015), the SAMSON - Prime Minister's Researcher Recruitment Prize (2019), Best Paper Honorable Mention in CVPR 2019, and Best Paper Award (Marr Prize) in ICCV 2019. She served as the workshop co-chair for CVPR 2020. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Oct 19 12:00:57 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 19 Oct 2020 12:00:57 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 20 (Zoom) -- Tali Dekel -- Learning to Retime People in Videos -- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon. On Wed, Oct 14, 2020 at 10:32 AM Aayush Bansal wrote: > Tali Dekel (Google/Weizmann Institute) will be giving an online seminar on > "Learning to Retime People in Videos" from 12:00 noon - 01:00 PM > ET on Oct 20. > > *Zoom Link*: > https://cmu.zoom.us/j/98676579132?pwd=QldaWFlVLzdseS9IWE5JdkJKWDBWQT09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Learning to Retime People in Videos > > *Abstract: *By changing the speed of frames, or the speed of objects, we > can enhance the way we perceive events or actions in videos. In this talk, > I will present two of my recent works on retiming videos, and more > specifically, manipulating the timings of people's actions. 1) "SpeedNet" > (CVPR 2020 oral): a method for adaptively speeding up videos based on their > content, allowing us to gracefully watch videos faster while avoiding jerky > and unnatural motions. 2) "Layered Neural Rendering for Retiming People" > (SIGGRAPH Asia): a method for speeding up, slowing down, or entirely > freezing certain people in videos, while automatically re-rendering > properly all the scene elements that are related to those people, like > shadows, reflections, and loose clothing. Both methods are based on novel > deep neural networks that learn concepts of natural motion and scene > decomposition just by observing ordinary videos, without requiring any > manual labels. I'll show adaptively sped-up videos of sports, of boring > family events (that all of us want to watch faster), and I'll demonstrate > various retiming effects of people dancing, groups running, and kids > jumping on trampolines. > > *Bio*: Tali Dekel is a Senior Research Scientist at Google, Cambridge MA, > developing algorithms at the intersection of computer vision, computer > graphics, and machine learning. She also just joined the Mathematics and > Computer Science Department at the Weizmann Institute, Israel, as a faculty > member (Assistant Professor). Before Google, she was a Postdoctoral > Associate at the Computer Science and Artificial Intelligence Lab (CSAIL) > at MIT. Tali completed her Ph.D. studies at the school of electrical > engineering, Tel-Aviv University, Israel. Her research interests include > computational photography, image/video synthesis, geometry and 3D > reconstruction. 
Her awards and honors include the National Postdoctoral > Award for Advancing Women in Science (2014), the Rothschild Postdoctoral > Fellowship (2015), the SAMSON - Prime Minster's Researcher Recruitment > Prize (2019), Best Paper Honorable Mention in CVPR 2019, and Best Paper > Award (Marr Prize) in ICCV 2019. She served as the workshop co-chair for > CVPR 2020. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Oct 21 10:19:41 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 21 Oct 2020 10:19:41 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 27 (Zoom) -- Devi Parikh -- Multi-modality, Creativity, and Climate Change -- AI seminar is sponsored by Fortive Message-ID: Devi Parikh (Georgia Tech/Facebook AI) will be giving an online seminar on "Multi-modality, Creativity, and Climate Change" from 12:00 noon - 01:00 PM ET on Oct 27. *Zoom Link*: https://cmu.zoom.us/j/94225196187?pwd=ckZnNlNhWHcxWm1wRGM0b0QzMlpMZz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Multi-modality, Creativity, and Climate Change *Abstract: *In this talk, I will talk about three directions I am currently excited about: (1) Multi-modal AI: I will describe some of our recent work on training models for multi-modal (vision and language) data. In particular, I will describe our work on training a single transformer-based model that can perform 12 different tasks. Given an image and a question, it can answer the question. Given an image and a caption, it can score their relevance. Given an image and a phrase, it can identify the image region that matches the phrase. And so on. I will show a demo of this model vilbert.cloudcv.org. For anyone interested in multimodal AI, check out this open-source multimodal-framework https://github.com/facebookresearch/mmf. (2) AI-assisted human-creativity: I will then talk about some of our initial work in seeing how AI can inspire human creativity in the context of thematic typography, dance movements, sketches, and generative art. I will also talk about some of our work on generating a visual abstraction that summarizes how your day was. (3) AI to model and discover new catalysts: Finally, I will talk about our recently announced Open Catalyst Project on using AI to model and discover new catalysts to address the energy challenges posed by climate change. I will provide a brief overview of the domain and problem definition, introduce the large Open Catalyst Dataset we have collected and made publicly available, and benchmark a few existing graph neural network models. Find out more about the project here: https://opencatalystproject.org/. *Bio*: Devi Parikh is an Associate Professor in the School of Interactive Computing at Georgia Tech, and a Research Scientist at Facebook AI Research (FAIR). >From 2013 to 2016, she was an Assistant Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. From 2009 to 2012, she was a Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC), an academic computer science institute affiliated with University of Chicago. She has held visiting positions at Cornell University, University of Texas at Austin, Microsoft Research, MIT, Carnegie Mellon University, and Facebook AI Research. She received her M.S. and Ph.D. 
degrees from the Electrical and Computer Engineering department at Carnegie Mellon University in 2007 and 2009 respectively. She received her B.S. in Electrical and Computer Engineering from Rowan University in 2005. Her research interests are in computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity. She is a recipient of an NSF CAREER award, an IJCAI Computers and Thought award, a Sloan Research Fellowship, an Office of Naval Research (ONR) Young Investigator Program (YIP) award, an Army Research Office (ARO) Young Investigator Program (YIP) award, a Sigma Xi Young Faculty Award at Georgia Tech, an Allen Distinguished Investigator Award in Artificial Intelligence from the Paul G. Allen Family Foundation, four Google Faculty Research Awards, an Amazon Academic Research Award, a Lockheed Martin Inspirational Young Faculty Award at Georgia Tech, an Outstanding New Assistant Professor award from the College of Engineering at Virginia Tech, a Rowan University Medal of Excellence for Alumni Achievement, Rowan University's 40 under 40 recognition, a Forbes' list of 20 "Incredible Women Advancing A.I. Research" recognition, and a Marr Best Paper Prize awarded at the International Conference on Computer Vision (ICCV). https://www.cc.gatech.edu/~parikh To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Oct 26 08:56:56 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 26 Oct 2020 08:56:56 -0400 Subject: [AI Seminar] Online AI Seminar on Oct 27 (Zoom) -- Devi Parikh -- Multi-modality, Creativity, and Climate Change -- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon (ET). On Wed, Oct 21, 2020 at 10:19 AM Aayush Bansal wrote: > Devi Parikh (Georgia Tech/Facebook AI) will be giving an online seminar on > "Multi-modality, Creativity, and Climate Change" from 12:00 noon - 01:00 > PM ET on Oct 27. > > *Zoom Link*: > https://cmu.zoom.us/j/94225196187?pwd=ckZnNlNhWHcxWm1wRGM0b0QzMlpMZz09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Multi-modality, Creativity, and Climate Change > > *Abstract: *In this talk, I will talk about three directions I am > currently excited about: > > (1) Multi-modal AI: I will describe some of our recent work on training > models for multi-modal (vision and language) data. In particular, I will > describe our work on training a single transformer-based model that can > perform 12 different tasks. Given an image and a question, it can answer > the question. Given an image and a caption, it can score their relevance. > Given an image and a phrase, it can identify the image region that matches > the phrase. And so on. I will show a demo of this model > vilbert.cloudcv.org. For anyone interested in multimodal AI, check out > this open-source multimodal-framework > https://github.com/facebookresearch/mmf. > > (2) AI-assisted human-creativity: I will then talk about some of our > initial work in seeing how AI can inspire human creativity in the context > of thematic typography, dance movements, sketches, and generative art. I > will also talk about some of our work on generating a visual abstraction > that summarizes how your day was. 
> > (3) AI to model and discover new catalysts: Finally, I will talk about our > recently announced Open Catalyst Project on using AI to model and discover > new catalysts to address the energy challenges posed by climate change. I > will provide a brief overview of the domain and problem definition, > introduce the large Open Catalyst Dataset we have collected and made > publicly available, and benchmark a few existing graph neural network > models. Find out more about the project here: > https://opencatalystproject.org/. > > *Bio*: Devi Parikh is an Associate Professor in the School of Interactive > Computing at Georgia Tech, and a Research Scientist at Facebook AI Research > (FAIR). > > From 2013 to 2016, she was an Assistant Professor in the Bradley > Department of Electrical and Computer Engineering at Virginia Tech. From > 2009 to 2012, she was a Research Assistant Professor at Toyota > Technological Institute at Chicago (TTIC), an academic computer science > institute affiliated with University of Chicago. She has held visiting > positions at Cornell University, University of Texas at Austin, Microsoft > Research, MIT, Carnegie Mellon University, and Facebook AI Research. She > received her M.S. and Ph.D. degrees from the Electrical and Computer > Engineering department at Carnegie Mellon University in 2007 and 2009 > respectively. She received her B.S. in Electrical and Computer Engineering > from Rowan University in 2005. > > Her research interests are in computer vision, natural language > processing, embodied AI, human-AI collaboration, and AI for creativity. > > She is a recipient of an NSF CAREER award, an IJCAI Computers and Thought > award, a Sloan Research Fellowship, an Office of Naval Research (ONR) Young > Investigator Program (YIP) award, an Army Research Office (ARO) Young > Investigator Program (YIP) award, a Sigma Xi Young Faculty Award at Georgia > Tech, an Allen Distinguished Investigator Award in Artificial Intelligence > from the Paul G. Allen Family Foundation, four Google Faculty Research > Awards, an Amazon Academic Research Award, a Lockheed Martin Inspirational > Young Faculty Award at Georgia Tech, an Outstanding New Assistant Professor > award from the College of Engineering at Virginia Tech, a Rowan University > Medal of Excellence for Alumni Achievement, Rowan University's 40 under 40 > recognition, a Forbes' list of 20 "Incredible Women Advancing A.I. > Research" recognition, and a Marr Best Paper Prize awarded at the > International Conference on Computer Vision (ICCV). > > https://www.cc.gatech.edu/~parikh > > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Oct 28 08:05:02 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 28 Oct 2020 08:05:02 -0400 Subject: [AI Seminar] Online AI Seminar on Nov 03 (Zoom) -- Aaron Courville -- Emerging and preserving compositional structure through iterated learning -- AI seminar is sponsored by Fortive Message-ID: Aaron Courville (University of Montreal) will be giving an online seminar on "Emerging and preserving compositional structure through iterated learning" from 12:00 noon - 01:00 PM ET on Nov 03. *Zoom Link*: https://cmu.zoom.us/j/99054050077?pwd=bjQ5OXN1Z09sQ3pVcTJDMVJBbjJkQT09 CMU AI Seminar is sponsored by Fortive. 
Following are the details of the talk: *Title: *Emerging and preserving compositional structure through iterated learning *Abstract: *Iterated learning is a theory of how the compositional structure of human language emerged. The theory holds that intergenerational language transfer creates learning bottlenecks that privilege compositional structure. Recent work in the machine learning community has shown that the iterated learning mechanism can also promote compositional structure in language emergence in communication between neural-network-based AI agents. In this talk, I will describe our recent efforts at putting iterated learning to work in applications of Neural Module Networks to simple Visual Question Answering and in self-play training scenarios with dialogue models. We find that applying iterated learning to the generation of the program that specifies the assembly of distinct neural network modules leads to higher accuracy in program prediction and supports systematic generalization to testing question templates that are not in the training set. In the context of self-play training of dialogue agents, we find the surprising result that iterated learning can mitigate language drift. *Bio:* Aaron Courville is an Associate Professor in the Department of Computer Science and Operations Research at the Universite de Montreal. He received his PhD from the Robotics Institute, Carnegie Mellon University. He is one of the early contributors to Deep Learning, and is a founding member of Mila and a fellow of the CIFAR program on Learning in Machines and Brains. Together with Ian Goodfellow and Yoshua Bengio, he co-wrote the seminal textbook on Deep Learning. His current research interests focus on the development of DL models and methods. He is particularly interested in deep generative models and multimodal ML with applications such as computer vision and natural language processing. Aaron holds a CIFAR Canadian AI chair and his research is supported in part by Microsoft Research, Samsung, Hitachi, and a Google Focused Research Award. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Nov 2 10:12:37 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 2 Nov 2020 10:12:37 -0500 Subject: [AI Seminar] Online AI Seminar on Nov 03 (Zoom) -- Aaron Courville -- Emerging and preserving compositional structure through iterated learning -- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon. On Wed, Oct 28, 2020 at 8:05 AM Aayush Bansal wrote: > Aaron Courville (University of Montreal) will be giving an > online seminar on "Emerging and preserving compositional structure > through iterated learning" from 12:00 noon - 01:00 PM ET on Nov 03. > > *Zoom Link*: > https://cmu.zoom.us/j/99054050077?pwd=bjQ5OXN1Z09sQ3pVcTJDMVJBbjJkQT09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Emerging and preserving compositional structure through iterated > learning > > *Abstract: *Iterated learning is a theory of how the compositional > structure of human language emerged. The theory holds that > intergenerational language transfer creates learning bottlenecks that > privilege compositional structure. 
Recent work in the machine learning > community has shown that the iterated learning mechanism can also promote > compositional structure in language emergence in communication between > neural-network-base AI-agents. In this talk, I will describe our recent > efforts at putting iterated learning to work in applications of Neural > Module Network to simple Visual Question Answering and in self-play > training scenarios with dialogue models. We find that applying iterated > learning to the generation of the program that specifies the assembly of > distinct neural network modules leads to higher accuracy in program > prediction and supports systematic generalization to testing question > templates that are not in the training set. In the context of self-play > training of dialogue agents, we find the surprising result that iterated > learning can mitigate language drift. > > > *Bio:* Aaron Courville is an Associate Professor in the Department of > Computer Science and Operations Research at the Universite de Montreal. He > received his PhD from the Robotics Institute, Carnegie Mellon University. > He is one of the early contributors to Deep Learning, and is a founding > member of Mila and a fellow of the CIFAR program on Learning in Machines > and Brains. Together with Ian Goodfellow and Yoshua Bengio, he co-wrote the > seminal textbook on Deep Learning. His current research interests focus on > the development of DL models and methods. He is particularly interested in > deep generative model and multimodal ML with applications such as computer > vision and natural language processing. Aaron holds a CIFAR Canadian AI > chair and his research is supported in part by Microsoft Research, Samsung, > Hitachi, and a Google Focussed Research Award. > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Thu Nov 5 09:53:10 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Thu, 5 Nov 2020 09:53:10 -0500 Subject: [AI Seminar] Online AI Seminar on Nov 10 (Zoom) -- Oriol Vinyals -- Model-free vs Model-based Reinforcement Learning -- AI seminar is sponsored by Fortive Message-ID: Oriol Vinyals (Google DeepMind) will be giving an online seminar on "Model-free vs Model-based Reinforcement Learning" from 12:00 noon - 01:00 PM ET on Nov 10. *Zoom Link*: https://cmu.zoom.us/j/95812775909?pwd=ZlBNbDdpbkNvVzFGZ0ZHbm03VVdBZz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Model-free vs Model-based Reinforcement Learning *Abstract: *In this talk, we will review model-free and model-based RL, two paradigms that have enabled global breakthroughs in AI research. This research included the ability to defeat professionals at the games of Go, Poker, StarCraft, or DOTA, and in other fields such as Robotics. Using the examples of the AlphaGo and AlphaStar agents, I'll present two approaches from these paradigms in RL and will conclude the talk by presenting some exciting new research directions that may unlock the power of model-based RL in a wider variety of environments, including stochastic, partial observable, with complex observation and action spaces. *Bio:* Oriol Vinyals is a Principal Scientist at Google DeepMind and a team lead of the Deep Learning group. His work focuses on Deep Learning and Artificial Intelligence. 
Prior to joining DeepMind, Oriol was part of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley, and is a recipient of the 2016 MIT TR35 innovator award. His research has been featured multiple times at the New York Times, Financial Times, WIRED, BBC, etc., and his articles have been cited over 90000 times. Some of his contributions such as seq2seq, knowledge distillation, or TensorFlow are used in Google Translate, Text-To-Speech, and Speech recognition, serving billions of queries every day, and he was the lead researcher of the AlphaStar project, creating an agent that defeated a top professional at the game of StarCraft, achieving Grandmaster level, also featured as the cover of Nature. At DeepMind he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, deep learning, and reinforcement learning. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Nov 9 07:23:05 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 9 Nov 2020 07:23:05 -0500 Subject: [AI Seminar] Online AI Seminar on Nov 10 (Zoom) -- Oriol Vinyals -- Model-free vs Model-based Reinforcement Learning -- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon. On Thu, Nov 5, 2020 at 9:53 AM Aayush Bansal wrote: > Oriol Vinyals (Google DeepMind) will be giving an online seminar on > "Model-free vs Model-based Reinforcement Learning" from 12:00 noon - 01:00 > PM ET on Nov 10. > > *Zoom Link*: > https://cmu.zoom.us/j/95812775909?pwd=ZlBNbDdpbkNvVzFGZ0ZHbm03VVdBZz09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Model-free vs Model-based Reinforcement Learning > > *Abstract: *In this talk, we will review model-free and model-based RL, > two paradigms that have enabled global breakthroughs in AI research. This > research included the ability to defeat professionals at the games of Go, > Poker, StarCraft, or DOTA, and in other fields such as Robotics. Using the > examples of the AlphaGo and AlphaStar agents, I'll present two approaches > from these paradigms in RL and will conclude the talk by presenting some > exciting new research directions that may unlock the power of model-based > RL in a wider variety of environments, including stochastic, partial > observable, with complex observation and action spaces. > > > *Bio:* Oriol Vinyals is a Principal Scientist at Google DeepMind and a > team lead of the Deep Learning group. His work focuses on Deep Learning and > Artificial Intelligence. Prior to joining DeepMind, Oriol was part of the > Google Brain team. He holds a Ph.D. in EECS from the University of > California, Berkeley, and is a recipient of the 2016 MIT TR35 innovator > award. His research has been featured multiple times at the New York Times, > Financial Times, WIRED, BBC, etc., and his articles have been cited over > 90000 times. 
Some of his contributions such as seq2seq, knowledge > distillation, or TensorFlow are used in Google Translate, Text-To-Speech, > and Speech recognition, serving billions of queries every day, and he was > the lead researcher of the AlphaStar project, creating an agent that > defeated a top professional at the game of StarCraft, achieving Grandmaster > level, also featured as the cover of Nature. At DeepMind he continues > working on his areas of interest, which include artificial intelligence, > with particular emphasis on machine learning, deep learning, and > reinforcement learning. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Nov 11 16:54:48 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 11 Nov 2020 16:54:48 -0500 Subject: [AI Seminar] Online AI Seminar on Nov 17 (Zoom) -- Hany Farid -- The Accuracy, Fairness, and Limits of Predicting Recidivism -- AI seminar is sponsored by Fortive Message-ID: Hany Farid (Berkeley) will be giving an online seminar on "The Accuracy, Fairness, and Limits of Predicting Recidivism" from 12:00 noon - 01:00 PM ET on Nov 17. *Zoom Link*: https://cmu.zoom.us/j/97150006469?pwd=TjVsTXpQTXgrWWVza1pFUmYreXVxdz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *The Accuracy, Fairness, and Limits of Predicting Recidivism *Abstract: *Predictive algorithms are commonly used in the criminal justice system. These predictions are used in pretrial, parole, and sentencing decisions. Proponents of these systems argue that big data and advanced machine learning make these predictions more accurate and less biased than humans. Opponents, however, argue that predictive algorithms may lead to further bias in the criminal justice system. I will discuss an in-depth analysis of one widely used commercial predictive algorithm to determine its appropriateness for use in our courts. *Bio:* Hany Farid is a Professor at the University of California, Berkeley with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information. His research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, his M.S. in Computer Science from SUNY Albany, and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019. He is the recipient of an Alfred P. Sloan Fellowship, a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Nov 16 11:59:26 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 16 Nov 2020 11:59:26 -0500 Subject: [AI Seminar] Online AI Seminar on Nov 17 (Zoom) -- Hany Farid -- The Accuracy, Fairness, and Limits of Predicting Recidivism -- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon. 
On Wed, Nov 11, 2020 at 4:54 PM Aayush Bansal wrote: > Hany Farid (Berkeley) will be giving an online seminar on "The Accuracy, > Fairness, and Limits of Predicting Recidivism" from 12:00 noon - 01:00 PM > ET on Nov 17. > > *Zoom Link*: > https://cmu.zoom.us/j/97150006469?pwd=TjVsTXpQTXgrWWVza1pFUmYreXVxdz09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *The Accuracy, Fairness, and Limits of Predicting Recidivism > > *Abstract: *Predictive algorithms are commonly used in the criminal > justice system. These predictions are used in pretrial, parole, and > sentencing decisions. Proponents of these systems argue that big data and > advanced machine learning make these predictions more accurate and less > biased than humans. Opponents, however, argue that predictive algorithms > may lead to further bias in the criminal justice system. I will discuss an > in-depth analysis of one widely used commercial predictive algorithm to > determine its appropriateness for use in our courts. > > > *Bio:* Hany Farid is a Professor at the University of California, > Berkeley with a joint appointment in Electrical Engineering & Computer > Sciences and the School of Information. His research focuses on digital > forensics, image analysis, and human perception. He received his > undergraduate degree in Computer Science and Applied Mathematics from the > University of Rochester in 1989, his M.S. in Computer Science from SUNY > Albany, and his Ph.D. in Computer Science from the University of > Pennsylvania in 1997. Following a two-year post-doctoral fellowship in > Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth > College in 1999 where he remained until 2019. He is the recipient of an > Alfred P. Sloan Fellowship, a John Simon Guggenheim Fellowship, and is a > Fellow of the National Academy of Inventors. > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Nov 17 12:20:12 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 17 Nov 2020 12:20:12 -0500 Subject: [AI Seminar] Online AI Seminar on Nov 24 (Zoom) -- Charles Blundell -- Memory and Reinforcement Learning -- AI seminar is sponsored by Fortive Message-ID: Charles Blundell (Google DeepMind) will be giving an online seminar on "Memory and Reinforcement Learning" from 12:00 noon - 01:00 PM ET on Nov 24. *Zoom Link*: https://cmu.zoom.us/j/91248740598?pwd=V1M2L09xRlR6V3gvK2VVNERzOHRuQT09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title: *Memory and Reinforcement Learning *Abstract: *Deep Reinforcement Learning has seen many recent celebrated successes. In this talk, we shall examine the role of an agent's past on how the agent performs. We will explore different ways in which agents can be augmented with memory to improve their performance. In particular, we show what kind of tasks require memory, how rich allocentric memories can be created from egocentric experience, and how memory can significantly advance the state of the art in exploration. *Bio:* Charles is a senior staff research scientist at DeepMind, leading a team of fellow researchers working on deep learning, probabilistic modeling, neuroscience, and reinforcement learning. He holds a Ph.D. in machine learning at the Gatsby Unit, University College London. 
To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Nov 23 10:32:23 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 23 Nov 2020 10:32:23 -0500 Subject: [AI Seminar] Online AI Seminar on Nov 24 (Zoom) -- Charles Blundell -- Memory and Reinforcement Learning -- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon. On Tue, Nov 17, 2020 at 12:20 PM Aayush Bansal wrote: > Charles Blundell (Google DeepMind) will be giving an online seminar on "Memory > and Reinforcement Learning" from 12:00 noon - 01:00 PM ET on Nov 24. > > *Zoom Link*: > https://cmu.zoom.us/j/91248740598?pwd=V1M2L09xRlR6V3gvK2VVNERzOHRuQT09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title: *Memory and Reinforcement Learning > > *Abstract: *Deep Reinforcement Learning has seen many recent celebrated > successes. In this talk, we shall examine the role of an agent's past on > how the agent performs. We will explore different ways in which agents can > be augmented with memory to improve their performance. In particular, we > show what kind of tasks require memory, how rich allocentric memories can > be created from egocentric experience, and how memory can significantly > advance the state of the art in exploration. > > > *Bio:* Charles is a senior staff research scientist at DeepMind, leading > a team of fellow researchers working on deep learning, probabilistic > modeling, neuroscience, and reinforcement learning. He holds a Ph.D. in > machine learning at the Gatsby Unit, University College London. > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Nov 25 13:46:18 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 25 Nov 2020 13:46:18 -0500 Subject: [AI Seminar] Online AI Seminar on Dec 01 (Zoom) -- Jia Deng-- Optimization Inspired Deep Architectures for Multiview 3D -- AI seminar is sponsored by Fortive Message-ID: Jia Deng (Princeton) will be giving an online seminar on "Optimization Inspired Deep Architectures for Multiview 3D" from 12:00 noon - 01:00 PM ET on Dec 01. *Zoom Link*: https://cmu.zoom.us/j/94237215126?pwd=VWRLb05tM1p1UU1zV3lvZHN2WE5XQT09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title*: Optimization Inspired Deep Architectures for Multiview 3D *Abstract*: Multiview 3D has traditionally been approached as continuous optimization: the solution is produced by an algorithm that solves an optimization problem over continuous variables (camera pose, 3D points, motion) to maximize the satisfaction of known constraints from multiview geometry. In contrast, deep learning offers an alternative strategy where the solution is produced by a general-purpose network with learned weights. In this talk, I will present some recent work using a hybrid approach that takes the best of both worlds. In particular, I will present several new deep architectures inspired by classical optimization-based algorithms. 
These architectures have substantially improved the state of the art of a range of tasks including optical flow, scene flow, and depth estimation. As an aside, I will also discuss how to perform numerically stable backpropagation on 3D transformation groups, needed for end-to-end training of such architectures. *Bio*: Jia Deng is an Assistant Professor of Computer Science at Princeton University. His research focus is on computer vision and machine learning. He received his Ph.D. from Princeton University and his B.Eng. from Tsinghua University, both in computer science. He has received a number of awards including the Sloan Research Fellowship, the NSF CAREER award, the ONR Young Investigator award, an ICCV Marr Prize, and two ECCV Best Paper Awards. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Wed Nov 25 14:02:55 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Wed, 25 Nov 2020 14:02:55 -0500 Subject: [AI Seminar] Special AI Seminar on Dec 02 (Zoom) -- Vincent Conitzer -- Automated Mechanism Design for Strategic Classification -- AI seminar is sponsored by Fortive Message-ID: Vincent Conitzer (Duke University) will be giving a special online seminar on "Automated Mechanism Design for Strategic Classification" from *03:00 PM - 04:00 PM ET on Dec 02*. *Zoom Link*: https://cmu.zoom.us/j/92891008267?pwd=dXFvSVk3Z1pUa056WERobkN3N010UT09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title*: Automated Mechanism Design for Strategic Classification *Abstract*: AI is increasingly making decisions, not only for us, but also about us -- from whether we are invited for an interview, to whether we are proposed as a match for someone looking for a date, to whether we are released on bail. Often, we have some control over the information that is available to the algorithm; we can self-report some information, and other information we can choose to withhold. This creates a potential circularity: the classifier used, mapping submitted information to outcomes, depends on the (training) data that people provide, but the (test) data depend on the classifier, because people will reveal their information strategically to obtain a more favorable outcome. This setting is not adversarial, but it is also not fully cooperative. Mechanism design provides a framework for making good decisions based on strategically reported information, and it is commonly applied to the design of auctions and matching mechanisms. However, the setting above is unlike these common applications, because in it, preferences tend to be similar across agents, but agents are restricted in what they can report. This creates both new challenges and new opportunities. I will discuss both our theoretical work and our initial experiments. (joint work with Hanrui Zhang, Andrew Kephart, Yu Cheng, Anilesh Krishnaswamy, Haoming Li, and David Rein.) *Bio: *Vincent Conitzer is the Kimberly J. Jenkins University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University. Conitzer works on artificial intelligence (AI). 
Much of his work has focused on AI and game theory, for example designing algorithms for the optimal strategic placement of defensive resources. More recently, he has started to work on AI and ethics: how should we determine the objectives that AI systems pursue, when these objectives have complex effects on various stakeholders? Conitzer has received the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC). To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Nov 30 08:24:37 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 30 Nov 2020 08:24:37 -0500 Subject: [AI Seminar] Online AI Seminar on Dec 01 (Zoom) -- Jia Deng-- Optimization Inspired Deep Architectures for Multiview 3D -- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon E.T. FYI -- this talk will not be recorded. On Wed, Nov 25, 2020 at 1:46 PM Aayush Bansal wrote: > Jia Deng (Princeton) will be giving an online seminar on "Optimization > Inspired Deep Architectures for Multiview 3D" from 12:00 noon - 01:00 PM > ET on Dec 01. > > *Zoom Link*: > https://cmu.zoom.us/j/94237215126?pwd=VWRLb05tM1p1UU1zV3lvZHN2WE5XQT09 > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title*: Optimization Inspired Deep Architectures for Multiview 3D > > *Abstract*: Multiview 3D has traditionally been approached as continuous > optimization: the solution is produced by an algorithm that solves an > optimization problem over continuous variables (camera pose, 3D points, > motion) to maximize the satisfaction of known constraints from multiview > geometry. In contrast, deep learning offers an alternative strategy where > the solution is produced by a general-purpose network with learned weights. > In this talk, I will present some recent work using a hybrid approach that > takes the best of both worlds. In particular, I will present several new > deep architectures inspired by classical optimization-based algorithms. > These architectures have substantially improved the state of the art of a > range of tasks including optical flow, scene flow, and depth estimation. As > an aside, I will also discuss how to perform numerically stable > backpropagation on 3D transformation groups, needed for end-to-end training > of such architectures. > > *Bio*: Jia Deng is an Assistant Professor of Computer Science at > Princeton University. His research focus is on computer vision and machine > learning. He received his Ph.D. from Princeton University and his B.Eng. > from Tsinghua University, both in computer science. 
He has received a > number of awards including the Sloan Research Fellowship, the NSF CAREER > award, the ONR Young Investigator award, an ICCV Marr Prize, and two ECCV > Best Paper Awards. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Fri Dec 4 07:57:25 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Fri, 4 Dec 2020 07:57:25 -0500 Subject: [AI Seminar] Online AI Seminar on Dec 08 (Zoom) -- Matthias Niessner -- Why Neural Rendering is getting more amazing every day! -- AI seminar is sponsored by Fortive Message-ID: Matthias Niessner (TUM) will be giving an online seminar on "Why Neural Rendering is getting more amazing every day!" from 12:00 noon - 01:00 PM ET on Dec 08. *Zoom Link*: https://cmu.zoom.us/j/95663250946?pwd=TkMvd0FrdTN2aVlJZ0NhK2tQY0Mxdz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title*: Why Neural Rendering is getting more amazing every day! *Abstract*: In this talk, I will present my research vision of how to create a photo-realistic digital replica of the real world, and how to make holograms become a reality. Eventually, I would like to see photos and videos evolve to become interactive, holographic content indistinguishable from the real world. Imagine taking such 3D photos to share with friends, family, or social media; the ability to fully record historical moments for future generations; or to provide content for upcoming augmented and virtual reality applications. AI-based approaches, such as generative neural networks, are becoming more and more popular in this context since they have the potential to transform existing image synthesis pipelines. I will specifically talk about an avenue towards neural rendering where we can retain the full control of a traditional graphics pipeline but at the same time exploit modern capabilities of deep learning, such as handling the imperfections of content from commodity 3D scans. While the capture and photo-realistic synthesis of imagery open up unbelievable possibilities for applications ranging from entertainment to communication industries, there are also important ethical considerations that must be kept in mind. Specifically, in the context of fabricated news (e.g., fake news), it is critical to highlight and understand digitally-manipulated content. I believe that media forensics plays an important role in this area, both from an academic standpoint to better understand image and video manipulation, but even more importantly from a societal standpoint to create and raise awareness around the possibilities and moreover, to highlight potential avenues and solutions regarding trust of digital content. *Bio*: Dr. Matthias Nießner is a Professor at the Technical University of Munich where he leads the Visual Computing Lab. Before, he was a Visiting Assistant Professor at Stanford University. Prof. Nießner's research lies at the intersection of computer vision, graphics, and machine learning, where he is particularly interested in cutting-edge techniques for 3D reconstruction, semantic 3D scene understanding, video editing, and AI-driven video synthesis. 
In total, he has published over 70 academic publications, including 22 papers at the prestigious ACM Transactions on Graphics (SIGGRAPH / SIGGRAPH Asia) journal and 26 works at the leading vision conferences (CVPR, ECCV, ICCV); several of these works won best paper awards, including at SIGCHI'14, HPG'15, SPG'18, and the SIGGRAPH'16 Emerging Technologies Award for the best Live Demo. Prof. Nießner's work enjoys wide media coverage, with many articles featured in mainstream media including the New York Times, Wall Street Journal, Spiegel, MIT Technology Review, and many more, and his work has led to several TV appearances such as on Jimmy Kimmel Live, where Prof. Nießner demonstrated the popular Face2Face technique; Prof. Nießner's academic YouTube channel currently has over 5 million views. For his work, Prof. Nießner received several awards: he is a TUM-IAS Rudolph Moessbauer Fellow (2017 - ongoing), he won the Google Faculty Award for Machine Perception (2017), the Nvidia Professor Partnership Award (2018), as well as the prestigious ERC Starting Grant 2018 which comes with 1,500,000 Euro in research funding; in 2019, he received the Eurographics Young Researcher Award honoring the best upcoming graphics researcher in Europe. In addition to his academic impact, Prof. Nießner is a co-founder and director of Synthesia Inc., a brand-new startup backed by Mark Cuban, whose aim is to empower storytellers with cutting-edge AI-driven video synthesis. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Dec 7 11:02:16 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 7 Dec 2020 11:02:16 -0500 Subject: [AI Seminar] Online AI Seminar on Dec 08 (Zoom) -- Matthias Niessner -- Why Neural Rendering is getting more amazing every day! -- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon (ET). On Fri, Dec 4, 2020 at 7:57 AM Aayush Bansal wrote: > Matthias Niessner (TUM) will be giving an online seminar on "Why Neural > Rendering is getting more amazing every day!" from 12:00 noon - 01:00 PM > ET on Dec 08. > > *Zoom Link*: > https://cmu.zoom.us/j/95663250946?pwd=TkMvd0FrdTN2aVlJZ0NhK2tQY0Mxdz09 > > > CMU AI Seminar is sponsored by Fortive. > > Following are the details of the talk: > > *Title*: Why Neural Rendering is getting more amazing every day! > > *Abstract*: In this talk, I will present my research vision of how to > create a photo-realistic digital replica of the real world, and how to make > holograms become a reality. Eventually, I would like to see photos and > videos evolve to become interactive, holographic content indistinguishable > from the real world. Imagine taking such 3D photos to share with friends, > family, or social media; the ability to fully record historical moments for > future generations; or to provide content for upcoming augmented and > virtual reality applications. AI-based approaches, such as generative > neural networks, are becoming more and more popular in this context since > they have the potential to transform existing image synthesis pipelines. 
I > will specifically talk about an avenue towards neural rendering where we > can retain the full control of a traditional graphics pipeline but at the > same time exploit modern capabilities of deep learning, such as handling > the imperfections of content from commodity 3D scans. > > While the capture and photo-realistic synthesis of imagery open up > unbelievable possibilities for applications ranging from entertainment to > communication industries, there are also important ethical considerations > that must be kept in mind. Specifically, in the content of fabricated news > (e.g., fake-news), it is critical to highlight and understand > digitally-manipulated content. I believe that media forensics plays an > important role in this area, both from an academic standpoint to better > understand image and video manipulation, but even more importantly from a > societal standpoint to create and raise awareness around the possibilities > and moreover, to highlight potential avenues and solutions regarding trust > of digital content. > > *Bio*: Dr. Matthias Nie?ner is a Professor at the Technical University of > Munich where he leads the Visual Computing Lab. Before, he was a Visiting > Assistant Professor at Stanford University. Prof. Nie?ner?s research lies > at the intersection of computer vision, graphics, and machine learning, > where he is particularly interested in cutting-edge techniques for 3D > reconstruction, semantic 3D scene understanding, video editing, and > AI-driven video synthesis. In total, he has published over 70 academic > publications, including 22 papers at the prestigious ACM Transactions on > Graphics (SIGGRAPH / SIGGRAPH Asia) journal and 26 works at the leading > vision conferences (CVPR, ECCV, ICCV); several of these works won best > paper awards, including at SIGCHI?14, HPG?15, SPG?18, and the SIGGRAPH?16 > Emerging Technologies Award for the best Live Demo. > > > Prof. Nie?ner?s work enjoys wide media coverage, with many articles > featured in main-stream media including the New York Times, Wall Street > Journal, Spiegel, MIT Technological Review, and many more, and his was work > led to several TV appearances such as on Jimmy Kimmel Live, where Prof. > Nie?ner demonstrated the popular Face2Face technique; Prof. Nie?ner?s > academic Youtube channel currently has over 5 million views. > > For his work, Prof. Nie?ner received several awards: he is a TUM-IAS > Rudolph Moessbauer Fellow (2017 ? ongoing), he won the Google Faculty Award > for Machine Perception (2017), the Nvidia Professor Partnership Award > (2018), as well as the prestigious ERC Starting Grant 2018 which comes with > 1.500.000 Euro in research funding; in 2019, he received the Eurographics > Young Researcher Award honoring the best upcoming graphics researcher in > Europe. In addition to his academic impact, Prof. Nie?ner is a co-founder > and director of Synthesia Inc., a brand-new startup backed by Marc Cuban, > whose aim is to empower storytellers with cutting-edge AI-driven video > synthesis. > > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aayushb at cs.cmu.edu Thu Dec 10 10:11:26 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Thu, 10 Dec 2020 10:11:26 -0500 Subject: [AI Seminar] Online AI Seminar on Dec 15 (Zoom) -- Michael Arcaro -- Topographic constraints on visual development-- AI seminar is sponsored by Fortive Message-ID: Michael Arcaro (UPenn) will be giving an online seminar on "Topographic constraints on visual development " from 12:00 noon - 01:00 PM ET on Dec 15. *Zoom Link*: https://cmu.zoom.us/j/99938770592?pwd=d3RhMmgrY3hCOEd3a3VRWHRLcDd5Zz09 CMU AI Seminar is sponsored by Fortive. Following are the details of the talk: *Title*: Topographic constraints on visual development *Abstract*: We are remarkably good at recognizing objects and faces in our environment, even after just a brief glimpse. How do we develop the neural circuitry that supports such robust perception? The biological importance of faces for social primates and the stereotyped location of face-selective brain regions across individuals has engendered the idea that face regions are innate neural structures. I will present data challenging this view, where face regions in monkeys were not present at birth but instead emerged in stereotyped locations within the first few postnatal months. Indeed, experience appears to be necessary for the formation of these specialized regions: Monkeys raised without exposure to faces did not develop face regions. But if specialized regions require experience, why do they emerge in such stereotyped locations? At birth, a series of hierarchically organized retinotopic maps, in which adjacent neurons represent adjacent points in visual space, are present throughout the visual system. These retinotopic maps carry with them selectivity biases for low-level features commonly found in faces and are predictive of where face regions will emerge later in development. These findings reveal that experience-driven changes are anchored to the intrinsic topographic architecture of visual cortex, establishing a framework for understanding how neural representations come to support visual perception. *Bio*: Michael Arcaro received his PhD at Princeton working with Drs. Sabine Kastner and Uri Hasson on organizing principles of the adult human and macaque visual system. He went on to do a postdoc with Dr. Margaret Livingstone at Harvard Medical School studying visual development in baby macaque monkeys. He recently moved to UPenn and setup his own lab studying how intrinsic and experience-driven processes interact through development to shape brain organization and behavior. To learn more about the seminar series, please visit the website: http://www.cs.cmu.edu/~aiseminar/ -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Mon Dec 14 13:05:47 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Mon, 14 Dec 2020 13:05:47 -0500 Subject: [AI Seminar] Online AI Seminar on Dec 15 (Zoom) -- Michael Arcaro -- Topographic constraints on visual development-- AI seminar is sponsored by Fortive In-Reply-To: References: Message-ID: Reminder...this is tomorrow at noon (ET). On Thu, Dec 10, 2020 at 10:11 AM Aayush Bansal wrote: > Michael Arcaro (UPenn) will be giving an online seminar on "Topographic > constraints on visual development " from 12:00 noon - 01:00 PM ET on Dec > 15. > > *Zoom Link*: > https://cmu.zoom.us/j/99938770592?pwd=d3RhMmgrY3hCOEd3a3VRWHRLcDd5Zz09 > > > CMU AI Seminar is sponsored by Fortive. 
> > Following are the details of the talk: > > *Title*: Topographic constraints on visual development > > *Abstract*: We are remarkably good at recognizing objects and faces in > our environment, even after just a brief glimpse. How do we develop the > neural circuitry that supports such robust perception? The biological > importance of faces for social primates and the stereotyped location of > face-selective brain regions across individuals has engendered the idea > that face regions are innate neural structures. I will present data > challenging this view, where face regions in monkeys were not present at > birth but instead emerged in stereotyped locations within the first few > postnatal months. Indeed, experience appears to be necessary for the > formation of these specialized regions: Monkeys raised without exposure to > faces did not develop face regions. But if specialized regions require > experience, why do they emerge in such stereotyped locations? At birth, a > series of hierarchically organized retinotopic maps, in which adjacent > neurons represent adjacent points in visual space, are present throughout > the visual system. These retinotopic maps carry with them selectivity > biases for low-level features commonly found in faces and are predictive of > where face regions will emerge later in development. These findings reveal > that experience-driven changes are anchored to the intrinsic topographic > architecture of visual cortex, establishing a framework for understanding > how neural representations come to support visual perception. > > > *Bio*: Michael Arcaro received his PhD at Princeton working with Drs. > Sabine Kastner and Uri Hasson on organizing principles of the adult human > and macaque visual system. He went on to do a postdoc with Dr. Margaret > Livingstone at Harvard Medical School studying visual development in baby > macaque monkeys. He recently moved to UPenn and setup his own lab studying > how intrinsic and experience-driven processes interact through development > to shape brain organization and behavior. > > To learn more about the seminar series, please visit the website: > http://www.cs.cmu.edu/~aiseminar/ > > > -- > Aayush Bansal > http://www.cs.cmu.edu/~aayushb/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aayushb at cs.cmu.edu Tue Dec 15 15:04:29 2020 From: aayushb at cs.cmu.edu (Aayush Bansal) Date: Tue, 15 Dec 2020 15:04:29 -0500 Subject: [AI Seminar] Recorded Talks Message-ID: Hi All, We have uploaded the recordings of AI seminars (whenever allowed by a speaker) on the webpage -- http://www.cs.cmu.edu/~aiseminar/. We have no seminars during the winter break and hopefully will resume with the next semester. signing off, Aayush -- Aayush Bansal http://www.cs.cmu.edu/~aayushb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: