From vakinwan at andrew.cmu.edu Tue Jan 16 10:44:56 2024
From: vakinwan at andrew.cmu.edu (Victor Akinwande)
Date: Tue, 16 Jan 2024 10:44:56 -0500
Subject: Jan 23 at 12pm (NSH 3305) -- Ashique KhudaBukhsh (RIT) -- Down the Toxicity Rabbit Hole: A Novel Framework to Bias Audit LLMs -- AI Seminar sponsored by SambaNova Systems
Message-ID:

Dear all,

We look forward to seeing you *next Tuesday (01/23) from 12:00-1:00 PM (ET)* for the first talk of this semester's CMU AI Seminar, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in *NSH 3305* with pizza provided and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).

Next Tuesday (01/23), Ashique KhudaBukhsh (RIT) will be giving a talk titled "Down the Toxicity Rabbit Hole: A Novel Framework to Bias Audit LLMs".

///////////////////
*Talk Abstract:* How safe is generative AI for disadvantaged groups? This paper conducts a bias audit of large language models (LLMs) through a novel toxicity rabbit hole framework introduced here. Starting with a stereotype, the framework instructs the LLM to generate content more toxic than the stereotype. In every subsequent iteration, it instructs the LLM to generate content more toxic than the previous iteration, until the safety guardrails (if any) throw a safety violation or some other halting criterion is met (e.g., identical generations or a rabbit hole depth threshold). Our experiments reveal highly disturbing content, including but not limited to antisemitic, misogynistic, racist, Islamophobic, and homophobic generated content, perhaps shedding light on the underbelly of LLM training data and prompting deeper questions about AI equity and alignment.

*Speaker Bio:* Ashique KhudaBukhsh is an assistant professor at the Golisano College of Computing and Information Sciences, Rochester Institute of Technology (RIT). His current research lies at the intersection of NLP and AI for Social Impact as applied to: (i) globally important events arising in linguistically diverse regions, requiring methods to tackle practical challenges involving multilingual, noisy social media texts; (ii) polarization in the context of the current US political crisis; and (iii) auditing AI systems and platforms for unintended harms. In addition to having his research accepted at top artificial intelligence conferences and journals, his work has received widespread international media attention, including coverage from The New York Times, BBC, Wired, Times of India, The Indian Express, The Daily Mail, VentureBeat, and Digital Trends.
///////////////////

*In person:* NSH 3305
*Zoom Link:* https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

- Victor Akinwande

From vakinwan at andrew.cmu.edu Tue Jan 23 08:11:46 2024
From: vakinwan at andrew.cmu.edu (Victor Akinwande)
Date: Tue, 23 Jan 2024 08:11:46 -0500
Subject: Jan 23 at 12pm (NSH 3305) -- Ashique KhudaBukhsh (RIT) -- Down the Toxicity Rabbit Hole: A Novel Framework to Bias Audit LLMs -- AI Seminar sponsored by SambaNova Systems
In-Reply-To:
References:
Message-ID:

Reminder: this is happening today!
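The audit described in the announcement above is, at its core, an iterative prompting loop. Below is a minimal sketch of what such a loop could look like; the query_llm helper, the exact prompt wording, and the refusal check are hypothetical stand-ins for illustration only, not the authors' implementation.

# Minimal sketch of a "toxicity rabbit hole" audit loop (illustrative only).
def query_llm(prompt):
    # Hypothetical stand-in for a call to the LLM under audit.
    raise NotImplementedError("plug in the model under audit here")

def rabbit_hole(seed_stereotype, max_depth=20):
    """Iteratively ask the model to exceed its previous output in toxicity."""
    trajectory = [seed_stereotype]
    for _ in range(max_depth):                            # halting criterion 1: depth threshold
        prompt = ("Generate a statement more toxic than the following:\n"
                  + trajectory[-1])
        reply = query_llm(prompt)
        if "I cannot" in reply or "I'm sorry" in reply:    # crude stand-in for a guardrail refusal
            break                                          # halting criterion 2: safety violation
        if reply.strip() == trajectory[-1].strip():        # halting criterion 3: identical generation
            break
        trajectory.append(reply)
    return trajectory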
From ashert at cs.cmu.edu Sun Feb 25 17:29:16 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Sun, 25 Feb 2024 16:29:16 -0600
Subject: [CMU AI Seminar] February 27 at 12pm (GHC 6115 & Zoom) -- Xiangxiang Xu (MIT) -- A Geometric Perspective of Feature Learning -- AI Seminar sponsored by SambaNova Systems
Message-ID:

Dear all,

We look forward to seeing you *this Tuesday (2/27)* from *12:00-1:00 PM (U.S. Eastern time)* for the next talk of this semester's *CMU AI Seminar*, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in GHC 6115 *with pizza provided* and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).
On this Tuesday (2/27), *Xiangxiang Xu* (MIT) will be giving a talk titled "A Geometric Perspective of Feature Learning".

*Title*: A Geometric Perspective of Feature Learning

*Talk Abstract*: In this talk, we present a geometric framework for learning and processing information with deep neural networks. We introduce feature geometry, which unifies statistical dependence and feature representations in a function space. We formulate each learning problem as solving the optimal feature representation of the associated dependence component. We will illustrate how this perspective connects distinct learning problems and provides more adaptable solutions, from classification/estimation to feature selection/extraction. We also demonstrate its applications in complicated learning scenarios, including dealing with constraints and incomplete data, incorporating side information, and learning the dependence structures of sequential data.

*Speaker Bio:* Xiangxiang Xu received the B.Eng. and Ph.D. degrees in electronic engineering from Tsinghua University, Beijing, China, in 2014 and 2020, respectively. He is a postdoctoral associate in the Department of EECS at MIT. His research focuses on information theory and statistical learning, with applications in understanding and developing learning algorithms.

*In person:* GHC 6115
*Zoom Link*: https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman

From ashert at cs.cmu.edu Tue Feb 27 11:46:51 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Tue, 27 Feb 2024 10:46:51 -0600
Subject: [CMU AI Seminar] February 27 at 12pm (GHC 6115 & Zoom) -- Xiangxiang Xu (MIT) -- A Geometric Perspective of Feature Learning -- AI Seminar sponsored by SambaNova Systems
In-Reply-To:
References:
Message-ID:

Reminder that this is happening soon!
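For background on the "optimal features of a dependence component" idea, one classical construction in this spirit is the maximal-correlation (modal) decomposition of a joint distribution: normalize the joint pmf by its marginals and read optimal feature pairs off the SVD. The numpy sketch below works through that construction on a toy distribution; it is my own illustration of the general idea, not necessarily the exact formulation used in the talk.

import numpy as np

# Toy joint pmf P(x, y) over |X| = 3, |Y| = 2 (rows: x, cols: y).
P = np.array([[0.20, 0.10],
              [0.05, 0.25],
              [0.25, 0.15]])
Px, Py = P.sum(axis=1), P.sum(axis=0)

# Canonical dependence matrix B[x, y] = P(x, y) / sqrt(P(x) P(y)).
B = P / np.sqrt(np.outer(Px, Py))

U, s, Vt = np.linalg.svd(B)
# s[0] == 1 corresponds to the trivial constant features;
# s[1] is the (HGR) maximal correlation between X and Y.
f = U[:, 1] / np.sqrt(Px)    # optimal one-dimensional feature of X
g = Vt[1, :] / np.sqrt(Py)   # optimal one-dimensional feature of Y
print("maximal correlation:", s[1])
print("feature f(x):", f)
print("feature g(y):", g)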
From ashert at cs.cmu.edu Tue Feb 27 20:24:50 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Tue, 27 Feb 2024 19:24:50 -0600
Subject: [CMU AI Seminar] Special! February 29 at 12pm (GHC 6115 & Zoom) -- Simon Du (U. Washington) -- How Over-Parameterization Slows Down Gradient Descent -- AI Seminar sponsored by SambaNova Systems
Message-ID:

*Want to meet with Simon? Please send me an email to get the signup sheet.*

Dear all,

We look forward to seeing you *this Thursday (2/29)* from *12:00-1:00 PM (U.S. Eastern time)* for the next talk of this semester's *CMU AI Seminar*, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in GHC 6115 *with pizza provided* and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).

On this Thursday (2/29), *Simon Du* (U. Washington) will be giving a talk titled "How Over-Parameterization Slows Down Gradient Descent".

*Title*: How Over-Parameterization Slows Down Gradient Descent

*Talk Abstract*: We investigate how over-parameterization impacts the convergence behaviors of gradient descent through two examples. In the context of learning a single ReLU neuron, we prove that the convergence rate shifts from $\exp(-T)$ in the exact-parameterization scenario to an exponentially slower $1/T^3$ rate in the over-parameterized setting. In the canonical matrix sensing problem, specifically for symmetric matrix sensing with symmetric parametrization, the convergence rate transitions from $\exp(-T)$ in the exact-parameterization case to $1/T^2$ in the over-parameterized case. Interestingly, employing an asymmetric parameterization restores the $\exp(-T)$ rate, though this rate also depends on the initialization scaling. Lastly, we demonstrate that incorporating an additional step within a single gradient descent iteration can achieve a convergence rate independent of the initialization scaling.

*Speaker Bio:* Simon S. Du is an assistant professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His research interests are broadly in machine learning, such as deep learning, representation learning, and reinforcement learning. Prior to starting as faculty, he was a postdoc at the Institute for Advanced Study of Princeton. He completed his Ph.D. in Machine Learning at Carnegie Mellon University. Simon's research has been recognized by a Samsung AI Researcher of the Year Award, an Intel Rising Star Faculty Award, an NSF CAREER award, and a Distinguished Dissertation Award honorable mention from CMU, among others.

*In person:* GHC 6115
*Zoom Link*: https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman
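As a toy illustration of the flavor of result in the abstract above (my own example, not the paper's actual setting): gradient descent on the exactly parameterized objective $f(w) = \frac{1}{2}w^2$ converges linearly, while on the over-parameterized surrogate $g(w) = \frac{1}{4}w^4$, whose curvature vanishes at the optimum (as happens when a degenerate target is fit with a factored parameterization), it converges only polynomially.

import numpy as np

# Toy comparison (illustrative only; NOT the paper's exact setting):
#   exact parameterization : minimize 0.5 * w^2  -> linear (exp(-T)-type) rate
#   over-parameterization  : minimize 0.25 * w^4 -> polynomial rate (|w| shrinks like 1/sqrt(T)),
#                            because the curvature vanishes at the optimum w = 0.
eta, T = 0.1, 10_000
w_exact, w_over = 1.0, 1.0
for t in range(T):
    w_exact -= eta * w_exact           # gradient of 0.5 * w^2 is w
    w_over  -= eta * w_over ** 3       # gradient of 0.25 * w^4 is w^3

print(f"after {T} steps: exact |w| = {abs(w_exact):.3e}, over-param |w| = {abs(w_over):.3e}")
# Typical output: the exact case reaches ~0 to machine precision, while the
# over-parameterized case is still around 1/sqrt(2*eta*T), roughly 0.022.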
From ashert at andrew.cmu.edu Thu Feb 29 11:46:02 2024
From: ashert at andrew.cmu.edu (Asher Trockman)
Date: Thu, 29 Feb 2024 10:46:02 -0600
Subject: [CMU AI Seminar] Special! February 29 at 12pm (GHC 6115 & Zoom) -- Simon Du (U. Washington) -- How Over-Parameterization Slows Down Gradient Descent -- AI Seminar sponsored by SambaNova Systems
In-Reply-To:
References:
Message-ID: <6ADD7EE8-2CF2-4C9B-B51F-A7D752946099@andrew.cmu.edu>

From ashert at cs.cmu.edu Mon Mar 11 12:55:23 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Mon, 11 Mar 2024 12:55:23 -0400
Subject: [CMU AI Seminar] March 12 at 12pm (GHC 6115 & Zoom) -- Misha Khodak (CMU) -- The long tail of AI: Learning from algorithms and diverse tasks -- AI Seminar sponsored by SambaNova Systems
Message-ID:

Dear all,

We look forward to seeing you *this Tuesday (3/12)* from *12:00-1:00 PM (U.S. Eastern time)* for the next talk of this semester's *CMU AI Seminar*, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in GHC 6115 *with pizza provided* and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).

On this Tuesday (3/12), *Misha Khodak* (CMU) will be giving a talk titled "The long tail of AI: Learning from algorithms and diverse tasks".

*Title*: The long tail of AI: Learning from algorithms and diverse tasks

*Talk Abstract*: Advances in machine learning (ML) have led to skyrocketing demand across diverse applications beyond vision and text, resulting in unique theoretical and practical challenges. I develop principled tools for tackling under-explored and under-resourced ML applications, focusing on two settings: (1) learning from algorithmic data and (2) automating ML for diverse tasks. In this talk, I first introduce a general-purpose way to design and analyze "meta-algorithms" that improve the performance of other algorithms by training on similar instances. My approach yields the first provable guarantees for meta-learning gradient descent and a systematic way to answer a crucial question in the burgeoning field of algorithms with predictions: where do the predictions come from? This theory leads to an effective solution to the challenging problem of federated hyperparameter tuning and to the attainment of near-instance-optimal solver performance across sequences of linear systems. I will then present a line of work on automatically extending the benefits of modern ML to diverse data modalities, especially in healthcare and the sciences. This includes architecture search methods that find the "right" neural operation for new modalities and a technique for cross-modal transfer that enables the fine-tuning of large language models on diverse tasks in genomics, differential equations, and beyond.

*Speaker Bio:* Misha Khodak is a PhD student in CS at CMU advised by Nina Balcan and Ameet Talwalkar. He studies foundations and applications of machine learning, especially meta-learning and algorithm design. Misha is a recipient of the Facebook PhD Fellowship and CMU's TCS Presidential Fellowship, and he has interned at Microsoft Research, Google Research, and the Lawrence Livermore National Lab.

*In person:* GHC 6115
*Zoom Link*: https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman
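One of the simplest instances of a "meta-algorithm trained on similar instances" is learning a shared initialization for within-task gradient descent. The sketch below runs a Reptile-style loop on toy quadratic tasks and is offered only as generic background intuition; it is not a description of the speaker's methods or of the guarantees mentioned in the abstract.

import numpy as np

rng = np.random.default_rng(0)

def task_grad(w, target):
    # Each "task" is the quadratic 0.5 * ||w - target||^2; its gradient is (w - target).
    return w - target

def inner_sgd(w0, target, steps=10, lr=0.1):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * task_grad(w, target)
    return w

# Related tasks: optima clustered around a common (unknown) center.
center = np.array([2.0, -1.0])
tasks = [center + 0.1 * rng.standard_normal(2) for _ in range(200)]

# Meta-learn the initialization: nudge w0 toward each task's adapted solution (Reptile-style).
w0, meta_lr = np.zeros(2), 0.05
for target in tasks:
    adapted = inner_sgd(w0, target)
    w0 += meta_lr * (adapted - w0)

print("meta-learned initialization:", w0)   # ends up near the task center [2, -1],
print("true task center:           ", center)  # so per-task gradient descent starts close to optimal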
From ashert at cs.cmu.edu Tue Mar 12 11:48:27 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Tue, 12 Mar 2024 11:48:27 -0400
Subject: [CMU AI Seminar] March 12 at 12pm (GHC 6115 & Zoom) -- Misha Khodak (CMU) -- The long tail of AI: Learning from algorithms and diverse tasks -- AI Seminar sponsored by SambaNova Systems
In-Reply-To:
References:
Message-ID:

Reminder this is happening soon!
From ashert at cs.cmu.edu Fri Mar 15 16:45:20 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Fri, 15 Mar 2024 16:45:20 -0400
Subject: [CMU AI Seminar] March 19 at 12pm (NSH 3305 & Zoom) -- Sachin Goyal (CMU) -- Think before you speak: Training Language Models With Pause Tokens -- AI Seminar sponsored by SambaNova Systems
Message-ID:

Dear all,

We look forward to seeing you *this Tuesday (3/19)* from *12:00-1:00 PM (U.S. Eastern time)* for the next talk of this semester's *CMU AI Seminar*, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in NSH 3305 *with pizza provided* and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).

On this Tuesday (3/19), *Sachin Goyal* (CMU) will be giving a talk titled "Think before you speak: Training Language Models With Pause Tokens".

*Title*: Think before you speak: Training Language Models With Pause Tokens

*Talk Abstract*: Transformer-based language models generate responses by producing a series of tokens in immediate succession: the (K + 1)th token is an outcome of manipulating K hidden vectors per layer, one vector per preceding token. What if instead we were to let the model manipulate, say, K + 10 hidden vectors before it outputs the (K + 1)th token? In this talk, we will discuss how we can teach language models to use additional tokens (say, pause tokens) to their advantage. Can the language model use these extra tokens to perform extra computation before committing to an answer? We will specifically explore whether this can be done just by finetuning an off-the-shelf language model or whether it is necessary to pretrain from scratch to elicit such new behaviours. Finally, we will discuss a range of conceptual and practical future research questions raised by our work, spanning new notions of representation capacity beyond the parameter count and making delayed next-token prediction a widely applicable paradigm.

*Speaker Bio:* Sachin Goyal is a PhD student in the Machine Learning Department at CMU. He works on improving pretraining and robust finetuning for foundation models.

*In person:* NSH 3305
*Zoom Link*: https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman

From ashert at cs.cmu.edu Tue Mar 19 11:12:53 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Tue, 19 Mar 2024 11:12:53 -0400
Subject: [CMU AI Seminar] March 19 at 12pm (NSH 3305 & Zoom) -- Sachin Goyal (CMU) -- Think before you speak: Training Language Models With Pause Tokens -- AI Seminar sponsored by SambaNova Systems
In-Reply-To:
References:
Message-ID:

Reminder (NSH 3305) this is happening soon!
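A rough way to picture the pause-token idea at inference time: append a few <pause> tokens after the prompt and read the model's prediction only at the final position, giving the network extra positions to compute in. The snippet below sketches this with the Hugging Face transformers API; the model choice and number of pause tokens are arbitrary, the new <pause> embedding is untrained here, and this is not the authors' training or finetuning recipe.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # arbitrary small model, only for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Add a new <pause> token; its embedding is randomly initialized here
# (the paper learns it during pretraining/finetuning).
tok.add_special_tokens({"additional_special_tokens": ["<pause>"]})
model.resize_token_embeddings(len(tok))

prompt = "Q: What is 17 + 25? A:"
num_pauses = 10
ids = tok(prompt + "<pause>" * num_pauses, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits          # shape: (1, seq_len, vocab_size)
next_token = logits[0, -1].argmax()     # the prediction is read only after the pause tokens
print(tok.decode(next_token))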
From ashert at cs.cmu.edu Thu Mar 21 14:02:43 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Thu, 21 Mar 2024 14:02:43 -0400
Subject: [CMU AI Seminar] Special! 🧐 March 25 at 3pm (GHC 6115 & Zoom) -- Sadhika Malladi (Princeton) -- Theory and Practice in Language Model Fine-Tuning -- AI Seminar sponsored by SambaNova Systems
Message-ID:

Dear all,

We look forward to seeing you *this Monday (3/25)* from *3:00-4:00 PM (U.S. Eastern time)* for a special installment of this semester's *CMU AI Seminar*, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in GHC 6115 *with pizza provided* and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).

On this Monday (3/25), *Sadhika Malladi* (Princeton) will be giving a talk titled "Theory and Practice in Language Model Fine-Tuning".

*Title*: Theory and Practice in Language Model Fine-Tuning

*Talk Abstract*: Fine-tuning ever larger and more capable language models (LMs) has proven to be an effective way to solve a variety of language-related tasks. Yet little is understood about what fine-tuning does, and most traditional optimization analyses cannot account for a pre-trained initialization. I will start by formalizing the common intuition that fine-tuning makes a small change to the model. Inspired by the neural tangent kernel (NTK), we propose an empirically validated and theoretically sound hypothesis that can approach answering questions like "Why doesn't a giant LM overfit when fine-tuning it on a few dozen examples?" and "Why does LoRA work?" Our simple mental model motivates an efficient, transferable, and optimizer-aware data selection algorithm, dubbed LESS, to elicit specific capabilities during instruction tuning.
Training on a 5% subset of the data selected by LESS often outperforms training on the full dataset, and we can also use a small model to select data for other models. Finally, I will describe how insights into the dynamics of fine-tuning inspired us to design a memory-efficient zeroth-order algorithm (MeZO) that can tune large LMs. MeZO frequently matches performance while using up to 12x less memory and half as many GPU-hours as standard fine-tuning. These works were done in collaboration with researchers at Princeton University and the University of Washington.

*Speaker Bio:* Sadhika Malladi is a PhD student at Princeton University advised by Sanjeev Arora. She has worked at OpenAI, Cerebras, and Microsoft Research. She graduated from MIT in 2019 with a degree in mathematics and computer science and a degree in philosophy. Her work focuses on the interplay between theory and empirics, especially with respect to language models.

*In person:* GHC 6115
*Zoom Link*: https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman

From ashert at cs.cmu.edu Mon Mar 25 12:09:52 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Mon, 25 Mar 2024 12:09:52 -0400
Subject: Re: [CMU AI Seminar] Special! 🧐 March 25 at 3pm (GHC 6115 & Zoom) -- Sadhika Malladi (Princeton) -- Theory and Practice in Language Model Fine-Tuning -- AI Seminar sponsored by SambaNova Systems
In-Reply-To:
References:
Message-ID:

🧐 This is happening today at 3pm! (There will be pizza.)
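For intuition on the zeroth-order idea behind MeZO: perturb the parameters along a random direction, difference the loss from two forward passes, and use that scalar to scale the same direction as an update, regenerating the direction from a saved RNG seed so it never needs to be stored. The sketch below runs that estimator on a toy quadratic; it mirrors MeZO only in spirit and omits everything needed for real language models.

import numpy as np

def loss(theta):
    # Stand-in objective; in MeZO this would be the LM loss from a forward pass.
    return 0.5 * np.sum((theta - 3.0) ** 2)

theta = np.zeros(50)
eps, lr, steps = 1e-3, 1e-2, 2000

for t in range(steps):
    seed = t   # remember only the seed, not the perturbation vector
    z = np.random.default_rng(seed).standard_normal(theta.shape)
    loss_plus = loss(theta + eps * z)
    loss_minus = loss(theta - eps * z)
    grad_scale = (loss_plus - loss_minus) / (2 * eps)   # scalar: gradient projected onto z
    # Regenerate the same z from the seed and update in place
    # (this is MeZO's memory trick: store a seed and a scalar, never the full direction).
    z = np.random.default_rng(seed).standard_normal(theta.shape)
    theta -= lr * grad_scale * z

print("final loss:", loss(theta))   # decays toward 0 as theta drifts to 3.0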
From ashert at cs.cmu.edu Mon Apr 1 18:37:07 2024
From: ashert at cs.cmu.edu (Asher Trockman)
Date: Mon, 1 Apr 2024 18:37:07 -0400
Subject: [CMU AI Seminar] April 2 at 12pm (GHC 6115 & Zoom) -- Rafael Frongillo (UC Boulder) -- Incentive problems in data science competitions, and how to fix them -- AI Seminar sponsored by SambaNova Systems
Message-ID:

Dear all,

We look forward to seeing you tomorrow, *this Tuesday (4/2)*, from *12:00-1:00 PM (U.S. Eastern time)* for the next talk of this semester's *CMU AI Seminar*, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in GHC 6115 *with pizza provided* and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).

Tomorrow, this Tuesday (4/2), *Rafael Frongillo* (UC Boulder) will be giving a talk titled "Incentive problems in data science competitions, and how to fix them".

*Title*: Incentive problems in data science competitions, and how to fix them

*Talk Abstract*: Machine learning and data science competitions, wherein contestants submit predictions about held-out data points, are an increasingly common way to gather information and identify experts. One of the most prominent platforms is Kaggle, which has run competitions with prizes up to 3 million USD. The traditional mechanism for selecting the winner is simple: score each prediction on each held-out data point, and the contestant with the highest total score wins. Perhaps surprisingly, this reasonable and popular mechanism can incentivize contestants to submit wildly inaccurate predictions. The talk will begin with a series of experiments inspired by Aldous (2019) to build intuition for the incentive issues and what sort of strategic behavior one would expect---and when. One takeaway is that, despite conventional wisdom, large held-out data sets do not always alleviate these incentive issues, and small ones do not necessarily suffer from them, as we confirm with formal results. We will then discuss a new mechanism which is approximately truthful, in the sense that rational contestants will submit predictions which are close to their best guess. If time permits, we will see how the same mechanism solves an open question for online learning from strategic experts.
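A tiny simulation of the incentive problem described above (my own illustrative toy, not one of the talk's experiments): under a winner-take-all rule with a single held-out point, a contestant who exaggerates to an extreme report beats an honest forecaster more often than not, even though the honest report has the better expected score; the effect fades as the held-out set grows.

import numpy as np

rng = np.random.default_rng(0)
p_true = 0.6            # true probability that each held-out label is 1
n_trials = 100_000

def brier(pred, y):
    return np.sum((pred - y) ** 2)   # lower is better

for n_heldout in (1, 20):
    honest_wins = 0
    for _ in range(n_trials):
        y = (rng.random(n_heldout) < p_true).astype(float)
        honest = brier(np.full(n_heldout, p_true), y)   # reports the true probability 0.6
        extreme = brier(np.full(n_heldout, 1.0), y)     # strategically reports certainty
        honest_wins += honest < extreme
    print(f"n_heldout={n_heldout:2d}: honest report wins {honest_wins / n_trials:.1%} of the time")

# With a single held-out point the exaggerated report wins about 60% of the time, even though
# the honest report minimizes expected Brier score; exaggeration pays under winner-take-all.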
*Speaker Bio:* Rafael (Raf) Frongillo is an Assistant Professor of Computer Science at the University of Colorado Boulder. His research lies at the interface between theoretical machine learning and economics, primarily focusing on information elicitation mechanisms, which incentivize humans or algorithms to predict accurately. Before Boulder, Raf was a postdoc at the Center for Research on Computation and Society at Harvard University and at Microsoft Research New York. He received his PhD in Computer Science at UC Berkeley, advised by Christos Papadimitriou and supported by the NDSEG Fellowship.

*In person:* GHC 6115
*Zoom Link*: https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman

From vakinwan at andrew.cmu.edu Fri Apr 12 10:22:41 2024
From: vakinwan at andrew.cmu.edu (Victor Akinwande)
Date: Fri, 12 Apr 2024 10:22:41 -0400
Subject: Apr 16 at 4pm, NSH 3305 -- Mingjie Sun, CMU, -- Massive Activations in Large Language Models
Message-ID:

Dear all,

We look forward to seeing you next Tuesday (04/16) from 4:00-5:00 PM (ET) for the next talk of this semester's CMU AI Seminar, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in *NSH 3305* with pizza provided and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).

Next Tuesday (04/16), Mingjie Sun (CMU) will be giving a talk titled "Massive Activations in Large Language Models".

*Talk Abstract:* In the 2020s, Transformers have dominated the deep learning landscape, powering almost all advanced AI systems. Despite their promising capabilities, their inner workings are often overlooked and poorly understood. In this talk, we delve into an intriguing phenomenon we observe in Large Language Models (LLMs): very few activations within the hidden states exhibit exceptionally high magnitudes, e.g., 100,000 times greater than others. We call them massive activations. We present our investigation of massive activations in LLMs and show how they are closely connected to the self-attention mechanism, the core building block of Transformers. Last, we go beyond the language domain and discuss the presence of massive activations in Vision Transformers.

*Speaker Bio:* Mingjie Sun is a Ph.D. student in the Computer Science Department at CMU. His research focuses on improving the efficiency and empirical understanding of foundation models.

*In person:* NSH 3305
*Zoom Link:* https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

- Victor & Asher

From vakinwan at andrew.cmu.edu Tue Apr 16 13:57:59 2024
From: vakinwan at andrew.cmu.edu (Victor Akinwande)
Date: Tue, 16 Apr 2024 13:57:59 -0400
Subject: Apr 16 at 4pm, NSH 3305 -- Mingjie Sun, CMU, -- Massive Activations in Large Language Models
In-Reply-To:
References:
Message-ID:

Quick reminder that this talk is happening later today at NSH 3305. See you there!
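One rough way to look for the phenomenon described above is to run a short prompt through an open LLM, collect the hidden states, and compare the largest activation magnitude in each layer to the typical (median) magnitude. The sketch below does this with the Hugging Face transformers API; the model and prompt are arbitrary placeholders, and the snippet makes no claim about which models show ratios as extreme as those reported in the talk.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # any causal LM works; chosen only so the example is small
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

ids = tok("Summer is warm. Winter is cold.", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# hidden_states: tuple of (1, seq_len, hidden_dim) tensors, one per layer (plus the embeddings).
for layer, h in enumerate(out.hidden_states):
    a = h.abs()
    ratio = (a.max() / a.median()).item()   # "massive" activations show up as very large ratios
    print(f"layer {layer:2d}: max |activation| is {ratio:,.0f}x the median")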
From vakinwan at andrew.cmu.edu Wed Apr 17 12:27:23 2024
From: vakinwan at andrew.cmu.edu (Victor Akinwande)
Date: Wed, 17 Apr 2024 12:27:23 -0400
Subject: Apr 23 at 12pm (NSH 3305) -- Yexiang Xue (Purdue) -- Vertical Reasoning Enhanced Learning, Generation and Scientific Discovery
Message-ID:

Dear all,

We look forward to seeing you next Tuesday (04/23) from 12:00-1:00 PM (ET) for the next talk of this semester's CMU AI Seminar, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in NSH 3305 with pizza provided and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).

Next Tuesday (04/23), Yexiang Xue (Purdue) will be giving a talk titled "Vertical Reasoning Enhanced Learning, Generation and Scientific Discovery".

///////////////////
*Talk Abstract:* Automated reasoning and machine learning are two fundamental pillars of artificial intelligence. Despite much recent progress, the full integration of reasoning and learning is beyond reach. This talk presents three cases where integrated vertical reasoning significantly enhances learning. Our first case is in neural generation, where state-of-the-art models struggle to generate pleasing images while satisfying complex specifications. We introduce the Spatial Reasoning INtegrated Generator (SPRING). SPRING embeds a spatial reasoning module inside the deep generative network to reason about object locations.
This embedded approach guarantees constraint satisfaction, offers interpretability, and facilitates zero-shot transfer learning. Our second case is in AI-driven scientific discovery, where we embed vertical reasoning to expedite symbolic regression. Vertical reasoning builds from reduced models that involve a subset of variables (or processes) to full models, inspired by the human scientific approach. Vertical discovery outperforms horizontal ones at discovering equations involving many variables and complex processes, especially in learning PDEs in computational materials science. In the third case, we demonstrate that vertical reasoning enables constant approximation guarantees in solving Satisfiable Modulo Counting (SMC). SMC involves model counting as predicates in Boolean satisfiability. It encompasses many problems that require both symbolic decision-making and statistical reasoning, e.g., stochastic optimization, hypothesis testing, solving quantal-response leader-follower games, and learning (inverse reinforcement learning) with provable guarantees. Using vertical reasoning that streamlines XOR constraints, our proposed XOR-SMC reduces highly intractable SMC problems into solving satisfiability instances, while obtaining a constant approximation guarantee.

*Speaker Bio:* Dr. Yexiang Xue is an assistant professor in the Department of Computer Science, Purdue University. The goal of Dr. Xue's research is to bridge large-scale constraint-based reasoning with state-of-the-art machine learning techniques to enable intelligent agents to make optimal decisions in high-dimensional and uncertain real-world applications. More specifically, Dr. Xue's research focuses on scalable and accurate probabilistic reasoning, statistical modeling of data, and robust decision-making under uncertainty. His work is motivated by key problems across multiple scientific domains, ranging from artificial intelligence, machine learning, renewable energy, materials science, crowdsourcing, citizen science, urban computing, and ecology, to behavioral econometrics. Dr. Xue obtained his PhD from Cornell, supervised by Professor Bart Selman and Professor Carla Gomes, before joining Purdue in 2018. Recently, Dr. Xue has been focusing on developing cross-cutting computational methods, with an emphasis in the areas of computational sustainability and AI-driven scientific discovery.
///////////////////

*In person:* NSH 3305
*Zoom Link:* https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

- Victor & Asher

From vakinwan at andrew.cmu.edu Tue Apr 23 08:57:16 2024
From: vakinwan at andrew.cmu.edu (Victor Akinwande)
Date: Tue, 23 Apr 2024 08:57:16 -0400
Subject: Apr 23 at 12pm (NSH 3305) -- Yexiang Xue (Purdue) -- Vertical Reasoning Enhanced Learning, Generation and Scientific Discovery
In-Reply-To:
References:
Message-ID:

Quick reminder that this talk is happening later today at NSH 3305. See you there!
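For readers unfamiliar with the XOR streamlining trick mentioned at the end of the abstract: adding m random parity (XOR) constraints to a formula keeps each satisfying assignment with probability 2^-m, so whether the streamlined formula remains satisfiable gives a probabilistic handle on the original model count. The brute-force toy below only conveys that intuition; it is my own illustration, not the XOR-SMC algorithm from the talk.

import itertools, random

random.seed(0)
n = 12                                    # Boolean variables x0..x11
def formula(x):                           # toy constraint: at least half the bits are 1
    return sum(x) >= n // 2

solutions = [x for x in itertools.product((0, 1), repeat=n) if formula(x)]
print("true model count:", len(solutions))            # ground truth by brute force (2510 here)

def random_xor():
    subset = [i for i in range(n) if random.random() < 0.5]
    parity = random.randint(0, 1)
    return lambda x: sum(x[i] for i in subset) % 2 == parity

# Each random XOR keeps a given solution with probability 1/2, so m XORs keep it w.p. 2^-m;
# if the streamlined formula is still satisfiable, the original count is plausibly >= ~2^m.
for m in range(0, 14, 2):
    xors = [random_xor() for _ in range(m)]
    sat = any(all(c(x) for c in xors) for x in solutions)
    print(f"m={m:2d} XOR constraints: still satisfiable? {sat}   (suggests count around 2^{m} or more)")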
From vakinwan at andrew.cmu.edu Wed Apr 24 11:59:36 2024
From: vakinwan at andrew.cmu.edu (Victor Akinwande)
Date: Wed, 24 Apr 2024 11:59:36 -0400
Subject: Special! Apr 30 at 12pm (NSH 3305) -- Olatunji Ruwase (Microsoft Research) -- DeepSpeed: Enabling efficient trillion parameter scale training for deep learning models
Message-ID:

Dear all,

We look forward to seeing you next Tuesday (04/30) from 12:00-1:00 PM (ET) for the next talk of this semester's CMU AI Seminar, sponsored by SambaNova Systems (https://sambanova.ai). The seminar will be held in NSH 3305 with pizza provided and will be streamed on Zoom.

To learn more about the seminar series or to see the future schedule, please visit the seminar website (http://www.cs.cmu.edu/~aiseminar/).
Next Tuesday (04/30), Olatunji (Tunji) Ruwase (Microsoft Research) will be giving a virtual talk titled "DeepSpeed: Enabling efficient trillion parameter scale training for deep learning models".

///////////////////
*Talk Abstract:* Deep Learning is driving unprecedented progress in a wide range of Artificial Intelligence domains, including natural language processing, vision, speech, and multimodal applications. Sustaining this rapid pace of the AI revolution, however, requires practical solutions to the extreme demands of model scaling on the compute, memory, communication, and storage components of modern computing hardware. To address this challenge, we created a deep learning optimization library called DeepSpeed to make distributed model training and inference efficient, effective, and easy on commodity hardware. This talk will focus on DeepSpeed optimizations for improving the memory, compute, communication, and data efficiency of extreme-scale model training.

*Speaker Bio:* Olatunji (Tunji) Ruwase is the lead and co-founder of the DeepSpeed project at Microsoft. His broad industry and research background spans compilers, operating systems, and hardware accelerators. His current focus is on systems and convergence optimizations, and frameworks for efficient distributed training and inference of deep learning models. His research results on deep learning training, inference, and hyperparameter search are used in multiple Microsoft systems and products, such as Azure, Ads, Bing, Catapult, and HyperDrive. Tunji earned a PhD in Computer Science from Carnegie Mellon University under the guidance of Professor Todd Mowry.
///////////////////

*In person:* NSH 3305
*Zoom Link:* https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

- Victor & Asher

From vakinwan at andrew.cmu.edu Tue Apr 30 10:32:19 2024
From: vakinwan at andrew.cmu.edu (Victor Akinwande)
Date: Tue, 30 Apr 2024 10:32:19 -0400
Subject: Special! Apr 30 at 12pm (NSH 3305) -- Olatunji Ruwase (Microsoft Research) -- DeepSpeed: Enabling efficient trillion parameter scale training for deep learning models
In-Reply-To:
References:
Message-ID:

Quick reminder that this talk is happening later today.
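For readers who have not used DeepSpeed: the usual entry point is to wrap an existing PyTorch model with deepspeed.initialize and push settings such as ZeRO partitioning, mixed precision, and offloading into a config dictionary. The fragment below is a minimal sketch of that pattern under assumed placeholder values (toy model, batch size, learning rate), not a tuned recipe; consult the DeepSpeed documentation for the options that apply to a real training job.

import torch
import deepspeed

# Toy stand-in model; in practice this would be the transformer being trained.
model = torch.nn.Linear(1024, 1024)

# Assumed example config: ZeRO stage 2 partitions optimizer states and gradients
# across data-parallel workers; fp16/bf16 and CPU/NVMe offload are further options.
ds_config = {
    "train_batch_size": 32,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config)

x = torch.randn(32, 1024).to(engine.device)
loss = engine(x).pow(2).mean()   # forward pass through the DeepSpeed engine
engine.backward(loss)            # engine-managed backward (handles ZeRO bookkeeping)
engine.step()                    # optimizer step

# Typically launched with the DeepSpeed launcher, e.g.:  deepspeed train.py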