[CMU AI Seminar] February 14 at 12pm (GHC 6115 & Zoom) -- Uri Alon (CMU) -- Natural Language Reasoning with Language Models of Code -- AI Seminar sponsored by SambaNova Systems

Asher Trockman ashert at cs.cmu.edu
Sun Feb 12 12:10:16 EST 2023


Dear all,

Please join us for this Valentine's Day (*Tuesday, 2/14*) installment of the
*CMU AI Seminar Series* to learn about some research we love.
The seminar will be held in *GHC 6115* from *12:00-1:00 PM (U.S. Eastern
time)*, *with pizza provided*, and will be streamed on Zoom. It is sponsored
by SambaNova Systems <https://sambanova.ai/>.

To learn more about the seminar series or to see the future schedule,
please visit the seminar website <http://www.cs.cmu.edu/~aiseminar/>.

This Tuesday (2/14), *Uri Alon* (CMU LTI) will be giving a talk titled
*"Natural Language Reasoning with Language Models of Code"*.

*Title*: Natural Language Reasoning with Language Models of Code

*Talk Abstract*: In this talk, I will show that LMs that were pretrained on
*code* can be better natural language reasoners than LMs that were trained
(mostly) on natural language, even when the task does not involve source
code at all.
In a class of structured NL reasoning tasks, I will show how we can frame
the task as code generation; this makes LMs of code such as Codex better
reasoners than LMs of natural language such as T5 and GPT-3.
Another class of mathematical reasoning tasks was recently unlocked by
methods that require LLMs to generate their explicit reasoning steps, such
as “chain-of-thought” (Wei et al., 2022).
Such methods employ LMs for both understanding the problem description by
decomposing it into steps, as well as solving each step of the problem.
While LMs seem adept at the step-by-step decomposition part, they often
make logical and arithmetic mistakes in the solution part. I will show how
LMs of *code* can decompose the natural language problem into runnable
steps, which allows us to offload the solution to a programmatic runtime
such as a Python interpreter. That is, instead of learning to solve the
problem directly, we teach the model to generate a program that solves the
problem. Across a variety of benchmarks, this approach yields more
accurate results than much larger models such as PaLM-540B using
chain-of-thought.

*Speaker Bio:* Uri Alon is a Postdoctoral Researcher at LTI, working with
Prof. Graham Neubig on NLP and learning from source code. Previously, he
obtained his PhD at the Technion (Israel), where he worked on modeling
programming languages and graphs. Currently, he is also interested in the
synergy of neural models with symbolic components such as retrieval,
programs, and automata. His personal website is at https://urialon.ml. Feel
free to reach out with any questions or comments about the talk.

*In person: *GHC 6115
*Zoom Link*:
https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman