[CL+NLP Lunch] CL+NLP Lunch Tuesday Nov 13 @ noon

Dani Yogatama dyogatama at cs.cmu.edu
Mon Nov 12 00:14:04 EST 2012


This talk has been canceled.
There will be *no* CL+NLP Lunch on Tuesday Nov 13.


On Sat, Nov 10, 2012 at 3:00 PM, Dani Yogatama <dyogatama at cs.cmu.edu> wrote:

> Hi all,
>
> We are excited to announce that Dirk Hovy will speak at the CL+NLP
> Lunch.
> Details are included below.
> Lunch will be provided.
>
> If you would like to receive announcements about future CL+NLP Lunch
> talks, subscribe to the mailing list:
> https://mailman.srv.cs.cmu.edu/mailman/listinfo/nlp-lunch
>
>
> Thanks,
> Dani
>
> ===================================================
> *CL+NLP Lunch* (http://www.cs.cmu.edu/~nlp-lunch/)
> *Speaker*: Dirk Hovy, University of Southern California
> *Date*: Tuesday, November 13, 2012
> *Time*: 12:00 noon
> *Venue*: GHC 6115
>
> *Title*:
> Learning Semantic Types and Relations from Syntactic Context
>
> *Abstract*:
> Natural Language Processing (NLP) is moving towards incorporating more
> semantics into its applications, such as Machine Translation (MT) or
> Question Answering (QA). Most semantic frameworks depend on predefined
> "external" resources, such as knowledge bases or ontologies. This requires
> a lot of manual effort, but even then it is impossible to build a complete
> representation of the world. Instead, we would like to learn a sufficient
> representation directly from data.
>
> One way to encode meaning is through syntactic and semantic relations
> between predicates and their arguments. Relations are thus at the core of
> meaning and information. Recently, several approaches have collected large
> corpora of syntactically related word chains (e.g., subject, verb, object).
> However, these chains are extracted at the lexical level and do not
> generalize well, so their use for semantic interpretation is limited. If we
> could use these lexical chains as inputs to generalize beyond the word
> level, we would be able to learn semantic relations and make use of these
> existing resources.
>
> In this talk, I will present a method to learn semantic types and
> relations from raw text.
> I construct unsupervised models on syntactic dependency arcs, using
> potential types as latent variables. The resulting models allow for quick
> domain adaptation and unknown relations, and avoid data sparsity caused by
> intervening words.
> I show improvements over state-of-the-art systems as well as novel
> approaches to fully exploit the structure contained in the data. My method
> builds on existing triple stores and does not require any external
> knowledge bases, manual annotation, or pre-defined predicates or arguments.
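>
> A minimal, purely illustrative sketch of this kind of latent-type model,
> assuming a toy EM mixture over (subject, verb, object) triples (a
> simplification for exposition, not the actual system presented in the
> talk):
>
>     import random
>     from collections import defaultdict
>
>     # A handful of (subject, verb, object) dependency triples.
>     triples = [
>         ("doctor", "treat", "patient"), ("nurse", "treat", "patient"),
>         ("doctor", "prescribe", "drug"), ("company", "hire", "worker"),
>         ("firm", "hire", "engineer"), ("company", "fire", "worker"),
>     ]
>     K = 2           # number of latent types
>     random.seed(0)
>
>     def normalize(d):
>         z = sum(d.values())
>         return {w: c / z for w, c in d.items()}
>
>     vocab = [sorted({t[i] for t in triples}) for i in range(3)]
>     # P(type) and P(word | type, slot) for the subject/verb/object slots.
>     p_type = [1.0 / K] * K
>     p_word = [[normalize({w: random.random() for w in vocab[i]})
>                for i in range(3)] for _ in range(K)]
>
>     for _ in range(50):                      # EM iterations
>         c_type = [1e-9] * K
>         c_word = [[defaultdict(float) for _ in range(3)] for _ in range(K)]
>         for t in triples:
>             joint = [p_type[k] * p_word[k][0][t[0]] *
>                      p_word[k][1][t[1]] * p_word[k][2][t[2]]
>                      for k in range(K)]
>             z = sum(joint)
>             for k in range(K):               # E-step: posterior over types
>                 c_type[k] += joint[k] / z
>                 for i in range(3):
>                     c_word[k][i][t[i]] += joint[k] / z
>         p_type = [c / sum(c_type) for c in c_type]        # M-step
>         p_word = [[normalize(c_word[k][i]) for i in range(3)]
>                   for k in range(K)]
>
>     for k in range(K):   # most likely word per slot for each latent type
>         print(k, [max(p_word[k][i], key=p_word[k][i].get) for i in range(3)])
>
> Each induced type ends up preferring particular words in each slot,
> which is the kind of generalization beyond the lexical level described
> above.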
>
> *Bio*:
> Dirk Hovy is a PhD candidate in Natural Language Processing at the
> University of Southern California (USC), and holds a Master's degree in
> linguistics. He is interested in data-driven models of meaning and
> understanding and has published on unsupervised learning, information
> extraction, word-sense disambiguation, and temporal relation classification
> (see <http://www.dirkhovy.com/portfolio/papers/index.php>).
> His thesis work is concerned with how computers can learn semantic types
> and relations from raw text, without recourse to external resources.
> His other interests include baking, cooking, crossfit, and medieval art
> and literature.
>

