Hi all,<div><br></div><div>We are excited to announce that Dirk Hovy will speak at the CL+NLP Lunch. </div><div>Details are included below.</div><div><div>Lunch will be provided. </div></div><div><br></div><div>If you would like to receive announcements about future CL+NLP Lunch talks, subscribe to the mailing list: <a href="https://mailman.srv.cs.cmu.edu/mailman/listinfo/nlp-lunch" target="_blank">https://mailman.srv.cs.cmu.edu/mailman/listinfo/nlp-lunch</a></div>
<div><br></div><div><br></div><div>Thanks,</div><div>Dani</div><div><br></div><div>===================================================</div><div><b>CL+NLP Lunch</b> (<a href="http://www.cs.cmu.edu/~nlp-lunch/" target="_blank">http://www.cs.cmu.edu/~nlp-lunch/</a>)</div>
<div><b>Speaker</b>: Dirk Hovy, University of Southern California</div>
<div><b>Date</b>: Tuesday, November 13, 2012</div>
<div><b>Time</b>: 12:00 noon</div><div><b>Venue</b>: GHC 6115</div><div><br></div><div><span style="font-family:arial,sans-serif;font-size:13px"><b>Title</b>:</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">Learning Semantic Types and Relations from Syntactic Context</span><br style="font-family:arial,sans-serif;font-size:13px">
<br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px"><b>Abstract</b>:</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">Natural Language Processing (NLP) is moving towards incorporating more semantics into its applications, such as Machine Translation (MT) or Question Answering (QA). Most semantic frameworks depend on predefined "external" resources, such as knowledge bases or ontologies. This requires a lot of manual effort, but even then it is impossible to build a complete representation of the world. Instead, we would like to learn a sufficient representation directly from data.</span></div>
<div><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">One way to encode meaning is through syntactic and semantic relations between predicates and their arguments. Relations are thus at the core of meaning and information. Recently, several approaches have collected large corpora of syntactically related word chains (e.g., subject, verb, object). However, these chains are extracted at the lexical level and do not generalize well, so their use for semantic interpretation is limited. If we could use these lexical chains as inputs to generalize beyond the word level, we would be able to learn semantic relations and make use of these existing resources.</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:13px">In this talk, I will present a method to learn semantic types and relations from raw text.</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">I construct unsupervised models on syntactic dependency arcs, using potential types as latent variables. The resulting models allow for quick domain adaptation and unknown relations, and avoid data sparsity caused by intervening words.</span></div>
<div>
<span style="font-family:arial,sans-serif;font-size:13px">I show improvements over state-of-the-art systems as well as novel approaches to fully exploit the structure contained in the data. My method builds on existing triple stores and does not require any external knowledge bases, manual annotation, or pre-defined predicates or arguments.</span><br style="font-family:arial,sans-serif;font-size:13px">
<br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px"><b>Bio</b>:</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">Dirk Hovy is a PhD candidate in Natural Language Processing at the University of Southern California (USC), and holds a Master's in linguistics. He is interested in data-driven models of meaning and understanding and has published on unsupervised learning, information extraction, word-sense disambiguation, and temporal relation classification (see <</span><a href="http://www.dirkhovy.com/portfolio/papers/index.php" style="font-family:arial,sans-serif;font-size:13px" target="_blank">http://www.dirkhovy.com/portfolio/papers/index.php</a><span style="font-family:arial,sans-serif;font-size:13px">>).</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">His thesis work is concerned with how computers can learn semantic types and relations from raw text, without recourse to external resources.</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">His other interests include baking, cooking, CrossFit, and medieval art and literature.</span><br></div><div><br></div>