<div dir="ltr"><div style="font-size:14px"><span style="font-size:12.8px">Hi all, </span><br></div><div style="font-size:14px"><br></div><div style="font-size:14px">NLP Lunch will happen in 30 minutes. </div><div style="font-size:14px"><br></div><div style="font-size:14px">>>>>>>>>>>>>>>>>>>>>>>>>>>>>></div><div style="font-size:14px"><div style="font-size:18px"><span style="font-size:12.8px">Please join us for the next CL+<span class="">NLP</span> lunch at <b>12:00 on Apr 29th in GHC 6115</b>,</span><br></div><div style="font-size:12.8px"><div style="font-size:13px"><span style="font-size:12.8px">where I will talk about "Multilingual and Multimodal word representation". </span></div><div style="font-size:12.8px"><span style="font-size:12.8px">Lunch will be provided.</span></div><div style="font-size:12.8px"><br></div></div></div><div style="font-size:14px"><font color="#000000"><div><b>Title: </b>Multilingual and Multimodal word representation</div><div><div style="color:rgb(34,34,34)"><b>Time</b>: Apr 29th, 12:00 - 13:00</div><div style="color:rgb(34,34,34)"><b>Location</b>: GHC 6115</div></div><div><b>Abstract:</b><br></div></font><div>Learned word representations are used as features in models of natural language in place of hand-engineered features. Traditionally, type-level representations are learned by aggregating and summarizing word-word co-occurrence statistics in large corpora. In this talk, I will present two methods for learning word representations using multilingual or multimodal supervision. 
The first learns representations of words-in-context (rather than context-agnostic word types) using cross-lingual supervision. The motivating hypothesis is that a good representation of a word in context will be one that is sufficient for selecting the correct translation into a second language. These context-sensitive word representations are suitable for, e.g., distinguishing different word senses and other context-modulated variations in meaning.</div><div><br></div><div>In the second part, I will talk about a method for projecting words into three-dimensional color spaces. Using color-name pairs obtained from an online color design forum, we evaluate our model on a “color Turing test” and find that, given a name, the color predicted by our model is often considered by human judges to be no worse than the color that actually inspired the name. This model enables the analysis of words and documents in terms of the colors associated with the words they contain, finding, for example, that recipes are more evocative of colors than poems or news reports.</div></div><div style="font-size:14px"><br></div><div style="font-size:14px">+++++++</div><div style="font-size:14px">After the series of faculty candidate talks, I would like to continue <span class="">NLP</span> Lunch. </div><div style="font-size:14px">Please suggest the next speaker to me. 
I will find a room and lunch.</div><div style="font-size:14px"><br></div><div style="font-size:14px">Best regards,</div><div style="font-size:14px">Kazuya Kawakami </div><div style="font-size:14px"><br></div></div>