<div dir="ltr">A gentle reminder that the talk will be tomorrow (Tuesday) at noon in NSH 3305.</div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Apr 28, 2018 at 1:19 AM, Adams Wei Yu <span dir="ltr"><<a href="mailto:weiyu@cs.cmu.edu" target="_blank">weiyu@cs.cmu.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration-style:initial;text-decoration-color:initial;background-color:rgb(255,255,255)">Dear faculty and students,</div><div style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration-style:initial;text-decoration-color:initial;background-color:rgb(255,255,255)"><br></div><div style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration-style:initial;text-decoration-color:initial;background-color:rgb(255,255,255)"><span style="font-weight:400">We look forward to seeing you next Tuesday, May 01, at noon in NSH 3305</span><b style="font-weight:400"> </b>for the AI Seminar, sponsored by Apple. 
To learn more about the seminar series, please visit the AI Seminar <a href="http://www.cs.cmu.edu/~aiseminar/" style="color:rgb(17,85,204);font-weight:400" target="_blank">webpage</a>.</div><div style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration-style:initial;text-decoration-color:initial;background-color:rgb(255,255,255)"><br></div><div style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration-style:initial;text-decoration-color:initial;background-color:rgb(255,255,255)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px">On Tuesday, <a href="http://www.cs.cmu.edu/~hongyanz/" target="_blank">Hongyang Zhang</a></span><span style="font-size:12.8px"> will give the following talk: </span></div><div style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration-style:initial;text-decoration-color:initial;background-color:rgb(255,255,255)"><br></div><div style="text-align:start;text-indent:0px;text-decoration-style:initial;text-decoration-color:initial;background-color:rgb(255,255,255)"><div><span 
style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px">Title: </span><span style="font-size:12.8px">Testing and Learning from Big Data, Optimally</span></div><div style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px"><br></span></div><div><div style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-transform:none;white-space:normal;word-spacing:0px">Abstract: We are now in an era of big, high-dimensional data: data volume nearly doubles every two years. Fortunately, high-dimensional data are structured: they are usually of low rank, which is the basis of dimensionality reduction and compressed sensing. To efficiently extract information from the low-rank structure without fully observing the matrix, two questions must be addressed: 1. What is the true rank of the data matrix, given only a small number of sampled entries (the testing problem)? 2. 
Given the rank of the matrix, how can we design a computationally efficient algorithm that recovers the matrix from only compressed observations (the learning problem)?<br></div><div><span style="text-align:start;text-indent:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline;font-size:12.8px"><div style="text-align:start;text-indent:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial"><div style="text-align:start;text-indent:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial"><div><br></div><div>In this talk, we will focus on the testing and learning problems regarding matrix rank with optimal sample complexity, which are new paradigms for extracting information from big data. In the first part of the talk, we will see how to test the rank of an unknown matrix via an interesting ladder-shaped sampling scheme. We supplement our positive results with a hardness result showing that our sampling scheme is near-optimal.</div><div><br></div><div>In the second part of the talk, we study the matrix completion problem. Matrix completion is a non-convex problem in its original form. To alleviate this computational issue, we show that strong duality holds for matrix completion with nearly optimal sample complexity. On the hardness side, we also show that generic matrix factorization requires exponential time to solve.</div><div><br></div><div>Based on joint work with Nina Balcan (CMU), Yi Li (Nanyang Technological University), Yingyu Liang (Wisconsin-Madison), and David P. Woodruff (CMU).</div></div></div></span></div></div></div><br></div>
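For readers who want a concrete feel for the two problems in the abstract, here is a minimal toy sketch (a hypothetical illustration, not the speaker's algorithms): it estimates the rank of a low-rank matrix from partial observations and recovers the matrix by alternating projections, a standard matrix-completion heuristic. The matrix size, sampling rate, and thresholds are arbitrary choices for the demo.

```python
# Hypothetical toy sketch of the testing and learning problems, NOT the
# methods from the talk: estimate the rank of a low-rank matrix from
# sampled entries, then recover it by alternating projections.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a 50x50 matrix of true rank 3.
n, r = 50, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Observe a random ~50% subset of entries (the "compressed observations").
mask = rng.random((n, n)) < 0.5

# Learning: alternate between projecting onto the set of rank-r matrices
# (truncated SVD) and re-imposing the observed entries.
X = np.where(mask, M, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]   # best rank-r approximation
    X[mask] = M[mask]                 # keep observed entries exact

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)

# Testing: read the rank off the recovered matrix's singular values.
s_hat = np.linalg.svd(X, compute_uv=False)
est_rank = int(np.sum(s_hat > 0.05 * s_hat[0]))
print(f"estimated rank: {est_rank}, relative error: {rel_err:.2e}")
```

At this sampling rate the observed entries far exceed the degrees of freedom of a rank-3 matrix, so the simple heuristic recovers the matrix accurately; the talk's results concern the much harder regime of optimal sample complexity and provable guarantees.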
</blockquote></div><br></div>