<div dir="ltr"><div><span style="color:rgb(54,54,54);font-family:nyt-imperial,georgia,"times new roman",times,serif">I wish the authors of the opinion piece, led by </span>Chomsky, would play with ChatGPT a bit more before asserting their claims. At the very least, that would help them choose different, more convincing examples to make their point.</div><div><br></div><div>Nowadays, anyone can type a paragraph and check what ChatGPT can or cannot do. It is easy to verify whether Chomsky's argument against ChatGPT's performance is sound or weak. It is no longer the 80s and 90s, when MIT professors could throw out claims about possible performance shortcomings, or stronger competence weaknesses, of connectionist models, and it took a great deal of effort to test whether those claims actually held.<br></div><div><br></div><div>Here is another example from the piece that sounds quite plausible, but ChatGPT would tell Chomsky and his co-authors to think more carefully and choose a better example:</div><div dir="ltr"><span style="color:rgb(54,54,54);font-family:nyt-imperial,georgia,"times new roman",times,serif"><br>> For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. 
The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data.</span><br></div><div><br></div>Asking this question to ChatGPT:<div><br><div><i>ME: <span style="font-family:"Helvetica Neue";font-size:13px">explain what does mean to say "John is too stubborn to talk to"</span></i></div>
<div>ChatGPT: <span style="font-family:"Helvetica Neue";font-size:13px">When someone says "John is too stubborn to talk to", it means that they believe John is unwilling to listen to others or to consider other perspectives. This stubbornness can make it difficult or impossible to have a productive conversation or to reach a mutual understanding with John. </span><span style="font-family:"Helvetica Neue";font-size:13px">The phrase "too stubborn" implies that John's level of stubbornness is excessive or beyond what would be considered reasonable or productive in a conversation. It suggests that attempts to communicate with John may be met with resistance or closed-mindedness, and that it may be difficult to get him to see things from a different point of view. </span><span style="font-family:"Helvetica Neue";font-size:13px">Overall, the phrase "John is too stubborn to talk to" suggests that communicating with John may require extra effort or patience and that it may be a challenging or frustrating experience.</span></div><div><span style="font-family:"Helvetica Neue";font-size:13px"><br></span></div><div>It looks like language, at least in this supposedly complicated example, is not so complicated. </div><div><br></div><div><br></div><div>Criticisms of ChatGPT's performance feel to me like a half-full versus half-empty debate. 
One may treat a borderline performance as fundamental incompetence, whereas another treats it as tunable competence with growth potential.</div><div><br></div><div>Let's try two examples that Gary used in the old days: an ABA rule-learning example and a binary identity-mapping example from <i>The Algebraic Mind</i>.</div><div><br></div><div><b>(1) Rule learning?</b></div><div><b>---</b></div><div>Me: <span style="font-family:"Helvetica Neue";font-size:13px">let me give you two groups of triplets: group 1: (green, yellow, green), (blue, red, blue); group 2: (green blue blue), (yellow red red). Which group does ABA belong to?</span></div><div>ChatGPT: <span style="font-size:13px;font-family:"Helvetica Neue"">The triplet ABA belongs to Group 1 because it follows the pattern of "ABA" where the first and third colors are the same (either green or blue) and the second color is different (either yellow or red).</span><br></div><div><span style="font-family:"Helvetica Neue";font-size:13px"><br></span></div><div><span style="font-family:"Helvetica Neue";font-size:13px">Me: </span><span style="font-family:"Helvetica Neue";font-size:13px">what about (hot cold cold)?</span></div><div>ChatGPT: <span style="font-family:"Helvetica Neue";font-size:13px">The triplet (hot, cold, cold) belongs to Group 2 because it follows the pattern of (green, blue, blue), (yellow, red, red) where the first two items are consecutive items in a sequence and the third item is a repeat of the second item.</span></div>
<p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">In (hot, cold, cold), "cold" is repeated twice, similar to the second item in each triplet of Group 2, and "hot" and "cold" are consecutive in the sequence of "hot" followed by "cold". Therefore, (hot, cold, cold) belongs to Group 2.</p><p class="gmail-p3" style="margin:0px 0px 2px;text-align:center;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:16px;line-height:normal;font-family:"Helvetica Neue";min-height:19px">
<b></b></p><p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">Me: what about (human, insect, human)?</p><div>ChatGPT: <span style="font-family:"Helvetica Neue";font-size:13px">The triplet (human, insect, human) does not belong to either Group 1 or Group 2 based on the patterns given.</span></div>
<p class="gmail-p2" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">Group 1 has the pattern of "ABA" where the first and third items are the same and the second item is different, and Group 2 has the pattern of two consecutive items followed by a repeat of the second item.</p>
<p class="gmail-p2" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">In (human, insect, human), there is no repetition of the second item as required by Group 2, and the first and third items are not the same as required by Group 1. Therefore, (human, insect, human) does not belong to either group.</p><p class="gmail-p2" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue""><br></p><p class="gmail-p2" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">Me: really? check it again?</p><p class="gmail-p2" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">ChatGPT: I apologize for the error in my previous response. Upon reviewing the patterns of the two groups, I realize that (human, insect, human) actually belongs to Group 1.</p>
<p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">Group 1 has the pattern of "ABA" where the first and third items are the same and the second item is different. In (human, insect, human), the first and third items are "human", and the second item is "insect". Therefore, (human, insect, human) follows the pattern of Group 1 and belongs to this group.</p>
<p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">Thank you for bringing this to my attention and please let me know if you have any further questions.</p><p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">---</p><p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue""><br></p><p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue""><b>(2) Algebraic mind?</b></p><p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">> Imagine that you are trained on the input and output data given in table 3.1. If you are like other people whom I have asked, you would guess that in the test item the output that corresponds to input item [1111] is [1111]. But that is not the only inference that you could draw. For example, in the training data, the rightmost column is always 0: there is no direct evidence that the rightmost column could ever be a 1. So you might decide that the output that corresponds to test item [1111] is [1110]. 
That inference, too, would be perfectly consistent with the data, yet few if any people would make it. (We see later that some networks do.) One way of describing the inference that people tend to draw is to say that they are generalizing a one-to-one function, such as identity or sameness, universally. (The Algebraic Mind, page 37)</p><p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue""><br></p><p class="gmail-p1" style="margin:0px;font:13px "Helvetica Neue"">---</p><p class="gmail-p1" style="margin:0px;font:13px "Helvetica Neue"">Me: Let is a list of input-output pair: input=1010, output=1010;<span class="gmail-Apple-converted-space"> </span>input=0100, output=0100; input=1110, output=1110; input=0000, output=0000; Now, if input is 1111, what is the output</p><p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">
</p><p class="gmail-p1" style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;font-kerning:auto;font-feature-settings:normal;font-stretch:normal;font-size:13px;line-height:normal;font-family:"Helvetica Neue"">ChatGPT: Based on the input-output pairs given, the function appears to be an identity function, where the output is the same as the input. Therefore, if the input is 1111, the output would also be 1111.</p>
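<div>For reference, the two generalizations Gary contrasts can be written out explicitly. Here is a minimal Python sketch (the encoding of the training table and the function names are my own, not taken from the book or shown to ChatGPT):</div>

```python
# Two rules, both consistent with the identity-mapping training data
# discussed in The Algebraic Mind (p. 37). Encoding is my own.

TRAIN = {
    (1, 0, 1, 0): (1, 0, 1, 0),
    (0, 1, 0, 0): (0, 1, 0, 0),
    (1, 1, 1, 0): (1, 1, 1, 0),
    (0, 0, 0, 0): (0, 0, 0, 0),
}

def identity(bits):
    # The one-to-one rule people generalize: output equals input.
    return bits

def copy_with_last_zero(bits):
    # An alternative rule, equally consistent with TRAIN: copy the
    # input but keep the rightmost column 0, since it is 0 in every
    # training output.
    return bits[:-1] + (0,)

# Both rules reproduce every training pair...
assert all(identity(x) == y for x, y in TRAIN.items())
assert all(copy_with_last_zero(x) == y for x, y in TRAIN.items())

# ...but they diverge on the test item [1111]:
print(identity((1, 1, 1, 1)))             # (1, 1, 1, 1)
print(copy_with_last_zero((1, 1, 1, 1)))  # (1, 1, 1, 0)
```

<div>Both rules fit the training data perfectly, yet they diverge on the held-out item [1111], which is exactly the point of Gary's example.</div>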
<div>---</div><div><br></div><div>ChatGPT's answer now matches that of the "other people whom" Gary has asked, and it gives exactly the reasoning Gary gave. Of course, ChatGPT may be cheating, because it may have read Gary's book already.</div><div>ChatGPT can still behave quite foolishly, but its explanations of its "reasoning process" <i>look as if</i> it can do symbol manipulation beyond simple statistics. Even when its answer is wrong, its explanation articulates an (algebraic) rule within the input data I sent. <br></div><div><br></div><div>Best regards,<br></div><div>Feitong</div><div><br></div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Goodhill, Geoffrey <<a href="mailto:g.goodhill@wustl.edu">g.goodhill@wustl.edu</a>> wrote on Friday, March 10, 2023 at 11:38:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="msg-998525781181495368">
<div lang="EN-US" style="overflow-wrap: break-word;">
<div class="m_928490687570260381WordSection1">
<p class="MsoNormal"><span style="font-size:11pt">Below is an interesting discussion I had with chat-gpt about the Monty Hall problem. The executive summary is:<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">- I asked it to solve the problem as it’s formulated on Wikipedia.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">- chat-gpt got it right and correctly explained its reasoning.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">- I then asked it to solve the problem but with the substitutions host -> friend, car -> cake and goat -> water<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">- chat-gpt got it wrong, provided fallacious reasoning, and even explained why it thinks this case is different from the Monty Hall problem.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">- I then asked it whether it thinks a person who could get the right answer to the first version of the problem but not the second had understood the problem.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">- chat-gpt said “…. [this] suggests that they may have simply memorized the correct answer to the original formulation of the problem, rather than truly grasping the underlying principles.”<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">Geoff<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">me: Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors,
opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">gpt: Yes, it is to your advantage to switch your choice.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">When you initially chose your door, there was a 1/3 chance that you chose the car, and a 2/3 chance that you chose a goat. After the host reveals one of the other doors and shows you a goat, the probability
that the car is behind the remaining unopened door is still 1/3. However, the probability that the car is behind one of the two doors you did not choose initially is 2/3.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">So, by switching to the other unopened door, you increase your chances of winning the car from 1/3 to 2/3. It may seem counterintuitive, but this phenomenon is known as the "Monty Hall problem" and has been
mathematically proven.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">me: Suppose you're at a friend's house, and you're given the choice of three doors: Behind one door is a cake; behind the others, water. You pick a door, say No. 1, and your friend, who knows what's behind
the doors, opens another door, say No. 3, which has water. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">gpt: No, it does not make a difference whether you switch or not.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">In this scenario, after you initially choose door No. 1, there is a 1/3 chance that the cake is behind that door, and a 2/3 chance that the cake is behind one of the other doors. When your friend opens door
No. 3 and reveals water, the probability that the cake is behind the remaining unopened doors is still 2/3.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">However, in this case, switching your choice does not increase your chances of winning the cake. This is because whether you switch or not, you have a 2/3 chance of selecting the door with the cake. The reason
why switching is advantageous in the Monty Hall problem is because the host's action of revealing one of the losing doors gives you additional information that can change the probability of winning. In this scenario, your friend's action does not change the
probabilities, so switching or not switching does not matter.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">me: Consider the Monty Hall problem. If a person got the right answer to the original formulation of the problem, but then got the wrong answer after the problem was restated with cake substituted for car
and water for goats, would you say they had understood the Monty Hall problem?<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">gpt: The Monty Hall problem is a classic probability puzzle that can be counterintuitive for many people, and understanding it can be tricky. The problem involves a game show where a contestant is asked to
choose one of three doors, behind one of which is a valuable prize (e.g., a car), and behind the other two doors are less valuable prizes (e.g., goats). After the contestant chooses a door, the game show host (Monty Hall) opens one of the remaining two doors
to reveal a goat, and then asks the contestant if they want to stick with their original choice or switch to the other unopened door. The puzzle asks whether the contestant should switch or not, given that the host has revealed one of the goats.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">If a person got the right answer to the original formulation of the problem, but then got the wrong answer after the problem was restated with cake substituted for car and water for goats, it would suggest
that they have not fully understood the underlying principles of the Monty Hall problem. The principles at play in the Monty Hall problem are not specific to cars and goats or any other particular set of objects, but rather the problem is about understanding
conditional probabilities and the effect of new information on those probabilities.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">Therefore, if someone fails to apply the same reasoning to a different scenario with different objects, it suggests that they may have simply memorized the correct answer to the original formulation of the
problem, rather than truly grasping the underlying principles.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(181,196,223);padding:3pt 0in 0in">
<p class="MsoNormal" style="margin-right:0in;margin-bottom:12pt;margin-left:0.5in">
<b><span style="font-size:12pt;color:black">From: </span></b><span style="font-size:12pt;color:black">Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>> on behalf of Barak A. Pearlmutter <<a href="mailto:barak@pearlmutter.net" target="_blank">barak@pearlmutter.net</a>><br>
<b>Date: </b>Friday, March 10, 2023 at 2:22 AM<br>
<b>To: </b>Terry Sejnowski <<a href="mailto:terry@snl.salk.edu" target="_blank">terry@snl.salk.edu</a>><br>
<b>Cc: </b><a href="mailto:connectionists@cs.cmu.edu" target="_blank">connectionists@cs.cmu.edu</a> <<a href="mailto:connectionists@cs.cmu.edu" target="_blank">connectionists@cs.cmu.edu</a>><br>
<b>Subject: </b>Re: Connectionists: Chomsky's apple<u></u><u></u></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:0.5in"><span style="font-size:11pt">
Terry,<br>
<br>
On Thu, 9 Mar 2023 at 15:01, Terry Sejnowski <<a href="mailto:terry@snl.salk.edu" target="_blank">terry@snl.salk.edu</a>> wrote:<br>
> If you ask a nonsense question, you get a nonsense answer.<br>
><br>
> I give several examples of this in my recent paper on<br>
> "Large Language Models and the Reverse Turing Test"<br>
> <a href="https://direct.mit.edu/neco/issue" target="_blank">
https://direct.mit.edu/neco/issue</a><br>
><br>
> LLMs mirror the intelligence of the prompt.<br>
<br>
That is an excellent paper; I quite enjoyed it.<br>
<br>
No disagreement with your direct point! I was trying to highlight more<br>
subtle failure modes of the system, which go to semantics and safety<br>
issues. Maybe I was too roundabout though, so let me be a bit more<br>
explicit.<br>
<br>
In discussing why you're bigger than a breadbox, I was tweaking the<br>
crude "safety rails" that have been bolted on to the underlying LLM.<br>
It refuses to discuss your physical attributes because it has been<br>
primed not to; that's not a property of the underlying LLM, but of the<br>
safety mechanisms intended to keep it from saying nasty things. Of<br>
course that hammer is extremely blunt: it is not in truth offensive to<br>
concede that Terry Sejnowski is an adult human being and adult human<br>
beings are bigger than breadboxes.<br>
<br>
I meant to highlight how inadequate our current tools are wrt<br>
controlling these things, in that case by seeing how it is<br>
inappropriately prevented by the safety stuff from saying something<br>
reasonable and instead goes off on a strange woke tangent. (And also,<br>
Terry, let me say that I do value you for your physical attributes!<br>
Your fun sense of style, the way you always look so put together, your<br>
stage presence, your warm and welcoming demeanor. Must we throw that<br>
baby out with the bathwater?) Alignment is the technical term, I<br>
guess. They cannot circumscribe offensive behavior satisfactorily, so<br>
instead play whack-a-mole. And crudely.<br>
<br>
This issue is problematic in a bunch of domains. E.g., it is not<br>
offensive when asked "why is 'boy in the striped pajamas' like an<br>
extended version of the joke 'my uncle died at Auschwitz, he was drunk<br>
and fell off a guard tower'" to just say "because its plot is<br>
basically 'my nephew died in the gas chambers, he was the commandant's<br>
son and there was a bit of a mixup.'" But it has been constrained to<br>
not ever joke about the Holocaust and to get all bothered at that<br>
combination, which short-circuits its ability to do this particular<br>
bit of seemingly-straightforward analogical reasoning. (Try it. Keep<br>
pushing to get it to make the analogy. It's frustrating!)<br>
<br>
The fallacious proof is similar, but from the other side. It<br>
highlights that the system does not really know what a proof is,<br>
because if it did, in that context, it certainly has the power to not<br>
make blatantly incorrect simple steps. And that is, of course, a<br>
safety issue when people use it as an assistant.<br>
<br>
Cheers,<br>
<br>
--Barak.<u></u><u></u></span></p>
</div>
</div>
</div>
</div></blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr">Feitong Yang</div><div dir="ltr">Software Engineer, Google </div><div dir="ltr">355 Main St. Cambridge, MA, 02142<br></div></div></div></div></div></div></div>