Fwd: Second Paper Presentation - Karen Chen - Thursday, May 4 at noon - Room 2003

Artur Dubrawski awd at cs.cmu.edu
Tue May 2 10:14:44 EDT 2017


Team,

Karen will be presenting her qualifier work at Heinz College this Thursday.
Please join if you can.

Thanks
Artur


-------- Forwarded Message --------
Subject: 	CORRECTION: Second Paper Presentation - Karen Chen - Thursday, 
May 4 at noon - Room 2003
Date: 	Fri, 28 Apr 2017 18:59:15 +0000
From: 	Michelle Wirtz <mwirtz at andrew.cmu.edu>
To: 	Heinz-phd at lists.andrew.cmu.edu <Heinz-phd at lists.andrew.cmu.edu>, 
heinz-faculty at lists.andrew.cmu.edu <heinz-faculty at lists.andrew.cmu.edu>, 
Amy Ogan <aeo at andrew.cmu.edu>



Hi all,

Please join us on Thursday, May 4, 2017, in Hamburg Hall Room 2003 at 
noon, when Karen Chen will present her second paper.

*Title:* Peek into the Black Box: A Multimodal Analysis Framework for 
Automatic Characterization of the One-on-one Tutoring Processes


*Committee:* Artur Dubrawski (chair), Daniel Nagin, and Amy Ogan (HCII, SCS)

*Abstract:*

Student-teacher interactions during one-on-one tutoring are a rich form 
of interpersonal communication with significant educational impact. An 
ideal teacher is able to pick up a student's subtle signals in real time 
and respond optimally to offer cognitive and emotional support. Until 
recently, however, characterizing this information-rich process has 
relied on human observation, which does not scale well. In this study, I 
attempt to automate the characterization process by leveraging recent 
advances in affective computing and multimodal machine learning. I 
analyze a series of video recordings of math problem-solving sessions in 
which a young student works with the support of his tutor, demonstrating 
a multimodal analysis framework that characterizes several aspects of 
the student-teacher interaction patterns at a fine-grained temporal 
resolution. I then build machine learning models to predict the 
teacher's responses from the extracted multimodal features. In addition, 
I validate the performance of automatic detectors of affect, 
intent-to-connect behavior, and voice activity against annotated data, 
which provides evidence of the potential utility of the presented tools 
for scaling up analyses of this type to large numbers of subjects and 
for implementing decision-support tools that guide teachers toward 
optimal interventions in real time.

*Paper:* https://drive.google.com/open?id=0B8SWduW_x8gYcnN6YkhZSDA3WE0



