[Olympus developers 285]: Re: Ravenclaw Olympus taking xyz information from kinect in order to recognize gestures

Alex Rudnicky Alex.Rudnicky at cs.cmu.edu
Wed Feb 23 12:27:25 EST 2011


Anouar,

 

Ravenclaw/Olympus can be used for multi-modal interaction (we have built
several systems that integrate a tablet computer).

 

You can build such a system by adding a module that processes the
gesture input and issues a message containing the corresponding
'concept'. In this case the concept is equivalent to what would come
out of the Phoenix parser, say: [command] ([point] ( left ) ).
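
As a rough illustration only (this is not the Olympus/Galaxy messaging
API, and the gesture labels are made up), a module along these lines
could map a recognized gesture onto such a concept string before
wrapping it in a hub message:

    # Illustration only -- not the Olympus/Galaxy messaging API.
    # The gesture labels are hypothetical; the output string is the
    # Phoenix-style concept your module would send on to the hub.

    def gesture_to_concept(gesture_label):
        """Map a recognizer label to a Phoenix-style parse string."""
        concept_map = {
            "point_left":  "[command] ([point] ( left ) )",
            "point_right": "[command] ([point] ( right ) )",
        }
        return concept_map.get(gesture_label)  # None if unhandled

    print(gesture_to_concept("point_left"))
    # -> [command] ([point] ( left ) )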

 


Note that your module will need to do the gesture recognition on its
own, up to the semantic level (which is the level at which Ravenclaw
operates). You may also need to figure out cross-modal integration
(gesture, speech) if you plan to experiment with multi-modal inputs.
The current architecture should allow this combination to occur within
a task agency, which lets you process loosely-coupled inputs.
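
For the recognition step itself, a very rough rule-based sketch (the
thresholds, labels, and Kinect sign convention here are assumptions;
an HMM- or DTW-based recognizer would replace this logic) could look
like:

    from typing import List, Optional, Tuple

    def classify_pointing(trajectory: List[Tuple[float, float, float]],
                          min_displacement: float = 0.15) -> Optional[str]:
        """Classify a buffer of (x, y, z) hand positions as pointing.

        Uses only the net horizontal displacement; the sign convention
        depends on how your Kinect pipeline reports x, so flip it if
        needed.
        """
        if len(trajectory) < 2:
            return None
        dx = trajectory[-1][0] - trajectory[0][0]  # net horizontal move
        if dx <= -min_displacement:
            return "point_left"
        if dx >= min_displacement:
            return "point_right"
        return None                                # too little movement

    # Made-up trajectory drifting left:
    print(classify_pointing([(0.40, 0.0, 1.2),
                             (0.30, 0.0, 1.2),
                             (0.15, 0.0, 1.2)]))
    # -> point_left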

 

Alex

From: olympus-developers-bounces at mailman.srv.cs.cmu.edu
[mailto:olympus-developers-bounces at mailman.srv.cs.cmu.edu] On Behalf Of
Anouar Znagui Hassani
Sent: Wednesday, February 23, 2011 3:24 AM
To: olympus-developers at mailman.srv.cs.cmu.edu
Subject: [Olympus developers 284]: Ravenclaw Olympus taking xyz
information from kinect in order to recognize gestures

 

Hi Guys,

 

For my thesis I am researching which modality is the most effective
for robots.

I therefore want to develop a prototype that uses RavenClaw/Olympus to
take gesture input (x, y, z coordinates from the Kinect), recognize
the gestures, and have Olympus provide feedback as spoken text.

 

Has anyone already done something like this? Can someone tell me
where to start?

 

I am already able to track my hand with the Kinect and see the
coordinates in a command prompt.

 

Is it possible to create a component in RavenClaw which receives
these coordinates?

Is there some kind of HMM model in RavenClaw that I can use to
describe what the gestures (xyz trajectories) look like?

 

I would appreciate any form of feedback.

 

Thanks in advance.

 

Anouar
