[Olympus developers 284]: RavenClaw Olympus taking xyz information from Kinect in order to recognize gestures

Anouar Znagui Hassani anouarzh at gmail.com
Wed Feb 23 03:23:47 EST 2011


Hi Guys,

For my thesis I am researching which modality is the most effective for
robots.
To that end I want to develop a prototype with RavenClaw/Olympus that
takes hand positions (x, y, z coordinates) from the Kinect, recognizes
gestures from them, and then has Olympus provide feedback as spoken text.

Has anyone already done something like this? Can someone tell me where
to start?

I am already able to track my hand with the Kinect and see the
coordinates in a command prompt.
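
To make the data concrete, here is the kind of preprocessing I have in
mind before any recognition happens: turning consecutive (x, y, z)
samples into discrete direction symbols that an HMM could consume. All
names below (Point3, quantize) are my own, purely for illustration:

// Hypothetical sketch: quantize consecutive (x, y, z) hand positions
// into discrete direction symbols, the usual observation stream for a
// gesture HMM. Nothing here comes from Olympus itself.
#include <cmath>
#include <cstdio>
#include <vector>

struct Point3 { double x, y, z; };

// Map a movement vector to one of six coarse symbols:
// 0=+x, 1=-x, 2=+y, 3=-y, 4=+z, 5=-z (the dominant axis wins).
int quantize(const Point3& from, const Point3& to) {
    double dx = to.x - from.x, dy = to.y - from.y, dz = to.z - from.z;
    double ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
    if (ax >= ay && ax >= az) return dx >= 0 ? 0 : 1;
    if (ay >= az)             return dy >= 0 ? 2 : 3;
    return dz >= 0 ? 4 : 5;
}

int main() {
    // Stand-in for the coordinates I currently print to the console.
    std::vector<Point3> track = {
        {0.0, 0.0, 1.0}, {0.1, 0.0, 1.0}, {0.2, 0.1, 1.0}, {0.2, 0.3, 1.0}
    };
    for (size_t i = 1; i < track.size(); ++i)
        std::printf("symbol %d\n", quantize(track[i - 1], track[i]));
    return 0;
}

Six symbols is of course arbitrary; a finer codebook (for example
k-means over the velocity vectors) would probably separate gestures
better.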

Is it possible to create a component in RavenClaw/Olympus which receives
these coordinates?
Is there some kind of HMM in RavenClaw that I can use to describe what
the gestures look like in terms of (x, y, z)?
Below I have sketched what I have in mind for both.
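
For the first question, this is roughly the component I imagine,
written against the Galaxy Communicator C API as I understand it from
the documentation (Gal_GetFloat and the dispatch-function signature);
the header name and exact calls may well be wrong, so please correct
me. The operation name handle_coords is made up:

/* Rough sketch of the receiving component I have in mind, based only
 * on my reading of the Galaxy Communicator documentation. */
#include <stdio.h>
#include "galaxy/galaxy.h"   /* header name from memory; may differ */

/* Dispatch function, invoked when another module sends a frame such as
 *   {c handle_coords :x 0.21 :y -0.05 :z 1.30 }
 * It would be declared in the server's operations header with
 * GAL_SERVER_OP(handle_coords) so the Hub can route frames to it. */
Gal_Frame handle_coords(Gal_Frame frame, void *server_data)
{
    float x = Gal_GetFloat(frame, ":x");
    float y = Gal_GetFloat(frame, ":y");
    float z = Gal_GetFloat(frame, ":z");

    (void)server_data;  /* unused in this sketch */

    /* Real code would buffer the samples and hand them to a gesture
     * classifier; for now, just log them. */
    printf("hand position (%.3f, %.3f, %.3f)\n", x, y, z);

    return frame;  /* echo the frame back as the reply */
}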
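
For the second question, in case nothing like this exists in Olympus
yet, this is the kind of discrete-HMM scoring I would otherwise try
myself: one small model per gesture, each scoring the symbol stream
from the first sketch with the forward algorithm, and the best-scoring
model wins. All parameters below are placeholders that would need
training (for example with Baum-Welch) on recorded gesture tracks:

#include <cstdio>
#include <vector>

// Forward algorithm: P(observations | model) for a discrete HMM.
double forward(const std::vector<std::vector<double>>& A,   // transitions
               const std::vector<std::vector<double>>& B,   // emissions
               const std::vector<double>& pi,               // initial probs
               const std::vector<int>& obs)                 // symbol stream
{
    size_t n = pi.size();
    std::vector<double> alpha(n);
    for (size_t i = 0; i < n; ++i) alpha[i] = pi[i] * B[i][obs[0]];
    for (size_t t = 1; t < obs.size(); ++t) {
        std::vector<double> next(n, 0.0);
        for (size_t j = 0; j < n; ++j) {
            for (size_t i = 0; i < n; ++i) next[j] += alpha[i] * A[i][j];
            next[j] *= B[j][obs[t]];
        }
        alpha.swap(next);
    }
    double p = 0.0;
    for (double a : alpha) p += a;
    return p;
}

int main() {
    // Toy 2-state model over the six direction symbols.
    std::vector<std::vector<double>> A = {{0.7, 0.3}, {0.4, 0.6}};
    std::vector<std::vector<double>> B = {
        {0.5, 0.1, 0.1, 0.1, 0.1, 0.1},
        {0.1, 0.1, 0.5, 0.1, 0.1, 0.1}};
    std::vector<double> pi = {0.6, 0.4};
    std::vector<int> obs = {0, 0, 2, 2};  // e.g. right, right, up, up
    std::printf("P(obs | model) = %g\n", forward(A, B, pi, obs));
    return 0;
}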

I would appreciate any form of feedback.

Thanks in advance.

Anouar