From anouarzh at gmail.com  Wed Feb 23 03:23:47 2011
From: anouarzh at gmail.com (Anouar Znagui Hassani)
Date: Wed, 23 Feb 2011 09:23:47 +0100
Subject: [Olympus developers 284]: RavenClaw/Olympus taking xyz information from Kinect in order to recognize gestures
Message-ID:

Hi guys,

For my thesis I am researching which modality is the most effective for robots. I therefore want to develop a prototype that uses RavenClaw/Olympus to take gesture input (x, y, z coordinates), recognize the gestures, and then have Olympus provide feedback as spoken text.

Has anyone already done something like this? Can someone tell me where to start? I am already able to track my hand with the Kinect and see the coordinates in a command prompt. Is it possible to create a component in RavenClaw that receives these coordinates? Is there some kind of HMM model in RavenClaw that I can use to describe what the gestures (xyz) look like?

I would appreciate any form of feedback. Thanks in advance.

Anouar

From Alex.Rudnicky at cs.cmu.edu  Wed Feb 23 12:27:25 2011
From: Alex.Rudnicky at cs.cmu.edu (Alex Rudnicky)
Date: Wed, 23 Feb 2011 12:27:25 -0500
Subject: [Olympus developers 285]: Re: RavenClaw/Olympus taking xyz information from Kinect in order to recognize gestures
In-Reply-To:
References:
Message-ID: <9C0D1A9F38D23E4290347EE31C22B0AF040C7995@e2k3.srv.cs.cmu.edu>

Anouar,

RavenClaw/Olympus can be used for multi-modal interaction (we have built several systems that integrate a tablet computer).

You can build such a system by adding a module that processes the gesture input and issues a message containing the corresponding 'concept'. In this case the concept is equivalent to what would come out of the Phoenix parser, say: [command] ([point] ( left ) ).

Note that your module will need to do the gesture recognition on its own, up to the semantic level (which is the level at which RavenClaw operates).

You may also need to figure out cross-modal integration (gesture, speech) if you plan to experiment with multi-modal inputs. The current architecture should allow combination to occur within a task agency, which will let you process loosely-coupled inputs.

Alex
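
To make Alex's suggestion concrete, here is a minimal, self-contained sketch of the recognition step such a gesture module would perform before handing a concept to the dialog manager. Everything in it is illustrative: Olympus ships no gesture recognizer, the type and function names (GesturePoint, ClassifyGesture, ConceptFor) are invented for this example, and the Galaxy messaging that would actually deliver the concept string to RavenClaw is omitted.

    // Standalone sketch: map a tracked-hand trajectory (Kinect x, y, z
    // samples) to a Phoenix-style concept string that a custom Olympus
    // module could forward to RavenClaw. All names here are hypothetical.
    #include <iostream>
    #include <string>
    #include <vector>

    struct GesturePoint { double x, y, z; };

    // Classify a trajectory by its net horizontal displacement. A real
    // recognizer would run an HMM or DTW over the whole trajectory; this
    // threshold rule merely stands in for that step.
    std::string ClassifyGesture(const std::vector<GesturePoint>& traj) {
        if (traj.size() < 2) return "";
        const double kThreshold = 0.15;  // meters; tune for your sensor
        double dx = traj.back().x - traj.front().x;
        if (dx < -kThreshold) return "left";
        if (dx >  kThreshold) return "right";
        return "";  // no confident gesture
    }

    // Wrap the recognized gesture in the semantic form RavenClaw expects,
    // i.e. what the Phoenix parser would produce for the spoken equivalent.
    std::string ConceptFor(const std::string& gesture) {
        return "[command] ([point] ( " + gesture + " ) )";
    }

    int main() {
        // A hand moving about 0.3 m to the left (coordinates are made up).
        std::vector<GesturePoint> traj = {
            {0.30, 0.10, 1.20}, {0.20, 0.11, 1.21},
            {0.10, 0.10, 1.19}, {0.00, 0.09, 1.20},
        };
        std::string gesture = ClassifyGesture(traj);
        if (!gesture.empty())
            std::cout << ConceptFor(gesture) << std::endl;
        return 0;
    }

Compiled and run as-is, this prints [command] ([point] ( left ) ), which is exactly the parse-level message Alex describes injecting into the pipeline.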
From ssiiys at hotmail.com  Mon Feb 28 10:26:19 2011
From: ssiiys at hotmail.com (pablo san juan)
Date: Mon, 28 Feb 2011 15:26:19 +0000
Subject: [Olympus developers 286]: Question
Message-ID:

I have the following question, please:

When I compile RoomLineDM in the RoomLine example of the Olympus 2.5 distribution, I get the following error:

    1> LINK : fatal error LNK1181: cannot open input file 'C:/olympus\bin\debug\RavenClaw_debug.lib'

I understand that this is because RavenClaw's libraries are missing (from the folder \bin\debug\). The libraries are in the Olympus distribution, but they were not compiled. Is that correct? What do I have to do? Can I use an old version of RavenClaw?

Thank you very much for your time,

Pablo

From mrmarge at cs.cmu.edu  Mon Feb 28 11:23:24 2011
From: mrmarge at cs.cmu.edu (Matthew Marge)
Date: Mon, 28 Feb 2011 11:23:24 -0500
Subject: [Olympus developers 287]: Re: Question
In-Reply-To:
References:
Message-ID:

You should build in 'Release' mode, not 'Debug' mode.
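
For anyone hitting the same LNK1181: the missing RavenClaw_debug.lib is presumably only produced by a Debug build of the RavenClaw project itself, so a Release build sidesteps it. A way to do what Matt suggests from a Visual Studio command prompt, assuming the solution file name here (adjust it to whatever the 2.5 distribution actually ships):

    msbuild Olympus.sln /p:Configuration=Release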