From nitin at imarketingadvantage.com  Mon Mar  3 08:28:49 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Mon, 3 Mar 2014 18:58:49 +0530
Subject: [Olympus developers 479]: is there ontology support available
Message-ID: <04e901cf36e4$884413c0$98cc3b40$@imarketingadvantage.com>

We are looking to develop multiple templates for dialog control based on an
ontology. Is there some support available? We have seen Cookcoach, a system
developed as an academic project, but there is no source code. Thanks.

From nitin at imarketingadvantage.com  Tue Mar  4 06:48:08 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Tue, 4 Mar 2014 17:18:08 +0530
Subject: [Olympus developers 480]: how and when does VAD signal start of utterance?
Message-ID: <05a601cf379f$9f1401d0$dd3c0570$@imarketingadvantage.com>

We are working on converting the WAV audio to network audio (on the socket).
It seems that when the WAV audio is processed normally, it gets a signal about
the begin and end of the utterance from a VAD. We need to use this in the
AudioServerStub as well, but we can't figure out how this happens. It appears
that this may be related to the interaction manager (Apollo), but it's not
clear. Any guidance/clues?

From nitin at imarketingadvantage.com  Tue Mar  4 08:04:46 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Tue, 4 Mar 2014 18:34:46 +0530
Subject: [Olympus developers 481]: Re: how and when does VAD signal start of utterance?
Message-ID: <05ae01cf37aa$52a1fef0$f7e5fcd0$@imarketingadvantage.com>

Figured this out. This is now working well :). Thanks for the initial guidance
and the NetworkOly!

From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com]
Sent: Tuesday, March 4, 2014 5:18 PM
To: olympus-developers (olympus-developers at cs.cmu.edu)
Subject: how and when does VAD signal start of utterance?

We are working on converting the WAV audio to network audio (on the socket).
It seems that when the WAV audio is processed normally, it gets a signal about
the begin and end of the utterance from a VAD. We need to use this in the
AudioServerStub as well, but we can't figure out how this happens. It appears
that this may be related to the interaction manager (Apollo), but it's not
clear. Any guidance/clues?
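For context on the WAV-to-socket question above, here is a minimal, generic
sketch in Python of what "streaming a WAV file over a TCP socket with
begin/end-of-utterance markers from a simple energy-threshold VAD" can look
like. It is not Olympus's actual AudioServerStub or Apollo interface; the host,
port, marker strings, threshold values, and file name are all illustrative
assumptions.

# Minimal, generic sketch (not Olympus's actual AudioServerStub API): stream a
# 16-bit mono PCM WAV file over a TCP socket and flag begin/end of utterance
# with a simple energy-threshold VAD. All names and thresholds are illustrative.
import socket
import struct
import wave

FRAME_MS = 20            # frame size used for the energy check
ENERGY_THRESHOLD = 500   # RMS level separating speech from silence (tune per recording)
HANGOVER_FRAMES = 10     # silent frames to wait before declaring end of utterance

def rms(frame_bytes):
    """Root-mean-square energy of a block of 16-bit mono PCM samples."""
    samples = struct.unpack("<%dh" % (len(frame_bytes) // 2), frame_bytes)
    if not samples:
        return 0.0
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def stream_wav(path, host="localhost", port=9000):
    wav = wave.open(path, "rb")                        # expects 16-bit mono PCM
    frame_len = int(wav.getframerate() * FRAME_MS / 1000)
    sock = socket.create_connection((host, port))

    in_utterance = False
    silent_frames = 0
    while True:
        frame = wav.readframes(frame_len)
        if not frame:
            break
        if rms(frame) >= ENERGY_THRESHOLD:
            if not in_utterance:
                sock.sendall(b"<BEGIN_UTT>")           # placeholder marker, not a real protocol message
                in_utterance = True
            silent_frames = 0
        elif in_utterance:
            silent_frames += 1
            if silent_frames >= HANGOVER_FRAMES:
                sock.sendall(b"<END_UTT>")             # placeholder marker
                in_utterance = False
        sock.sendall(frame)                            # raw audio follows the markers

    if in_utterance:
        sock.sendall(b"<END_UTT>")                     # close an utterance still open at end of file
    sock.close()
    wav.close()

if __name__ == "__main__":
    stream_wav("test_utterance.wav")                   # hypothetical file name

The hangover counter is the usual trick for not cutting an utterance off at a
short pause: the end-of-utterance marker is only sent after a run of
consecutive silent frames rather than at the first quiet one.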
From nitin at imarketingadvantage.com  Thu Mar 20 07:28:23 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Thu, 20 Mar 2014 16:58:23 +0530
Subject: [Olympus developers 482]: how to run JavaTTY without audio
Message-ID: <048701cf442f$833da280$89b8e780$@imarketingadvantage.com>

We are trying to run JavaTTY on a server that does not have audio enabled. How
can we run JavaTTY, given that it seems to depend on Kalliope and the
AudioServer, which will not work in such a setup? Thanks in advance.

From nitin at imarketingadvantage.com  Sat Mar 29 04:21:31 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Sat, 29 Mar 2014 13:51:31 +0530
Subject: [Olympus developers 483]: My Bus Tutorial - init_session
Message-ID: <016e01cf4b27$e66e74c0$b34b5e40$@imarketingadvantage.com>

Hi,

Any ideas why in MyBus the sequence init_session, then close_session, then
init_session does not work? Once the session is closed, init_session does not
re-initiate it. One has to restart the process monitor for the session to be
re-initiated. Thanks in advance.