From nitin at imarketingadvantage.com Fri Feb 7 15:44:46 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Sat, 8 Feb 2014 02:14:46 +0530 Subject: [Olympus developers 443]: how to enable multiple users simultaneously. Message-ID: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> Hi This might be a stupid question, but how to enable multiple users to access the system at the same time. How does the system have to be deployed. Based on all the documentation we have read so far, one can start all the servers and get either a speech , or tty r java-tty interfact where one user can interact at a time. We also want to enable authentication of this system, using a single sign on with another system. But thats really the next step. Thanks Nitin Dhawan -------------- next part -------------- An HTML attachment was scrubbed... URL: From aasishp at gmail.com Fri Feb 7 16:51:13 2014 From: aasishp at gmail.com (Aasish Pappu) Date: Fri, 7 Feb 2014 16:51:13 -0500 Subject: [Olympus developers 444]: Re: how to enable multiple users simultaneously. In-Reply-To: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> References: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> Message-ID: The architecture natively doesn't support multiple users (I assume you meant simultaneous sessions). However, there were instances where you could run multiple instances of some of the components (Audio Server, DM, InteractionManager, Hub) corresponding to each session. Since those components are session specific. There are few components (ASR, NLU, NLG) which can run as services i.e., doesn't have to be instantiated. All of this customization will require understanding of how the components interact. The short answer is there is no easy recipe to customize the architecture for multiple sessions. There are few example systems that were built to handle certain aspects of the simultaneous sessions. 1. Multiparty Interaction: System interacting with more than user at a time in a single interactive session. repo: http://trac.speech.cs.cmu.edu/repos/olympus/branches/MultiOly1.0/ We instantiate dialog manager, hub for each user to keep track of their slot/value pairs (if different from other user). We don't have to instantiate the IM and AudioServer in this case because both users are listening on same channel (loud speakers) and talking on same channel (kinect microphone array). While talking to the users, we don't want the dialog manager to request for multiple nlg prompts for each user, instead we synchronize the system's response from DM to one of the users or both users. We do this using a new component called conversation manager that talks to interaction-manager on behalf of both the dialog managers. More details on this work in http://link.springer.com/chapter/10.1007/978-3-642-39330-3_12 2. Human robot teams: Single user interacting with multiple robots. We instantiate dialog manager, hub, robot backend for each robot. The dialog context for each robot is independent from each other. Here each robot's DM responds to the user based on a trigger phrase (e.g., RobotA listen, RobotB listen). repo: http://trac.speech.cs.cmu.edu/repos/teamtalk/branches/tt-olympus2/ On Fri, Feb 7, 2014 at 3:44 PM, Nitin Dhawan wrote: > Hi > > > > This might be a stupid question, but how to enable multiple users to > access the system at the same time. How does the system have to be deployed. 
> > > > Based on all the documentation we have read so far, one can start all the > servers and get either a speech , or tty r java-tty interfact where one > user can interact at a time. > > > > We also want to enable authentication of this system, using a single sign > on with another system. But thats really the next step. > > > > Thanks > > Nitin Dhawan > -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Sat Feb 8 13:39:46 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Sun, 9 Feb 2014 00:09:46 +0530 Subject: [Olympus developers 445]: Re: how to enable multiple users simultaneously. In-Reply-To: References: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> Message-ID: <006601cf24fd$267a0350$736e09f0$@imarketingadvantage.com> Thanks. This does look a little complicated. We will try to understand this better and come back for further clarifications if required. I do have another question on how this can be done using telephone interface. I will put it in another thread. From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Saturday, February 8, 2014 3:21 AM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 443]: how to enable multiple users simultaneously. The architecture natively doesn't support multiple users (I assume you meant simultaneous sessions). However, there were instances where you could run multiple instances of some of the components (Audio Server, DM, InteractionManager, Hub) corresponding to each session. Since those components are session specific. There are few components (ASR, NLU, NLG) which can run as services i.e., doesn't have to be instantiated. All of this customization will require understanding of how the components interact. The short answer is there is no easy recipe to customize the architecture for multiple sessions. There are few example systems that were built to handle certain aspects of the simultaneous sessions. 1. Multiparty Interaction: System interacting with more than user at a time in a single interactive session. repo: http://trac.speech.cs.cmu.edu/repos/olympus/branches/MultiOly1.0/ We instantiate dialog manager, hub for each user to keep track of their slot/value pairs (if different from other user). We don't have to instantiate the IM and AudioServer in this case because both users are listening on same channel (loud speakers) and talking on same channel (kinect microphone array). While talking to the users, we don't want the dialog manager to request for multiple nlg prompts for each user, instead we synchronize the system's response from DM to one of the users or both users. We do this using a new component called conversation manager that talks to interaction-manager on behalf of both the dialog managers. More details on this work in http://link.springer.com/chapter/10.1007/978-3-642-39330-3_12 2. Human robot teams: Single user interacting with multiple robots. We instantiate dialog manager, hub, robot backend for each robot. The dialog context for each robot is independent from each other. Here each robot's DM responds to the user based on a trigger phrase (e.g., RobotA listen, RobotB listen). repo: http://trac.speech.cs.cmu.edu/repos/teamtalk/branches/tt-olympus2/ On Fri, Feb 7, 2014 at 3:44 PM, Nitin Dhawan wrote: Hi This might be a stupid question, but how to enable multiple users to access the system at the same time. How does the system have to be deployed. 
Based on all the documentation we have read so far, one can start all the servers and get either a speech , or tty r java-tty interfact where one user can interact at a time. We also want to enable authentication of this system, using a single sign on with another system. But thats really the next step. Thanks Nitin Dhawan -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Sat Feb 8 13:53:21 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Sun, 9 Feb 2014 00:23:21 +0530 Subject: [Olympus developers 446]: how to integrate olympus/ravenclaw with asterisk - multi user Message-ID: <006b01cf24ff$0b3fba60$21bf2f20$@imarketingadvantage.com> We want to make the system available on the pstn phone system. We have setup asterisk and can route the calls on extensions. The question is a. How to integrate this with the dialog system. This is regularly done with systems like Roomline, but the instructions are not very clear. Here is someone talking about it http://www.cs.cmu.edu/~jsherwan/research/asterisk.ppt but its not quite clear b. How to enable multiple people to connect simultaneously over telephony. I already asked how to enable multiple simultaneous sessions, but the answer looks quite complicated and non-standard. It's hard to believe how any commercial systems would be implemented on Ravenclaw without multi session support being a standard feature. Please help . We are investing serious efforts in integrating Ravenclaw architecture in our system with the belief that this is one of the best systems around. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From aasishp at gmail.com Sat Feb 8 14:44:19 2014 From: aasishp at gmail.com (Aasish Pappu) Date: Sat, 8 Feb 2014 14:44:19 -0500 Subject: [Olympus developers 447]: Re: how to enable multiple users simultaneously. In-Reply-To: <006601cf24fd$267a0350$736e09f0$@imarketingadvantage.com> References: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> <006601cf24fd$267a0350$736e09f0$@imarketingadvantage.com> Message-ID: For a telephone interface, you could use a softswitch like freeswitch that supports multiple sessions (default is 1000) and protocols to communicate with an application. We have tried to provide a preliminary interface between the audioserver component and the freeswitch server. There's a tarball that you can start from. The freeswitch application writes incoming audio data to a named-pipe for every new session. This audio data is read by an instance of audioserver component and forwarded to the recognizer (pocketsphinx). http://www.speech.cs.cmu.edu/apappu/pubdl/fs-dialog-interface.tar.gz There is an example application (with example configuration) in the tarball. It also includes customized audioserver and the pocketsphinx engine. If you want to rest of the olympus components, you have to figure out how they will interact with the freeswitch application. On Sat, Feb 8, 2014 at 1:39 PM, Nitin Dhawan wrote: > Thanks. This does look a little complicated. We will try to understand > this better and come back for further clarifications if required. > > > > I do have another question on how this can be done using telephone > interface. I will put it in another thread. 
> > > > > > *From:* Aasish Pappu [mailto:aasishp at gmail.com] > *Sent:* Saturday, February 8, 2014 3:21 AM > *To:* Nitin Dhawan > *Cc:* olympus-developers > *Subject:* Re: [Olympus developers 443]: how to enable multiple users > simultaneously. > > > > The architecture natively doesn't support multiple users (I assume you > meant simultaneous sessions). However, there were instances where you > could run multiple instances of some of the components (Audio Server, DM, > InteractionManager, Hub) corresponding to each session. Since those > components are session specific. There are few components (ASR, NLU, NLG) > which can run as services i.e., doesn't have to be instantiated. All of > this > > customization will require understanding of how the components interact. > > > > The short answer is there is no easy recipe to customize the architecture > for multiple sessions. > > > > There are few example systems that were built to handle certain aspects of > the simultaneous sessions. > > > > 1. Multiparty Interaction: System interacting with more than user at a > time in a single interactive session. > > > > repo: http://trac.speech.cs.cmu.edu/repos/olympus/branches/MultiOly1.0/ > > > > We instantiate dialog manager, hub for each user to keep track of their > slot/value pairs (if different from other user). We don't have to > instantiate the IM and AudioServer in this case because both users are > listening on same channel (loud speakers) and talking on same channel > (kinect microphone array). While talking to the users, we don't want the > dialog manager to request for multiple nlg prompts for each user, instead > we synchronize the system's response from DM to one of the users or both > users. We do this using a new component called conversation manager that > talks to interaction-manager on behalf of both the dialog managers. More > details on this work in > http://link.springer.com/chapter/10.1007/978-3-642-39330-3_12 > > > > 2. Human robot teams: Single user interacting with multiple robots. > > > > We instantiate dialog manager, hub, robot backend for each robot. The > dialog context for each robot is independent from each other. Here each > robot's DM responds to the user based on a trigger phrase (e.g., RobotA > listen, RobotB listen). > > > > repo: http://trac.speech.cs.cmu.edu/repos/teamtalk/branches/tt-olympus2/ > > > > > > > > > > On Fri, Feb 7, 2014 at 3:44 PM, Nitin Dhawan < > nitin at imarketingadvantage.com> wrote: > > Hi > > > > This might be a stupid question, but how to enable multiple users to > access the system at the same time. How does the system have to be deployed. > > > > Based on all the documentation we have read so far, one can start all the > servers and get either a speech , or tty r java-tty interfact where one > user can interact at a time. > > > > We also want to enable authentication of this system, using a single sign > on with another system. But thats really the next step. > > > > Thanks > > Nitin Dhawan > > > > > > -- > Aasish Pappu > -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... 
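To make the named-pipe hand-off described in message 447 concrete, here is a minimal sketch of the per-session reader side, assuming FreeSWITCH writes raw 16 kHz, 16-bit mono audio to one pipe per call and that the AudioServer (or recognizer) accepts those bytes on a plain TCP socket. The pipe path, port, and chunk size are illustrative assumptions, not the code shipped in the fs-dialog-interface tarball.

# Sketch: bridge one FreeSWITCH session's named pipe to an AudioServer socket.
# The pipe path, port, and raw 16 kHz / 16-bit mono framing are assumptions,
# not taken from the fs-dialog-interface tarball.
import os
import socket

CHUNK_BYTES = 3200  # 100 ms of 16 kHz, 16-bit mono audio (assumed format)

def bridge_session(pipe_path, host="localhost", port=11000):
    """Read raw audio from the session's named pipe and forward it over TCP."""
    sock = socket.create_connection((host, port))
    # Opening the FIFO blocks until FreeSWITCH opens it for writing,
    # then we stream until the writer (the caller) hangs up.
    fd = os.open(pipe_path, os.O_RDONLY)
    try:
        while True:
            chunk = os.read(fd, CHUNK_BYTES)
            if not chunk:          # EOF: writer closed the pipe
                break
            sock.sendall(chunk)    # AudioServer/recognizer reads the stream
    finally:
        os.close(fd)
        sock.close()

if __name__ == "__main__":
    # One bridge per incoming call; a real deployment would spawn one
    # process or thread per session pipe that FreeSWITCH creates.
    bridge_session("/tmp/fs_session_0001.pipe")

One such bridge would run per incoming call, which is what keeps the sessions independent: FreeSWITCH and the Olympus side never share an audio device, only a byte stream.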
URL: From aasishp at gmail.com Sat Feb 8 14:46:05 2014 From: aasishp at gmail.com (Aasish Pappu) Date: Sat, 8 Feb 2014 14:46:05 -0500 Subject: [Olympus developers 448]: Re: how to integrate olympus/ravenclaw with asterisk - multi user In-Reply-To: <006b01cf24ff$0b3fba60$21bf2f20$@imarketingadvantage.com> References: <006b01cf24ff$0b3fba60$21bf2f20$@imarketingadvantage.com> Message-ID: For a telephone interface, you could use a softswitch like freeswitch that supports multiple sessions (default is 1000) and protocols to communicate with an application. We have tried to provide a preliminary interface between the audioserver component and the freeswitch server. There's a tarball that you can start from. The freeswitch application writes incoming audio data to a named-pipe for every new session. This audio data is read by an instance of audioserver component and forwarded to the recognizer (pocketsphinx). http://www.speech.cs.cmu.edu/apappu/pubdl/fs-dialog-interface.tar.gz There is an example application (with example configuration) in the tarball. It also includes customized audioserver and the pocketsphinx engine. If you want to rest of the olympus components, you have to figure out how they will interact with the freeswitch application. On Sat, Feb 8, 2014 at 1:53 PM, Nitin Dhawan wrote: > We want to make the system available on the pstn phone system. We have > setup asterisk and can route the calls on extensions. The question is > > a. How to integrate this with the dialog system. This is regularly > done with systems like Roomline, but the instructions are not very clear. > Here is someone talking about it > http://www.cs.cmu.edu/~jsherwan/research/asterisk.ppt but its not quite > clear > > b. How to enable multiple people to connect simultaneously over > telephony. I already asked how to enable multiple simultaneous sessions, > but the answer looks quite complicated and non-standard. It's hard to > believe how any commercial systems would be implemented on Ravenclaw > without multi session support being a standard feature. > > > > Please help . > > > > We are investing serious efforts in integrating Ravenclaw architecture in > our system with the belief that this is one of the best systems around. > > > > Regards > -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Sat Feb 8 18:26:26 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Sun, 9 Feb 2014 04:56:26 +0530 Subject: [Olympus developers 449]: Re: how to integrate olympus/ravenclaw with asterisk - multi user In-Reply-To: References: <006b01cf24ff$0b3fba60$21bf2f20$@imarketingadvantage.com> Message-ID: <008501cf2525$31bb4d50$9531e7f0$@imarketingadvantage.com> Thanks. Will definitely have a look. Is there someone who can help us/work with us in the development of this interface? We need the entire system - with dialog, NLG, backend, speech, telephony and Java interface with simultaneous multi user support. From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Sunday, February 9, 2014 1:16 AM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 446]: how to integrate olympus/ravenclaw with asterisk - multi user For a telephone interface, you could use a softswitch like freeswitch that supports multiple sessions (default is 1000) and protocols to communicate with an application. We have tried to provide a preliminary interface between the audioserver component and the freeswitch server. 
There's a tarball that you can start from. The freeswitch application writes incoming audio data to a named-pipe for every new session. This audio data is read by an instance of audioserver component and forwarded to the recognizer (pocketsphinx). http://www.speech.cs.cmu.edu/apappu/pubdl/fs-dialog-interface.tar.gz There is an example application (with example configuration) in the tarball. It also includes customized audioserver and the pocketsphinx engine. If you want to rest of the olympus components, you have to figure out how they will interact with the freeswitch application. On Sat, Feb 8, 2014 at 1:53 PM, Nitin Dhawan wrote: We want to make the system available on the pstn phone system. We have setup asterisk and can route the calls on extensions. The question is a. How to integrate this with the dialog system. This is regularly done with systems like Roomline, but the instructions are not very clear. Here is someone talking about it http://www.cs.cmu.edu/~jsherwan/research/asterisk.ppt but its not quite clear b. How to enable multiple people to connect simultaneously over telephony. I already asked how to enable multiple simultaneous sessions, but the answer looks quite complicated and non-standard. It's hard to believe how any commercial systems would be implemented on Ravenclaw without multi session support being a standard feature. Please help . We are investing serious efforts in integrating Ravenclaw architecture in our system with the belief that this is one of the best systems around. Regards -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Sun Feb 9 12:51:35 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Sun, 9 Feb 2014 23:21:35 +0530 Subject: [Olympus developers 450]: Re: how to enable multiple users simultaneously. In-Reply-To: References: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> Message-ID: <003301cf25bf$95537690$bffa63b0$@imarketingadvantage.com> What do you think about this approach http://www.ijitcs.com/volume%208_No_2/Vincenzo.pdf ? Principally it looks the same , but this is talking about a standard architecture which replaces Ravenclaw with client server modules From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Saturday, February 8, 2014 3:21 AM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 443]: how to enable multiple users simultaneously. The architecture natively doesn't support multiple users (I assume you meant simultaneous sessions). However, there were instances where you could run multiple instances of some of the components (Audio Server, DM, InteractionManager, Hub) corresponding to each session. Since those components are session specific. There are few components (ASR, NLU, NLG) which can run as services i.e., doesn't have to be instantiated. All of this customization will require understanding of how the components interact. The short answer is there is no easy recipe to customize the architecture for multiple sessions. There are few example systems that were built to handle certain aspects of the simultaneous sessions. 1. Multiparty Interaction: System interacting with more than user at a time in a single interactive session. repo: http://trac.speech.cs.cmu.edu/repos/olympus/branches/MultiOly1.0/ We instantiate dialog manager, hub for each user to keep track of their slot/value pairs (if different from other user). 
We don't have to instantiate the IM and AudioServer in this case because both users are listening on same channel (loud speakers) and talking on same channel (kinect microphone array). While talking to the users, we don't want the dialog manager to request for multiple nlg prompts for each user, instead we synchronize the system's response from DM to one of the users or both users. We do this using a new component called conversation manager that talks to interaction-manager on behalf of both the dialog managers. More details on this work in http://link.springer.com/chapter/10.1007/978-3-642-39330-3_12 2. Human robot teams: Single user interacting with multiple robots. We instantiate dialog manager, hub, robot backend for each robot. The dialog context for each robot is independent from each other. Here each robot's DM responds to the user based on a trigger phrase (e.g., RobotA listen, RobotB listen). repo: http://trac.speech.cs.cmu.edu/repos/teamtalk/branches/tt-olympus2/ On Fri, Feb 7, 2014 at 3:44 PM, Nitin Dhawan wrote: Hi This might be a stupid question, but how to enable multiple users to access the system at the same time. How does the system have to be deployed. Based on all the documentation we have read so far, one can start all the servers and get either a speech , or tty r java-tty interfact where one user can interact at a time. We also want to enable authentication of this system, using a single sign on with another system. But thats really the next step. Thanks Nitin Dhawan -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Thu Feb 13 11:18:42 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Thu, 13 Feb 2014 21:48:42 +0530 Subject: [Olympus developers 451]: Audio Server keeps crashing, the application also stops. Message-ID: <00fd01cf28d7$449eef20$cddccd60$@imarketingadvantage.com> We are facing this problem on all test machines that the Audio server keeps crashing after working for a few seconds. The whole application also stops. We noticed in the log that it is unable to connect to the PocketSphinx server for some reason. Any ideas? 
["E:/Raven/Olympus-Jan2014Setup\bin\release\AudioServer" -config AudioServer.cfg] Compiled on Jan 23 2014 at 12:36:34 Reading configuration file # This is a configuration file for the Audio_Server for MeetingLine # Sample rate sps = 16000 # Host and port number of Sphinx recognition engines engine_list = desktop:localhost:9990 [STD at 21:43:13.005 (33816891)] 1 engines: [STD at 21:43:13.005 (33816891)] desktop: host localhost port 9990 # Enable/disable logging log_full_session_input = 1 log = 1 verbosity = All # Configuration for the Voice Activity Detector vad = GMM vad_config = no_model=true, energy_threshold=7, sampling_rate=16000, fe_frame_rate=100, fe_window_length=0.0256, fe_fft_size=512, fe_num_filters=35, fe_lower_filter_freq=130, fe_upper_filter_freq=3800, fe_normalize_c0=1, prior_noise_level=8, prior_speech_level=14, snr_estimation_step=80000, window_width=20, log_frame_info=false [STD at 21:43:13.005 (33816891)] Starting GMM-based VAD [STD at 21:43:13.005 (33816891)] Sampling Rate: 16000 [STD at 21:43:13.005 (33816891)] Calling ad_open_sps (16000 Hz) Allocating 32 buffers of 512 samples each [STD at 21:43:13.142 (33817028)] Closing BufferedAD [STD at 21:43:13.142 (33817028)] Closing device Opened listener on port 11000 Accepted connection from 127.0.0.1 (socket 420) *** -> IDLE *** [ERR at 21:43:13.529 (33817415)] desktop decoding engine socket is invalid, try to connect Number total of devices device: 1 Recording from device: -1 ..\..\..\..\Libraries\libOlympusUtility\sock.cpp(316): Connection closed ..\..\..\..\Libraries\libOlympusUtility\sock.cpp(278): Retrying... ..\..\..\..\Libraries\libOlympusUtility\sock.cpp(304): Connected to server [DBG at 21:43:24.542 (33828428)] Sending message to desktop engine: get_acoustic_model ..\..\..\..\Libraries\libOlympusUtility\sock.cpp: connect: ERRNO= 10061(0000274d) [STD at 21:43:24.542 (33828428)] Message sent to 1/1 engines [DBG at 21:43:24.543 (33828429)] Got frame from engine desktop: {c audioconfig :notify_acoustic_model "E:/Raven/Olympus-Jan2014Setup/Resources/DecoderConfig/AcousticModels/wsj_al l_sc.cd_semi_5000" } Rest: [DBG at 21:43:24.543 (33828429)] Converting frame from engine desktop: {c audioconfig :notify_acoustic_model "E:/Raven/Olympus-Jan2014Setup/Resources/DecoderConfig/AcousticModels/wsj_al l_sc.cd_semi_5000" } [DBG at 21:43:24.551 (33828437)] Sending message to desktop engine: set_acoustic_model:E:\Raven\Olympus-Jan2014Setup\Resources\DecoderConfig\Aco usticModels\wsj_all_sc.cd_semi_5000 [STD at 21:43:24.551 (33828437)] Message sent to 1/1 engines [DBG at 21:43:25.451 (33829337)] Got frame from engine desktop: {c audioconfig :notify_acoustic_model "E:/Raven/Olympus-Jan2014Setup/Resources/DecoderConfig/AcousticModels/wsj_al l_sc.cd_semi_5000" } Rest: [DBG at 21:43:25.451 (33829337)] Converting frame from engine desktop: {c audioconfig :notify_acoustic_model "E:/Raven/Olympus-Jan2014Setup/Resources/DecoderConfig/AcousticModels/wsj_al l_sc.cd_semi_5000" } -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Thu Feb 13 11:32:15 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Thu, 13 Feb 2014 22:02:15 +0530 Subject: [Olympus developers 452]: skype client and jabber interface Message-ID: <010701cf28d9$2932c0c0$7b984240$@imarketingadvantage.com> We are trying to integrate our system with Skype and an interface on the website. I believe a Skype speech client component is available. Can someone point to where it can be found? 
Also a jabber interface has been added. Where to find this? And I understand we should be able to use this to add the web based chat interface using a generic Jabber web client. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Fri Feb 14 11:04:18 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Fri, 14 Feb 2014 21:34:18 +0530 Subject: [Olympus developers 453]: is this platform/community active ? Message-ID: <00b101cf299e$73712440$5a536cc0$@imarketingadvantage.com> We have been evaluating this platform for over a year. We were beginning to get comfortable with it. But since this is open source and we were not in the development team, we obviously look forward to community support. Of the last few posts that I have made, I only received response from one person twice in the four posts. Is this community active? More importantly is this platform alive? I had arrived at this platform after a lot of research in the open source world. But with lack of any support and dearth of documentation we are having to re-evaluate this. Or is it that this is a close community, where only CMU initiatives get support? Regards Nitin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Rudnicky at cs.cmu.edu Sat Feb 15 09:51:35 2014 From: Alex.Rudnicky at cs.cmu.edu (Alex Rudnicky) Date: Sat, 15 Feb 2014 09:51:35 -0500 Subject: [Olympus developers 454]: Re: is this platform/community active ? In-Reply-To: <00b101cf299e$73712440$5a536cc0$@imarketingadvantage.com> References: <00b101cf299e$73712440$5a536cc0$@imarketingadvantage.com> Message-ID: <11B6FA6BC9879A42BE5A6227C05F3E1F02DA497A3712@EXCH-MB-1.srv.cs.cmu.edu> Hi Nitin, There is indeed a community, and we communicate through this board. I'm sorry that you didn't get responses to some of your questions. Sometimes no one quite knows what to say and hopes that someone else will respond... :| I will look up your queries and see what to do. Alex From: Olympus-developers [mailto:olympus-developers-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Nitin Dhawan Sent: Friday, February 14, 2014 11:04 AM To: olympus-developers Subject: [Olympus developers 453]: is this platform/community active ? We have been evaluating this platform for over a year. We were beginning to get comfortable with it. But since this is open source and we were not in the development team, we obviously look forward to community support. Of the last few posts that I have made, I only received response from one person twice in the four posts. Is this community active? More importantly is this platform alive? I had arrived at this platform after a lot of research in the open source world. But with lack of any support and dearth of documentation we are having to re-evaluate this. Or is it that this is a close community, where only CMU initiatives get support? Regards Nitin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Sat Feb 15 11:03:14 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Sat, 15 Feb 2014 21:33:14 +0530 Subject: [Olympus developers 455]: Re: is this platform/community active ? 
In-Reply-To: <11B6FA6BC9879A42BE5A6227C05F3E1F02DA497A3712@EXCH-MB-1.srv.cs.cmu.edu> References: <00b101cf299e$73712440$5a536cc0$@imarketingadvantage.com> <11B6FA6BC9879A42BE5A6227C05F3E1F02DA497A3712@EXCH-MB-1.srv.cs.cmu.edu> Message-ID: <007201cf2a67$70bdaab0$52390010$@imarketingadvantage.com> Thanks Alex. That's quite reassuring. From: Alex Rudnicky [mailto:Alex.Rudnicky at cs.cmu.edu] Sent: Saturday, February 15, 2014 8:22 PM To: Nitin Dhawan; olympus-developers Subject: RE: [Olympus developers 453]: is this platform/community active ? Hi Nitin, There is indeed a community, and we communicate through this board. I'm sorry that you didn't get responses to some of your questions. Sometimes no one quite knows what to say and hopes that someone else will respond. K I will look up your queries and see what to do. Alex From: Olympus-developers [mailto:olympus-developers-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Nitin Dhawan Sent: Friday, February 14, 2014 11:04 AM To: olympus-developers Subject: [Olympus developers 453]: is this platform/community active ? We have been evaluating this platform for over a year. We were beginning to get comfortable with it. But since this is open source and we were not in the development team, we obviously look forward to community support. Of the last few posts that I have made, I only received response from one person twice in the four posts. Is this community active? More importantly is this platform alive? I had arrived at this platform after a lot of research in the open source world. But with lack of any support and dearth of documentation we are having to re-evaluate this. Or is it that this is a close community, where only CMU initiatives get support? Regards Nitin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Sat Feb 15 11:12:30 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Sat, 15 Feb 2014 21:42:30 +0530 Subject: [Olympus developers 456]: Re: Audio Server keeps crashing, the application also stops. Message-ID: <007701cf2a68$bc1f0a70$345d1f50$@imarketingadvantage.com> We have made some progress on this. The problem is not with Sphinx. The "decoding server is invalid" is a standard message before the sphinx server is found and connected to. But the audio server keeps crashing intermittently. The windows even viewer Log (Event ID 1000) is Faulting application name: AudioServer.exe, version: 0.0.0.0, time stamp: 0x52ff5681 Faulting module name: MSVCR90.dll, version: 9.0.30729.8387, time stamp: 0x51ea24a5 Exception code: 0xc0000417 Fault offset: 0x000369d0 Faulting process id: 0x4534 Faulting application start time: 0x01cf2a4c7399bb42 Faulting application path: E:\Raven\Olympus-Jan2014Setup\Bin\x86-nt\AudioServer.exe Faulting module path: C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.30729.8387_non e_5094ca96bcb6b2bb\MSVCR90.dll Report Id: b1c23295-963f-11e3-82e7-ec55f9dd9fce Faulting package full name: Faulting package-relative application ID: We are running this on Windows 7 and Windows 8 environments. It seems like a memory creep issue. Now we are trying to recompile this on Visual Studio 13 with 64 bit , and hope that this works. From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com] Sent: Thursday, February 13, 2014 9:49 PM To: olympus-developers (olympus-developers at cs.cmu.edu) Subject: Audio Server keeps crashing, the application also stops. 
We are facing this problem on all test machines that the Audio server keeps crashing after working for a few seconds. The whole application also stops. We noticed in the log that it is unable to connect to the PocketSphinx server for some reason. Any ideas? ["E:/Raven/Olympus-Jan2014Setup\bin\release\AudioServer" -config AudioServer.cfg] Compiled on Jan 23 2014 at 12:36:34 Reading configuration file # This is a configuration file for the Audio_Server for MeetingLine # Sample rate sps = 16000 # Host and port number of Sphinx recognition engines engine_list = desktop:localhost:9990 [STD at 21:43:13.005 (33816891)] 1 engines: [STD at 21:43:13.005 (33816891)] desktop: host localhost port 9990 # Enable/disable logging log_full_session_input = 1 log = 1 verbosity = All # Configuration for the Voice Activity Detector vad = GMM vad_config = no_model=true, energy_threshold=7, sampling_rate=16000, fe_frame_rate=100, fe_window_length=0.0256, fe_fft_size=512, fe_num_filters=35, fe_lower_filter_freq=130, fe_upper_filter_freq=3800, fe_normalize_c0=1, prior_noise_level=8, prior_speech_level=14, snr_estimation_step=80000, window_width=20, log_frame_info=false [STD at 21:43:13.005 (33816891)] Starting GMM-based VAD [STD at 21:43:13.005 (33816891)] Sampling Rate: 16000 [STD at 21:43:13.005 (33816891)] Calling ad_open_sps (16000 Hz) Allocating 32 buffers of 512 samples each [STD at 21:43:13.142 (33817028)] Closing BufferedAD [STD at 21:43:13.142 (33817028)] Closing device Opened listener on port 11000 Accepted connection from 127.0.0.1 (socket 420) *** -> IDLE *** [ERR at 21:43:13.529 (33817415)] desktop decoding engine socket is invalid, try to connect Number total of devices device: 1 Recording from device: -1 ..\..\..\..\Libraries\libOlympusUtility\sock.cpp(316): Connection closed ..\..\..\..\Libraries\libOlympusUtility\sock.cpp(278): Retrying... ..\..\..\..\Libraries\libOlympusUtility\sock.cpp(304): Connected to server [DBG at 21:43:24.542 (33828428)] Sending message to desktop engine: get_acoustic_model ..\..\..\..\Libraries\libOlympusUtility\sock.cpp: connect: ERRNO= 10061(0000274d) [STD at 21:43:24.542 (33828428)] Message sent to 1/1 engines [DBG at 21:43:24.543 (33828429)] Got frame from engine desktop: {c audioconfig :notify_acoustic_model "E:/Raven/Olympus-Jan2014Setup/Resources/DecoderConfig/AcousticModels/wsj_al l_sc.cd_semi_5000" } Rest: [DBG at 21:43:24.543 (33828429)] Converting frame from engine desktop: {c audioconfig :notify_acoustic_model "E:/Raven/Olympus-Jan2014Setup/Resources/DecoderConfig/AcousticModels/wsj_al l_sc.cd_semi_5000" } [DBG at 21:43:24.551 (33828437)] Sending message to desktop engine: set_acoustic_model:E:\Raven\Olympus-Jan2014Setup\Resources\DecoderConfig\Aco usticModels\wsj_all_sc.cd_semi_5000 [STD at 21:43:24.551 (33828437)] Message sent to 1/1 engines [DBG at 21:43:25.451 (33829337)] Got frame from engine desktop: {c audioconfig :notify_acoustic_model "E:/Raven/Olympus-Jan2014Setup/Resources/DecoderConfig/AcousticModels/wsj_al l_sc.cd_semi_5000" } Rest: [DBG at 21:43:25.451 (33829337)] Converting frame from engine desktop: {c audioconfig :notify_acoustic_model "E:/Raven/Olympus-Jan2014Setup/Resources/DecoderConfig/AcousticModels/wsj_al l_sc.cd_semi_5000" } -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Tue Feb 18 09:08:28 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Tue, 18 Feb 2014 19:38:28 +0530 Subject: [Olympus developers 457]: documentation on Skyper? 
Message-ID: <003201cf2cb2$e8be2510$ba3a6f30$@imarketingadvantage.com> We found skyper within the main code. But we are not able to configure it properly. Can someone suggest the right way to configure or if there is any documentation. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Tue Feb 18 13:47:31 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Wed, 19 Feb 2014 00:17:31 +0530 Subject: [Olympus developers 458]: Re: documentation on Skyper? Message-ID: <005a01cf2cd9$e3e8bd80$abba3880$@imarketingadvantage.com> I am able to start Skyper at the start and see it in process monitors. But it does not seem to start real server for some reason. This is what I have done so far. a. I add [Skyper] to Servers RoomLine-hub.pgm b. To startlist_sapi.config, I add PROCESS: "$OLYMPUS_BIN\skyper" PROCESS_MONITOR_ARGS: --start PROCESS_TITLE: SKYPER When I start the skyper.exe independently - it seems to be connecting with skype just fine. But when I call in from another skype account into this, it does not start a session, and it crashes. It is not connected to the hub instance also like this. From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com] Sent: Tuesday, February 18, 2014 7:38 PM To: olympus-developers (olympus-developers at cs.cmu.edu) Subject: documentation on Skyper? We found skyper within the main code. But we are not able to configure it properly. Can someone suggest the right way to configure or if there is any documentation. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Wed Feb 19 06:34:57 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Wed, 19 Feb 2014 17:04:57 +0530 Subject: [Olympus developers 459]: Re: is this platform/community active ? In-Reply-To: <11B6FA6BC9879A42BE5A6227C05F3E1F02DA497A3712@EXCH-MB-1.srv.cs.cmu.edu> References: <00b101cf299e$73712440$5a536cc0$@imarketingadvantage.com> <11B6FA6BC9879A42BE5A6227C05F3E1F02DA497A3712@EXCH-MB-1.srv.cs.cmu.edu> Message-ID: <017e01cf2d66$a1306050$e39120f0$@imarketingadvantage.com> Hi Alex Thanks for your offer to help again. I know this is something you would all do just to support the community as goodwill. And it is surely not a very active community. We have looked at how the whole system has evolved at CMU and know that this is still one of the most promising open source systems to work with. We are in a critical phase. We are trying to build a system, but with absence of any documentation and support, we are really struggling and it's a bit frustrating. I don't know how, but is it possible to get some support on this for the next few days. We are in the process of building a very interesting pilot system on Healthcare using advanced NLP and complex QA systems and the dialog system is an important cog in the wheel. The problems we are facing are a. How to get this to work in a more stable fashion. It seems that on Windows 7/8 it is neither stable , nor scalable. We would like to explore making different logical server work on different physical servers. Some of them on Linux, where they are likely to be more stable. b. Get it to work with Telephony , Skype and Web c. Ensure that we are able to make it work for multiple simultaneous sessions. 
We are trying to get ourselves upto speed with the Galaxy Documentation http://communicator.sourceforge.net/sites/MITRE/distributions/GalaxyCommunic ator-4.0/docs/manual/ , but this may not answer all the questions and there are practical challenges in putting everything together. Regards Nitin From: Alex Rudnicky [mailto:Alex.Rudnicky at cs.cmu.edu] Sent: Saturday, February 15, 2014 8:22 PM To: Nitin Dhawan; olympus-developers Subject: RE: [Olympus developers 453]: is this platform/community active ? Hi Nitin, There is indeed a community, and we communicate through this board. I'm sorry that you didn't get responses to some of your questions. Sometimes no one quite knows what to say and hopes that someone else will respond. K I will look up your queries and see what to do. Alex From: Olympus-developers [mailto:olympus-developers-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Nitin Dhawan Sent: Friday, February 14, 2014 11:04 AM To: olympus-developers Subject: [Olympus developers 453]: is this platform/community active ? We have been evaluating this platform for over a year. We were beginning to get comfortable with it. But since this is open source and we were not in the development team, we obviously look forward to community support. Of the last few posts that I have made, I only received response from one person twice in the four posts. Is this community active? More importantly is this platform alive? I had arrived at this platform after a lot of research in the open source world. But with lack of any support and dearth of documentation we are having to re-evaluate this. Or is it that this is a close community, where only CMU initiatives get support? Regards Nitin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Wed Feb 19 08:08:04 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Wed, 19 Feb 2014 18:38:04 +0530 Subject: [Olympus developers 460]: Re: documentation on Skyper? Message-ID: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> >From what we understand now , I believe we have to add Skype's device id to the Audio Server configuration. And we will need more than one audio server to take care of a. Skype b. Telephony c. Local Microphone Input Are we on the right track? From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com] Sent: Wednesday, February 19, 2014 12:18 AM To: olympus-developers (olympus-developers at cs.cmu.edu) Subject: RE: documentation on Skyper? I am able to start Skyper at the start and see it in process monitors. But it does not seem to start real server for some reason. This is what I have done so far. a. I add [Skyper] to Servers RoomLine-hub.pgm b. To startlist_sapi.config, I add PROCESS: "$OLYMPUS_BIN\skyper" PROCESS_MONITOR_ARGS: --start PROCESS_TITLE: SKYPER When I start the skyper.exe independently - it seems to be connecting with skype just fine. But when I call in from another skype account into this, it does not start a session, and it crashes. It is not connected to the hub instance also like this. From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com] Sent: Tuesday, February 18, 2014 7:38 PM To: olympus-developers (olympus-developers at cs.cmu.edu) Subject: documentation on Skyper? We found skyper within the main code. But we are not able to configure it properly. Can someone suggest the right way to configure or if there is any documentation. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... 
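If the one-AudioServer-per-channel route described in the message above were taken, the launch side could look roughly like the sketch below. The -config flag matches the AudioServer invocation shown in the logs earlier in this thread; the per-channel config file names, and the idea that each config selects its own capture device or engine port, are hypothetical.

# Sketch: start one AudioServer per input channel (Skype, telephony, local mic),
# each with its own configuration file.  The executable path follows the log
# earlier in this thread; the per-channel config names are hypothetical.
import subprocess

AUDIO_SERVER = r"E:\Raven\Olympus-Jan2014Setup\bin\release\AudioServer"

CHANNELS = {
    "skype":     "AudioServer.skype.cfg",       # hypothetical config names
    "telephony": "AudioServer.telephony.cfg",
    "local_mic": "AudioServer.mic.cfg",
}

def launch_audio_servers():
    """Spawn one AudioServer process per channel and return the handles."""
    procs = {}
    for name, cfg in CHANNELS.items():
        procs[name] = subprocess.Popen([AUDIO_SERVER, "-config", cfg])
    return procs

if __name__ == "__main__":
    for name, proc in launch_audio_servers().items():
        print(f"{name}: pid {proc.pid}")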
URL: From aasishp at gmail.com Wed Feb 19 08:13:48 2014 From: aasishp at gmail.com (Aasish Pappu) Date: Wed, 19 Feb 2014 08:13:48 -0500 Subject: [Olympus developers 461]: Re: documentation on Skyper? In-Reply-To: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> References: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> Message-ID: That's one way to look at the problem. I would think of something more scalable rather than device ids (meaning multiple physical devices). You could try writing the audio data to a network socket and forward it to the audio server. You will have to convert the audio server to accept network stream as opposed to reading frm device. There is an example of such instance in this branch. The audioserver reads the data from a socket and there's an audio client that feeds this data. http://trac.speech.cs.cmu.edu/repos/olympus/branches/networkoly/ On Wed, Feb 19, 2014 at 8:08 AM, Nitin Dhawan wrote: > From what we understand now , I believe we have to add Skype's device id > to the Audio Server configuration. And we will need more than one audio > server to take care of > > a. Skype > > b. Telephony > > c. Local Microphone Input > > > > Are we on the right track? > > > > *From:* Nitin Dhawan [mailto:nitin at imarketingadvantage.com] > *Sent:* Wednesday, February 19, 2014 12:18 AM > *To:* olympus-developers (olympus-developers at cs.cmu.edu) > *Subject:* RE: documentation on Skyper? > > > > I am able to start Skyper at the start and see it in process monitors. But > it does not seem to start real server for some reason. > > > > This is what I have done so far. > > a. I add [Skyper] to Servers RoomLine-hub.pgm > > b. To startlist_sapi.config, I add > > PROCESS: "$OLYMPUS_BIN\skyper" > > PROCESS_MONITOR_ARGS: --start > > PROCESS_TITLE: SKYPER > > > > When I start the skyper.exe independently - it seems to be connecting > with skype just fine. But when I call in from another skype account into > this, it does not start a session, and it crashes. It is not connected to > the hub instance also like this. > > > > *From:* Nitin Dhawan [mailto:nitin at imarketingadvantage.com] > > *Sent:* Tuesday, February 18, 2014 7:38 PM > *To:* olympus-developers (olympus-developers at cs.cmu.edu) > *Subject:* documentation on Skyper? > > > > We found skyper within the main code. But we are not able to configure it > properly. Can someone suggest the right way to configure or if there is any > documentation. > > > > Thanks in advance. > -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From aasishp at gmail.com Wed Feb 19 08:08:50 2014 From: aasishp at gmail.com (Aasish Pappu) Date: Wed, 19 Feb 2014 08:08:50 -0500 Subject: [Olympus developers 462]: Re: how to enable multiple users simultaneously. In-Reply-To: <003301cf25bf$95537690$bffa63b0$@imarketingadvantage.com> References: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> <003301cf25bf$95537690$bffa63b0$@imarketingadvantage.com> Message-ID: It looks good and worth trying in that direction. On Sun, Feb 9, 2014 at 12:51 PM, Nitin Dhawan wrote: > What do you think about this approach > > http://www.ijitcs.com/volume%208_No_2/Vincenzo.pdf ? 
> > > > Principally it looks the same , but this is talking about a standard > architecture which replaces Ravenclaw with client server modules > > > > *From:* Aasish Pappu [mailto:aasishp at gmail.com] > *Sent:* Saturday, February 8, 2014 3:21 AM > *To:* Nitin Dhawan > *Cc:* olympus-developers > *Subject:* Re: [Olympus developers 443]: how to enable multiple users > simultaneously. > > > > The architecture natively doesn't support multiple users (I assume you > meant simultaneous sessions). However, there were instances where you > could run multiple instances of some of the components (Audio Server, DM, > InteractionManager, Hub) corresponding to each session. Since those > components are session specific. There are few components (ASR, NLU, NLG) > which can run as services i.e., doesn't have to be instantiated. All of > this > > customization will require understanding of how the components interact. > > > > The short answer is there is no easy recipe to customize the architecture > for multiple sessions. > > > > There are few example systems that were built to handle certain aspects of > the simultaneous sessions. > > > > 1. Multiparty Interaction: System interacting with more than user at a > time in a single interactive session. > > > > repo: http://trac.speech.cs.cmu.edu/repos/olympus/branches/MultiOly1.0/ > > > > We instantiate dialog manager, hub for each user to keep track of their > slot/value pairs (if different from other user). We don't have to > instantiate the IM and AudioServer in this case because both users are > listening on same channel (loud speakers) and talking on same channel > (kinect microphone array). While talking to the users, we don't want the > dialog manager to request for multiple nlg prompts for each user, instead > we synchronize the system's response from DM to one of the users or both > users. We do this using a new component called conversation manager that > talks to interaction-manager on behalf of both the dialog managers. More > details on this work in > http://link.springer.com/chapter/10.1007/978-3-642-39330-3_12 > > > > 2. Human robot teams: Single user interacting with multiple robots. > > > > We instantiate dialog manager, hub, robot backend for each robot. The > dialog context for each robot is independent from each other. Here each > robot's DM responds to the user based on a trigger phrase (e.g., RobotA > listen, RobotB listen). > > > > repo: http://trac.speech.cs.cmu.edu/repos/teamtalk/branches/tt-olympus2/ > > > > > > > > > > On Fri, Feb 7, 2014 at 3:44 PM, Nitin Dhawan < > nitin at imarketingadvantage.com> wrote: > > Hi > > > > This might be a stupid question, but how to enable multiple users to > access the system at the same time. How does the system have to be deployed. > > > > Based on all the documentation we have read so far, one can start all the > servers and get either a speech , or tty r java-tty interfact where one > user can interact at a time. > > > > We also want to enable authentication of this system, using a single sign > on with another system. But thats really the next step. > > > > Thanks > > Nitin Dhawan > > > > > > -- > Aasish Pappu > -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From aasishp at gmail.com Wed Feb 19 08:17:57 2014 From: aasishp at gmail.com (Aasish Pappu) Date: Wed, 19 Feb 2014 08:17:57 -0500 Subject: [Olympus developers 463]: Re: documentation on Skyper? 
In-Reply-To: References: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> Message-ID: I still think you may want to pursue in the direction of freeswitch integrated into olympus. Mainly because freeswitch provides all the telephony infrastructure with a slew of protocols (googletalk, SIP, skype, regular telephone). On Wed, Feb 19, 2014 at 8:13 AM, Aasish Pappu wrote: > That's one way to look at the problem. I would think of something more > scalable rather than device ids (meaning multiple physical devices). You > could try writing the audio data to a network socket and forward it to the > audio server. You will have to convert the audio server to accept network > stream as opposed to reading frm device. There is an example of such > instance in this branch. > > The audioserver reads the data from a socket and there's an audio client > that feeds this data. > > > http://trac.speech.cs.cmu.edu/repos/olympus/branches/networkoly/ > > > > On Wed, Feb 19, 2014 at 8:08 AM, Nitin Dhawan < > nitin at imarketingadvantage.com> wrote: > >> From what we understand now , I believe we have to add Skype's device id >> to the Audio Server configuration. And we will need more than one audio >> server to take care of >> >> a. Skype >> >> b. Telephony >> >> c. Local Microphone Input >> >> >> >> Are we on the right track? >> >> >> >> *From:* Nitin Dhawan [mailto:nitin at imarketingadvantage.com] >> *Sent:* Wednesday, February 19, 2014 12:18 AM >> *To:* olympus-developers (olympus-developers at cs.cmu.edu) >> *Subject:* RE: documentation on Skyper? >> >> >> >> I am able to start Skyper at the start and see it in process monitors. >> But it does not seem to start real server for some reason. >> >> >> >> This is what I have done so far. >> >> a. I add [Skyper] to Servers RoomLine-hub.pgm >> >> b. To startlist_sapi.config, I add >> >> PROCESS: "$OLYMPUS_BIN\skyper" >> >> PROCESS_MONITOR_ARGS: --start >> >> PROCESS_TITLE: SKYPER >> >> >> >> When I start the skyper.exe independently - it seems to be connecting >> with skype just fine. But when I call in from another skype account into >> this, it does not start a session, and it crashes. It is not connected to >> the hub instance also like this. >> >> >> >> *From:* Nitin Dhawan [mailto:nitin at imarketingadvantage.com] >> >> *Sent:* Tuesday, February 18, 2014 7:38 PM >> *To:* olympus-developers (olympus-developers at cs.cmu.edu) >> *Subject:* documentation on Skyper? >> >> >> >> We found skyper within the main code. But we are not able to configure it >> properly. Can someone suggest the right way to configure or if there is any >> documentation. >> >> >> >> Thanks in advance. >> > > > > -- > Aasish Pappu > > -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Wed Feb 19 08:21:28 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Wed, 19 Feb 2014 18:51:28 +0530 Subject: [Olympus developers 464]: Re: how to enable multiple users simultaneously. In-Reply-To: References: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> <003301cf25bf$95537690$bffa63b0$@imarketingadvantage.com> Message-ID: <01ae01cf2d75$81ec8a20$85c59e60$@imarketingadvantage.com> >From what we have seen recently, Galaxy - Builtin server in the hub also has provision for session management. Is that not good enough to build on further? 
From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Wednesday, February 19, 2014 6:39 PM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 450]: Re: how to enable multiple users simultaneously. It looks good and worth trying in that direction. On Sun, Feb 9, 2014 at 12:51 PM, Nitin Dhawan wrote: What do you think about this approach http://www.ijitcs.com/volume%208_No_2/Vincenzo.pdf ? Principally it looks the same , but this is talking about a standard architecture which replaces Ravenclaw with client server modules From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Saturday, February 8, 2014 3:21 AM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 443]: how to enable multiple users simultaneously. The architecture natively doesn't support multiple users (I assume you meant simultaneous sessions). However, there were instances where you could run multiple instances of some of the components (Audio Server, DM, InteractionManager, Hub) corresponding to each session. Since those components are session specific. There are few components (ASR, NLU, NLG) which can run as services i.e., doesn't have to be instantiated. All of this customization will require understanding of how the components interact. The short answer is there is no easy recipe to customize the architecture for multiple sessions. There are few example systems that were built to handle certain aspects of the simultaneous sessions. 1. Multiparty Interaction: System interacting with more than user at a time in a single interactive session. repo: http://trac.speech.cs.cmu.edu/repos/olympus/branches/MultiOly1.0/ We instantiate dialog manager, hub for each user to keep track of their slot/value pairs (if different from other user). We don't have to instantiate the IM and AudioServer in this case because both users are listening on same channel (loud speakers) and talking on same channel (kinect microphone array). While talking to the users, we don't want the dialog manager to request for multiple nlg prompts for each user, instead we synchronize the system's response from DM to one of the users or both users. We do this using a new component called conversation manager that talks to interaction-manager on behalf of both the dialog managers. More details on this work in http://link.springer.com/chapter/10.1007/978-3-642-39330-3_12 2. Human robot teams: Single user interacting with multiple robots. We instantiate dialog manager, hub, robot backend for each robot. The dialog context for each robot is independent from each other. Here each robot's DM responds to the user based on a trigger phrase (e.g., RobotA listen, RobotB listen). repo: http://trac.speech.cs.cmu.edu/repos/teamtalk/branches/tt-olympus2/ On Fri, Feb 7, 2014 at 3:44 PM, Nitin Dhawan wrote: Hi This might be a stupid question, but how to enable multiple users to access the system at the same time. How does the system have to be deployed. Based on all the documentation we have read so far, one can start all the servers and get either a speech , or tty r java-tty interfact where one user can interact at a time. We also want to enable authentication of this system, using a single sign on with another system. But thats really the next step. Thanks Nitin Dhawan -- Aasish Pappu -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... 
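The per-session layout discussed throughout this thread -- one Hub, dialog manager, InteractionManager and AudioServer per caller, with ASR/NLU/NLG left as shared services -- can be pictured with a sketch like the one below. The executable names, command-line flags, and port arithmetic are placeholders for illustration; this is not the hub's built-in session mechanism, and wiring each spawned set into a .pgm program file is the part that still needs the customization described earlier in the thread.

# Sketch of a per-session spawner: session-specific components get one
# instance per caller, stateless services stay shared.  Executable names,
# flags, and ports are illustrative assumptions only.
import itertools
import subprocess

SHARED_SERVICES = {"asr": 9990, "nlu": 9991, "nlg": 9992}   # assumed ports
PER_SESSION = ["Hub", "DialogManager", "InteractionManager", "AudioServer"]

_session_ids = itertools.count(1)

def start_session(base_port=20000):
    """Spawn one set of session-specific components on a private port range."""
    sid = next(_session_ids)
    offset = base_port + 100 * sid            # keep each session's ports apart
    procs = {}
    for i, component in enumerate(PER_SESSION):
        procs[component] = subprocess.Popen([
            component,                         # placeholder executable name
            "-port", str(offset + i),          # placeholder flags
            "-asr", f"localhost:{SHARED_SERVICES['asr']}",
        ])
    return {"session_id": sid, "processes": procs}

def end_session(session):
    """Tear down everything that belongs to one caller."""
    for proc in session["processes"].values():
        proc.terminate()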
URL: From nitin at imarketingadvantage.com Wed Feb 19 08:40:43 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Wed, 19 Feb 2014 19:10:43 +0530 Subject: [Olympus developers 465]: Re: documentation on Skyper? In-Reply-To: References: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> Message-ID: <01bd01cf2d78$30e7d960$92b78c20$@imarketingadvantage.com> Thanks. This looks like a good approach. Any ideas where to make the changes in Skyper to get it to write to and read from the network socket ? Regards From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Wednesday, February 19, 2014 6:44 PM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 460]: Re: documentation on Skyper? That's one way to look at the problem. I would think of something more scalable rather than device ids (meaning multiple physical devices). You could try writing the audio data to a network socket and forward it to the audio server. You will have to convert the audio server to accept network stream as opposed to reading frm device. There is an example of such instance in this branch. The audioserver reads the data from a socket and there's an audio client that feeds this data. http://trac.speech.cs.cmu.edu/repos/olympus/branches/networkoly/ On Wed, Feb 19, 2014 at 8:08 AM, Nitin Dhawan wrote: >From what we understand now , I believe we have to add Skype's device id to the Audio Server configuration. And we will need more than one audio server to take care of a. Skype b. Telephony c. Local Microphone Input Are we on the right track? From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com] Sent: Wednesday, February 19, 2014 12:18 AM To: olympus-developers (olympus-developers at cs.cmu.edu) Subject: RE: documentation on Skyper? I am able to start Skyper at the start and see it in process monitors. But it does not seem to start real server for some reason. This is what I have done so far. a. I add [Skyper] to Servers RoomLine-hub.pgm b. To startlist_sapi.config, I add PROCESS: "$OLYMPUS_BIN\skyper" PROCESS_MONITOR_ARGS: --start PROCESS_TITLE: SKYPER When I start the skyper.exe independently - it seems to be connecting with skype just fine. But when I call in from another skype account into this, it does not start a session, and it crashes. It is not connected to the hub instance also like this. From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com] Sent: Tuesday, February 18, 2014 7:38 PM To: olympus-developers (olympus-developers at cs.cmu.edu) Subject: documentation on Skyper? We found skyper within the main code. But we are not able to configure it properly. Can someone suggest the right way to configure or if there is any documentation. Thanks in advance. -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Wed Feb 19 08:48:25 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Wed, 19 Feb 2014 19:18:25 +0530 Subject: [Olympus developers 466]: Building Olympus on other platforms Message-ID: <01c201cf2d79$4b100eb0$e1302c10$@imarketingadvantage.com> We keep getting Audioserver and sometimes Apollo to crash, when we compile it on VS 2010 and VS2013 on Windows 7 and Windows 8. Will http://trac.speech.cs.cmu.edu/repos/olympus/branches/vs2010/ work better? The Kalliope does not work on 64 bit as it is using 32 bit SAPI. But even mixing 32 bit Kalliope (bundled with 32 bit LibOlympus library) also does not work with other 64 bit components. 
Any guidance on making this work on other platforms. Especially, on making some of the servers to work on Linux. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Wed Feb 19 08:51:46 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Wed, 19 Feb 2014 19:21:46 +0530 Subject: [Olympus developers 467]: Re: documentation on Skyper? In-Reply-To: References: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> Message-ID: <01dd01cf2d79$bde24890$39a6d9b0$@imarketingadvantage.com> Is there any guidance/headstart in this direction? From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Wednesday, February 19, 2014 6:48 PM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 460]: Re: documentation on Skyper? I still think you may want to pursue in the direction of freeswitch integrated into olympus. Mainly because freeswitch provides all the telephony infrastructure with a slew of protocols (googletalk, SIP, skype, regular telephone). On Wed, Feb 19, 2014 at 8:13 AM, Aasish Pappu wrote: That's one way to look at the problem. I would think of something more scalable rather than device ids (meaning multiple physical devices). You could try writing the audio data to a network socket and forward it to the audio server. You will have to convert the audio server to accept network stream as opposed to reading frm device. There is an example of such instance in this branch. The audioserver reads the data from a socket and there's an audio client that feeds this data. http://trac.speech.cs.cmu.edu/repos/olympus/branches/networkoly/ On Wed, Feb 19, 2014 at 8:08 AM, Nitin Dhawan wrote: >From what we understand now , I believe we have to add Skype's device id to the Audio Server configuration. And we will need more than one audio server to take care of a. Skype b. Telephony c. Local Microphone Input Are we on the right track? From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com] Sent: Wednesday, February 19, 2014 12:18 AM To: olympus-developers (olympus-developers at cs.cmu.edu) Subject: RE: documentation on Skyper? I am able to start Skyper at the start and see it in process monitors. But it does not seem to start real server for some reason. This is what I have done so far. a. I add [Skyper] to Servers RoomLine-hub.pgm b. To startlist_sapi.config, I add PROCESS: "$OLYMPUS_BIN\skyper" PROCESS_MONITOR_ARGS: --start PROCESS_TITLE: SKYPER When I start the skyper.exe independently - it seems to be connecting with skype just fine. But when I call in from another skype account into this, it does not start a session, and it crashes. It is not connected to the hub instance also like this. From: Nitin Dhawan [mailto:nitin at imarketingadvantage.com] Sent: Tuesday, February 18, 2014 7:38 PM To: olympus-developers (olympus-developers at cs.cmu.edu) Subject: documentation on Skyper? We found skyper within the main code. But we are not able to configure it properly. Can someone suggest the right way to configure or if there is any documentation. Thanks in advance. -- Aasish Pappu -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From joseg at cs.cmu.edu Wed Feb 19 13:28:10 2014 From: joseg at cs.cmu.edu (=?ISO-8859-1?Q?Jos=E9_P=2E_Gonz=E1lez=2DBrenes?=) Date: Wed, 19 Feb 2014 13:28:10 -0500 Subject: [Olympus developers 468]: Re: documentation on Skyper? 
In-Reply-To: <01dd01cf2d79$bde24890$39a6d9b0$@imarketingadvantage.com>
References: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> <01dd01cf2d79$bde24890$39a6d9b0$@imarketingadvantage.com>
Message-ID:

I worked on a VOIP implementation a long time ago. (There might be some code/branch in the SVN.) I used a third-party driver that creates a virtual soundcard that writes/reads from a network socket. Just a thought.

-- 
José P. González-Brenes
www.josepablogonzalez.com

On Wed, Feb 19, 2014 at 8:51 AM, Nitin Dhawan wrote:
> Is there any guidance/headstart in this direction?

-------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From nitin at imarketingadvantage.com Thu Feb 20 11:16:41 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Thu, 20 Feb 2014 21:46:41 +0530
Subject: [Olympus developers 469]: Re: documentation on Skyper?
In-Reply-To:
References: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> <01dd01cf2d79$bde24890$39a6d9b0$@imarketingadvantage.com>
Message-ID: <010701cf2e57$27fba770$77f2f650$@imarketingadvantage.com>

Thanks José. I have downloaded the VoIP branch. We are looking at that along with the networkoly branch that Aasish suggested.

We looked at the Galaxy architecture documentation, and now I am thinking: isn't Skyper an independent audio device that should talk to Kalliope and receive messages back from Kalliope? Do I have to integrate it with the standard audio server? Is there any guidance on how to implement Skyper?

From: josepablog at gmail.com [mailto:josepablog at gmail.com] On Behalf Of José P. González-Brenes
Sent: Wednesday, February 19, 2014 11:58 PM
To: Nitin Dhawan
Cc: Aasish Pappu; gayatri at imarketingadvantage.com; amit at imarketingadvantage.com; olympus-developers
Subject: Re: [Olympus developers 467]: Re: documentation on Skyper?

I worked on a VOIP implementation a long time ago. (There might be some code/branch in the SVN.) I used a third-party driver that creates a virtual soundcard that writes/reads from a network socket. Just a thought.

-- 
José P. González-Brenes
www.josepablogonzalez.com
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nitin at imarketingadvantage.com Thu Feb 20 14:29:25 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Fri, 21 Feb 2014 00:59:25 +0530
Subject: [Olympus developers 470]: Re: documentation on Skyper?
References: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com> <01dd01cf2d79$bde24890$39a6d9b0$@imarketingadvantage.com>
Message-ID: <013101cf2e72$14125950$3c370bf0$@imarketingadvantage.com>

Not Kalliope - Sphinx. Audio device = audio client.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
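For what it's worth, here is a minimal sketch of the kind of audio client Aasish describes earlier in the thread: something that captures audio (from the local mic, or from Skype via a virtual soundcard) and writes it to a network socket that the AudioServer side reads from. It is generic POSIX/C++ and is not taken from the networkoly branch: the port number, the raw 16 kHz 16-bit mono PCM framing, and the absence of startutt/endutt markers are all assumptions, so the actual wire protocol should be checked against the branch itself.

    // audio_client_sketch.cpp - illustrative only: streams raw PCM from stdin
    // to a TCP socket that an AudioServer-side reader is assumed to listen on.
    // The real networkoly protocol (framing, startutt/endutt) is not reproduced here.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    int main(int argc, char** argv) {
        const char* host = (argc > 1) ? argv[1] : "127.0.0.1";
        int port = (argc > 2) ? std::atoi(argv[2]) : 9000;   // assumed port

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { std::perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(static_cast<uint16_t>(port));
        if (inet_pton(AF_INET, host, &addr.sin_addr) != 1) {
            std::fprintf(stderr, "bad host address\n");
            return 1;
        }
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
            std::perror("connect");
            return 1;
        }

        // 10 ms frames of 16 kHz, 16-bit mono PCM read from stdin.
        const size_t kFrameSamples = 160;
        std::vector<int16_t> frame(kFrameSamples);
        size_t n;
        while ((n = std::fread(frame.data(), sizeof(int16_t), kFrameSamples, stdin)) > 0) {
            const char* p = reinterpret_cast<const char*>(frame.data());
            size_t left = n * sizeof(int16_t);
            while (left > 0) {                      // send() may write partially
                ssize_t sent = send(fd, p, left, 0);
                if (sent <= 0) { std::perror("send"); close(fd); return 1; }
                p += sent;
                left -= static_cast<size_t>(sent);
            }
        }
        close(fd);
        return 0;
    }

On Linux it can be fed with, for example:

    arecord -f S16_LE -r 16000 -c 1 -t raw | ./audio_client 127.0.0.1 9000

On Windows the stdin path would be replaced by whatever capture path Skyper or the virtual soundcard provides.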
From nitin at imarketingadvantage.com Thu Feb 20 15:15:33 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Fri, 21 Feb 2014 01:45:33 +0530
Subject: [Olympus developers 471]: is there a web based chat interface?
Message-ID: <000601cf2e78$855c0fb0$90142f10$@imarketingadvantage.com>

Do any of the branches or example systems have a web-based chat interface? We are already trying to extend the JavaTTY into a client-server setup with an applet front end, but if there is some work already done on this, it will be a big help.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nitin at imarketingadvantage.com Fri Feb 21 07:35:42 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Fri, 21 Feb 2014 18:05:42 +0530
Subject: [Olympus developers 472]: Re: how to enable multiple users simultaneously.
In-Reply-To:
References: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com>
Message-ID: <012f01cf2f01$70cc8c80$5265a580$@imarketingadvantage.com>

Hi, I was looking at the session management aspects of the Galaxy architecture:
http://communicator.sourceforge.net/sites/MITRE/distributions/GalaxyCommunicator/docs/manual/advanced/session.html

"The Galaxy Communicator infrastructure is designed to support multiple simultaneous sessions. By session we mean an individual conversation with a Communicator system, such as the sequence of exchanges between the time the system answers the phone and greets an individual user and the time the system or user hangs up. A single Communicator system might support a large number of phone lines, and thus a large number of simultaneous sessions."

Has the architecture been modified so much that the core session support needs to be redesigned?
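Relating to the web-based chat question in message 471 above: one cheap way to prototype the client-server split is a plain "network TTY" bridge - a small stand-alone relay that shuttles text between one TCP client (later an applet or web gateway) and the local console where the text interface runs. This is only illustrative plumbing under assumed choices (port 9001, plain text, a single client); it is not an existing Olympus component.

    // net_tty_sketch.cpp - relays text between one TCP client and the local
    // console; illustrative plumbing for a client-server TTY split, not an
    // Olympus component. Port and single-client behaviour are arbitrary choices.
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        const int kPort = 9001;                     // assumed port
        int ls = socket(AF_INET, SOCK_STREAM, 0);
        if (ls < 0) { std::perror("socket"); return 1; }
        int yes = 1;
        setsockopt(ls, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(kPort);
        if (bind(ls, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 || listen(ls, 1) < 0) {
            std::perror("bind/listen");
            return 1;
        }

        int client = accept(ls, nullptr, nullptr);  // wait for one remote user
        if (client < 0) { std::perror("accept"); return 1; }

        char buf[4096];
        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(0, &fds);                        // local console input
            FD_SET(client, &fds);                   // remote client input
            if (select(client + 1, &fds, nullptr, nullptr, nullptr) < 0) break;

            if (FD_ISSET(0, &fds)) {                // console -> remote client
                ssize_t r = read(0, buf, sizeof(buf));
                if (r <= 0) break;
                if (send(client, buf, static_cast<size_t>(r), 0) <= 0) break;
            }
            if (FD_ISSET(client, &fds)) {           // remote client -> console
                ssize_t r = recv(client, buf, sizeof(buf), 0);
                if (r <= 0) break;
                if (write(1, buf, static_cast<size_t>(r)) < 0) break;
            }
        }
        close(client);
        close(ls);
        return 0;
    }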
From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Saturday, February 8, 2014 3:21 AM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 443]: how to enable multiple users simultaneously. The architecture natively doesn't support multiple users (I assume you meant simultaneous sessions). However, there were instances where you could run multiple instances of some of the components (Audio Server, DM, InteractionManager, Hub) corresponding to each session. Since those components are session specific. There are few components (ASR, NLU, NLG) which can run as services i.e., doesn't have to be instantiated. All of this customization will require understanding of how the components interact. The short answer is there is no easy recipe to customize the architecture for multiple sessions. There are few example systems that were built to handle certain aspects of the simultaneous sessions. 1. Multiparty Interaction: System interacting with more than user at a time in a single interactive session. repo: http://trac.speech.cs.cmu.edu/repos/olympus/branches/MultiOly1.0/ We instantiate dialog manager, hub for each user to keep track of their slot/value pairs (if different from other user). We don't have to instantiate the IM and AudioServer in this case because both users are listening on same channel (loud speakers) and talking on same channel (kinect microphone array). While talking to the users, we don't want the dialog manager to request for multiple nlg prompts for each user, instead we synchronize the system's response from DM to one of the users or both users. We do this using a new component called conversation manager that talks to interaction-manager on behalf of both the dialog managers. More details on this work in http://link.springer.com/chapter/10.1007/978-3-642-39330-3_12 2. Human robot teams: Single user interacting with multiple robots. We instantiate dialog manager, hub, robot backend for each robot. The dialog context for each robot is independent from each other. Here each robot's DM responds to the user based on a trigger phrase (e.g., RobotA listen, RobotB listen). repo: http://trac.speech.cs.cmu.edu/repos/teamtalk/branches/tt-olympus2/ On Fri, Feb 7, 2014 at 3:44 PM, Nitin Dhawan wrote: Hi This might be a stupid question, but how to enable multiple users to access the system at the same time. How does the system have to be deployed. Based on all the documentation we have read so far, one can start all the servers and get either a speech , or tty r java-tty interfact where one user can interact at a time. We also want to enable authentication of this system, using a single sign on with another system. But thats really the next step. Thanks Nitin Dhawan -- Aasish Pappu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitin at imarketingadvantage.com Fri Feb 21 10:57:09 2014 From: nitin at imarketingadvantage.com (Nitin Dhawan) Date: Fri, 21 Feb 2014 21:27:09 +0530 Subject: [Olympus developers 473]: networkoly example voip.cfg Message-ID: <016301cf2f1d$97855b10$c6901130$@imarketingadvantage.com> The configuration of VoIP is in a password protected rar file VoIP.rar . Anyone knows the password? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Rudnicky at cs.cmu.edu Fri Feb 21 11:19:10 2014 From: Alex.Rudnicky at cs.cmu.edu (Alex Rudnicky) Date: Fri, 21 Feb 2014 11:19:10 -0500 Subject: [Olympus developers 474]: Re: how to enable multiple users simultaneously. 
In-Reply-To: <012f01cf2f01$70cc8c80$5265a580$@imarketingadvantage.com> References: <000001cf2445$71d8a300$5589e900$@imarketingadvantage.com> <012f01cf2f01$70cc8c80$5265a580$@imarketingadvantage.com> Message-ID: <11B6FA6BC9879A42BE5A6227C05F3E1F02E08DD37DDB@EXCH-MB-1.srv.cs.cmu.edu> That was part of the original design. However subsequent work found that managing context in the way people wanted made this not a practical solution at the time. (For example having dialog state bias language models.) Alex From: Olympus-developers [mailto:olympus-developers-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Nitin Dhawan Sent: Friday, February 21, 2014 7:36 AM To: 'Aasish Pappu' Cc: 'olympus-developers' Subject: [Olympus developers 472]: Re: how to enable multiple users simultaneously. Hi.. I was looking at the session management aspects in the Galaxy architecture http://communicator.sourceforge.net/sites/MITRE/distributions/GalaxyCommunicator/docs/manual/advanced/session.html The Galaxy Communicator infrastructure is designed to support multiple simultaneous sessions. By session we mean an individual conversation with a Communicator system, such as the sequence of exchanges between the time the system answers the phone and greets an individual user and the time the system or user hangs up. A single Communicator system might support a large number of phone lines, and thus a large number of simultaneous sessions. Has the architecture been modified so much that the core session support needs to be redesigned? From: Aasish Pappu [mailto:aasishp at gmail.com] Sent: Saturday, February 8, 2014 3:21 AM To: Nitin Dhawan Cc: olympus-developers Subject: Re: [Olympus developers 443]: how to enable multiple users simultaneously. The architecture natively doesn't support multiple users (I assume you meant simultaneous sessions). However, there were instances where you could run multiple instances of some of the components (Audio Server, DM, InteractionManager, Hub) corresponding to each session. Since those components are session specific. There are few components (ASR, NLU, NLG) which can run as services i.e., doesn't have to be instantiated. All of this customization will require understanding of how the components interact. The short answer is there is no easy recipe to customize the architecture for multiple sessions. There are few example systems that were built to handle certain aspects of the simultaneous sessions. 1. Multiparty Interaction: System interacting with more than user at a time in a single interactive session. repo: http://trac.speech.cs.cmu.edu/repos/olympus/branches/MultiOly1.0/ We instantiate dialog manager, hub for each user to keep track of their slot/value pairs (if different from other user). We don't have to instantiate the IM and AudioServer in this case because both users are listening on same channel (loud speakers) and talking on same channel (kinect microphone array). While talking to the users, we don't want the dialog manager to request for multiple nlg prompts for each user, instead we synchronize the system's response from DM to one of the users or both users. We do this using a new component called conversation manager that talks to interaction-manager on behalf of both the dialog managers. More details on this work in http://link.springer.com/chapter/10.1007/978-3-642-39330-3_12 2. Human robot teams: Single user interacting with multiple robots. We instantiate dialog manager, hub, robot backend for each robot. 
The dialog context for each robot is independent from each other. Here each robot's DM responds to the user based on a trigger phrase (e.g., RobotA listen, RobotB listen).
repo: http://trac.speech.cs.cmu.edu/repos/teamtalk/branches/tt-olympus2/

-- 
Aasish Pappu

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nitin at imarketingadvantage.com Sun Feb 23 07:44:17 2014
From: nitin at imarketingadvantage.com (Nitin Dhawan)
Date: Sun, 23 Feb 2014 18:14:17 +0530
Subject: [Olympus developers 475]: Re: documentation on Skyper?
In-Reply-To:
References: <019801cf2d73$a2419650$e6c4c2f0$@imarketingadvantage.com>
Message-ID: <004e01cf3094$f9813000$ec839000$@imarketingadvantage.com>

We have made some progress on this branch, but we are a little stuck - any help is welcome.

It seems the networkoly branch was configured for batch-mode processing of audio files. We are trying to get this to work with the computer mic using the network ports. To get the sound back to the AudioServer (which reads from the network socket), you need to package it in startutt and endutt. We decided to call startutt whenever we detect MainSpeech (value 2) on the following check, using the VAD (gmm/power), and endutt when it is not MainSpeech:

    pvVAD->GetCurrentSpeechState(pBuffer, iNumRead)

but we always get the speech state as other (5) or silence (4), never MainSpeech (2). Any ideas?
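While the gmm/power VAD question is open, one way to keep moving is to gate startutt/endutt from a simple frame-energy detector with a hangover, and swap the real VAD back in once GetCurrentSpeechState behaves. The sketch below is generic signal processing, not the Olympus VAD API; the threshold and hangover values are guesses that need tuning per microphone. It may also be worth checking whether the VAD expects a calibration or level-adjustment step before it will ever report MainSpeech.

    // energy_vad_sketch.cpp - a stand-in utterance gate, NOT the Olympus VAD.
    // Reads 16 kHz, 16-bit mono PCM from stdin and prints startutt/endutt
    // decisions based on frame energy plus a hangover.
    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    enum class UttState { Silence, Speech };

    int main() {
        const size_t kFrameSamples = 160;       // 10 ms at 16 kHz
        const double kEnergyThreshDb = -35.0;   // assumed threshold; tune per microphone
        const int kHangoverFrames = 30;         // ~300 ms of trailing silence ends the utterance

        std::vector<int16_t> frame(kFrameSamples);
        UttState state = UttState::Silence;
        int silentFrames = 0;

        while (std::fread(frame.data(), sizeof(int16_t), kFrameSamples, stdin) == kFrameSamples) {
            // RMS level of the frame in dBFS.
            double sum = 0.0;
            for (int16_t s : frame) sum += static_cast<double>(s) * s;
            double rms = std::sqrt(sum / static_cast<double>(kFrameSamples)) / 32768.0;
            double db = 20.0 * std::log10(rms + 1e-9);

            bool voiced = db > kEnergyThreshDb;
            if (state == UttState::Silence && voiced) {
                std::puts("startutt");          // here: send the start-of-utterance marker
                state = UttState::Speech;
                silentFrames = 0;
            } else if (state == UttState::Speech) {
                silentFrames = voiced ? 0 : silentFrames + 1;
                if (silentFrames >= kHangoverFrames) {
                    std::puts("endutt");        // here: send the end-of-utterance marker
                    state = UttState::Silence;
                }
            }
        }
        if (state == UttState::Speech) std::puts("endutt");
        return 0;
    }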
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From frnkbroz at gmail.com Tue Feb 25 07:31:06 2014
From: frnkbroz at gmail.com (Frank Broz)
Date: Tue, 25 Feb 2014 12:31:06 +0000
Subject: [Olympus developers 476]: example systems and Olympus branches
Message-ID:

Hello,

I have a question about which example systems are compatible with KinectOly2.0. I have tested the kiosk1.0 system with Kinect2.0 in the past, but after a recent update they no longer seem to be compatible. Looking at the svn logs, it appears that the way the grammar represents the interaction start command has changed (in Helios/InputSelector.cpp). Does anyone know where I can find an updated grammar for kiosk?

Also, I would be interested in seeing an example of Olympus being used with a tablet for input. Could someone recommend which Olympus branch and example system I should look at?

Cheers,
Frank

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Alex.Rudnicky at cs.cmu.edu Tue Feb 25 09:46:29 2014
From: Alex.Rudnicky at cs.cmu.edu (Alex Rudnicky)
Date: Tue, 25 Feb 2014 09:46:29 -0500
Subject: [Olympus developers 477]: Re: example systems and Olympus branches
In-Reply-To:
References:
Message-ID: <11B6FA6BC9879A42BE5A6227C05F3E1F02E08DD37F76@EXCH-MB-1.srv.cs.cmu.edu>

KinectOly initially had a hard-coded attention word. This has been fixed, and you now specify the attention word as a net (specifically [Attention]) in the grammar. Be sure that this net is also included in the forms file. This allows you to tailor attention to your app. Example:

    [Attention]
        (*AGENT LISTEN)
        (*AGENT OTHER)
    AGENT
        (agent)
    OTHER
        (i'm ready)
        (i'm ready to start)
    LISTEN
        (listen)
        (listen to me)
        (wake up)
    ;

-------------- next part -------------- An HTML attachment was scrubbed... URL:
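On the "also included in the forms file" point: for anyone who has not edited one before, a forms-file entry for the net above would look roughly like the following. The function name is arbitrary and the exact layout should be checked against an existing task's forms file; this is written from memory and is only meant to show where [Attention] has to appear.

    FUNCTION: attention
    NETS:
        [Attention]
    ;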
From aitzaz.ahmad at kics.edu.pk Wed Feb 26 03:13:23 2014
From: aitzaz.ahmad at kics.edu.pk (Aitzaz Ahmad)
Date: Wed, 26 Feb 2014 13:13:23 +0500
Subject: [Olympus developers 478]: Example systems compatible with MultiOly1.0
Message-ID:

Hi,

Could you please tell me which of the example systems are compatible with the MultiOly1.0 branch of Olympus? I would like to use an example system where I can see a demonstration of multiple dialogue sessions running in parallel. We want to use the RavenClaw dialog manager in a Galaxy system which handles multiple dialogue sessions in parallel.

Regards,
Aitzaz Ahmad

----------
"A census taker once tried to test me. I ate his liver with some fava beans and a nice Chianti." Dr. Hannibal Lecter - Silence of the Lambs (1991)

-------------- next part -------------- An HTML attachment was scrubbed... URL: