From svetastenchikova at gmail.com Thu Feb 1 22:46:46 2007
From: svetastenchikova at gmail.com (Svetlana Stenchikova)
Date: Thu, 1 Feb 2007 22:46:46 -0500
Subject: [RavenclawDev 220] Re: RC Lunch today 12:30
In-Reply-To: <007d01c74558$bb987d40$03bd0280@sp.cs.cmu.edu>
References: <007d01c74558$bb987d40$03bd0280@sp.cs.cmu.edu>
Message-ID: <31cecd6b0702011946n69e17448pac1fb8e0a5f25dd6@mail.gmail.com>

Hi Antoine,

How did your meeting go? Did anyone take notes, by any chance?

Are you still planning to fix the message passing in RavenClaw (so that the back-end servers can parse the messages using standard methods)?

By the way, I have switched to VS05, and the DM works with no changes to the code.

Thank you,
Svetlana

On 1/31/07, Antoine Raux wrote:
>
> Hi guys,
>
> Let's have a RavenClaw lunch! It's been a long time and we might have a lot to catch up on... Here are some potential topics for discussion (bring your own!):
>
> - Olympus 2: recent changes in the architecture
> - Imminent release of the new version of Let's Go
> - Hunting down milliseconds: how to make all the Olympus modules faster? One example among many possibilities: allowing "tentative" ends of utterances in Sphinx to start the final process (2nd pass?) without waiting for long pauses (this implies that we can cancel the finalization process and go back to processing the utterance where we were in case the user goes on with their utterance)
> - From VSS to Subversion
> - From VS 2003 to VS 2005
> - Olympus as a meta-platform for research (i.e. how we are building different platforms for research, e.g. Let's Go, Conquest, TeamTalk/Unreal on top of it)
>
> See you all (I hope, at least for those at CMU)!
>
> antoine
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.srv.cs.cmu.edu/pipermail/ravenclaw-developers/attachments/20070201/8572e96d/attachment.html

From svetastenchikova at gmail.com Tue Feb 6 19:10:24 2007
From: svetastenchikova at gmail.com (Svetlana Stenchikova)
Date: Tue, 6 Feb 2007 19:10:24 -0500
Subject: [RavenclawDev 221] accessing history of concepts from ravenclaw
Message-ID: <31cecd6b0702061610w37e06d1bvcbe8118503c48e76@mail.gmail.com>

Hi,

I have a question about accessing the history of a concept. Is this the right syntax to copy an old concept into a temporary value of the same type:

EXECUTE(C("LOCAL_the_old_event") = C("the_event")[-1])

It appears to be empty after this command, although "the_event" had a value before. Could you please suggest what the problem may be? Do I have to do anything explicitly to keep the history of concepts?

thanks
Svetlana
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.srv.cs.cmu.edu/pipermail/ravenclaw-developers/attachments/20070206/98d221e0/attachment.html

From svetastenchikova at gmail.com Wed Feb 7 15:46:15 2007
From: svetastenchikova at gmail.com (Svetlana Stenchikova)
Date: Wed, 7 Feb 2007 15:46:15 -0500
Subject: [RavenclawDev 222] compiling festival
Message-ID: <31cecd6b0702071246u20e1f868j4e47f463ad80d3e4@mail.gmail.com>

Hi,

I am trying to install festival on my Windows machine using cygwin, and I am running into problems when compiling speech_tools. The script gets stuck at:

****************
Making in directory ./bin ...
Remove Links:
Scripts: (sh) (prl) example_to_doc++
****************

If I only compile the libraries (make make_libraries), there are no error messages.
I am not sure if festival needs just the libraries or a full compilation of the speech_tools. So then I tried to compile festival using GNU make, and it gets stuck similarly at:

***************************
Making in directory ./bin ...
Remove Links:
Main Links: festival festival_client
Scripts: (sh) (prl) festival_server
************************

The other option is to use VStudio's nmake, but I only have VC 8 (while they used VC 6), and I am getting compile errors with it.

Has anyone installed festival with cygwin, and do you have any suggestions?

Thank you
-Svetlana
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.srv.cs.cmu.edu/pipermail/ravenclaw-developers/attachments/20070207/54a31774/attachment.html

From svetlana at edam.speech.cs.cmu.edu Fri Feb 9 14:56:52 2007
From: svetlana at edam.speech.cs.cmu.edu (svetlana@edam.speech.cs.cmu.edu)
Date: Fri, 9 Feb 2007 14:56:52 -0500
Subject: [RavenclawDev 223] [12] Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty: changed some colors, user's spoken text should get printed as well
Message-ID: <200702091956.l19JuqJX026560@edam.speech.cs.cmu.edu>

An HTML attachment was scrubbed...
URL: http://mailman.srv.cs.cmu.edu/pipermail/ravenclaw-developers/attachments/20070209/cda600b4/attachment-0001.html
-------------- next part --------------
Modified: Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/DecoderInterpreter.java
===================================================================
--- Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/DecoderInterpreter.java	2007-01-29 23:03:37 UTC (rev 11)
+++ Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/DecoderInterpreter.java	2007-02-09 19:56:51 UTC (rev 12)
@@ -52,7 +52,7 @@
                 }
             }
             String decoded = new String(b1, 0, i);
-            jconsole.print(decoded, java.awt.Color.RED);
+            jconsole.print("USER: " + decoded, java.awt.Color.BLACK);
             pad.say(decoded);
         }
     } catch (IOException e) {
@@ -61,12 +61,12 @@
     }

     public void print_bot(String bot, String text) {
-        jconsole.print(bot + ": " + text, java.awt.Color.GREEN);
+        jconsole.print(bot + ": " + text, java.awt.Color.RED);
         jconsole.println();
     }

     public void print_human(String source, String confidence, String utterance) {
-        jconsole.print("U(" + source + ":" + confidence + "): " + utterance, java.awt.Color.RED);
+        jconsole.print("U(" + source + ":" + confidence + "): " + utterance, java.awt.Color.GREEN);
         jconsole.println();
     }
 }

Modified: Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/DrawingPad.java
===================================================================
--- Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/DrawingPad.java	2007-01-29 23:03:37 UTC (rev 11)
+++ Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/DrawingPad.java	2007-02-09 19:56:51 UTC (rev 12)
@@ -57,16 +57,16 @@
         //add console to topContainer
         consolePrint.setPreferredSize(new Dimension(1000, 300));
-        consolePrint.getViewport().getView().setBackground(Color.black);
-        consolePrint.getViewport().getView().setForeground(Color.white);
-        consolePrint.setFont(new Font("Arial", Font.PLAIN, 14));
+        consolePrint.getViewport().getView().setBackground(Color.white);
+        consolePrint.getViewport().getView().setForeground(Color.black);
+        consolePrint.setFont(new Font("Arial", Font.BOLD, 14));
         lowerContainer.add(consolePrint, BorderLayout.NORTH);

         //add console to lowerContainer
         consoleWrite.setPreferredSize(new Dimension(1000, 350));
-        consoleWrite.getViewport().getView().setBackground(Color.black);
-        consoleWrite.getViewport().getView().setForeground(Color.white);
-        consoleWrite.setFont(new Font("Arial", Font.PLAIN, 14));
+        consoleWrite.getViewport().getView().setBackground(Color.white);
+        consoleWrite.getViewport().getView().setForeground(Color.black);
+        consoleWrite.setFont(new Font("Arial", Font.BOLD, 14));
         lowerContainer.add(consoleWrite, BorderLayout.SOUTH);

         interpreter = new DecoderInterpreter(consolePrint, consoleWrite, this);

Modified: Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/JavaTTYServer.java
===================================================================
--- Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/JavaTTYServer.java	2007-01-29 23:03:37 UTC (rev 11)
+++ Agents/JavaTTY/src/edu/cmu/ravenclaw/javatty/JavaTTYServer.java	2007-02-09 19:56:51 UTC (rev 12)
@@ -120,9 +120,9 @@
         //lets only listen to one parse per utt
         //this may be the wrong bot's parse, but they're normally almost identical
         //later this should act like a real grounding backchannel
-        int i = Integer.parseInt((String)f.getFrame(":parse").getProperty(":uttid"));
+        /*int i = Integer.parseInt((String)f.getFrame(":parse").getProperty(":uttid"));
         if(i <= lastUttID) return (GFrame) null;
-        lastUttID = i;
+        lastUttID = i;*/

         GFrame features = f.getFrame(":input_features");
         if(features != null) {

From svetastenchikova at gmail.com Wed Feb 21 22:37:37 2007
From: svetastenchikova at gmail.com (Svetlana Stenchikova)
Date: Wed, 21 Feb 2007 22:37:37 -0500
Subject: [RavenclawDev 224] swapping grammars for the ASR
Message-ID: <31cecd6b0702211937v469aa57er92d2e875f01525d2@mail.gmail.com>

Hi, we are using JSGF grammars with the Sphinx 4 recognizer. The Sphinx 4 grammar is generated by converting the grammar of the Phoenix parser. We would like to be able to swap Sphinx 4 grammars at runtime. This functionality is supported by Sphinx 4.

One of the solutions that we consider is to modify the dialog-system-specific part of the DM: create execute agents that call enable/disable grammars on the ASR and place these agents manually in the dialog tree. This is not a good solution because there will be duplication between what is passed to the ASR and what is set in the GRAMMAR_MAPPING directive in the dialog manager.

Another idea is to use the implementation of the GRAMMAR_MAPPING directive to make a call to the ASR component notifying it to change the grammar. From looking at the code (CMAExpect::DeclareExpectations and CMARequest::DeclareExpectations), each agent has a list of expected grammatical rules.

So, when an agent is activated (or executed) it could potentially make a call to the ASR component to notify it of the grammar in the current state of the DM and to enable the rules that should be active at this point of the dialog. This would ensure synchrony between the DM grammar and the ASR grammar. This requires a 1-to-1 mapping between Phoenix and ASR grammars, which we comply with because the ASR grammar is converted from the Phoenix grammar.

Can you please tell us what you think of this idea? This functionality is currently not present in RavenClaw, right? How difficult would it be to implement? Could you please give any pointers or suggestions for the implementation?
Thank you

Svetlana

From antoine at cs.cmu.edu Thu Feb 22 08:21:49 2007
From: antoine at cs.cmu.edu (Antoine Raux)
Date: Thu, 22 Feb 2007 08:21:49 -0500
Subject: [RavenclawDev 225] Re: swapping grammars for the ASR
In-Reply-To: <31cecd6b0702211937v469aa57er92d2e875f01525d2@mail.gmail.com>
References: <31cecd6b0702211937v469aa57er92d2e875f01525d2@mail.gmail.com>
Message-ID: <00a601c75684$68fcac90$03bd0280@sp.cs.cmu.edu>

Hi Svetlana,

Note that we are already using state-specific (statistical) LMs in our current systems. The way we switch between LMs is through the INPUT_LINE_CONFIGURATION directive which is generally used in REQUEST agents and agencies, as in the following example from Let's Go:

INPUT_LINE_CONFIGURATION( "set_lm=time, set_dtmf_len=1")

The specific syntax of the configuration string is dependent on AudioServer (RavenClaw just sends the slot-value pairs to AudioServer). Right now, you use set_lm to specify the LM name for the Sphinx 2/3 recognizers, and set_dtmf_len to specify how many digits your DTMF recognizer should expect. Of course, you could add new directives, as long as you modify AudioServer so that it does the right thing (which is most likely sending the directives to each individual recognition engine).

So the current way to implement what you're proposing is to add an INPUT_LINE_CONFIGURATION directive to all request agents that indicates all the rules that should be active at that time.

Now, it might be interesting to have a default behavior that does what you're saying (i.e. automatically building the LM configuration from the expectation agenda). This could be achieved by overloading the CDialogAgent virtual method GetInputLineConfiguration, so it (optionally) constructs a list of active rules when called (it's already written so that it inherits unspecified slots from its parent agent, in addition to parsing the string specified in its own INPUT_LINE_CONFIGURATION). Another option is to have RC actually send the whole expectation agenda to AudioServer (it's already sending it to Helios), but then you have to modify AudioServer to allow it to parse the full agenda.

Hope this helps...

antoine

-----Original Message-----
From: ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu [mailto:ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu] On Behalf Of Svetlana Stenchikova
Sent: Wednesday, February 21, 2007 10:38 PM
To: ravenclaw-developers at cs.cmu.edu
Subject: [RavenclawDev 224] swapping grammars for the ASR

Hi, we are using JSGF grammars with sphinx 4 recognizer. The grammar of sphinx 4 is generated by converting the grammar of the phoenix parser. We would like to be able to swap Sphinx 4 grammars during the runtime. This functionality is supported by sphinx 4.

One of the solutions that we consider is to modify the dialog system specific part of the DM: create execute agents that call enable/disable grammars on the ASR and place these agents manually in the dialog tree. This is not a good solution because there will be a duplication between what is passed to ASR and what is set in the GRAMMAR_MAPPING directive on the dialog manager.

Another idea is to use the implementation of the GRAMMAR_MAPPING directive to make a call to the ASR component notifying it to change the grammar. From looking at the code CMAExpect::DeclareExpectations and CMARequest::DeclareExpectations, each agent has a list of expected grammatical rules.
So, when an agent is activated(or executed) it could potentially make a call to the ASR component to notify it of the grammar in the current state of the DM and to enable the rules that should be active at this point of the dialog. This would ensure the synchrony between the DM grammar and the ASR grammar. This requires a 1-to-1 mapping between phoenix and ASR grammars, which we comply with because ASR grammar is a conversion from phoenix grammar.

Can you please tell us what do you think of this idea? This functionality is currently not present in ravenclaw, right? How difficult would it be to implement this? Could you please give any pointers or suggestions for the implementation?

Thank you

Svetlana

From dbohus at cs.cmu.edu Thu Feb 22 11:53:03 2007
From: dbohus at cs.cmu.edu (Dan Bohus)
Date: Thu, 22 Feb 2007 11:53:03 -0500
Subject: [RavenclawDev 226] Re: swapping grammars for the ASR
In-Reply-To: <00a601c75684$68fcac90$03bd0280@sp.cs.cmu.edu>
Message-ID: <1546B8BE23C96940B5B4929DB7CC2343010478B2@e2k3.srv.cs.cmu.edu>

Hi Svetlana,

Thanks Antoine for the detailed response. Just wanted to add a short bit to this... I think the idea of dynamically generating the grammar from the expectation agenda is a really interesting research issue, and we've kept saying for a while that someone should try that :) People generally use state-specific language models or grammars, but generating it from the expectation agenda is potentially more interesting, because it (can) provide more fine-grained and dynamic control... for instance, if you get in the same dialog state through different paths in the dialog plan, the expectation agenda might in fact look different. I think it's an interesting path to explore. You could of course consider weighting things based on how deep they are in the agenda (something near the top is probably more likely than something near the bottom), and of course you could think of all sorts of learning setups there, etc.

In any case, if you go down that path, I'd be really curious to see what happens. Keep us posted.

Cheers,
Dan.

-----Original Message-----
From: ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu [mailto:ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu] On Behalf Of Antoine Raux
Sent: Thursday, February 22, 2007 8:22 AM
To: ravenclaw-developers at cs.cmu.edu
Subject: [RavenclawDev 225] Re: swapping grammars for the ASR

Hi Svetlana,

Note that we are already using state-specific (statistical) LMs in our current systems. The way we switch between LMs is through the INPUT_LINE_CONFIGURATION directive which is generally used in REQUEST agents and agencies, as in the following example from Let's Go:

INPUT_LINE_CONFIGURATION( "set_lm=time, set_dtmf_len=1")

The specific syntax of the configuration string is dependent on AudioServer (RavenClaw just sends the slot-value pairs to AudioServer). Right now, you use set_lm to specify the LM name for the Sphinx 2/3 recognizers, and set_dtmf_len to specify how many digits your DTMF recognizer should expect. Of course, you could add new directives, as long as you modify AudioServer so that it does the right thing (which is most likely sending the directives to each individual recognition engine).

So the current way to implement what you're proposing is to add an INPUT_LINE_CONFIGURATION directive to all request agents that indicates all the rules that should be active at that time.

Now, it might be interesting to have a default behavior that does what you're saying (i.e.
automatically building the LM configuration from the expectation agenda). This could be achieved by overloading the CDialogAgent virtual method GetInputLineConfiguration, so it (optionally) constructs a list of active rules when called (it's already written so that it inherits unspecified slots from its parent agent, in addition to parsing the string specified in its own INPUT_LINE_CONFIGURATION). Another option is to have RC actually send the whole expectation agenda to AudioServer (it's already sending it to Helios), but then you have to modify AudioServer to allow it to parse the full agenda.

Hope this helps...

antoine

-----Original Message-----
From: ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu [mailto:ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu] On Behalf Of Svetlana Stenchikova
Sent: Wednesday, February 21, 2007 10:38 PM
To: ravenclaw-developers at cs.cmu.edu
Subject: [RavenclawDev 224] swapping grammars for the ASR

Hi, we are using JSGF grammars with sphinx 4 recognizer. The grammar of sphinx 4 is generated by converting the grammar of the phoenix parser. We would like to be able to swap Sphinx 4 grammars during the runtime. This functionality is supported by sphinx 4.

One of the solutions that we consider is to modify the dialog system specific part of the DM: create execute agents that call enable/disable grammars on the ASR and place these agents manually in the dialog tree. This is not a good solution because there will be a duplication between what is passed to ASR and what is set in the GRAMMAR_MAPPING directive on the dialog manager.

Another idea is to use the implementation of the GRAMMAR_MAPPING directive to make a call to the ASR component notifying it to change the grammar. From looking at the code CMAExpect::DeclareExpectations and CMARequest::DeclareExpectations, each agent has a list of expected grammatical rules.

So, when an agent is activated(or executed) it could potentially make a call to the ASR component to notify it of the grammar in the current state of the DM and to enable the rules that should be active at this point of the dialog. This would ensure the synchrony between the DM grammar and the ASR grammar. This requires a 1-to-1 mapping between phoenix and ASR grammars, which we comply with because ASR grammar is a conversion from phoenix grammar.

Can you please tell us what do you think of this idea? This functionality is currently not present in ravenclaw, right? How difficult would it be to implement this? Could you please give any pointers or suggestions for the implementation?

Thank you

Svetlana

From svetastenchikova at gmail.com Tue Feb 27 13:14:29 2007
From: svetastenchikova at gmail.com (Svetlana Stenchikova)
Date: Tue, 27 Feb 2007 13:14:29 -0500
Subject: [RavenclawDev 227] Re: swapping grammars for the ASR
In-Reply-To: <1546B8BE23C96940B5B4929DB7CC2343010478B2@e2k3.srv.cs.cmu.edu>
References: <00a601c75684$68fcac90$03bd0280@sp.cs.cmu.edu> <1546B8BE23C96940B5B4929DB7CC2343010478B2@e2k3.srv.cs.cmu.edu>
Message-ID: <31cecd6b0702271014p5a44eebg4829a017db5ef2b8@mail.gmail.com>

Dan and Antoine,

Thank you for your answers and suggestions. Could you please elaborate a little more on the two methods that I could use if I want to build the grammar dynamically from the expectation agenda:

Method 1: using GetInputLineConfiguration()

When does the AudioServer receive this message?
I am just trying to trace what happens when GetInputLineConfiguration() is called: CDMCoreAgent::updateInputLineConfiguration() calls GetInputLineConfiguration() and logs the result with Log(DMCORE_STREAM, "Input line configuration dumped below.\n%s", ...), but it always seems to be empty in my logs. Is it not used now?

or

Method 2: have RC actually send the whole expectation agenda to AudioServer.

Via which method and in what format does it send the expectation agenda to Helios now? What is RC? (Did you mean DM?)

thank you
Svetlana

On 2/22/07, Dan Bohus wrote:
> Hi Svetlana,
>
> Thanks Antoine for the detailed response. Just wanted to add a short bit to this... I think the idea of dynamically generating the grammar from the expectation agenda is a really interesting research issue, and we've kept saying for a while that someone should try that :) People generally use state-specific language models or grammars, but generating it from the expectation agenda is potentially more interesting, because it (can) provide more fine-grained and dynamic control... for instance if you get in the same dialog state through different paths in the dialog plan, the expectation agenda might in fact look different. I think it's an interesting path to explore. You could of course consider weighting things based on how deep they are in the agenda (something near the top is probably more likely than something near the bottom), and of course you could think of all sorts of learning setups there, etc.
>
> In any case, if you go down that path, I'd be really curious to see what happens. Keep us posted.
>
> Cheers,
> Dan.
>
> -----Original Message-----
> From: ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu [mailto:ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu] On Behalf Of Antoine Raux
> Sent: Thursday, February 22, 2007 8:22 AM
> To: ravenclaw-developers at cs.cmu.edu
> Subject: [RavenclawDev 225] Re: swapping grammars for the ASR
>
> Hi Svetlana,
>
> Note that we are already using state-specific (statistical) LMs in our current systems. The way we switch between LMs is through the INPUT_LINE_CONFIGURATION directive which is generally used in REQUEST agents and agencies, as in the following example from Let's Go:
>
> INPUT_LINE_CONFIGURATION( "set_lm=time, set_dtmf_len=1")
>
> The specific syntax of the configuration string is dependent on AudioServer (RavenClaw just sends the slot-value pairs to AudioServer). Right now, you use set_lm to specify the LM name for the Sphinx 2/3 recognizers, and set_dtmf_len to specify how many digits your DTMF recognizer should expect. Of course, you could add new directives, as long as you modify AudioServer so that it does the right thing (which is most likely sending the directives to each individual recognition engine).
>
> So the current way to implement what you're proposing is to add an INPUT_LINE_CONFIGURATION directive to all request agents that indicates all the rules that should be active at that time.
>
> Now, it might be interesting to have a default behavior that does what you're saying (i.e. automatically building the LM configuration from the expectation agenda). This could be achieved by overloading the CDialogAgent virtual method GetInputLineConfiguration, so it (optionally) constructs a list of active rules when called (it's already written so that it inherits unspecified slots from its parent agent, in addition to parsing the string specified in its own INPUT_LINE_CONFIGURATION).
> Another option is to have RC actually send the whole expectation agenda to AudioServer (it's already sending it to Helios), but then you have to modify AudioServer to allow it to parse the full agenda.
>
> Hope this helps...
>
> antoine
>
> -----Original Message-----
> From: ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu [mailto:ravenclaw-developers-bounces at LOGANBERRY.srv.cs.cmu.edu] On Behalf Of Svetlana Stenchikova
> Sent: Wednesday, February 21, 2007 10:38 PM
> To: ravenclaw-developers at cs.cmu.edu
> Subject: [RavenclawDev 224] swapping grammars for the ASR
>
> Hi, we are using JSGF grammars with sphinx 4 recognizer. The grammar of sphinx 4 is generated by converting the grammar of the phoenix parser. We would like to be able to swap Sphinx 4 grammars during the runtime. This functionality is supported by sphinx 4.
>
> One of the solutions that we consider is to modify the dialog system specific part of the DM: create execute agents that call enable/disable grammars on the ASR and place these agents manually in the dialog tree. This is not a good solution because there will be a duplication between what is passed to ASR and what is set in the GRAMMAR_MAPPING directive on the dialog manager.
>
> Another idea is to use the implementation of the GRAMMAR_MAPPING directive to make a call to the ASR component notifying it to change the grammar. From looking at the code CMAExpect::DeclareExpectations and CMARequest::DeclareExpectations, each agent has a list of expected grammatical rules.
>
> So, when an agent is activated(or executed) it could potentially make a call to the ASR component to notify it of the grammar in the current state of the DM and to enable the rules that should be active at this point of the dialog. This would ensure the synchrony between the DM grammar and the ASR grammar. This requires a 1-to-1 mapping between phoenix and ASR grammars, which we comply with because ASR grammar is a conversion from phoenix grammar.
>
> Can you please tell us what do you think of this idea? This functionality is currently not present in ravenclaw, right? How difficult would it be to implement this? Could you please give any pointers or suggestions for the implementation?
>
> Thank you
>
> Svetlana
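
As an illustration of the per-agent route Antoine describes in [RavenclawDev 225], here is a minimal sketch of a request agent that declares its Phoenix bindings through GRAMMAR_MAPPING and, in the same place, tells the input line what the recognizer should use while the agent is in focus. It assumes the usual RavenClaw dialog task specification macros; the agent, concept, and rule names are invented, and the set_rules slot is hypothetical (only set_lm and set_dtmf_len are mentioned in the thread), so AudioServer would need to be extended to understand it.

// Sketch only: hypothetical agent, concept, and rule names.
DEFINE_REQUEST_AGENT( CRequestDepartureTime,
    REQUEST_CONCEPT(departure_time)
    PROMPT("request departure_time")
    // bind the Phoenix slots this agent expects
    GRAMMAR_MAPPING("![DepartureTime]>departure_time")
    // per-state recognizer configuration; set_rules is a made-up slot that
    // AudioServer would have to forward to the Sphinx 4 engine
    INPUT_LINE_CONFIGURATION("set_lm=time, set_dtmf_len=1, set_rules=DepartureTime")
)

Keeping the rule list next to the GRAMMAR_MAPPING declaration at least keeps the duplication local to one agent, which is part of the concern raised in [RavenclawDev 224].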
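
For the automatic variant Antoine and Dan discuss (deriving the configuration from the expectation agenda instead of declaring it by hand), a rough sketch of an overloaded GetInputLineConfiguration follows. Only the method name GetInputLineConfiguration, the base class CDialogAgent, and DeclareExpectations are taken from the thread; the STRING2STRING return type, the CollectExpectedGrammarSlots helper, and the set_rules slot are assumptions for illustration, not the actual RavenClaw API.

// Hypothetical sketch: derive the recognizer configuration from the agent's
// declared expectations rather than from a hand-written directive.
STRING2STRING CMyDialogAgent::GetInputLineConfiguration() {
    // start from the statically declared / inherited configuration (default behavior)
    STRING2STRING s2sConfig = CDialogAgent::GetInputLineConfiguration();
    // collect the grammar slots this agent currently expects (the same
    // information DeclareExpectations uses) -- CollectExpectedGrammarSlots is made up
    string sRules = CollectExpectedGrammarSlots();
    // pass them to AudioServer under a slot it would need to be taught
    s2sConfig["set_rules"] = sRules;
    return s2sConfig;
}

Whether the rules are taken only from the focused agent or from the whole agenda (with the depth-based weighting Dan suggests) is a design choice this sketch leaves open.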