From pavel at dit.unitn.it Mon Aug 3 14:55:34 2009
From: pavel at dit.unitn.it (Pavel Shvaiko)
Date: Mon, 3 Aug 2009 20:55:34 +0200
Subject: [ACT-R-users] OAEI-2009: Call for ontology matching systems participation
Message-ID:

Apologies for cross-postings

+++++++++++++++++++++++++++++++++++++++++++++++++++++++
Call for ontology matching systems participation
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
OAEI-2009
Ontology Alignment Evaluation Initiative
in cooperation with the ISWC Ontology Matching workshop
October 25, 2009 - Chantilly, near Washington DC, USA
http://oaei.ontologymatching.org/2009/
+++++++++++++++++++++++++++++++++++++++++++++++++++++++

BRIEF DESCRIPTION

Ontology matching is an important task for semantic system interoperability. Yet it is not easy to assess the respective qualities of available matching systems. The Ontology Alignment Evaluation Initiative (OAEI) is a coordinated international initiative set up for evaluating ontology matching systems. OAEI campaigns consist of applying matching systems to ontology pairs and evaluating their results.

OAEI-2009 is the sixth OAEI campaign. It will consist of five tracks gathering eleven test cases and different evaluation modalities. The tracks cover: (i) comparison track; (ii) expressive ontologies; (iii) directories and thesauri; (iv) oriented matching; (v) instance matching.

Anyone developing ontology matchers can participate by evaluating their systems and sending the results to the organizers. Tools for evaluating results and preliminary test bench tuning are available. Final results of the campaign will be presented at the Ontology Matching workshop and published in the proceedings.

IMPORTANT DATES
June 1st, 2009: First publication of test cases
June 22nd, 2009: Comments on test cases (any time before that date)
July 6th, 2009: Final publication of test cases
Sept. 1st, 2009: Preliminary results due (for interoperability-checking)
Sept. 28th, 2009: Participants send final results and supporting papers
Oct. 5th, 2009: Organizers publish results for comments
Oct. 25th, 2009: OM-2009 workshop + OAEI-2009 final results ready.

More about OAEI-2009: http://oaei.ontologymatching.org/2009/
More about OAEI: http://oaei.ontologymatching.org/
More about OM-2009: http://om2009.ontologymatching.org/
OM-2009 submission deadline is approaching - 11.08.2009.
More about ontology matching: http://www.ontologymatching.org/
-------------------------------------------------------
Download the OM-2009 flyer: http://om2009.ontologymatching.org/Pictures/CfP_OM2009_flyer.pdf
-------------------------------------------------------

Cheers,
Pavel
---------------------------------------
Pavel Shvaiko, Ph.D.
Innovation and Research Project Manager
TasLab - Informatica Trentina S.p.A.
Via G. Gilli, 2 38100 Trento - Italy

From david at wooden-robot.net Sat Aug 8 12:02:40 2009
From: david at wooden-robot.net (David Pautler)
Date: Sat, 8 Aug 2009 09:02:40 -0700
Subject: [ACT-R-users] Spatial and temporal granularity in models of perception
Message-ID:

I'm interested in how people attribute high-level descriptions such as "chase" to the movement of simple animations such as a pair of black dots on a white background. I'm looking for advice on how to make the input for a cognitive model of this phenomenon similar to that which a human would have, so that the performance of the two can be compared.
For example, I have a particular animated scene I want to use, and I could render it with arbitrarily precise positioning and sizes of the dots. And I could choose an arbitrarily high number of frames/time slices. But it seems that the granularity of both should be determined by some set of "just noticeable differences (JNDs)".

Beyond granularity, another problem is segmentation of motion trajectories. I have found a few papers that propose algorithms for segmentation, but it's not clear that they have a cognitive basis.

I've dipped into the psychophysics and cognitive modeling literature (particularly EPIC and cognitive maps), and I thought I might get some good pointers here about whether there is much agreement about how to handle granularity and segmentation. Any recommendations?

Regards,
David Pautler

From bej at cs.cmu.edu Tue Aug 11 11:18:24 2009
From: bej at cs.cmu.edu (Bonnie John)
Date: Tue, 11 Aug 2009 11:18:24 -0400
Subject: [ACT-R-users] Models of listening to continuous speech?
In-Reply-To:
References:
Message-ID: <4A818BC0.4080003@cs.cmu.edu>

Folks,

Anyone have any models using ACT-R's Audition Module to listen to and comprehend continuous speech? If not continuous speech, how about short phrases or anything other than tones? Any experience using the Audition Module at all?

I'm asking because I'd like to make ACT-R models of using the JAWS screen reader to navigate a web site and compare them to ACT-R models of visually navigating a web site. I'd like to read papers or speak with someone who has had experience with the Audition Module to get a heads-up on peaks and pitfalls.

I did browse the Publications page, but could only find papers that use vision (but I could have missed the listening ones - apologies if I did).

Thanks,
Bonnie

From r.m.young at acm.org Tue Aug 11 16:10:47 2009
From: r.m.young at acm.org (Richard M Young)
Date: Tue, 11 Aug 2009 21:10:47 +0100
Subject: [ACT-R-users] Models of listening to continuous speech?
In-Reply-To: <4A818BC0.4080003@cs.cmu.edu>
References: <4A818BC0.4080003@cs.cmu.edu>
Message-ID:

Hello Bonnie,

Some years ago, following on from David Huss's work with Mike Byrne, Martin Greaves and I played around with simple models using the audicon to deal with lists of isolated words in a short-term memory experiment. At least in our hands, the audicon was assumed to hold "words" in some unspecified phonemic/phonological/articulatory code, which therefore required access to DM/LTM in order to retrieve the word as a lexical item.

As I remember it -- which is not well -- we found it a bit clunky to use the audicon to deal with ephemeral material spread out in time. With no disrespect to those who had worked on it, I think it's fair to say that not much thought had been given to the underlying design of a transducer dealing with speech sounds. For example, I *think* we had to add an END signal to indicate the silence following the last item in a list.

Martin later built partial models of the running memory span task using a similar approach, and included them in his PhD thesis. He may be able to add to what I'm saying here.

Word recognition in speech is a well-studied area in cognitive psychology, and there are some good (ad hoc) models around, and I believe a reasonable amount of consensus. It had crossed my mind from time to time that it would be an interesting area to bring into contact with Act-R. There are of course Act-R models of the lexical retrieval stage itself.

Good luck!

~ Richard

At 11:18 -0400 11/8/09, Bonnie John wrote:
>Folks,
>
>Anyone have an models using ACT-R's Audition Module to listen to and
>comprehend continuous speech?
>If not continuous speech, how about short phrases or anything other than
>tones?
>Any experience using the Audition Module at all?
>
>I'm asking because I'd like to make models of ACT-R using the JAWS
>screen reader to navigate a web site and compare it to ACT-R models of
>visually navigating a web site. I'd like to read papers or speak with
>someone who has had experience with the Audition Module to get a
>heads-up on peaks and pitfalls.
>
>I did browse the Publications page, but could only find papers that use
>vision (but I could have missed the listening ones - apologies if I did).
>
>Thanks,
>Bonnie
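A minimal sketch of the two-step pattern Richard describes -- an unattended sound event is noticed, attended, and its phonological code is then used to retrieve the lexical item from declarative memory -- might look like the following. It assumes ACT-R 6 syntax; the chunk-type WORD-MEANING, its slots, and the production names are hypothetical, invented purely for illustration, and the fragment is not intended to reproduce any of the models mentioned in this thread.

(clear-all)

(define-model audicon-word-sketch
  ;; Hypothetical lexical chunk-type: FORM holds the phonological code,
  ;; MEANING the lexical entry it maps to.
  (chunk-type word-meaning form meaning)

  ;; A new, unattended sound event has appeared in the aural-location
  ;; buffer: shift aural attention to it.
  (p detected-sound
     =aural-location>
        isa    audio-event
     ?aural>
        state  free
  ==>
     +aural>
        isa    sound
        event  =aural-location)

  ;; The attended sound delivers only its CONTENT (the unspecified
  ;; phonemic/phonological code); getting the word as a lexical item
  ;; requires a declarative memory request, as described above.
  (p retrieve-lexical-item
     =aural>
        isa      sound
        content  =code
     ?retrieval>
        state    free
  ==>
     +retrieval>
        isa   word-meaning
        form  =code))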
From ijuvina at andrew.cmu.edu Tue Aug 11 16:33:06 2009
From: ijuvina at andrew.cmu.edu (ion juvina)
Date: Tue, 11 Aug 2009 16:33:06 -0400
Subject: [ACT-R-users] Models of listening to continuous speech?
In-Reply-To:
References: <4A818BC0.4080003@cs.cmu.edu>
Message-ID: <8D5525EA-C835-41DA-94D1-C21456527DF3@andrew.cmu.edu>

Hello,

I also used the audition module of ACT-R to model rehearsal based on the phonological loop in the N-Back task. This paper describes the model:

Juvina, I., & Taatgen, N. A. (2007). Modeling control strategies in the N-Back task. Proceedings of the eighth International Conference on Cognitive Modeling (pp. 73-78). New York: Psychology Press.

The code of the model is available to download from my webpage: http://www.contrib.andrew.cmu.edu/~ijuvina/Publications.htm

~ ion

Ion Juvina, PhD
Research Fellow
Department of Psychology
Baker Hall 336A
Carnegie Mellon University
5000 Forbes Ave.
Pittsburgh, PA 15213
telephone: 412-268-2837
email: ijuvina at cmu.edu
webpage: http://www.andrew.cmu.edu/user/ijuvina/index.htm
Download my most recent article at: http://dx.doi.org/10.1016/j.actpsy.2009.03.002 or email me for a free reprint

On Aug 11, 2009, at 4:10 PM, Richard M Young wrote:

> Hello Bonnie,
>
> Some years ago, following on from David Huss's work with Mike Byrne,
> Martin Greaves and I played around with simple models using the
> audicon to deal with lists of isolated words in a short-term memory
> experiment. At least in our hands, the audicon was assumed to hold
> "words" in some unspecified phonemic/phonological/articulatory code,
> which therefore required an access to DM/LTM in order to retrieve the
> word as a lexical item.
>
> As I remember it -- which is not well -- we found it a bit clunky to
> use the audicon to deal with ephemeral material spread out in time.
> With no disrespect to those who had worked on it, I think it's fair
> to say that not much thought had been given to the underlying design
> of a transducer dealing with speech sounds. For example, I *think*
> we had to add an END signal to indicate the silence following the
> last item in a list.
>
> Martin later build partial models of the running memory span task
> using a similar approach, and included them in his PhD thesis. He
> may be able to add to what I'm saying here.
>
> Word recognition in speech is a well-studied area in cognitive
> psychology, and there are some good (ad hoc) models around, and I
> believe a reasonable amount of consensus. It had crossed my mind
> from time to time that it would be an interesting area to bring into
> contact with Act-R. There are of course Act-R models of the lexical
> retrieval stage itself.
>
> Good luck!
>
> ~ Richard
>
> At 11:18 -0400 11/8/09, Bonnie John wrote:
>> Folks,
>>
>> Anyone have an models using ACT-R's Audition Module to listen to and
>> comprehend continuous speech?
>> If not continuous speech, how about short phrases or anything other than
>> tones?
>> Any experience using the Audition Module at all?
>>
>> I'm asking because I'd like to make models of ACT-R using the JAWS
>> screen reader to navigate a web site and compare it to ACT-R models of
>> visually navigating a web site. I'd like to read papers or speak with
>> someone who has had experience with the Audition Module to get a
>> heads-up on peaks and pitfalls.
>>
>> I did browse the Publications page, but could only find papers that use
>> vision (but I could have missed the listening ones - apologies if I did).
>>
>> Thanks,
>> Bonnie
> _______________________________________________
> ACT-R-users mailing list
> ACT-R-users at act-r.psy.cmu.edu
> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users
>

From martin.greaves at bristol.ac.uk Wed Aug 12 11:32:59 2009
From: martin.greaves at bristol.ac.uk (Martin Greaves)
Date: Wed, 12 Aug 2009 16:32:59 +0100
Subject: [ACT-R-users] Models of listening to continuous speech?
In-Reply-To:
References: <4A818BC0.4080003@cs.cmu.edu>
Message-ID: <4A82E0AB.3080609@bristol.ac.uk>

Hi Bonnie,

Further to Richard's comments, one of the key issues we were interested in was rehearsal of auditory items. We attempted to model the word length effect in ACT-R using the audicon to represent a phonological loop-like structure. Whereas the Huss & Byrne model used declarative memory for item storage and decay, and encoded list position within the item chunk, we wanted to see whether a phonological loop could be achieved using the intrinsic mechanisms of the audicon to account for order and decay. However, we found maintaining any form of rehearsal in the audicon difficult to implement.

One of the key problems is determining when the end of a list of items has been reached, since ACT-R places 'rehearsed' items back in the audicon as unattended items. Hence, once an item has been rehearsed, it is added to the end of the existing list of items with nothing to discriminate it from items in the current (unrehearsed) list of items. Consequently, the audicon does not allow discrimination of lists. In order to get around this problem and determine when the end of the list had been reached we added an 'end' item. As Richard said, this was argued as being analogous to a drawing of breath or a pause prior to any further rehearsal of items.

A further problem with the model was that it failed to reproduce serial order effects. Although we obtained a good fit for word length, particularly for one-syllable words, we found that serial position curves were 'm' shaped rather than displaying extended primacy and single-item recency. This was because items simply dropped out of memory if they were not rehearsed within the decay period, with no possibility of recovery due to the all-or-nothing decay in the audicon. Without intending to be in any way critical, this was not really surprising given the operation of the audicon. However, rehearsal would not appear to be an issue in your case, so recall from the audicon should simply represent an extended recency effect.

In the running memory span models we took a slightly different approach.
We assumed that storage might utilise a range of available short-term memory structures, with recall from whichever structures still contained items at recall. This allowed continued use of the audicon but brought declarative memory into play, since a spin-off from sub-vocal rehearsal is an item chunk in DM. The audicon was considered to operate simply as an extended buffer containing the last 2-3 items presented. Thus, we recalled whatever was available from the audicon at the point of recall, with order implicit in recall, prior to recall of earlier items from declarative memory. Using this approach we were able to account for a number of possible rehearsal strategies and what we termed rehearsal 'micro-strategies' (cf. Gray & Boehm-Davis, 2000), reproducing a range of patterns of recall observed empirically. We were also able to account, through recall from declarative memory, for patterns of delayed recall when items in the audicon had long since decayed.

Hope this is of help,

Martin

Richard M Young wrote:
> Hello Bonnie,
>
> Some years ago, following on from David Huss's work with Mike Byrne,
> Martin Greaves and I played around with simple models using the
> audicon to deal with lists of isolated words in a short-term memory
> experiment. At least in our hands, the audicon was assumed to hold
> "words" in some unspecified phonemic/phonological/articulatory code,
> which therefore required an access to DM/LTM in order to retrieve the
> word as a lexical item.
>
> As I remember it -- which is not well -- we found it a bit clunky to
> use the audicon to deal with ephemeral material spread out in time.
> With no disrespect to those who had worked on it, I think it's fair to
> say that not much thought had been given to the underlying design of a
> transducer dealing with speech sounds. For example, I *think* we had
> to add an END signal to indicate the silence following the last item
> in a list.
>
> Martin later build partial models of the running memory span task
> using a similar approach, and included them in his PhD thesis. He may
> be able to add to what I'm saying here.
>
> Word recognition in speech is a well-studied area in cognitive
> psychology, and there are some good (ad hoc) models around, and I
> believe a reasonable amount of consensus. It had crossed my mind from
> time to time that it would be an interesting area to bring into
> contact with Act-R. There are of course Act-R models of the lexical
> retrieval stage itself.
>
> Good luck!
>
> ~ Richard
>
> At 11:18 -0400 11/8/09, Bonnie John wrote:
>> Folks,
>>
>> Anyone have an models using ACT-R's Audition Module to listen to and
>> comprehend continuous speech?
>> If not continuous speech, how about short phrases or anything other than
>> tones?
>> Any experience using the Audition Module at all?
>>
>> I'm asking because I'd like to make models of ACT-R using the JAWS
>> screen reader to navigate a web site and compare it to ACT-R models of
>> visually navigating a web site. I'd like to read papers or speak with
>> someone who has had experience with the Audition Module to get a
>> heads-up on peaks and pitfalls.
>>
>> I did browse the Publications page, but could only find papers that use
>> vision (but I could have missed the listening ones - apologies if I
>> did).
>>
>> Thanks,
>> Bonnie
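A sketch of the 'end' item idea, in the same hypothetical ACT-R 6 style, is shown below. The chunk-type REHEARSE-GOAL, the production names, the use of the speech module's SUBVOCALIZE request, and the assumption that list items arrive as word sounds whose CONTENT slot holds the spoken string are all illustrative assumptions, not Martin's actual model; shifting attention to each new sound is assumed to happen as in the earlier sketch.

(clear-all)

(define-model end-marker-sketch
  (chunk-type rehearse-goal state)
  (define-chunks (listening isa chunk) (recalling isa chunk))

  (add-dm (g1 isa rehearse-goal state listening))

  ;; An ordinary list item: rehearse it by subvocalising.  In the models
  ;; described above, the rehearsed item then re-enters the audicon as a
  ;; new unattended event, which is why an explicit end marker is needed
  ;; to tell the current pass through the list from the previous one.
  (p rehearse-item
     =goal>
        isa      rehearse-goal
        state    listening
     =aural>
        isa      sound
        content  =word
      - content  "end"
     ?vocal>
        state    free
  ==>
     +vocal>
        isa     subvocalize
        string  =word)

  ;; The 'end' item -- the breath or pause in the analogy above -- marks
  ;; the end of the current list: stop rehearsing and switch to recall.
  (p heard-end-marker
     =goal>
        isa      rehearse-goal
        state    listening
     =aural>
        isa      sound
        content  "end"
  ==>
     =goal>
        state  recalling)

  (goal-focus g1))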
From bej at cs.cmu.edu Wed Aug 12 11:52:13 2009
From: bej at cs.cmu.edu (Bonnie John)
Date: Wed, 12 Aug 2009 11:52:13 -0400
Subject: [ACT-R-users] Models of listening to continuous speech?
In-Reply-To: <4A82E0AB.3080609@bristol.ac.uk>
References: <4A818BC0.4080003@cs.cmu.edu> <4A82E0AB.3080609@bristol.ac.uk>
Message-ID: <4A82E52D.40004@cs.cmu.edu>

This helps a LOT.

Thanks,
Bonnie

Martin Greaves wrote:
> Hi Bonnie
>
> Further to Richard's comments, one of the key issues we were interested
> in was rehearsal of auditory items. We attempted to model the word
> length effect in ACT-R using the audicon to represent a phonological
> loop-like structure. Whereas the Huss & Byrne model used declarative
> memory for item storage and decay, and encoded list position within the
> item chunk, we wanted to see whether a phonological loop could be
> achieved using the intrinsic mechanisms of the audicon to account for
> order and decay. However, we found that maintaining any form of
> rehearsal in the audicon to be difficult to implement.
>
> One of the key problems is determining when the end of a list of items
> has been achieved, since ACT-R places 'rehearsed' items back in the
> audicon as unattended items. Hence, once an item has been rehearsed, it
> is added to the end of the existing list of items with nothing to
> discriminate it from items in the current (unrehearsed) list of items.
> Consequently, the audicon does not allow discrimination of lists. In
> order to get around this problem and determine when the end of the list
> had been reached we added an 'end' item. As Richard said, this was
> argued as being analogous to a drawing of breath or a pause prior to any
> further rehearsal of items.
>
> A further problem with the model was that it failed to reproduce serial
> order effects. Although we obtained a good fit for word length,
> particularly for one-syllable words, we found that serial position
> curves were 'm' shaped rather than displaying extended primacy and
> single-item recency. This was because items simply dropped out of
> memory if they were not rehearsed within the decay period, with no
> possibility of recovery due to the all-or-nothing decay in the audicon.
> Without intending to be in any way critical, this was not really
> surprising given the operation of the audicon. However, rehearsal would
> not appear to be an issue in your case, so recall from the audicon
> should simply represent an extended recency effect.
>
> In the running memory span models we took a slightly different approach
> to the model. We assumed that storage might utilise a range of
> available short-term memory structures, with recall from whichever
> structures still contained items at recall. This allowed continued use
> of the audicon but brings declarative memory into play, since a
> spin-off from sub-vocal rehearsal is an item chunk in DM. The audicon
> was considered it to operate simply as an extended buffer containing the
> last 2-3 items presented. Thus, we recalled whatever was available from
> the audicon at the point of recall, with order implicit in recall, prior
> to recall of earlier items from declarative memory. Using this approach
> we were able to account for a number of possible rehearsal strategies
> and what we termed rehearsal 'micro-strategies' (cf. Gray & Boehm-Davis,
> 2000) reproducing a range of patterns of recall observed empirically.
> Were were also able to account for patterns of delayed recall when items
> in the audicon had long since decayed through recall from declarative
> memory.
> > Hope this is of help, > > Martin > > > > > > > Richard M Young wrote: > >> Hello Bonnie, >> >> Some years ago, following on from David Huss's work with Mike Byrne, >> Martin Greaves and I played around with simple models using the >> audicon to deal with lists of isolated words in a short-term memory >> experiment. At least in our hands, the audicon was assumed to hold >> "words" in some unspecified phonemic/phonological/articulatory code, >> which therefore required an access to DM/LTM in order to retrieve the >> word as a lexical item. >> >> As I remember it -- which is not well -- we found it a bit clunky to >> use the audicon to deal with ephemeral material spread out in time. >> With no disrespect to those who had worked on it, I think it's fair to >> say that not much thought had been given to the underlying design of a >> transducer dealing with speech sounds. For example, I *think* we had >> to add an END signal to indicate the silence following the last item >> in a list. >> >> Martin later build partial models of the running memory span task >> using a similar approach, and included them in his PhD thesis. He may >> be able to add to what I'm saying here. >> >> Word recognition in speech is a well-studied area in cognitive >> psychology, and there are some good (ad hoc) models around, and I >> believe a reasonable amount of consensus. It had crossed my mind from >> time to time that it would be an interesting area to bring into >> contact with Act-R. There are of course Act-R models of the lexical >> retrieval stage itself. >> >> Good luck! >> >> ~ Richard >> >> At 11:18 -0400 11/8/09, Bonnie John wrote: >> >>> Folks, >>> >>> Anyone have an models using ACT-R's Audition Module to listen to and >>> comprehend continuous speech? >>> If not continuous speech, how about short phrases or anything other than >>> tones? >>> Any experience using the Audition Module at all? >>> >>> I'm asking because I'd like to make models of ACT-R using the JAWS >>> screen reader to navigate a web site and compare it to ACT-R models of >>> visually navigating a web site. I'd like to read papers or speak with >>> someone who has had experience with the Audition Module to get a >>> heads-up on peaks and pitfalls. >>> >>> I did browse the Publications page, but could only find papers that use >>> vision (but I could have missed the listening ones - apologies if I >>> did). >>> >>> Thanks, >>> Bonnie >>> > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > From schulth at sfbtr8.uni-bremen.de Thu Aug 13 11:41:39 2009 From: schulth at sfbtr8.uni-bremen.de (Holger Schultheis) Date: Thu, 13 Aug 2009 17:41:39 +0200 Subject: [ACT-R-users] First Announcement and CFP: Remembering Who We Are - Human Memory for Artificial Agents at AISB 2010 Message-ID: <1250178099.16186.25.camel@cepiphany.informatik.uni-bremen.de> Dear colleagues, Please see below the first CFP for the human memory modeling symposium. 
Cheers,
Holger

***** Apologies if you receive multiple copies of this email *******

CALL FOR PAPERS

REMEMBERING WHO WE ARE - HUMAN MEMORY FOR ARTIFICIAL AGENTS

A one day symposium on the 29th March 2010
In conjunction with the AISB 2010 Convention (http://www.aisb.org.uk/convention/aisb10/AISB2010.html)
De Montfort University, Leicester
(The symposium is supported by the European FP7 Project LIREC (http://www.lirec.eu/) )

Memory gives us identity, shapes our personality and drives our reactions to different situations in life. We actively create expectations, track the fulfilment of these expectations and dynamically modify our memory when new experiences demand it. Yet to date, bringing many important social aspects of human memory (for instance, emotional memory and episodic memory) to artificial intelligent agents has not been given much attention. The challenge might lie in the amount of memories one can have in a lifetime. Take a narrative agent, for example: how can we generate a lifetime's worth of memories for this agent? Can we easily record human experiences for this purpose? What trust and privacy issues will this entail? On the other hand, without this type of memory, can the agent generate believable life stories, given that it is what colors our lives in retrospect? For an agent that continuously interacts with users or other agents, how can we design it with the capability to generate memories worth remembering in its lifetime? How can the agent record experiences of others during interaction? Can the agent maintain its relationship with others without any information about its past experiences with them?

Artificial agent researchers have been constantly coming up with computational cognitive models inspired by the human brain to create characters that are more natural, believable and behave in humanly plausible ways. However, memory components in these models are usually oversimplified. Memory components which have been widely accepted and modelled are the long-term memory, including procedural and declarative memories, the short-term memory and the sensory memory. What about the more 'socially-aware' memory which allows us to be effectively involved in social interactions and which fundamentally supports the creation of our life stories, including the significance of events and their emotional impact? It is important to review artificial agents without this kind of memory, particularly those designed for social interactions, and to reflect on the effects of this shortcoming. Additionally, many existing models do not take into consideration the bio-mechanisms of human memory operations, such as those involved in retrieval and forgetting processes. The most commonly adopted approach to forgetting is decay, but the human brain performs other processes such as generalisation, reconstruction and repression, to name a few.

This symposium offers an opportunity for interdisciplinary discussions on human-like memory for artificial agents, including organisational structures and mechanisms. We hope to bring together memory researchers, psychologists, computer scientists and neurologists to discuss issues in memory modelling, memory data collection and application, to achieve a better understanding of which, when and how human-like memory can contribute to artificial agent modelling.
Topics of interest include but are not limited to: * Role of memory in artificial agents * Type of memory and application * Memory and emotion modelling * Human-agent/human-robot interaction history * Effective memory data collection * Privacy issues related to data collection * Bio-inspiration to memory modelling * Memory mechanisms for encoding, storage and retrieval * Memory influence on reasoning and decision-making * Modelling forgetting in episodic memory * Ethological aspects of memory * Spatial memory Submission We are seeking submissions of original papers (up to 8 pages) that fit well with the symposium theme and topics. Please send the PDF submissions to Mei Yii Lim (M.Lim at hw.ac.uk ) and Wan Ching Ho (W.C.Ho at herts.ac.uk ), including in the email text the following information: title of paper, keywords, author list, contact email, name of attached PDF file. All submissions will be peer reviewed. Authors of accepted contributions will be asked to prepare the final versions (up to 8 pages) for inclusion in the symposium proceedings. At least one author of each accepted paper will be required to register and attend the symposium to present their work. Important Dates * 15th January 2010: Submission deadline of full-length paper * 8th February 2010: Notification for paper acceptance * 1st March 2010: Submission of camera-ready final papers * 29th March 2010: Symposium Program Committee Cyril Brom, Charles University Prague Sibylle Enz, University of Bamberg Wan Ching Ho, University of Hertfordshire (co-chair) Mei Yii Lim, Heriot-Watt University (co-chair) Andrew Nuxoll, University of Portland Alexei Samsonovich, George Mason University Holger Schultheis, University of Bremen Dan Tecuci, University of Texas Patricia Vargas, Heriot-Watt University Official Website http://www.macs.hw.ac.uk/~myl/AISBRWWA.html Contact Dr. Mei Yii Lim Computer Science, Heriot-Watt University, Riccarton, EH14 4AS, UK Email: M.Lim at hw.ac.uk Homepage: http://www.macs.hw.ac.uk/~myl/ Tel: (44) 131 4514162 Fax: (44) 131 4513327 Dr. Wan Ching Ho STRI, University of Hertfordshire, College Lane, Hatfield, AL10 9AB, UK Email: W.C.Ho at herts.ac.uk Homepage: http://homepages.feis.herts.ac.uk/~comqwch/ Tel: (44) 170 7285111 Fax: (44) 170 7284185 From pavel at dit.unitn.it Mon Aug 24 14:28:40 2009 From: pavel at dit.unitn.it (Pavel Shvaiko) Date: Mon, 24 Aug 2009 20:28:40 +0200 Subject: [ACT-R-users] OAEI-2009: Final call for ontology matching systems participation Message-ID: Apologies for cross-postings +++++++++++++++++++++++++++++++++++++++++++++++++++++++ Final call for ontology matching systems participation +++++++++++++++++++++++++++++++++++++++++++++++++++++++ OAEI-2009 Ontology Alignment Evaluation Initiative in cooperation with the ISWC Ontology Matching workshop October 25, 2009 - Chantilly, near Washington DC., USA http://oaei.ontologymatching.org/2009/ +++++++++++++++++++++++++++++++++++++++++++++++++++++++ BRIEF DESCRIPTION Ontology matching is an important task for semantic system interoperability. Yet it is not easy to assess the respective qualities of available matching systems. The Ontology Alignment Evaluation Initiative (OAEI) is a coordinated international initiative set up for evaluating ontology matching systems. OAEI campaigns consist of applying matching systems to ontology pairs and evaluating their results. OAEI-2009 is the sixth OAEI campaign. It will consist of five tracks gathering elleven test cases and different evaluation modalities. 
The tracks cover: (i) comparison track; (ii) expressive ontologies; (iii) directories and thesauri; (iv) oriented matching; (v) instance matching. Anyone developing ontology matchers can participate by evaluating their systems and sending the results to the organizers. Tools for evaluating results and preliminary test bench tuning are available. Final results of the campaign will be presented at the Ontology Matching workshop and published in the proceedings. IMPORTANT DATES June 1st, 2009: First publication of test cases June 22nd, 2009: Comments on test cases (any time before that date) July 6th, 2009: Final publication of test cases Sept. 1st, 2009: Preliminary results due (for interoperability-checking) Sept. 28st, 2009: Participants send final results and supporting papers Oct. 5th, 2009: Organizers publish results for comments Oct. 25th, 2009: OM-2009 workshop + OAEI-2009 final results ready. More about OAEI-2009: http://oaei.ontologymatching.org/2009/ More about OAEI: http://oaei.ontologymatching.org/ More about OM-2009: http://om2009.ontologymatching.org/ More about ontology matching: http://www.ontologymatching.org/; Cheers, Pavel --------------------------------------- Pavel Shvaiko, Ph.D. Innovation and Research Project Manager TasLab - Informatica Trentina S.p.A. Via G. Gilli, 2 38100 Trento - Italy -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppavlik at cs.cmu.edu Tue Aug 25 11:32:08 2009 From: ppavlik at cs.cmu.edu (Philip Pavlik) Date: Tue, 25 Aug 2009 11:32:08 -0400 Subject: [ACT-R-users] Postdoctoral Position Message-ID: <1546B8BE23C96940B5B4929DB7CC234302B1E259@e2k3.srv.cs.cmu.edu> Postdoctoral research opportunity - Learning Science - Carnegie Mellon University Learning and Adaptive Tutoring in Pre-Algebra We are seeking a postdoctoral fellow to work on an educational research project investigating prerequisite remediation for students using a pre-algebra computerized learning system. The goal of this work is to improve the existing system by providing just in time remediation for deficits in student knowledge or skills that may hamper progress in the system. An ideal candidate for this position will have experience with math pedagogy, instructional design, or teaching experience. The position will involve designing and running experiments, data analysis, and writing scholarly research articles. Other useful skills for this project include task analysis, think aloud protocols and experience with computerized learning systems. If you are interested in this position please send an email to Dr. Philip Pavlik (ppavlik at andrew.cmu.edu) with a curriculum vitae and a description of your availability. For more information on the grant this work involves, please see http://ies.ed.gov/ncer/projects/grant.asp?ProgID=5&grantid=477&InvID=388 and http://optimallearning.org/ies/index.html . Philip I. Pavlik Jr. Human Computer Interaction Institute Carnegie Mellon University Pittsburgh, PA 15213 ppavlik at andrew.cmu.edu http://optimallearning.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppavlik at cs.cmu.edu Tue Aug 25 11:40:55 2009 From: ppavlik at cs.cmu.edu (Philip Pavlik) Date: Tue, 25 Aug 2009 11:40:55 -0400 Subject: [ACT-R-users] Educational Datamining 2010 call for papers Message-ID: <1546B8BE23C96940B5B4929DB7CC234302B1E25E@e2k3.srv.cs.cmu.edu> Apologies for cross postings.... 
EDM2010 http://www.educationaldatamining.org/EDM2010 The Third International Conference on Educational Data Mining Pittsburgh, PA USA June 11-13, 2010 Call for Papers The Third International Conference on Educational Data Mining brings together researchers from computer science, education, psychology, psychometrics, and statistics to analyze large data sets to answer educational research questions. The increase in instrumented educational software, as well as state databases of student test scores, has created large repositories of data reflecting how students learn. The EDM conference focuses on computational approaches for using those data to address important educational questions. The broad collection of research disciplines ensures cross fertilization of ideas, with the central questions of educational research serving as a unifying focus. This Conference emerges from preceding EDM workshops at the AAAI, AIED, ICALT, ITS, and UM conferences. Topics of Interest We welcome papers describing original work. Areas of interest include but are not limited to: * Improving educational software. Many large educational data sets are generated by computer software. Can we use our discoveries to improve the software's effectiveness? * Domain representation. How do learners represent the domain? Does this representation shift as a result of instruction? Do different subpopulations represent the domain differently? * Evaluating teaching interventions. Student learning data provides a powerful mechanism for determining which teaching actions are successful. How can we best use such data? * Emotion, affect, and choice. The student's level of interest and willingness to be a partner in the educational process is critical. Can we detect when students are bored and uninterested? What other affective states or student choices should we track? * Integrating data mining and pedagogical theory. Data mining typically involves searching a large space of models. Can we use existing educational and psychological knowledge to better focus our search? * Improving teacher support. What types of assessment information would help teachers? What types of instructional suggestions are both feasible to generate and would be welcomed by teachers? * Replication studies. We are especially interested in papers that apply a previously used technique to a new domain, or that reanalyze an existing data set with a new technique. * Best practices for adaptation of data mining techniques to EDM. We are especially interested in papers that present best practices or methods for the adaptation of techniques from data mining and other relevant literatures to the specific needs of analysis of educational data. Important Dates * Paper submission: March 10, 2010 (23:59:59 EST), no extension * Acceptance notification: April 21, 2010 * Poster abstract submission: April 28, 2010 (23:59:59 EST) * Poster notification: May 3, 2010 * Camera ready papers, posters: May 19, 2010 * Conference: June 11-13, 2010 Submission Types All submissions should follow the formatting guidelines (MS Word, PDF). There are three types of submission: * Full papers: Maximum of 10 pages. Should describe substantial, unpublished work * Young researchers: Maximum of 8 pages. Designed for graduate students and undergraduates * Poster abstracts: Maximum of 2 pages Conference Organization * Conference Chair: Ryan S.J.d. Baker, Worcester Polytechnic Institute * Program Chairs: Agathe Merceron, Beuth University of Applied Sciences Berlin Philip I. 
Pavlik Jr., Carnegie Mellon University
* Local Organizing Chair: John Stamper, Carnegie Mellon University
* Web Chair: Arnon Hershkovitz, Tel Aviv University

Program Committee

Esma Aimeur, University of Montreal, Canada
Beth Ayers, Carnegie Mellon University, USA
Ryan Baker, Worcester Polytechnic Institute, USA
Tiffany Barnes, University of North Carolina at Charlotte, USA
Joseph Beck, Worcester Polytechnic Institute, USA
Bettina Berendt, Katholieke Universiteit Leuven, Belgium
Gautam Biswas, Vanderbilt University, USA
Cristophe Choquet, Université du Maine, France
Cristina Conati, University of British Columbia, Canada
Richard Cox, University of Sussex, UK
Michel Desmarais, Ecole Polytechnique de Montreal, Canada
Aude Dufresne, University of Montreal, Canada
Mingyu Feng, Worcester Polytechnic Institute, USA
Art Graesser, University of Memphis, USA
Andreas Harrer, Katholische Universität Eichstätt-Ingolstadt, Germany
Neil Heffernan, Worcester Polytechnic Institute, USA
Arnon Hershkovitz, Tel Aviv University, Israel
Cecily Hiener, University of Utah, USA
Roland Hubscher, Bentley University, USA
Sebastian Iksal, Université du Maine, France
Kenneth Koedinger, Carnegie Mellon University, USA
Vanda Luengo, Université Joseph Fourier Grenoble, France
Tara Madhyastha, University of Washington, USA
Brent Martin, Canterbury University, New Zealand
Noboru Matsuda, Carnegie Mellon University, USA
Manolis Mavrikis, The University of Edinburgh, UK
Gordon McCalla, University of Saskatchewan, Canada
Bruce McLaren, Deutsches Forschungszentrum für Künstliche Intelligenz, Germany
Julia Mingullon Alfonso, Universitat Oberta de Catalunya, Spain
Tanja Mitrovic, Canterbury University, New Zealand
Jack Mostow, Carnegie Mellon University, USA
Rafi Nachmias, Tel Aviv University, Israel
Roger Nkambou, Université du Québec à Montréal (UQAM), Canada
Mykola Pechenizkiy, Eindhoven University of Technology, Netherlands
Steve Ritter, Carnegie Learning, USA
Cristobal Romero, Cordoba University, Spain
Carolyn Rose, Carnegie Mellon University, USA
Steven Tanimoto, University of Washington, USA
Sebastian Ventura, Cordoba University, Spain
Kalina Yacef, University of Sydney, Australia
Osmar Zaiane, University of Alberta, Canada

Philip I. Pavlik Jr.
Human Computer Interaction Institute
Carnegie Mellon University
Pittsburgh, PA 15213
ppavlik at andrew.cmu.edu
http://optimallearning.org/

From stu at agstechnet.com Thu Aug 27 14:24:56 2009
From: stu at agstechnet.com (Stu @ AGS TechNet)
Date: Thu, 27 Aug 2009 14:24:56 -0400
Subject: [ACT-R-users] DM Retrieval Request and chunk-type hierarchy/subtype/supertype
Message-ID: <4A96CF78.8030909@agstechnet.com>

All,

I have been looking for a detailed discussion of how a retrieval request "+retrieval>" functions for chunk-type supertype matches (or subtype matches). I have not been able to find any such discussion (in archives, tutorials, or reference manual). Can anyone steer me in the right direction?

My questions are:

Thank you,
Stu

--
AGS TechNet
P.O. Box 752384
Dayton, OH 45475-2384
937-903-0558 Voice
513-297-0880 Fax
stu at agstechnet.com
www.agstechnet.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From db30 at andrew.cmu.edu Thu Aug 27 15:16:24 2009 From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu) Date: Thu, 27 Aug 2009 15:16:24 -0400 Subject: [ACT-R-users] DM Retrieval Request and chunk-type hierarchy/subtype/supertype In-Reply-To: <4A96CF78.8030909@agstechnet.com> References: <4A96CF78.8030909@agstechnet.com> Message-ID: --On Thursday, August 27, 2009 2:24 PM -0400 "Stu @ AGS TechNet" wrote: > All, > I have been looking for a detailed discussion of how a retrieval > request "+retrieval>" functions for chunk-type supertype matches (or > subytpe matches). > I have not been able to find any such discussion (in archives, > tutorials, or reference manual). > Can anyone steer me in the right direction? > My questions are: > Thank you, > Stu > It looks like your specific questions got clipped, but there's not really anything special about it so hopefully this general info will help. To be retrieved, a chunk must be a subtype of the chunk-type specified in the retrieval request (a chunk-type is considered to be a subtype of itself). That's basically all there is to it. There is no penalty or bonus to the activation of the possible matches based on the specific chunk-type of a chunk in the retrieval set. For more details of the matching you can look at the match-chunk-spec-p and find-matching-chunks commands in the reference manual. Essentially, the declarative module uses find-matching-chunks with the default testing functions and the set of all chunks in DM to determine which chunks are a match. Hope that helps, Dan From stu at agstechnet.com Thu Aug 27 16:05:24 2009 From: stu at agstechnet.com (Stu @ AGS TechNet) Date: Thu, 27 Aug 2009 16:05:24 -0400 Subject: [ACT-R-users] DM Retrieval Request and chunk-type hierarchy/subtype/supertype In-Reply-To: References: <4A96CF78.8030909@agstechnet.com> Message-ID: <4A96E704.6060502@agstechnet.com> Dan, Yes thanks. That was exactly what I was looking for. I would suggest that your second paragraph (probably with little to no change) be added to the Declarative Module section of the reference-manual.doc. Maybe on page 201 right before the section on Activation. What do you think? Stu db30 at andrew.cmu.edu wrote: > --On Thursday, August 27, 2009 2:24 PM -0400 "Stu @ AGS TechNet" > wrote: > > >> All, >> I have been looking for a detailed discussion of how a retrieval >> request "+retrieval>" functions for chunk-type supertype matches (or >> subytpe matches). >> I have not been able to find any such discussion (in archives, >> tutorials, or reference manual). >> Can anyone steer me in the right direction? >> My questions are: >> Thank you, >> Stu >> >> > > > It looks like your specific questions got clipped, but there's not > really anything special about it so hopefully this general info > will help. > > To be retrieved, a chunk must be a subtype of the chunk-type specified > in the retrieval request (a chunk-type is considered to be a subtype of > itself). That's basically all there is to it. There is no penalty or > bonus to the activation of the possible matches based on the specific > chunk-type of a chunk in the retrieval set. > > For more details of the matching you can look at the match-chunk-spec-p > and find-matching-chunks commands in the reference manual. Essentially, > the declarative module uses find-matching-chunks with the default testing > functions and the set of all chunks in DM to determine which chunks are a > match. 
>
> Hope that helps,
> Dan
>
> _______________________________________________
> ACT-R-users mailing list
> ACT-R-users at act-r.psy.cmu.edu
> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users
>
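As a concrete illustration of the subtype matching Dan describes above, here is a minimal sketch in ACT-R 6 syntax. The chunk-type hierarchy, chunk names and slot values are hypothetical, invented purely for illustration.

(clear-all)

(define-model subtype-retrieval-sketch
  ;; PET is declared with (:include animal), so it is a subtype of ANIMAL;
  ;; a chunk-type also counts as a subtype of itself.
  (chunk-type animal species legs)
  (chunk-type (pet (:include animal)) owner)
  (chunk-type task state)
  (define-chunks (start isa chunk) (waiting isa chunk))

  (add-dm
   (wolf1 isa animal species "wolf" legs 4)
   (rex   isa pet    species "dog"  legs 4 owner "mary")
   (g1    isa task   state start))

  ;; A request for ISA ANIMAL puts both WOLF1 and REX in the retrieval set,
  ;; because REX's type PET is a subtype of ANIMAL; a request for ISA PET
  ;; would consider only REX.  The chunk-type itself adds no activation
  ;; bonus or penalty -- whichever matching chunk is most active wins.
  (p retrieve-an-animal
     =goal>
        isa    task
        state  start
     ?retrieval>
        state  free
  ==>
     +retrieval>
        isa   animal
        legs  4
     =goal>
        state  waiting)

  (goal-focus g1))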