From arunbkris at gmail.com  Thu Aug 5 05:15:42 2021
From: arunbkris at gmail.com (Arun Krishna)
Date: Thu, 5 Aug 2021 11:15:42 +0200
Subject: [ACT-R-users] Question related to the Activation of Chunks and Base-Level learning
Message-ID:

Hi,

I have a question related to the activation of chunks and base-level learning.

*Problem Description:*

In the model I am using, the base-level learning value does not change when I rely on chunk merging. I am adding chunks with identical slot values through the add-dm command of the Python API, and I expected the base-level learning of those chunks to increase.

*Reference:*

ACT-R\tutorial\unit4\unit4.pdf, section 4.3.2, Chunk Merging.

*Code:*

In the model I have enabled subsymbolic computation with (sgp :esc t) and set the retrieval threshold with (sgp :rt -0.5). The complete settings for the model are given below:

(sgp :v t :esc t :rt -0.5 :lf 0.4 :ans 0.5 :bll 0.5 :act nil :ncnar nil :ul t)
(sgp :seed (200 4))

I am using the Python API for add-dm:

actr.add_dm(
    [chunkName, "isa", "navigation", "navigationStart", currentState,
     "navigationEnd", "state-16",
     "navigationTrigger", trigger])

This call is made repeatedly with different chunk names: chunk-1, chunk-2, ..., chunk-50. All of the chunks have the same slot values, so my expectation was that the base-level learning of every chunk would be modified.

But when I use (pprint-chunks-plus chunk-3) I get

SIMILARITIES NIL
REFERENCE-COUNT 0
REFERENCE-LIST NIL
SOURCE-SPREAD 0
LAST-BASE-LEVEL 0
BASE-LEVEL NIL
CREATION-TIME 0
FAN-IN NIL
C-FAN-OUT 0
FAN-OUT 0
IN-DM NIL
ACTIVATION 0
BUFFER-SET-INVALID NIL

Could you please let me know how to modify the base-level learning and reference count of these chunks (which I assumed would all point to a single chunk, referenced by different names, since the slot values are the same)?

Thanks & Regards,
Arun
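To make the reported setup concrete, the repeated add-dm calls amount to something like the sketch below. It assumes the actr.py module shipped with ACT-R is loaded and a model with the settings above is already defined; current_state and trigger are placeholders for whatever values the model actually supplies.

    # Sketch of the described setup: fifty chunks with identical slot values,
    # each added under a different name with add_dm.
    import actr

    current_state = "state-1"   # placeholder for the model's currentState value
    trigger = "trigger-1"       # placeholder for the model's trigger value

    for i in range(1, 51):
        # add-dm always creates a new chunk in declarative memory; identical
        # slot values do not cause it to merge with an existing chunk, so the
        # base-level learning of the earlier chunks is not strengthened.
        actr.add_dm([f"chunk-{i}", "isa", "navigation",
                     "navigationStart", current_state,
                     "navigationEnd", "state-16",
                     "navigationTrigger", trigger])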
From db30 at andrew.cmu.edu  Thu Aug 5 09:18:06 2021
From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu)
Date: Thu, 5 Aug 2021 09:18:06 -0400
Subject: [ACT-R-users] Question related to the Activation of Chunks and Base-Level learning
In-Reply-To:
References:
Message-ID: <07072EDAB569366556F36C82@[192.168.0.111]>

--On Thursday, August 5, 2021 5:15 AM -0400 Arun Krishna
<arunbkris at gmail.com> wrote:

> Could you please let me know how to modify the base-level learning and
> reference count of these chunks (which I assumed would all point to a
> single chunk, referenced by different names, since the slot values are
> the same)?

Add-dm always "adds" a chunk to declarative memory - it does not merge it
into declarative memory. There is a merge-dm command that works like
add-dm except that it merges the chunks into declarative memory.

However, for modifying the base-level, the set-base-levels command allows
one to directly adjust the base-level parameters for a chunk. Its use
varies depending upon how the parameters for base-level learning are set
(:bll and :ol), and you can find the details in the reference manual.

Hope that helps,
Dan
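The merge-based fix described above would look roughly like the sketch below when driven from Python. The Lisp command is merge-dm; the merge_dm wrapper used here is an assumption that the Python module exposes it with the same argument format as add_dm (if a particular copy of actr.py does not, the merge-dm command would need to be reached through whatever means the interface provides for calling other ACT-R commands). As before, current_state and trigger are placeholders.

    # Sketch: merge identical chunks into declarative memory instead of
    # adding new ones, so repetitions strengthen one chunk's base level.
    import actr

    current_state = "state-1"   # placeholder values, as in the question above
    trigger = "trigger-1"

    for i in range(1, 51):
        # merge_dm is assumed to mirror the Lisp merge-dm command: like
        # add-dm, except a chunk whose slot values match an existing chunk
        # in DM is merged with it, adding a reference to that chunk.
        actr.merge_dm([f"chunk-{i}", "isa", "navigation",
                       "navigationStart", current_state,
                       "navigationEnd", "state-16",
                       "navigationTrigger", trigger])

With :bll enabled, each merge adds a reference, so the surviving chunk's base-level activation grows with the repetitions, which is the behaviour described in the tutorial's chunk-merging section.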
From arunbkris at gmail.com  Fri Aug 6 03:49:11 2021
From: arunbkris at gmail.com (Arun Krishna)
Date: Fri, 6 Aug 2021 09:49:11 +0200
Subject: [ACT-R-users] Question related to the Activation of Chunks and Base-Level learning
In-Reply-To: <07072EDAB569366556F36C82@192.168.0.111>
References: <07072EDAB569366556F36C82@192.168.0.111>
Message-ID:

Hi Dan,

Thank you very much. I am now able to change the base-level learning and
strengthen the chunk using the merge-dm command.

Regards,
Arun

On Thu, Aug 5, 2021 at 3:20 PM <db30 at andrew.cmu.edu> wrote:

> Add-dm always "adds" a chunk to declarative memory - it does not merge it
> into declarative memory. There is a merge-dm command that works like
> add-dm except that it merges the chunks into declarative memory.
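For the other option mentioned in the reply, a rough sketch follows. set-base-levels is the Lisp command name; whether a matching set_base_levels wrapper exists in a given copy of actr.py is an assumption here and should be checked against the reference manual. With the poster's settings (:bll 0.5 and optimized learning left at its default), the numbers supplied are interpreted as a count of references and an optional creation time rather than a literal activation value.

    # Sketch: directly set a chunk's base-level history instead of relying
    # on merging. The chunk name and numbers are purely illustrative.
    import actr

    # Roughly the Lisp call (set-base-levels (chunk-3 20 0)):
    # give chunk-3 twenty references with a creation time of 0 seconds.
    actr.set_base_levels(["chunk-3", 20, 0])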