From pfbaldi at ics.uci.edu Mon Nov 1 11:47:43 2021 From: pfbaldi at ics.uci.edu (Baldi,Pierre) Date: Mon, 1 Nov 2021 08:47:43 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <8E01234A-03B3-492C-9DD7-B7FBD321475D@princeton.edu> <252610507.1139182.1635505993601@mail.yahoo.com> Message-ID: <2f8f0bdd-9957-861e-c53b-ce80e7957709@ics.uci.edu> Randall, I am glad that we agree on some points. If our quantitatively-bent community wanted to get to the bottom of these issues of biases, cronyism, and collusion, the algorithms for doing so are not a secret. 1) For each center of power (e.g. foundation, program/organizing committee, editorial board, university or corporate department), compile the list of its members since its creation. Some of this information can be assembled automatically from the web, from printed material, and other sources. If we want to have full transparency, in some appropriate cases, the community could also ask that those centers release the corresponding information, as well as any other pertinent information. 2) For each one of these lists, compute how any sub-group of interest is represented, and how this representation has evolved over time. You can probably come up with some nice probabilistic models and compute the corresponding p-values. As suggested by Tom Dietterich, not too surprisingly one may find a bias in favor of "North American" or "White Male", possibly with a decreasing trend in more recent years due to Tom's and others' efforts. Tom also mentions a "founder effect" for the special case of NIPS.
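Step 2 above can be sketched in code. Everything below is illustrative: the roster and the 30% baseline rate are hypothetical placeholders, and a serious analysis would need a defensible reference population (and care with multiple testing):

```python
# Sketch of step 2: a one-sided binomial test of sub-group representation
# on a committee roster, per year. Rosters and baseline are hypothetical.
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# hypothetical committee membership by year; True marks members of the
# sub-group of interest
roster = {
    2019: [True, True, True, False],
    2020: [True, True, False, False],
}

baseline = 0.30  # assumed rate of the sub-group in the reference population

for year, members in sorted(roster.items()):
    k, n = sum(members), len(members)
    print(year, f"{k}/{n} in sub-group, one-sided p = {binom_sf(k, n, baseline):.3f}")
```

The same loop, run over decades of rosters, would expose the trends over time mentioned above.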
As far as I can tell, "with the exception of Terry Sejnowski" to use Tom's words, the founder effect for NIPS is largely gone. Ed Posner died (and so did Bell Labs) and other founders, such as Yaser Abu Mostafa or Jim Bower, do not seem to have been involved for many years (see: https://work.caltech.edu/nips). 3) To further address the issues of cronyism and collusion, you need to look also at groups that are slightly more subtle. For instance, in the case of academic members, one should look not only at those members individually, but also at their "lineage", i.e., their family relatives and, more importantly, all their present and former graduate students and postdoctoral fellows. In the case of corporations or national laboratories, one could begin by looking at colleagues from the same institution. Finally, one could try to corroborate or complement these analyses by studying the special events I mentioned in my previous message (e.g. invited talks, birthday celebrations, special sessions) and citations. The interesting challenge for citations is to detect not only positive correlations, but also negative ones, where relevant work has been left out on purpose. Pierre Baldi On 10/31/2021 1:44 AM, Randall O'Reilly wrote: > I'm sure everyone agrees that scientific integrity is essential at all levels, but I hope we can avoid a kind of simplistic, sanctimonious treatment of these issues -- there are lots of complex dynamics at play in this or any scientific field. Here are a few additional thoughts / reactions: > > * Outside of a paper specifically on the history of a field, does it really make sense to "require" everyone to cite obscure old papers that you can't even get a PDF of on Google Scholar? Who does that help? Certainly not someone who might want to actually read a useful treatment of foundational ideas.
I generally cite papers that I actually think other people should read if they want to learn more about a topic -- those tend to be written by people who write clearly and compellingly. Those who are obsessed with historical precedents should write papers on such things, but don't get bent out of shape if other people really don't care that much about that stuff and really just care about the ideas and moving *forward*. > > * Should Newton be cited instead of Rumelhart et al., for backprop, as Steve suggested? Seriously, most of the math powering today's models is just calculus and the chain rule. Furthermore, the idea that gradients passed through many multiplicative steps of the chain rule tend to dissipate exponentially is pretty basic at a mathematical level, and I'm sure some obscure (or even famous) mathematician from the 1800's or even earlier has pointed this out in some context or another. For example, Lyapunov's work from the late 1800's is directly relevant in terms of iterative systems and the need to have an exponent of 1 for stability. So at some level all of deep learning and LSTM is just derivative of this earlier work (pun intended!). > > * More generally, each individual scientist is constantly absorbing ideas from others, synthesizing them in their own internal neural networks, and essentially "reinventing" the insights and implications of these ideas in their own mind. We all only have our own individual subjective lens onto the world, and each have to generate our own internal conceptual structures for ourselves. Thus, reinvention is rampant, and we each feel a distinct sense of ownership over the powerful ideas that we have forged in our own minds.
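[Editor's note: the chain-rule point above is easy to verify numerically. This toy loop, with purely illustrative numbers, shows how a gradient scaled repeatedly by any factor whose magnitude differs from 1 vanishes or explodes exponentially -- the discrete analogue of the Lyapunov-exponent condition mentioned:]

```python
# A gradient passed through many multiplicative chain-rule steps, each
# scaling it by a constant factor. Only |factor| = 1 keeps it stable.
def gradient_after(steps, factor):
    g = 1.0
    for _ in range(steps):
        g *= factor  # one multiplicative step of the chain rule
    return g

print(gradient_after(50, 0.9))  # shrinks toward zero (vanishing gradient)
print(gradient_after(50, 1.0))  # stays exactly 1 (stable)
print(gradient_after(50, 1.1))  # blows up (exploding gradient)
```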
Some people are lucky enough to be at the right place and the right time to share truly new ideas in an effective way with a large number of other people, but everyone who grasps those ideas can cherish the fact that so many people are out there in the world working tirelessly to share all of these great ideas! > > * To support what Pierre Baldi said: People are strongly biased to form in-group affiliations and put others into less respected (or worse) out-groups -- the power of this instinct is behind most of the evil in the world today and throughout history, and science is certainly not immune to its effects. Thus it is important to explicitly promote diversity of all forms in scientific organizations, and work against what clearly are strong "cliques" in the field, who hold longstanding and disproportionate control over important organizations. > > - Randy > >> On Oct 29, 2021, at 4:13 AM, Anand Ramamoorthy wrote: >> >> Hi All, >> Some remarks/thoughts: >> >> 1. Juergen raises important points relevant not just to the ML folks but also the wider scientific community. >> >> 2. Setting aside broader aspects of the social quality of the scientific enterprise, let's take a look at a simpler thing: individual duty. Each scientist has a duty to science (as an intellectual discipline) and the scientific community, to uphold fundamental principles informing the conduct of science. Credit should be given wherever it is due - it is a matter of duty, not preference or "strategic value" or boosting someone because they're a great populariser. >> >> 3. Crediting those who disseminate is fine and dandy, but should be for those precise contributions, AND the originators of an idea/method/body of work ought to be recognised - this is perhaps a bit difficult when the work is obscured by history, but not impossible. At any rate, if one has novel information of pertinence w.r.t. original work, then the right action is crystal clear. >> >> 4.
Academic science has loads of problems, and I think there is some urgency w.r.t. sorting them out, for three reasons: a) scientific duty; b) posterity; and c) we now live in a world where anti-science sentiments are not limited to fringe elements, and this does not bode well for humanity. >> >> Maybe dealing with proper credit assignment as pointed out by Juergen and others in the thread could be a start. >> >> Live Long and Prosper! >> >> Best, >> >> Anand Ramamoorthy >> >> >> >> On Friday, 29 October 2021, 08:39:14 BST, Jonathan D. Cohen wrote: >> >> >> Incentive structures (and the values they reflect) are not necessarily the same -- nor should they necessarily be -- in commercial and academic environments. >> >> jdc >> >> >> >>> On Oct 28, 2021, at 12:03 PM, Marina Meila wrote: >>> >>> Since credit is a form of currency in academia, let's look at the "hard currency" rewards of invention. Who gets them? The first company to create a new product usually fails. >>> However, the interesting thing is that society (by this I mean the society most of us work in) has found it necessary to counteract this, and we have patent laws to protect the rights of the inventors. >>> >>> The point is not whether patent laws are effective or not; it's the social norm they implement: that to protect invention, one should pay attention to rewarding the original inventors, whether we get the "product" directly from them or not. >>> >>> Best wishes, >>> >>> Marina >>> >>> -- Marina Meila >>> Professor of Statistics >>> University of Washington >>> >>> >>> On 10/28/21, 5:59 AM, "Connectionists" wrote: >>> >>> As a friendly amendment to both Randy and Danko's comments, it is also worth noting that science is an *intrinsically social* endeavor, and therefore communication is a fundamental factor. This may help explain why the *last* person to invent or discover something is the one who gets the [social] credit.
That is, giving credit to those who disseminate may even have normative value. After all, if a tree falls in the forest... As for those who care more about discovery and invention than dissemination, well, for them credit assignment may not be as important ;^). >>> jdc >>> >>> >>> On Oct 28, 2021, at 4:23 AM, Danko Nikolic wrote: >>> >>> Yes Randall, sadly so. I have seen similar examples in neuroscience and philosophy of mind. Often (but not always), you have to be the one who popularizes the thing to get the credit. Sometimes you can get away with just doing the hard conceptual work while others do the (also hard) marketing work for you. The best bet is to do both yourself. Still no guarantee. >>> Danko >>> >>> >>> >>> >>> On Thu, 28 Oct 2021, 10:13 Randall O'Reilly wrote: >>> >>> >>> I vaguely remember someone making an interesting case a while back that it is the *last* person to invent something that gets all the credit. This is almost by definition: once it is sufficiently widely known, nobody can successfully reinvent it; conversely, if it can be successfully reinvented, then the previous attempts failed for one reason or another (which may have nothing to do with the merit of the work in question). >>> >>> For example, I remember being surprised by how little Einstein added to what was already established by Lorentz and others, at the mathematical level, in the theory of special relativity. But he put those equations into a conceptual framework that obviously changed our understanding of basic physical concepts. Sometimes it is not the basic equations etc. that matter: it is the big picture vision. >>> >>> Cheers, >>> - Randy >>> >>>> On Oct 27, 2021, at 12:52 AM, Schmidhuber Juergen wrote: >>>> >>>> Hi, fellow artificial neural network enthusiasts! >>>> >>>> The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it.
I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. >>>> >>>> Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: >>>> >>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>> >>>> The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. >>>> >>>> I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. >>>> >>>> Thank you all in advance for your help! >>>> >>>> Jürgen Schmidhuber >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> > -- Pierre Baldi, Ph.D.
Distinguished Professor, Department of Computer Science Director, Institute for Genomics and Bioinformatics Associate Director, Center for Machine Learning and Intelligent Systems University of California, Irvine Irvine, CA 92697-3435 (949) 824-5809 (949) 824-9813 [FAX] Assistant: Janet Ko jko at uci.edu From levine at uta.edu Mon Nov 1 08:07:50 2021 From: levine at uta.edu (Levine, Daniel S) Date: Mon, 1 Nov 2021 12:07:50 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Tsvi, My book does not include the regulatory feedback you mention, but includes a lot of recurrent networks dating as far back as 1973 (some of them in high-impact journals). It is indeed readily available, in fact it was announced on Connectionists about two years ago. The link is https://www.routledge.com/Introduction-to-Neural-and-Cognitive-Modeling-3rd-Edition/Levine/p/book/9781848726482 . It is organized primarily by problems and secondarily by approaches. Dan ________________________________ From: Tsvi Achler Sent: Monday, November 1, 2021 4:23 AM To: Levine, Daniel S Cc: Schmidhuber Juergen ; connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback published in several low impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them and may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. 
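[Editor's note: the kind of network described above -- whose primary computation is inhibitory feedback onto its own inputs -- can be sketched in a few lines. This is a rough toy formulation of the general flavor, not the exact published equations of Achler's or Spratling's models; the weights and inputs are hypothetical:]

```python
# Toy sketch: each active output suppresses ("explains away") the inputs
# that drive it, and outputs settle iteratively during recognition rather
# than being computed in a single feedforward pass.
def regulatory_feedback(W, x, steps=50, eps=1e-9):
    """W[j][i] = weight from input i to output j; x = input vector."""
    n_out, n_in = len(W), len(x)
    y = [1.0] * n_out  # start all outputs equally active
    for _ in range(steps):
        # feedback inhibition each input receives from the current outputs
        fb = [sum(W[j][i] * y[j] for j in range(n_out)) for i in range(n_in)]
        # share of each input still unexplained by that feedback
        e = [x[i] / (fb[i] + eps) for i in range(n_in)]
        # outputs rescale toward whatever best accounts for the input
        y = [y[j] * sum(W[j][i] * e[i] for i in range(n_in)) / sum(W[j])
             for j in range(n_out)]
    return y

W = [[1.0, 1.0, 0.0],   # output 0 expects inputs 0 and 1
     [0.0, 1.0, 1.0]]   # output 1 expects inputs 1 and 2
print(regulatory_feedback(W, [1.0, 1.0, 0.0]))  # settles on output 0
```

Note how the shared input (input 1) is contested between the two outputs until the feedback dynamics resolve which hypothesis explains the evidence best -- no lateral inhibition between outputs is used.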
I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4, "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes for Steve Grossberg's book; I cannot easily look inside. With regards to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of the very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time, and continuously during recognition, in order to determine which neurons (singly or together) match the inputs best, without lateral inhibition. Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms, such as Google's, can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia developed in the Middle Ages (and their mutual synergies) block the use of more modern methods in research.
Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread. Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S > wrote: Tsvi, While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning. A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ________________________________ From: Connectionists > on behalf of Tsvi Achler > Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen > Cc: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, focusing especially on the connectionist community. In general, when analyzing impact factors etc., the most important progress gets silenced until the mainstream picks it up ("Impact Factors in Novel Research", www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g.
lateral inhibition), or other variants that are sometimes labeled as recurrent networks for learning over time, where the feedforward networks can be unrolled in time. This stasis is specifically occurring with the popularity of deep learning. This is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see the video link below for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals, and no funding. But they are important and may negate the need for rehearsal as required in feedforward methods. Thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it.
Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help! Jürgen Schmidhuber -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at cs.ucy.ac.cy Mon Nov 1 10:22:25 2021 From: george at cs.ucy.ac.cy (George A. Papadopoulos) Date: Mon, 1 Nov 2021 16:22:25 +0200 Subject: Connectionists: ACM International Conference on Information Technology for Social Good (GoodIT 2022): Second Call for Special Track Proposals Message-ID: *** Second Call for Special Track Proposals *** ACM International Conference on Information Technology for Social Good (GoodIT 2022) 7-9 September 2022, 5* St. Raphael Resort & Marina, Limassol, Cyprus https://cyprusconferences.org/goodit2022/ Scope ACM GoodIT focuses on the application of IT technologies to social good. Social good is typically defined as an action that provides some sort of benefit to the general public. In this case, Internet connection, education, and healthcare are all good examples of social goods. However, new media innovations and the explosion of online communities have added new meaning to the term.
Social good is now about global citizens uniting to unlock the potential of individuals, technology, and collaboration to create positive societal impact. GoodIT topics include, but are not limited to:

- IT for education
- Data Science
- Digital solutions for Cultural Heritage
- Data sensing, processing, and persistency
- Game, entertainment, and multimedia applications
- Health and social care
- IT for development
- Privacy and trust issues and solutions
- Sustainable cities and transportation
- Smart governance and e-administration
- IT for smart living
- Technology addressing the digital divide
- IT for automotive
- Frugal solutions for IT
- Ethical computing
- Decentralized approaches to IT
- Citizen science
- Socially responsible IT solutions
- Sustainable IT
- Social informatics
- Civic intelligence

Journal Special Issue and Best Paper Award

Selected papers will be invited to submit an extended version to a special issue of the journal MDPI Sensors, where the theme of the special issue will be "Application of Information Technology (IT) to Social Good". Specifically, 5 papers will be invited free of charge and another 5 papers will get a 20% discount on the publication fees. Furthermore, MDPI Sensors will sponsor a Best Paper Award in the amount of 400 CHF.

Special Track Proposals

GoodIT 2022 will feature special tracks whose aim is to focus on a specific topic of interest related to the overall scope of the conference. We solicit proposals for special tracks to be held within the main conference and whose publications will be included in the conference proceedings. Track proposals can focus on any contemporary themes that highlight social good aspects in the design, implementation, deployment, securing, and evaluation of IT technologies.

Special Track Proposal Format

A special track proposal must contain the following information:

- Title of the special track.
- The names of the organizers (indicatively, two), with affiliations, contact information, and a single paragraph of a brief bio.
- A short description of the scope and topics of the track (max 1/2 page) and a brief explanation of: (1) why the topic is timely and important; (2) why the topic is related to the conference's main theme; (3) why the track may attract a significant number of submissions of good quality.
- An indication of whether a journal special issue is associated with the track, possibly with information on the process of selecting papers.
- The plan to disseminate the call for papers of the special track for achieving a reasonable number of paper submissions (a list of emailing lists will help).
- A tentative Program Committee list.
- A draft Call for Papers (max 1 page).

Publication

Papers submitted to each particular track have to satisfy the same criteria as for the main conference. They must be original works and must not have been previously published. They have to be peer-reviewed by the track's Program Committee (at least three reviews per submitted paper are required). The final version of papers must follow the formatting instructions of the main conference (https://cyprusconferences.org/goodit2022/index.php/authors/). At least one of the authors of all accepted papers must register and present the work at the conference; otherwise, the paper will not be published in the proceedings. All accepted and presented papers will be included in the conference proceedings published in the ACM Digital Library. The special track may provide an option for publishing extended versions of selected papers in a special issue of a journal.

Special Track Proposal Submission Guidelines

Special track proposals should be submitted as a single PDF file to the special track Chairs (see below) via email to: ombretta.gaggi at unipd.it, valentino.vranic at stuba.sk, and rysavy at fit.vut.cz. The subject of the e-mail must be: "GoodIT 2022 - special track proposal".
The special track chairs may ask proposers to supply additional information during the review period.

Important Dates

- Special Track Proposal Submission Deadline: 13 December 2021
- Notification of Selection: 20 December 2021

Contact (Special Track Chairs)

- Ombretta Gaggi (University of Padua, Italy)
- Ondrej Rysavy (Brno University of Technology, Czech Republic)
- Valentino Vranic (Slovak University of Technology in Bratislava, Slovakia)

-------------- next part -------------- An HTML attachment was scrubbed... URL: From achler at gmail.com Mon Nov 1 05:23:25 2021 From: achler at gmail.com (Tsvi Achler) Date: Mon, 1 Nov 2021 02:23:25 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback, published in several low-impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them, and they may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4, "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes for Steve Grossberg's book; I cannot easily look inside.
With regards to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of the very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time, and continuously during recognition, in order to determine which neurons (singly or together) match the inputs best, without lateral inhibition. Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms, such as Google's, can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia developed in the Middle Ages (and their mutual synergies) block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread. Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: > Tsvi, > > While deep learning and feedforward networks have an outsize popularity, > there are plenty of published sources that cover a much wider variety of > networks, many of them more biologically based than deep learning.
A > treatment of a range of neural network approaches, going from simpler to > more complex cognitive functions, is found in my textbook * Introduction > to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also > Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021) > emphasizes a variety of architectures with a strong biological basis. > > > Best, > > > Dan Levine > ------------------------------ > *From:* Connectionists on > behalf of Tsvi Achler > *Sent:* Saturday, October 30, 2021 3:13 AM > *To:* Schmidhuber Juergen > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > Since the title of the thread is Scientific Integrity, I want to point out > some issues about trends in academia and then especially focusing on the > connectionist community. > > In general analyzing impact factors etc the most important progress gets > silenced until the mainstream picks it up Impact Factiors in novel > research www.nber.org/.../working_papers/w22180/w22180.pdf > and > often this may take a generation > https://www.nber.org/.../does-science-advance-one-funeral... > > . > > The connectionist field is stuck on feedforward networks and variants such > as with inhibition of competitors (e.g. lateral inhibition), or other > variants that are sometimes labeled as recurrent networks for learning time > where the feedforward networks can be rewound in time. > > This stasis is specifically occuring with the popularity of deep > learning. This is often portrayed as neurally plausible connectionism but > requires an implausible amount of rehearsal and is not connectionist if > this rehearsal is not implemented with neurons (see video link for further > clarification). > > Models which have true feedback (e.g. back to their own inputs) cannot > learn by backpropagation but there is plenty of evidence these types of > connections exist in the brain and are used during recognition. 
Thus they > get ignored: no talks in universities, no featuring in "premier" journals > and no funding. > > But they are important and may negate the need for rehearsal as needed in > feedforward methods. Thus may be essential for moving connectionism > forward. > > If the community is truly dedicated to brain motivated algorithms, I > recommend giving more time to networks other than feedforward networks. > > Video: > https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 > > > Sincerely, > Tsvi Achler > > > > On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: > > Hi, fellow artificial neural network enthusiasts! > > The connectionists mailing list is perhaps the oldest mailing list on > ANNs, and many neural net pioneers are still subscribed to it. I am hoping > that some of them - as well as their contemporaries - might be able to > provide additional valuable insights into the history of the field. > > Following the great success of massive open online peer review (MOOR) for > my 2015 survey of deep learning (now the most cited article ever published > in the journal Neural Networks), I've decided to put forward another piece > for MOOR. I want to thank the many experts who have already provided me > with comments on it. Please send additional relevant references and > suggestions for improvements for the following draft directly to me at > juergen at idsia.ch: > > > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html > > > The above is a point-for-point critique of factual errors in ACM's > justification of the ACM A. M. Turing Award for deep learning and a > critique of the Turing Lecture published by ACM in July 2021. This work can > also be seen as a short history of deep learning, at least as far as ACM's > errors and the Turing Lecture are concerned. > > I know that some view this as a controversial topic. 
However, it is the > very nature of science to resolve controversies through facts. Credit > assignment is as core to scientific history as it is to machine learning. > My aim is to ensure that the true history of our field is preserved for > posterity. > > Thank you all in advance for your help! > > Jürgen Schmidhuber > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary at eng.ucsd.edu Mon Nov 1 13:43:46 2021 From: gary at eng.ucsd.edu (gary@ucsd.edu) Date: Mon, 1 Nov 2021 10:43:46 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Tsvi says: "Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation" Not sure where this is coming from, but it's certainly false. There are plenty of networks out there with feedback, and BPTT works with feedback connections. For a recent (2019) work that is trained on brain dynamics, see: Recurrence is required to capture the representational dynamics of the human visual system. On Sun, Oct 31, 2021 at 12:17 PM Tsvi Achler wrote: > Since the title of the thread is Scientific Integrity, I want to point out > some issues about trends in academia and then especially focusing on the > connectionist community. > > In general analyzing impact factors etc the most important progress gets > silenced until the mainstream picks it up Impact Factors in novel > research www.nber.org/.../working_papers/w22180/w22180.pdf > and > often this may take a generation > https://www.nber.org/.../does-science-advance-one-funeral... > > . > > The connectionist field is stuck on feedforward networks and variants such > as with inhibition of competitors (e.g. 
lateral inhibition), or other > variants that are sometimes labeled as recurrent networks for learning time > where the feedforward networks can be rewound in time. > > This stasis is specifically occurring with the popularity of deep > learning. This is often portrayed as neurally plausible connectionism but > requires an implausible amount of rehearsal and is not connectionist if > this rehearsal is not implemented with neurons (see video link for further > clarification). > > Models which have true feedback (e.g. back to their own inputs) cannot > learn by backpropagation but there is plenty of evidence these types of > connections exist in the brain and are used during recognition. Thus they > get ignored: no talks in universities, no featuring in "premier" journals > and no funding. > > But they are important and may negate the need for rehearsal as needed in > feedforward methods. Thus may be essential for moving connectionism > forward. > > If the community is truly dedicated to brain motivated algorithms, I > recommend giving more time to networks other than feedforward networks. > > Video: > https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 > > Sincerely, > Tsvi Achler > > > > On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: > >> Hi, fellow artificial neural network enthusiasts! >> >> The connectionists mailing list is perhaps the oldest mailing list on >> ANNs, and many neural net pioneers are still subscribed to it. I am hoping >> that some of them - as well as their contemporaries - might be able to >> provide additional valuable insights into the history of the field. >> >> Following the great success of massive open online peer review (MOOR) for >> my 2015 survey of deep learning (now the most cited article ever published >> in the journal Neural Networks), I've decided to put forward another piece >> for MOOR. I want to thank the many experts who have already provided me >> with comments on it. 
Please send additional relevant references and >> suggestions for improvements for the following draft directly to me at >> juergen at idsia.ch: >> >> >> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >> >> The above is a point-for-point critique of factual errors in ACM's >> justification of the ACM A. M. Turing Award for deep learning and a >> critique of the Turing Lecture published by ACM in July 2021. This work can >> also be seen as a short history of deep learning, at least as far as ACM's >> errors and the Turing Lecture are concerned. >> >> I know that some view this as a controversial topic. However, it is the >> very nature of science to resolve controversies through facts. Credit >> assignment is as core to scientific history as it is to machine learning. >> My aim is to ensure that the true history of our field is preserved for >> posterity. >> >> Thank you all in advance for your help! >> >> Jürgen Schmidhuber >> >> >> >> >> >> >> >> >> -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir -------------- next part -------------- An HTML attachment was scrubbed... 
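Gary's point above — that BPTT handles feedback connections — can be made concrete: unroll the recurrence, push error gradients backward through every time step, and the recurrent (feedback) weight matrix gets a gradient like any other parameter. A minimal vanilla-RNN sketch; the delay task, names, and sizes are illustrative assumptions, not anything from the thread:

```python
import numpy as np

def bptt_step(xs, ts, Wx, Wh, Wy, lr=0.02):
    """One step of backprop-through-time for a vanilla RNN:
    h_t = tanh(Wx*x_t + Wh @ h_{t-1}), y_t = Wy @ h_t (scalar in/out).
    The recurrent matrix Wh is exactly a feedback connection, and it
    receives a gradient like any other weight. Updates the parameter
    arrays in place and returns the loss measured before the update."""
    T, H = len(xs), Wh.shape[0]
    hs = [np.zeros(H)]
    for x in xs:                                   # forward pass, storing states
        hs.append(np.tanh(Wx * x + Wh @ hs[-1]))
    ys = [float(Wy @ h) for h in hs[1:]]
    loss = 0.5 * sum((y - t) ** 2 for y, t in zip(ys, ts))
    dWx, dWh, dWy = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wy)
    dh_next = np.zeros(H)
    for t in reversed(range(T)):                   # backward pass through time
        dy = ys[t] - ts[t]
        dWy += dy * hs[t + 1]
        dh = dy * Wy + dh_next                     # gradient reaching h_t
        da = dh * (1.0 - hs[t + 1] ** 2)           # through the tanh
        dWx += da * xs[t]
        dWh += np.outer(da, hs[t])                 # feedback-weight gradient
        dh_next = Wh.T @ da                        # flows to the earlier step
    for W, dW in ((Wx, dWx), (Wh, dWh), (Wy, dWy)):
        W -= lr * dW                               # in-place SGD update
    return loss
```

Training this on a delay-by-one task (output the previous input, which a memoryless feedforward map cannot do) drives the loss down precisely because the gradient through Wh is well defined.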
URL: From zarifa.mohamad at scioi.org Mon Nov 1 14:21:55 2021 From: zarifa.mohamad at scioi.org (Mohamad, Zarifa) Date: Mon, 1 Nov 2021 18:21:55 +0000 Subject: Connectionists: Call for applications, 11 research positions, deadline 25 November 2021 23:59 (CET) In-Reply-To: <026a8830dedd4787bba9ad6932c31796@scioi.org> References: <3925d9f8ee84474db2fdd12cbce13f77@scioi.org>, <3596f4c8b9ca478bb77365e864a27fd0@scioi.org>, <026a8830dedd4787bba9ad6932c31796@scioi.org> Message-ID: <4b7f03fcc908417bb5fd804b45333766@scioi.org> Call for applications - Science of Intelligence Berlin - Cluster of Excellence Call for 11 research positions (PhD and Postdoc) Application deadline: 25 November 2021 23:59 h CET Cross-disciplinary research in artificial intelligence, machine learning, control, robotics, computer vision, behavioral biology, cognitive science, psychology, educational science, neuroscience, and philosophy. Starting dates: Summer / Fall 2022 Duration: 3 years Salary level: TV-L 13, 100% What are the principles of intelligence, shared by all forms of intelligence, whether artificial or biological, whether robot, computer program, human, or animal? And how can we apply these principles to create intelligent technology? Answering these questions - in an ethically responsible way - is the central scientific objective of the Cluster of Excellence Science of Intelligence (https://www.scienceofintelligence.de/). Researchers from a large number of analytic and synthetic disciplines - artificial intelligence, machine learning, control, robotics, computer vision, behavioral biology, cognitive science, psychology, educational science, neuroscience, and philosophy - join forces at this multi-disciplinary research program across universities and research institutes in Berlin. Our approach is driven by the insight that any method, concept, and theory must demonstrate its merits by contributing to the intelligent behavior of a synthetic artifact, such as a robot or a computer program. 
These artifacts represent the shared "language" across disciplines, enabling the validation, combination, transfer, and extension of research results. Thus, we expect to attain cohesion among disciplines, which currently produce their own theories and empirical findings about aspects of intelligence. Interdisciplinary research projects have been defined which combine analytic and synthetic research and which address key aspects of individual, social, and collective intelligence. In addition, the Science of Intelligence graduate program promotes the cross-disciplinary education of young scientists at a doctoral and postdoctoral level. All doctoral researchers associated with the cluster are expected to join the Science of Intelligence doctoral program (https://www.scienceofintelligence.de/education/doctoral-program/). The cluster welcomes applications from all disciplines that contribute to intelligence research. To apply, please visit https://www.scienceofintelligence.de/call-for-applications/open-positions where details of the individual research projects are also available. Please submit your applications by *25 November 2021 23:59 h CET* in order to receive full consideration. 
More information: https://www.scienceofintelligence.de/call-for-applications/application-process/ FAQ https://www.scienceofintelligence.de/education/admissions/admissions-faqs/ CONTACT Zarifa Mohamad, Graduate Coordinator Cluster Science of Intelligence (SCIoI) Technische Universität Berlin Marchstraße 23 10587 Berlin, Germany Tel.: +49 30 314-22673 Email: applications at scioi.de www.scienceofintelligence.de Zarifa Mohamad Science of Intelligence (SCIoI) Technische Universitaet Berlin Marchstraße 23 10587 Berlin, Germany +49 30 314 22673 zarifa.mohamad at scioi.org Subscribe to our mailing list here or by sending an empty email to scioi-info-join at lists.tu-berlin.de *********************************************** Science of Intelligence (SCIoI) Cluster of Excellence www.scienceofintelligence.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary at eng.ucsd.edu Mon Nov 1 13:59:00 2021 From: gary at eng.ucsd.edu (gary@ucsd.edu) Date: Mon, 1 Nov 2021 10:59:00 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Tsvi - While I think Randy and Yuko's book is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there *is* an online version. Randy & Yuko's models take into account feedback and inhibition. On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: > Daniel, > > Does your book include a discussion of Regulatory or Inhibitory Feedback > published in several low impact journals between 2008 and 2014 (and in > videos subsequently)? > These are networks where the primary computation is inhibition back to the > inputs that activated them and may be very counterintuitive given today's > trends. 
You can almost think of them as the opposite of Hopfield networks. > > I would love to check inside the book but I don't have an academic budget > that allows me access to it and that is a huge part of the problem with how > information is shared and funding is allocated. I could not get access to > any of the text or citations especially Chapter 4: "Competition, Lateral > Inhibition, and Short-Term Memory", to weigh in. > > I wish the best circulation for your book, but even if the Regulatory > Feedback Model is in the book, that does not change the fundamental problem > if the book is not readily available. > > The same goes with Steve Grossberg's book, I cannot easily look inside. > With regards to Adaptive Resonance I don't subscribe to lateral inhibition > as a predominant mechanism, but I do believe a function such as vigilance > is very important during recognition and Adaptive Resonance is one of > a very few models that have it. The Regulatory Feedback model I have > developed (and Michael Spratling studies a similar model as well) is built > primarily using the vigilance type of connections and allows multiple > neurons to be evaluated at the same time and continuously during > recognition in order to determine which (single or multiple neurons > together) match the inputs the best without lateral inhibition. > > Unfortunately within conferences and talks predominated by the Adaptive > Resonance crowd I have experienced the familiar dismissiveness and did not > have an opportunity to give a proper talk. This goes back to the larger > issue of academic politics based on small self-selected committees, the > same issues that exist with the feedforward crowd, and pretty much all of > academia. 
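One simplified reading of the regulatory-feedback idea in the quoted message — each output divisively inhibits the inputs that drive it, and outputs are iteratively re-scored by how well they jointly explain the input, with no output-to-output lateral inhibition — can be sketched as follows. This is an illustrative reconstruction under that reading, not Achler's or Spratling's exact published update rule:

```python
import numpy as np

def regulatory_feedback(x, W, steps=50, eps=1e-9):
    """Iterative recognition where outputs feed inhibition back to their
    own inputs. W[i, j] >= 0 is the weight from input i to output j."""
    y = np.ones(W.shape[1])        # every candidate output starts active
    n = W.sum(axis=0)              # total input weight per output (normalizer)
    for _ in range(steps):
        f = W @ y + eps            # feedback each input currently receives
        r = x / f                  # >1: input under-explained, <1: over-explained
        y = (y / n) * (W.T @ r)    # re-score outputs by how well they explain x
    return y
```

Note there is no competition term between the outputs themselves: several outputs can stay active simultaneously when they jointly account for the input, and an output dies off only when its inputs are already explained by others.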
> > Today's information age algorithms such as Google's can determine > relevance of information and ways to display them, but hegemony of the > journal systems and the small committee system of academia developed in the > middle ages (and their mutual synergies) block the use of more modern > methods in research. Thus we are stuck with this problem, which especially > affects those that are trying to introduce something new and > counterintuitive, and hence the results described in the two National > Bureau of Economic Research articles I cited in my previous message. > > Thomas, I am happy to have more discussions and/or start a different > thread. > > Sincerely, > Tsvi Achler MD/PhD > > > > On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: > >> Tsvi, >> >> While deep learning and feedforward networks have an outsize popularity, >> there are plenty of published sources that cover a much wider variety of >> networks, many of them more biologically based than deep learning. A >> treatment of a range of neural network approaches, going from simpler to >> more complex cognitive functions, is found in my textbook * Introduction >> to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also >> Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021) >> emphasizes a variety of architectures with a strong biological basis. >> >> >> Best, >> >> >> Dan Levine >> ------------------------------ >> *From:* Connectionists >> on behalf of Tsvi Achler >> *Sent:* Saturday, October 30, 2021 3:13 AM >> *To:* Schmidhuber Juergen >> *Cc:* connectionists at cs.cmu.edu >> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >> Lecture, etc. >> >> Since the title of the thread is Scientific Integrity, I want to point >> out some issues about trends in academia and then especially focusing on >> the connectionist community. 
>> >> In general analyzing impact factors etc the most important progress gets >> silenced until the mainstream picks it up Impact Factors in novel >> research www.nber.org/.../working_papers/w22180/w22180.pdf >> and >> often this may take a generation >> https://www.nber.org/.../does-science-advance-one-funeral... >> >> . >> >> The connectionist field is stuck on feedforward networks and variants >> such as with inhibition of competitors (e.g. lateral inhibition), or other >> variants that are sometimes labeled as recurrent networks for learning time >> where the feedforward networks can be rewound in time. >> >> This stasis is specifically occurring with the popularity of deep >> learning. This is often portrayed as neurally plausible connectionism but >> requires an implausible amount of rehearsal and is not connectionist if >> this rehearsal is not implemented with neurons (see video link for further >> clarification). >> >> Models which have true feedback (e.g. back to their own inputs) cannot >> learn by backpropagation but there is plenty of evidence these types of >> connections exist in the brain and are used during recognition. Thus they >> get ignored: no talks in universities, no featuring in "premier" journals >> and no funding. >> >> But they are important and may negate the need for rehearsal as needed in >> feedforward methods. Thus may be essential for moving connectionism >> forward. >> >> If the community is truly dedicated to brain motivated algorithms, I >> recommend giving more time to networks other than feedforward networks. >> >> Video: >> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 >> >> >> Sincerely, >> Tsvi Achler >> >> >> >> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen >> wrote: >> >> Hi, fellow artificial neural network enthusiasts! >> >> The connectionists mailing list is perhaps the oldest mailing list on >> ANNs, and many neural net pioneers are still subscribed to it. 
I am hoping >> that some of them - as well as their contemporaries - might be able to >> provide additional valuable insights into the history of the field. >> >> Following the great success of massive open online peer review (MOOR) for >> my 2015 survey of deep learning (now the most cited article ever published >> in the journal Neural Networks), I've decided to put forward another piece >> for MOOR. I want to thank the many experts who have already provided me >> with comments on it. Please send additional relevant references and >> suggestions for improvements for the following draft directly to me at >> juergen at idsia.ch: >> >> >> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >> >> >> The above is a point-for-point critique of factual errors in ACM's >> justification of the ACM A. M. Turing Award for deep learning and a >> critique of the Turing Lecture published by ACM in July 2021. This work can >> also be seen as a short history of deep learning, at least as far as ACM's >> errors and the Turing Lecture are concerned. >> >> I know that some view this as a controversial topic. However, it is the >> very nature of science to resolve controversies through facts. Credit >> assignment is as core to scientific history as it is to machine learning. >> My aim is to ensure that the true history of our field is preserved for >> posterity. >> >> Thank you all in advance for your help! >> >> Jürgen Schmidhuber >> >> >> >> >> >> >> >> >> -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 
92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir -------------- next part -------------- An HTML attachment was scrubbed... URL: From astrid.prinz at emory.edu Mon Nov 1 14:37:10 2021 From: astrid.prinz at emory.edu (Prinz, Astrid A) Date: Mon, 1 Nov 2021 18:37:10 +0000 Subject: Connectionists: [External] Re: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Dear Connectionists, Perhaps someone can provide Tsvi with the access to the academic literature he requires. See his email below. This may help Tsvi to address the issue that (quote): Today's information age algorithms such as Google's can determine relevance of information and ways to display them, but hegemony of the journal systems and the small committee system of academia developed in the middle ages (and their mutual synergies) block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those that are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thank you! Astrid A. Prinz, PhD (she/her/hers) Associate Professor Department of Biology Emory University O. 
Wayne Rollins Research Center, Room 2105 1510 Clifton Road Atlanta, GA 30322 phone: 404-727-5191 fax: 404-727-2880 e-mail: astrid.prinz at emory.edu website: http://www.biology.emory.edu/research/Prinz/ ________________________________ From: Connectionists on behalf of Tsvi Achler Sent: Monday, November 1, 2021 5:23 AM To: Levine, Daniel S Cc: connectionists at cs.cmu.edu Subject: [External] Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. [...] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From astrid.prinz at emory.edu Mon Nov 1 15:11:55 2021 From: astrid.prinz at emory.edu (Prinz, Astrid A) Date: Mon, 1 Nov 2021 19:11:55 +0000 Subject: Connectionists: [External] Re: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Forgive my sarcasm. ________________________________ From: Prinz, Astrid A Sent: Monday, November 1, 2021 2:37 PM To: connectionists at cs.cmu.edu Subject: Re: [External] Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Dear Connectionists, Perhaps someone can provide Tsvi with the access to the academic literature he requires. See his email below. This may help Tsvi to address the issue that (qoute): Today's information age algorithms such as Google's can determine relevance of information and ways to display them, but hegemony of the journal systems and the small committee system of academia developed in the middle ages (and their mutual synergies) block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those that are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thank you! Astrid A. Prinz, PhD (she/her/hers) Associate Professor Department of Biology Emory University O. 
Wayne Rollins Research Center, Room 2105 1510 Clifton Road Atlanta, GA 30322 phone: 404-727-5191 fax: 404-727-2880 e-mail: astrid.prinz at emory.edu website: http://www.biology.emory.edu/research/Prinz/ ________________________________ From: Connectionists on behalf of Tsvi Achler Sent: Monday, November 1, 2021 5:23 AM To: Levine, Daniel S Cc: connectionists at cs.cmu.edu Subject: [External] Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback published in several low impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them and may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book but I dont have an academic budget that allows me access to it and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes with Steve Grossberg's book, I cannot easily look inside. With regards to Adaptive Resonance I dont subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition and Adaptive Resonance is one of a very few models that have it. 
The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections, and it allows multiple neurons to be evaluated at the same time, and continuously, during recognition, in order to determine which neurons (single or multiple together) match the inputs best without lateral inhibition. Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd, I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms such as Google's can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia, developed in the Middle Ages (and their mutual synergies), block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive; hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread. Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S > wrote: Tsvi, While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning. A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019).
Also Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ________________________________ From: Connectionists > on behalf of Tsvi Achler > Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen > Cc: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, focusing especially on the connectionist community. In general, analyzing impact factors etc., the most important progress gets silenced until the mainstream picks it up (Impact Factors in novel research: www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled as recurrent networks for learning time, where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning. Deep learning is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence that these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals, and no funding. But they are important and may negate the need for rehearsal as required in feedforward methods. Thus they may be essential for moving connectionism forward.
If the community is truly dedicated to brain motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help! Jürgen Schmidhuber -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Donald.Adjeroh at mail.wvu.edu Mon Nov 1 15:53:59 2021 From: Donald.Adjeroh at mail.wvu.edu (Donald Adjeroh) Date: Mon, 1 Nov 2021 19:53:59 +0000 Subject: Connectionists: Final call -- IEEE BIBM-LncRNA'21: few hours to go, journal special issue; see our array of speakers! In-Reply-To: References: , , , , Message-ID: Apologies if you receive multiple copies ... Authors of selected papers will be invited to submit extended versions for consideration for a Journal Special Issue in MDPI Non-Coding RNA. We also have an exciting array of speakers for the workshop -- both in-person presenters in Dubai, UAE, and online/remote presenters! See our website: BIBM-LncRNA'2021: https://community.wvu.edu/~daadjeroh/workshops/LNCRNA2021/ Our paper submission deadline is Nov. 1, just a few hours away -- see below. Call for Papers The IEEE BIBM 2021 Workshop on Long Non-Coding RNAs: Mechanism, Function, and Computational Analysis (BIBM-LncRNA) will be held in conjunction with the 2021 IEEE International Conference on Bioinformatics and Biomedicine (IEEE BIBM 2021), Dec. 9-12, 2021. Though the BIBM conference will be virtual/online, the LncRNA workshop will be held in a mixed mode -- both virtual/remote and face-to-face in Dubai, UAE. BIBM-LncRNA'2021: https://community.wvu.edu/~daadjeroh/workshops/LNCRNA2021/ IEEE BIBM 2021: https://ieeebibm.org/BIBM2021/ The recent application of high-throughput technologies to transcriptomics has changed our view of gene regulation and function. The discovery of extensive transcription of large RNA transcripts, termed long noncoding RNAs (lncRNAs), provides an important new perspective on the centrality of RNA in gene regulation. LncRNAs are involved in various biological and cellular processes, such as genetic imprinting, chromatin remodeling, gene regulation, and embryonic development. LncRNAs have also been implicated in several chronic diseases, such as cancers and heart disease.
Various types of genomic data on lncRNAs are currently available, including sequences, secondary/tertiary structures, transcriptome data, and their interactions with related proteins or genes. The key challenge is how to integrate data from myriad sources to determine the functions and the regulatory mechanisms of these ubiquitous lncRNAs. Research topics: The potential topics include, but are not limited to, the following:
- lncRNA detection and biomarker discovery
- CLIP-Seq and RIP-Seq data analysis
- Prediction of physical binding between lncRNA and DNA, RNA, and protein
- Competition and interaction between lncRNA, miRNA and mRNA
- Studying methylation regulating lncRNA functions
- Function prediction for lncRNAs
- Deep learning approaches to lncRNA/RNA binding protein prediction
- Computational approaches to analyzing lncRNA
- lncRNA 3D secondary structures
- lncRNA-protein interactions
- lncRNA in epigenetic regulation
- lncRNA-associated disease networks
- lncRNAs in plant genomics
- lncRNAs in phenotype-genotype problems
- lncRNAs and single-cell transcriptomics
- lncRNAs and spatial transcriptomics
- CRISPR/Cas9 and genome editing in lncRNAs
We invite you to submit papers with unpublished, original research describing recent advances in the areas related to this workshop. All papers will undergo peer review by the conference program committee. All accepted papers will be included in the Workshop Proceedings published by the IEEE Computer Society Press and will be available at the workshop. Authors of selected papers will be invited to extend their papers for submission to special issues in prestigious journals. Fellowships: Funds are available for a limited number of fellowships to support the participation of students, and of researchers from underrepresented minority groups, in the workshop. We aim to support at least one author for each accepted paper, depending on the number of papers and on the availability of funds.
Journal Special Issue: Authors of selected submissions will be invited to extend their papers for submission for review and possible publication in a special issue of the journal Non-Coding RNA: https://www.mdpi.com/journal/ncrna Paper Submission: Please submit a full-length paper (up to 8 pages in IEEE 2-column format) through the online submission system. Electronic submissions in PDF format are required. For paper submission click on the following link: https://wi-lab.com/cyberchair/2021/bibm21/scripts/submit.php?subarea=S08&undisplay_detail=1&wh=/cyberchair/2021/bibm21/scripts/ws_submit.php Important Dates:
Nov 1, 2021 11:59:59 PM WST: Due date for full workshop paper submission
Nov 14, 2021: Notification of paper decision to authors
Nov 21, 2021: Camera-ready versions of accepted papers
Dec 9-12, 2021: Workshops
BIBM-LncRNA'21 Workshop home page: https://community.wvu.edu/~daadjeroh/workshops/LNCRNA2021/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From francisco.pereira at gmail.com Mon Nov 1 20:42:01 2021 From: francisco.pereira at gmail.com (Francisco Pereira) Date: Mon, 1 Nov 2021 20:42:01 -0400 Subject: Connectionists: job: machine learning research scientist at NIMH Message-ID: ## HIRING: machine learning research scientist The Machine Learning Team at the National Institute of Mental Health (NIMH) in Bethesda, MD, has an open position for a machine learning research scientist. The NIMH is the leading federal agency for research on mental disorders and neuroscience, and part of the National Institutes of Health (NIH). ## About the NIMH Machine Learning Team Our mission is to help NIMH scientists use machine learning methods to address research problems in clinical and cognitive psychology and neuroscience. These range from identifying biomarkers for aiding diagnoses to creating and testing models of mental processes in healthy subjects.
Our overarching goal is to use machine learning to improve every aspect of the scientific effort, from helping discover or develop theories to generating actionable results. We work with many different data types, including very large brain imaging datasets from various imaging modalities, behavioral data, and picture and text corpora. We have excellent computational resources, both of our own (tens of high-end GPUs for deep learning, several large servers) and shared within the NIH (a cluster with hundreds of thousands of CPUs, and hundreds of GPUs). As a machine learning research group, we develop new methods and publish in the main machine learning conferences (e.g. NeurIPS and ICLR), as well as in psychology and neuroscience journals. Many of our problems require devising research approaches that combine imaging and non-imaging data, and leveraging structured knowledge resources (databases, scientific literature, etc) to generate explanations and hypotheses. You can find more about our work and recent publications at https://cmn.nimh.nih.gov/mlt ## About the position We are seeking candidates who are capable of combining machine learning, statistical, and domain-specific computational tools to solve practical data analysis challenges (e.g. designing experiments, generating and testing statistical hypotheses, training and interpreting predictive models, and developing novel models and methods). Additionally, candidates should be capable of visualizing and communicating findings to a broad scientific audience, as well as explaining the details of relevant methods to researchers in a variety of domains. Desirable experience that is not required, but will be considered very favorably: - deep learning - reinforcement learning - Bayesian statistical modelling - other types of modelling of human/animal learning and decision-making - neuroimaging data processing/ analysis (any MRI modality, MEG, or EEG) - other types of neural data (e.g. 
neural recording, calcium imaging) in the context of substantial research projects, ideally having led to submitted or published articles. Finally, you should have demonstrable experience programming in languages currently used in data-intensive, scientific computing, such as Python, MATLAB or R. Experience with handling large datasets in high performance computing settings is also very valuable. Although this position requires a Ph.D. in a STEM discipline, we will consider applicants from a variety of backgrounds, as their research experience is the most important factor. Backgrounds of team members include computer science, statistics, mathematics, and biomedical engineering. This is an ideal position for someone who wants to establish a research career in method development and applications driven by scientific and clinical needs. Given our access to a variety of collaborators and large or unique datasets, there is ample opportunity to match research interests with novel research problems. We also maintain collaborations outside of the NIH, driven by our own research interests or community impact. If you would like to be considered for this position, please send francisco.pereira at nih.gov a CV, with your email serving as cover letter. We especially encourage applications from members of underrepresented groups in the machine learning research community. If you already have a research statement, please feel free to send that as well. There is no need for reference letters at this stage. Other inquiries are also welcome. Thank you for your attention and interest! -------------- next part -------------- An HTML attachment was scrubbed... URL: From achler at gmail.com Mon Nov 1 20:16:31 2021 From: achler at gmail.com (Tsvi Achler) Date: Mon, 1 Nov 2021 17:16:31 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. 
In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: I received several messages along the lines of "you must not know what you are talking about, but this is X and you should read book Y", without the commenters reading the original work on Regulatory Feedback. More specifically, the X & Y's of the responses are: Steve: X = Adaptive Resonance, Y = Steve's book; Gary: X = Trainable via Backprop, Y = Randy's book. First I want to point out that the more novel and counterintuitive an idea is, the fewer people synergize with it, and the less support it receives from advisors and from academic pedigree, despite academic departments and grants stating exactly the opposite. So how does this happen? Everyone in self-selected committees promotes themselves and is dismissive of others, and decisions and advice become political. The more counterintuitive the idea, the less support. This is a counterintuitive model: during recognition, input information goes to output neurons, which then feed back and partially modify the same inputs, which are then reprocessed by the same outputs, continuously, until neuron activations settle. This mechanism does not describe learning or learning through time; it occurs during recognition and does not change weights. I really urge reading the original article and demonstration videos before making comments: Achler 2014, "Symbolic Networks for Cognitive Capacities", BICA, https://www.academia.edu/8357758/Symbolic_neural_networks_for_cognitive_capacities In-depth updated video: https://www.youtube.com/watch?v=9gTJorBeLi8&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=3 It is not that I am against healthy criticism and discourse, but I am against dismissiveness without looking into details. Moreover, I would be happy to be invited to give a talk at your institutions and go over the details within your communities.
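The recognition dynamics described in the message above (outputs repeatedly inhibiting the inputs that activated them, with the modulated inputs reprocessed until activations settle) can be pictured with a small toy simulation. This is only a rough sketch under assumed details -- the divisive update rule, the uniform initialization, and the fixed step count are illustrative choices, not the published Regulatory Feedback model:

```python
import numpy as np

def regulatory_feedback(x, W, steps=500, eps=1e-9):
    # Toy sketch of recognition with feedback inhibition: output neurons
    # repeatedly divide down (inhibit) the inputs that activated them,
    # and the modulated inputs are reprocessed until activations settle.
    # W: nonnegative (n_outputs x n_inputs) connection matrix.
    y = np.full(W.shape[0], 1.0 / W.shape[0])  # start outputs uniform
    for _ in range(steps):
        feedback = W.T @ y + eps               # each input's total predicted use
        x_mod = x / feedback                   # inputs inhibited by output feedback
        y = y * (W @ x_mod) / W.sum(axis=1)    # outputs reprocess modulated inputs
    return y

# Two output cells: cell 0 uses both inputs, cell 1 uses only input 0.
W = np.array([[1.0, 1.0],
              [1.0, 0.0]])
print(regulatory_feedback(np.array([1.0, 1.0]), W))  # settles near [1, 0]
print(regulatory_feedback(np.array([1.0, 0.0]), W))  # settles near [0, 1]
```

In this toy version the winner is determined purely by which output best accounts for the active inputs, without any lateral inhibition between the outputs, which is the property the message emphasizes.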
I am disappointed with both Gary and Steve, because I met both of you in the past and discussed the model. In fact, at a paid conference led by Steve, I was relegated to a few minutes' introduction for this model, which is counterintuitive, because it was assumed to be "Adaptive Resonance" (just like the last message) and thus not to need more time. This paucity of opportunity to dive into the details, and the quick dismissiveness, is a huge part of the problem that contributes to the inhibition of novel ideas, as indicated by the two articles about academia I cited. Since I am no longer funded and do not have an academic budget, I am no longer presenting at paid conferences, where this work will be dismissed, relegated to a dark corner, and told to listen to the invited speaker, or in paid journals with low impact factors. Nor will I pay for books by those who promote paid books before reading my work. No matter how successfully one side or another pushes their narrative, this does not change how the brain works. I hope the community can realize these problems. I am happy to come to invited talks, go into a deep dive, and have the conversations that academics like to project outwardly that they have. Sincerely, -Tsvi Achler MD/PhD (I put my degrees here in hopes I won't be pointed to any more beginners' books) On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > Tsvi - While I think Randy and Yuko's book > is actually somewhat better than > the online version (and buying choices on amazon start at $9.99), there > *is* an online version. > Randy & Yuko's models take into account feedback and inhibition. > > -- > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > Computer Science and Engineering 0404 > IF USING FEDEX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego - > 9500 Gilman Drive # 0404 > La Jolla, Ca.
92093-0404 > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > Schedule: http://tinyurl.com/b7gxpwo > > Listen carefully, > Neither the Vedas > Nor the Qur'an > Will teach you this: > Put the bit in its mouth, > The saddle on its back, > Your foot in the stirrup, > And ride your wild runaway mind > All the way to heaven. > > -- Kabir > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bing.Xue at ecs.vuw.ac.nz Mon Nov 1 22:57:58 2021 From: Bing.Xue at ecs.vuw.ac.nz (Bing XUE) Date: Tue, 2 Nov 2021 15:57:58 +1300 Subject: Connectionists: Extended Deadline - CfP EuroGP 2022 - 25th European Conference on Genetic Programming - 20-22 April 2022 Message-ID: Dear Colleague(s), *** Apologies for cross-posting *** We would like to invite you to submit papers to EuroGP 2022: THE 25th EUROPEAN CONFERENCE ON GENETIC PROGRAMMING, which will be held on April 20-22, 2022. Please visit http://www.evostar.org/2022/eurogp/ for more details. *** Important dates *** *Extended Submission Deadline: 24 November 2021* EvoStar Conference: 20-22 April 2022 Submission link: https://easychair.org/conferences/?conf=evo2022 *** EuroGP *** EuroGP is the premier annual conference on Genetic Programming (GP), the oldest and the only meeting worldwide devoted specifically to this branch of evolutionary computation. It is always a high-quality, enjoyable, friendly event, attracting participants from all continents, and offering excellent opportunities for networking, informal contact, and exchange of ideas with fellow researchers. It will feature a mixture of oral presentations, poster sessions, and invited keynote speakers. EuroGP is featured in the conference ranking database CORE (http://portal.core.edu.au/conf-ranks/481/). *** EvoStar *** EvoStar is a leading international event devoted to evolutionary computing, comprising four conferences: EuroGP, EvoApplications, EvoCOP, and EvoMUSART.
The low-cost registration includes access to all of them, as well as daily lunch and the conference reception and banquet. *** Topics *** Topics to be covered include, but are not limited to: Innovative applications of GP, Theoretical developments, GP performance and behaviour, Fitness landscape analysis of GP, Algorithms, representations and operators for GP, Search-based software engineering, Genetic improvement programming, Evolutionary design, Evolutionary robotics, Tree-based GP and Linear GP, Graph-based GP and Grammar-based GP, Evolvable hardware, Self-reproducing programs, Multi-population GP, Multi-objective GP, Parallel GP, Probabilistic GP, Object-orientated GP, Hybrid architectures including GP, Coevolution and modularity in GP, Semantics in GP, Unconventional GP, Automatic software maintenance, Evolutionary inductive programming, Evolution of automata or machines. *** The EvoML joint track *** Please visit: http://www.evostar.org/2022/eml/ This joint track on Evolutionary Machine Learning (EML) will provide a specialized forum for discussion and exchange of information for researchers interested in exploring approaches that combine nature and nurture, with the long-term goal of evolving Artificial Intelligence (AI). In response to the growing interest in the area, and the consequent advances of the state of the art, the special session covers theoretical and practical advances on the combination of Evolutionary Computation (EC) and Machine Learning (ML) techniques. As a joint EuroGP+EvoAPPS track, authors should decide whether their paper will be treated within EvoApplications or EuroGP at submission time. *** Paper submission *** High-quality submissions not exceeding 16 pages (including references) in Springer LNCS format are now solicited. Accepted papers will be published by Springer-Verlag in the Lecture Notes in Computer Science series.
The highest-quality papers may also be invited to submit extensions for publication in a special issue of the journal Genetic Programming and Evolvable Machines (GPEM). *** Organization *** Program Chairs: Eric Medvet, University of Trieste, Italy; Gisele Pappa, Universidade Federal de Minas Gerais, Brazil. Publication Chair: Bing Xue, Victoria University of Wellington. For further information please visit http://www.evostar.org/2022/eurogp/ Eric Medvet, Gisele Pappa, and Bing Xue EuroGP Chairs -- ---------------------------------------------- Dr Bing Xue (she/her), MIEEE, MACM Professor | Ahorangi Programme Director of Science | Pouakorangi School of Engineering and Computer Science | Te Kura Mātai Pūkaha, Pūrorohiko Victoria University of Wellington | Te Herenga Waka New Zealand | Aotearoa Phone: +64 4 463 5542 Homepage: https://homepages.ecs.vuw.ac.nz/~xuebing/index.html ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From roland.nasser at agroscope.admin.ch Tue Nov 2 04:18:08 2021 From: roland.nasser at agroscope.admin.ch (roland.nasser at agroscope.admin.ch) Date: Tue, 2 Nov 2021 08:18:08 +0000 Subject: Connectionists: Computer Vision Engineer - Permanent Position - Switzerland In-Reply-To: <330ccb8808794e1098650013c7482299@agroscope.admin.ch> References: <330ccb8808794e1098650013c7482299@agroscope.admin.ch> Message-ID: <60583b8fafe24b119f69988b7d9c31ef@agroscope.admin.ch> Dear all, Our team (Digital Production) is currently recruiting a computer vision engineer to strengthen our set of tools in smart farming. Applicants are invited to submit their documents here: https://jobs.admin.ch/postes-vacants/Collaboratrice-ou-collaborateur-scientifique-Computer-Vision/86efff70-8707-4bd3-8835-c57a4aaed28c&m1641_lk= PS: attached is the job posting in English. Best regards -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: EN_Stellenausschreibung_WM_Computer Vision_2021_04.pdf Type: application/pdf Size: 150448 bytes Desc: EN_Stellenausschreibung_WM_Computer Vision_2021_04.pdf URL: From achler at gmail.com Tue Nov 2 01:18:58 2021 From: achler at gmail.com (Tsvi Achler) Date: Mon, 1 Nov 2021 22:18:58 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: I forgot one... Daniel: X = something in the book, Y = Daniel's book On Mon, Nov 1, 2021 at 5:16 PM Tsvi Achler wrote: > > I received several messages along the lines of "you must not know what you > are talking about but this is X and you should read book Y", without the > commenters reading the original work on Regulatory Feedback. > More specifically the X & Y's of the responses are: > Steve: X= Adaptive Resonance, Y= Steve's book > Gary: X= Trainable via Backprop, Y= Randy's book > > First I want to point out that the more novel and counterintuitive an idea, > the fewer people synergize with it, and the less support it gets from > advisors and academic pedigree, despite academic departments and grants stating > exactly the opposite. So how does this happen? > With everyone in self-selected committees promoting themselves and dismissive of > others, decisions and advice become political. The more > counterintuitive the idea, the less support. > > This is a counterintuitive model, where during recognition input > information goes to output neurons, then feeds back and partially modifies > the same inputs, which are then reprocessed by the same outputs > continuously until neuron activations settle. > This mechanism does not describe learning or learning through time; it > occurs during recognition and does not change weights. 
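For concreteness, such a settling loop can be sketched numerically. This is a toy sketch only, assuming a simplified divisive-feedback formulation with binary weights; the function name `regulatory_feedback` and the update rule here are illustrative paraphrases, not code from the cited papers:

```python
def regulatory_feedback(W, x, steps=50):
    """Toy settling loop: outputs repeatedly feed back to and divide
    (normalize) their own inputs until activations stabilize.

    W[j][i] = 1 if output j uses input i (binary weights for simplicity).
    The weights never change: this models recognition, not learning.
    """
    n_out, n_in = len(W), len(x)
    y = [1.0] * n_out  # initial output activations
    for _ in range(steps):
        # feedback each input receives from every output that uses it
        fb = [sum(W[j][i] * y[j] for j in range(n_out)) for i in range(n_in)]
        for j in range(n_out):
            used = sum(W[j])  # number of inputs output j draws on
            # each input's signal is divided by the feedback it receives
            y[j] = (y[j] / used) * sum(
                W[j][i] * x[i] / fb[i] for i in range(n_in) if fb[i] > 0)
    return y
```

With W = [[1, 1, 0], [0, 1, 1]] and x = [1, 1, 0] (the input exactly matches the first output's receptive field), the first output's activation settles near 1 and the second's near 0, with no weight update anywhere in the loop.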
> > I really urge reading the original article and demonstration videos before > making comments: Achler 2014 "Symbolic neural networks for cognitive capacities" > BICA, > https://www.academia.edu/8357758/Symbolic_neural_networks_for_cognitive_capacities > > In-depth updated video: > https://www.youtube.com/watch?v=9gTJorBeLi8&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=3 > It is not that I am against healthy criticism and discourse, but I am > against dismissiveness without looking into details. > Moreover I would be happy to be invited to give a talk at your > institutions and go over the details within your communities. > > I am disappointed with both Gary and Steve, because I met both of you in > the past and discussed the model. > > In fact, in a paid conference led by Steve, I was relegated to a few > minutes' introduction for this model that is counterintuitive, because it > was assumed to be "Adaptive Resonance" (just like the last message) and thus > not to need more time. This paucity of opportunity to dive into the > details, and the quick dismissiveness, are a huge part of the problem that > contributes to the inhibition of novel ideas, as indicated by the two > articles about academia I cited. > > Since I am no longer funded and do not have an academic budget, I am no > longer presenting at paid conferences where this work will be dismissed and > relegated to a dark corner and told to listen to the invited speaker or > paid journals with low impact factors. Nor will I pay for books by those > who will promote paid books before reading my work. > > No matter how successfully one side or another pushes their narrative, this > does not change how the brain works. > > I hope the community can realize these problems. I am happy to come to > invited talks, go into a deep dive and have the conversations that > academics like to project outwardly that they have. 
> > Sincerely, > -Tsvi Achler MD/PhD (I put my degrees here in hopes I won't be pointed to > any more beginners' books) > > > On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > >> Tsvi - While I think Randy and Yuko's book >> is actually somewhat better than >> the online version (and buying choices on amazon start at $9.99), there >> *is* an online version. >> Randy & Yuko's models take into account feedback and inhibition. >> >> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: >> >>> Daniel, >>> >>> Does your book include a discussion of Regulatory or Inhibitory Feedback >>> published in several low impact journals between 2008 and 2014 (and in >>> videos subsequently)? >>> These are networks where the primary computation is inhibition back to >>> the inputs that activated them and may be very counterintuitive given >>> today's trends. You can almost think of them as the opposite of Hopfield >>> networks. >>> >>> I would love to check inside the book but I don't have an academic budget >>> that allows me access to it, and that is a huge part of the problem with how >>> information is shared and funding is allocated. I could not get access to >>> any of the text or citations, especially Chapter 4: "Competition, Lateral >>> Inhibition, and Short-Term Memory", to weigh in. >>> >>> I wish the best circulation for your book, but even if the Regulatory >>> Feedback Model is in the book, that does not change the fundamental problem >>> if the book is not readily available. >>> >>> The same goes for Steve Grossberg's book; I cannot easily look inside. >>> With regards to Adaptive Resonance, I don't subscribe to lateral inhibition >>> as a predominant mechanism, but I do believe a function such as vigilance >>> is very important during recognition and Adaptive Resonance is one of >>> a very few models that have it. 
The Regulatory Feedback model I have >>> developed (and Michael Spratling studies a similar model as well) is built >>> primarily using the vigilance type of connections and allows multiple >>> neurons to be evaluated at the same time and continuously during >>> recognition in order to determine which (single or multiple neurons >>> together) match the inputs the best without lateral inhibition. >>> >>> Unfortunately, within conferences and talks predominated by the Adaptive >>> Resonance crowd I have experienced the familiar dismissiveness and did not >>> have an opportunity to give a proper talk. This goes back to the larger >>> issue of academic politics based on small self-selected committees, the >>> same issues that exist with the feedforward crowd, and pretty much all of >>> academia. >>> >>> Today's information age algorithms such as Google's can determine >>> relevance of information and ways to display it, but the hegemony of the >>> journal system and the small committee system of academia developed in the >>> Middle Ages (and their mutual synergies) block the use of more modern >>> methods in research. Thus we are stuck with this problem, which especially >>> affects those who are trying to introduce something new and >>> counterintuitive, and hence the results described in the two National >>> Bureau of Economic Research articles I cited in my previous message. >>> >>> Thomas, I am happy to have more discussions and/or start a different >>> thread. >>> >>> Sincerely, >>> Tsvi Achler MD/PhD >>> >>> >>> >>> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S >>> wrote: >>> >>>> Tsvi, >>>> >>>> While deep learning and feedforward networks have an outsize >>>> popularity, there are plenty of published sources that cover a much wider >>>> variety of networks, many of them more biologically based than deep >>>> learning. 
A treatment of a range of neural network approaches, going from >>>> simpler to more complex cognitive functions, is found in my textbook * >>>> Introduction to Neural and Cognitive Modeling* (3rd edition, >>>> Routledge, 2019). Also Steve Grossberg's book *Conscious Mind, >>>> Resonant Brain* (Oxford, 2021) emphasizes a variety of architectures >>>> with a strong biological basis. >>>> >>>> >>>> Best, >>>> >>>> >>>> Dan Levine >>>> ------------------------------ >>>> *From:* Connectionists >>>> on behalf of Tsvi Achler >>>> *Sent:* Saturday, October 30, 2021 3:13 AM >>>> *To:* Schmidhuber Juergen >>>> *Cc:* connectionists at cs.cmu.edu >>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. >>>> >>>> Since the title of the thread is Scientific Integrity, I want to point >>>> out some issues about trends in academia and then focus especially on >>>> the connectionist community. >>>> >>>> In general, analyses of impact factors etc. show that the most important progress >>>> gets silenced until the mainstream picks it up (Impact Factors in >>>> novel research, www.nber.org/.../working_papers/w22180/w22180.pdf >>>> ) and >>>> often this may take a generation >>>> https://www.nber.org/.../does-science-advance-one-funeral... >>>> >>>> . >>>> >>>> The connectionist field is stuck on feedforward networks and variants >>>> such as those with inhibition of competitors (e.g. lateral inhibition), or other >>>> variants that are sometimes labeled as recurrent networks for learning time, >>>> where the feedforward networks can be rewound in time. >>>> >>>> This stasis is specifically occurring with the popularity of deep >>>> learning. This is often portrayed as neurally plausible connectionism but >>>> requires an implausible amount of rehearsal and is not connectionist if >>>> this rehearsal is not implemented with neurons (see video link for further >>>> clarification). >>>> >>>> Models which have true feedback (e.g. 
back to their own inputs) cannot >>>> learn by backpropagation, but there is plenty of evidence these types of >>>> connections exist in the brain and are used during recognition. Thus they >>>> get ignored: no talks in universities, no featuring in "premier" journals >>>> and no funding. >>>> >>>> But they are important and may negate the need for rehearsal as needed >>>> in feedforward methods. Thus they may be essential for moving connectionism >>>> forward. >>>> >>>> If the community is truly dedicated to brain motivated algorithms, I >>>> recommend giving more time to networks other than feedforward networks. >>>> >>>> Video: >>>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 >>>> >>>> >>>> Sincerely, >>>> Tsvi Achler >>>> >>>> >>>> >>>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen >>>> wrote: >>>> >>>> Hi, fellow artificial neural network enthusiasts! >>>> >>>> The connectionists mailing list is perhaps the oldest mailing list on >>>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping >>>> that some of them - as well as their contemporaries - might be able to >>>> provide additional valuable insights into the history of the field. >>>> >>>> Following the great success of massive open online peer review (MOOR) >>>> for my 2015 survey of deep learning (now the most cited article ever >>>> published in the journal Neural Networks), I've decided to put forward >>>> another piece for MOOR. I want to thank the many experts who have already >>>> provided me with comments on it. Please send additional relevant references >>>> and suggestions for improvements for the following draft directly to me at >>>> juergen at idsia.ch: >>>> >>>> >>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>> >>>> >>>> The above is a point-for-point critique of factual errors in ACM's >>>> justification of the ACM A. M. 
Turing Award for deep learning and a >>>> critique of the Turing Lecture published by ACM in July 2021. This work can >>>> also be seen as a short history of deep learning, at least as far as ACM's >>>> errors and the Turing Lecture are concerned. >>>> >>>> I know that some view this as a controversial topic. However, it is the >>>> very nature of science to resolve controversies through facts. Credit >>>> assignment is as core to scientific history as it is to machine learning. >>>> My aim is to ensure that the true history of our field is preserved for >>>> posterity. >>>> >>>> Thank you all in advance for your help! >>>> >>>> Jürgen Schmidhuber >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >> >> -- >> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >> Computer Science and Engineering 0404 >> IF USING FEDEX INCLUDE THE FOLLOWING LINE: >> CSE Building, Room 4130 >> University of California San Diego - >> 9500 Gilman Drive # 0404 >> La Jolla, Ca. 92093-0404 >> >> Email: gary at ucsd.edu >> Home page: http://www-cse.ucsd.edu/~gary/ >> Schedule: http://tinyurl.com/b7gxpwo >> >> Listen carefully, >> Neither the Vedas >> Nor the Qur'an >> Will teach you this: >> Put the bit in its mouth, >> The saddle on its back, >> Your foot in the stirrup, >> And ride your wild runaway mind >> All the way to heaven. >> >> -- Kabir >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karl.o.mikalsen at uit.no Tue Nov 2 03:57:08 2021 From: karl.o.mikalsen at uit.no (Karl Øyvind Mikalsen) Date: Tue, 2 Nov 2021 07:57:08 +0000 Subject: Connectionists: Data scientist / ML engineer, University Hospital of North-Norway Message-ID: Data scientist / ML engineer - 
ML for health, The Norwegian Centre for Clinical Artificial Intelligence (SPKI), University Hospital of North-Norway SPKI is expanding, and is in search of two highly motivated data scientists / ML engineers who want to contribute to the development and implementation of new artificial intelligence (AI) tools for health. The work will be done in a highly interdisciplinary environment, and you will collaborate with a team consisting of clinicians, scientists from the university and technologists, legal experts, industry partners, as well as personnel responsible for ICT, data security, privacy concerns and more. This environment also includes researchers at The Machine Learning Group and Visual Intelligence. For details, please see: https://www.finn.no/job/fulltime/ad.html?finnkode=237275174 Please contact Karl Øyvind Mikalsen karl.o.mikalsen at uit.no for more info. -------------- next part -------------- An HTML attachment was scrubbed... URL: From EPNSugan at ntu.edu.sg Tue Nov 2 04:33:38 2021 From: EPNSugan at ntu.edu.sg (Ponnuthurai Nagaratnam Suganthan) Date: Tue, 2 Nov 2021 08:33:38 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <2f8f0bdd-9957-861e-c53b-ce80e7957709@ics.uci.edu> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <8E01234A-03B3-492C-9DD7-B7FBD321475D@princeton.edu> <252610507.1139182.1635505993601@mail.yahoo.com> <2f8f0bdd-9957-861e-c53b-ce80e7957709@ics.uci.edu> Message-ID: A few scattered thoughts ... 1. When committees are formed, we really do want to include researchers whom we trust to get the work done properly, in particular for higher level/critical positions. This certainly requires some sort of previous interaction. 
Having said that, we were also not born with our connections and networks. We get to meet diverse researchers at conferences. It is sad that, as Pierre says, some researchers make a conscious choice of "becoming introduced" to researchers with some characteristics, but not others. Unfortunately, it applies in Asia too, not only to "North American" or "White Male". How to address this? 2. The name "deep learning" came about recently. I guess that it is very much possible to see the essence of deep learning in very old publications; at the least, many authors could have suggested in their publications, as a potential future research direction of multi-layer feedforward neural networks, something like: "Our future research will investigate MLBPs with 5-10 hidden layers." Not sure if we could attribute DL to all such statements. Further, due to the lack of computing power and large datasets, perhaps only a few may have really tried. The following article was getting ~100 citations until the late 2000s (now 8000+): Gradient-based learning applied to document recognition, 1998/11, Proceedings of the IEEE Authors: Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner 3. Should we make a distinction between deep learning in feedforward nets and in recurrent nets? All recurrent nets might be viewed as deep learning, as old data remains in the net due to feedback (while the feedforward version requires a deeper structure). 4. As Pierre points out below, research ethics (whom to cite, what to exclude, how to present the literature in a misleading manner, etc.) are issues that I've encountered many times in the context of randomized neural networks. Due to the lack of internet, databases, etc., about 8-10 researchers independently invented randomized neural nets (and many were published as conference articles in the 1990s). 
But, the challenge is getting the authors to cite and fairly review the original works, even after introducing these diverse works to authors in the review comments: On the origins of randomization-based feedforward neural networks, Applied Soft Computing 105, 107239, 2021. Cheers Suganthan -----Original Message----- From: Connectionists On Behalf Of Baldi,Pierre Sent: Monday, 1 November 2021 11:48 pm To: Randall O'Reilly ; Connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Randall, I am glad that we agree on some points. If our quantitatively-bent community wanted to get to the bottom of these issues of biases, cronyism, and collusion the algorithms for doing so are not a secret. 1) For each center of power (e.g. foundation, program/organizing committee, editorial board, university or corporate department) compile the list of its members since its creation. Some of this information can be assembled automatically from the web, from printed material, and other sources. If we want to have full transparency, in some appropriate cases, the community could also ask that those centers release the corresponding information, as well as any other pertinent information. 2) For each one of these lists, compute how any sub-group of interest is represented, and how this representation has evolved over time. You can probably come up with some nice probabilistic models and compute the corresponding p-values. As suggested by Tom Dietterich, not too surprisingly one may find a bias in favor of "North American" or " White Male", possibly with a decreasing trend in more recent years due to Tom's and other's efforts. Tom also mentions a "founder effect" for the special case of NIPS. As far as I can tell, "with the exception of Terry Sejnowski" to use Tom's words, the founder effect for NIPS is largely gone. 
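The representation test in Pierre's step 2 is easy to sketch concretely. This is a minimal sketch assuming the simplest possible null model, a binomial with a known base rate; the function name and the numbers in the comment are illustrative, not taken from any actual committee data:

```python
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that k or more of an
    n-seat committee come from a subgroup with base rate p, if seats
    were filled independently of group membership."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 12 of 15 seats held by a group making up 40% of
# the field gives a tail probability well below 0.01, flagging the
# committee for a closer look.
```

Real data would of course need the base rates to evolve over time, as step 2 notes, and a richer model than a single binomial, but the p-value machinery stays the same.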
Ed Posner died (and so did Bell Labs) and other founders, such as Yaser Abu Mostafa or Jim Bower, do not seem to have been involved for many years (see: https://work.caltech.edu/nips). 3) To further address the issues of cronyism and collusion, you need to look also at groups that are slightly more subtle. For instance, in the case of academic members, one should look not only at those members individually, but also at their "lineage" i.e. their family relatives and, more importantly, all their present and former graduate students and postdoctoral fellows. In the case of corporations or national laboratories, one could begin by looking at colleagues from the same institution. Finally, one could try to corroborate or complement these analyses by studying the special events I mentioned in my previous message (e.g. invited talks, birthday celebrations, special sessions) and citations. The interesting challenge for citations is to detect not only positive correlations, but also negative ones, where relevant work has been left out on purpose. Pierre Baldi On 10/31/2021 1:44 AM, Randall O'Reilly wrote: > I'm sure everyone agrees that scientific integrity is essential at all levels, but I hope we can avoid a kind of simplistic, sanctimonious treatment of these issues -- there are lots of complex dynamics at play in this or any scientific field. Here's a few additional thoughts / reactions: > > * Outside of a paper specifically on the history of a field, does it really make sense to "require" everyone to cite obscure old papers that you can't even get a PDF of on Google Scholar? Who does that help? Certainly not someone who might want to actually read a useful treatment of foundational ideas. I generally cite papers that I actually think other people should read if they want to learn more about a topic -- those tend to be written by people who write clearly and compellingly. 
Those who are obsessed with historical precedents should write papers on such things, but don't get bent out of shape if other people really don't care that much about that stuff and really just care about the ideas and moving *forward*. > > * Should Newton be cited instead of Rumelhart et al., for backprop, as Steve suggested? Seriously, most of the math powering today's models is just calculus and the chain rule. Furthermore, the idea that gradients passed through many multiplicative steps of the chain rule tend to dissipate exponentially is pretty basic at a mathematical level, and I'm sure some obscure (or even famous) mathematician from the 1800's or even earlier has pointed this out in some context or another. For example, Lyapunov's work from the late 1800's is directly relevant in terms of iterative systems and the need to have an exponent of 1 for stability. So at some level all of deep learning and LSTM is just derivative of this earlier work (pun intended!). > > * More generally, each individual scientist is constantly absorbing ideas from others, synthesizing them in their own internal neural networks, and essentially "reinventing" the insights and implications of these ideas in their own mind. We all only have our own individual subjective lens onto the world, and each have to generate our own internal conceptual structures for ourselves. Thus, reinvention is rampant, and we each feel a distinct sense of ownership over the powerful ideas that we have forged in our own minds. Some people are lucky enough to be at the right place and the right time to share truly new ideas in an effective way with a large number of other people, but everyone who grasps those ideas can cherish the fact that so many people are out there in the world working tirelessly to share all of these great ideas! 
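The chain-rule dissipation Randall describes is easy to verify numerically. A toy sketch, composing the logistic sigmoid with itself (`chained_gradient` is an illustrative name, not anyone's published code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def chained_gradient(depth, x=0.0):
    """d/dx of the logistic sigmoid composed `depth` times, computed by
    the chain rule: a product of local derivatives, each at most 0.25."""
    grad, a = 1.0, x
    for _ in range(depth):
        s = sigmoid(a)
        grad *= s * (1.0 - s)  # local derivative: sigma'(a) = s * (1 - s)
        a = s                  # this layer's output feeds the next layer
    return grad
```

At depth 1 the gradient at x = 0 is exactly 0.25; each added layer multiplies in another factor of at most 0.25, so by depth 20 the gradient has collapsed below 1e-10, the exponential dissipation through multiplicative chain-rule steps that the paragraph above points to.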
> > * To support what Pierre Baldi said: People are strongly biased to form in-group affiliations and put others into less respected (or worse) out-groups -- the power of this instinct is behind most of the evil in the world today and throughout history, and science is certainly not immune to its effects. Thus it is important to explicitly promote diversity of all forms in scientific organizations, and work against what clearly are strong "cliques" in the field, who hold longstanding and disproportionate control over important organizations. > > - Randy > >> On Oct 29, 2021, at 4:13 AM, Anand Ramamoorthy wrote: >> >> Hi All, >> Some remarks/thoughts: >> >> 1. Juergen raises important points relevant not just to the ML folks >> but also to the wider scientific community >> >> 2. Setting aside broader aspects of the social quality of the scientific enterprise, let's take a look at a simpler thing: individual duty. Each scientist has a duty to science (as an intellectual discipline) and the scientific community, to uphold fundamental principles informing the conduct of science. Credit should be given wherever it is due - it is a matter of duty, not preference or "strategic value" or boosting someone because they're a great populariser. >> >> 3. Crediting those who disseminate is fine and dandy, but should be for those precise contributions, AND the originators of an idea/method/body of work ought to be recognised - this is perhaps a bit difficult when the work is obscured by history, but not impossible. At any rate, if one has novel information of pertinence w.r.t original work, then the right action is crystal clear. >> >> 4. Academic science has loads of problems and I think there is some urgency w.r.t sorting them out. For three reasons: a) scientific duty b) for posterity and c) we now live in a world where anti-science sentiments are not limited to fringe elements and this does not bode well for humanity. 
>> >> Maybe dealing with proper credit assignment as pointed out by Juergen and others in the thread could be a start. >> >> Live Long and Prosper! >> >> Best, >> >> Anand Ramamoorthy >> >> >> >> On Friday, 29 October 2021, 08:39:14 BST, Jonathan D. Cohen wrote: >> >> >> The incentive structures (and values they reflect) are not necessarily the same, nor should they necessarily be, in commercial and academic environments. >> >> jdc >> >> >> >>> On Oct 28, 2021, at 12:03 PM, Marina Meila wrote: >>> >>> Since credit is a form of currency in academia, let's look at the "hard currency" rewards of invention. Who gets them? The first company to create a new product usually fails. >>> However, the interesting thing is that society (by this I mean the society most of us work in) has found it necessary to counteract this, and we have patent laws to protect the rights of the inventors. >>> >>> The point is not whether patent laws are effective or not, it's the social norm they implement: that to protect invention one should pay attention to rewarding the original inventors, whether we get the "product" directly from them or not. >>> >>> Best wishes, >>> >>> Marina >>> >>> -- Marina Meila >>> Professor of Statistics >>> University of Washington >>> >>> >>> On 10/28/21, 5:59 AM, "Connectionists" wrote: >>> >>> As a friendly amendment to both Randy and Danko's comments, it is also worth noting that science is an *intrinsically social* endeavor, and therefore communication is a fundamental factor. This may help explain why the *last* person to invent or discover something is the one who gets the [social] credit. That is, giving credit to those who disseminate may even have normative value. After all, if a tree falls in the forest? As for those who care more about discovery and invention than dissemination, well, for them credit assignment may not be as important ;^). >>> jdc >>> >>> On Oct 28, 2021, at 4:23 AM, Danko Nikolic wrote: >>> >>> Yes Randall, sadly so. 
I have seen similar examples in neuroscience and philosophy of mind. Often (but not always) you have to be the one who popularizes the thing to get the credit. Sometimes you can get away with just doing the hard conceptual work while others do the (also hard) marketing work for you. The best bet is doing both yourself. Still no guarantee. >>> Danko >>> >>> >>> >>> On Thu, 28 Oct 2021, 10:13 Randall O'Reilly wrote: >>> >>> >>> I vaguely remember someone making an interesting case a while back that it is the *last* person to invent something that gets all the credit. This is almost by definition: once it is sufficiently widely known, nobody can successfully reinvent it; conversely, if it can be successfully reinvented, then the previous attempts failed for one reason or another (which may have nothing to do with the merit of the work in question). >>> >>> For example, I remember being surprised at how little Einstein added to what was already established by Lorentz and others, at the mathematical level, in the theory of special relativity. But he put those equations into a conceptual framework that obviously changed our understanding of basic physical concepts. Sometimes, it is not the basic equations etc. that matter: it is the big picture vision. >>> >>> Cheers, >>> - Randy >>> >>>> On Oct 27, 2021, at 12:52 AM, Schmidhuber Juergen wrote: >>>> >>>> Hi, fellow artificial neural network enthusiasts! >>>> >>>> The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. >>>> >>>> Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. 
I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: >>>> >>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>> >>>> The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. >>>> >>>> I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. >>>> >>>> Thank you all in advance for your help! >>>> >>>> Jürgen Schmidhuber >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> > -- Pierre Baldi, Ph.D. Distinguished Professor, Department of Computer Science Director, Institute for Genomics and Bioinformatics Associate Director, Center for Machine Learning and Intelligent Systems University of California, Irvine Irvine, CA 92697-3435 (949) 824-5809 (949) 824-9813 [FAX] Assistant: Janet Ko jko at uci.edu ________________________________ CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents. Towards a sustainable earth: Print only when necessary. Thank you. 
From m.reske at fz-juelich.de Tue Nov 2 07:00:59 2021 From: m.reske at fz-juelich.de (Martina Reske) Date: Tue, 2 Nov 2021 12:00:59 +0100 Subject: Connectionists: Postdoc position in Computational and Systems Neuroscience at Research Center Jülich, Germany Message-ID: The working group Statistical Neuroscience at INM-6 Computational and Systems Neuroscience at Research Center Juelich, Germany, led by Sonja Grün, develops statistical methods for the analysis of experimental electrophysiological data from massively parallel electrophysiological recordings and leads the development of open source software tools for the analysis of activity data and validation of models on the population level. The department aims to fill a postdoctoral position that is responsible for the analysis of massively parallel spike and LFP data acquired in a collaborative project with CNRS-AMU, Marseille. Your Job: - Analysis of massively parallel spike and LFP data from the Vision for Action project (visuo-motor integration across areas) - Adaptation of machine learning algorithms and/or further development of methods for the analysis of these neuronal multichannel data - Development of methods for quantifying the relationship between spike patterns and LFP/ECoG data - Optimization and adaptation of a neuronal network model in NEST for the comparison of experimental and simulated activity data - Implementation of data analysis tools in Python and integration of the codes into the shared analysis library Elephant - Contribution to the data analysis pipeline, under FAIR principles - Cooperation with experimental collaboration partners - Integration of the work into Ebrains / HBP - Presentation of project results at international conferences - Scientific and administrative project coordination - Support in the supervision of PhD students Your Profile: - 
- MSc in physics, data science, mathematics or computer science; PhD in computational neuroscience, applied mathematics, physics or a related field - Detailed knowledge of neuroscientific concepts, especially in neuronal dynamics and the statistical analysis of neuronal activity and dynamics - Expertise in network modelling at different spatial levels - Experience with large-scale data-science activities and scientific programming, especially Python - Fluent in spoken and written English - Experience in scientific publishing in a related field - Proactive working style, confident appearance and communication skills, high social competence and ability to work cooperatively in a team The position is initially for a fixed term of 2 years. Salary and benefits are in conformity with the provisions of the Collective Agreement for the Civil Service (TVöD, salary grade 13-14). FZ Jülich promotes equal opportunities and diversity in its employment relations. We also welcome applications from disabled persons. Please apply via our recruitment system under reference number 2021-323. Martina Reske -- Dr. Martina Reske Scientific Coordinator Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience & Institute for Advanced Simulation (IAS-6) Theoretical Neuroscience Jülich Research Centre and JARA Jülich, Germany Work +49.2461.611916 Work Cell +49.151.26156918 Fax +49.2461.619460 www.csn.fz-juelich.de ------------------------------------------------------------------------------------------------ Forschungszentrum Juelich GmbH 52425 Juelich Registered office: Juelich Registered in the commercial register of the Amtsgericht Dueren, No. HR B 3498 Chairman of the Supervisory Board: MinDir Volker Rieke Board of Management: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman), Karsten Beneke (Deputy Chairman), Prof. Dr. Astrid Lambrecht, Prof. Dr. 
Frauke Melchior ------------------------------------------------------------------------------------------------ From danko.nikolic at gmail.com Tue Nov 2 08:11:35 2021 From: danko.nikolic at gmail.com (Danko Nikolic) Date: Tue, 2 Nov 2021 13:11:35 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Tsvi wrote: "No matter how successful one side or another pushes their narrative, this does not change how the brain works." So true! I suppose we would do much better as a discipline if we tried harder to build on top of each other's insights. We should walk collectively towards the "truth" rather than pretending that we are already there. Danko Dr. Danko Nikolić www.danko-nikolic.com https://www.linkedin.com/in/danko-nikolic/ --- A progress usually starts with an insight --- On Tue, Nov 2, 2021 at 9:54 AM Tsvi Achler wrote: > > I received several messages along the lines of "you must not know what you > are talking about but this is X and you should read book Y", without the > commenters reading the original work on Regulatory Feedback. > More specifically the X & Y's of the responses are: > Steve: X= Adaptive Resonance, Y= Steve's book > Gary: X= Trainable via Backprop, Y= Randy's book > > First I want to point out that the more novel and counterintuitive an idea is, the fewer > people synergize with it, and the less support it gets from advisors and from > academic pedigree, despite academic departments and grants stating > exactly the opposite. So how does this happen? 
> Everyone in self-selected committees promotes themselves and is dismissive of > others, and decisions and advice become political. The more > counterintuitive the idea, the less support it gets. > > This is a counterintuitive model, where during recognition input > information goes to output neurons, then feeds back and partially modifies > the same inputs, which are then reprocessed by the same outputs > continuously until neuron activations settle. > This mechanism does not describe learning or learning through time; it > occurs during recognition and does not change weights. > > I really urge reading the original article and demonstration videos before > making comments: Achler 2014 "Symbolic Networks for Cognitive Capacities" > BICA, > https://www.academia.edu/8357758/Symbolic_neural_networks_for_cognitive_capacities > > In-depth updated video: > https://www.youtube.com/watch?v=9gTJorBeLi8&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=3 > It is not that I am against healthy criticism and discourse, but I am > against dismissiveness without looking into the details. > Moreover I would be happy to be invited to give a talk at your > institutions and go over the details within your communities. > > I am disappointed with both Gary and Steve, because I met both of you in > the past and discussed the model. > > In fact, in a paid conference led by Steve, I was relegated to a few > minutes' introduction for this model that is counterintuitive, because it > was assumed to be "Adaptive Resonance" (just like the last message) and it > didn't need more time. This paucity of opportunity to dive into the > details and quick dismissiveness is a huge part of the problem which > contributes to the inhibition of novel ideas as indicated by the two > articles about academia I cited. 
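[Editor's note: for readers trying to follow the mechanism described above — outputs feeding back to divisively regulate their own inputs, with the modulated inputs reprocessed until activations settle — here is a minimal illustrative sketch in plain Python. The function name, the exact update rule, and the normalization are assumptions made for illustration only, not Achler's published code.]

```python
# Illustrative sketch of iterative "regulatory feedback" during recognition:
# each output neuron divisively inhibits its own inputs, and the modulated
# inputs are reprocessed by the same outputs until activations settle.
# No weights change: this is a recognition-time dynamic, not learning.

def regulatory_feedback(x, W, steps=50):
    """x: input activations; W[i][j]: binary weight from input j to output i."""
    n_out, n_in = len(W), len(x)
    y = [1.0] * n_out  # start with all output neurons equally active
    for _ in range(steps):
        # Feedback: each input is divided by the total output activity drawing on it.
        denom = [sum(W[i][j] * y[i] for i in range(n_out)) or 1e-12
                 for j in range(n_in)]
        x_mod = [x[j] / denom[j] for j in range(n_in)]
        # Reprocess: each output rescales by the average of its modulated inputs.
        y = [y[i] * sum(W[i][j] * x_mod[j] for j in range(n_in))
             / max(sum(W[i]), 1)
             for i in range(n_out)]
    return y

# Example: outputs 0 and 1 share input 1, but only output 0's full input
# set is active. Output 0 settles toward 1 while output 1 is "explained
# away" toward 0 -- with no lateral inhibition between the two outputs.
y = regulatory_feedback([1.0, 1.0, 0.0], [[1, 1, 0], [0, 1, 1]])
```

Note how the competition here is indirect: each output only regulates its own inputs, yet outputs that share inputs end up competing for the evidence, which is the property the post contrasts with field-like lateral inhibition.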
> > Since I am no longer funded and do not have an academic budget I am no > longer presenting at paid conferences where this work will be dismissed and > relegated to a dark corner and told to listen to the invited speaker, or at > paid journals with low impact factors. Nor will I pay for books by those > who will promote paid books before reading my work. > > No matter how successful one side or another pushes their narrative, this > does not change how the brain works. > > I hope the community can realize these problems. I am happy to come to > invited talks, go into a deep dive and have the conversations that > academics like to project outwardly that they have. > > Sincerely, > -Tsvi Achler MD/PhD (I put my degrees here in hopes I won't be pointed to > any more beginners' books) > > > On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > >> Tsvi - While I think Randy and Yuko's book >> is actually somewhat better than >> the online version (and buying choices on amazon start at $9.99), there >> *is* an online version. >> Randy & Yuko's models take into account feedback and inhibition. >> >> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: >> >>> Daniel, >>> >>> Does your book include a discussion of Regulatory or Inhibitory Feedback >>> published in several low impact journals between 2008 and 2014 (and in >>> videos subsequently)? >>> These are networks where the primary computation is inhibition back to >>> the inputs that activated them and may be very counterintuitive given >>> today's trends. You can almost think of them as the opposite of Hopfield >>> networks. >>> >>> I would love to check inside the book but I don't have an academic budget >>> that allows me access to it and that is a huge part of the problem with how >>> information is shared and funding is allocated. I could not get access to >>> any of the text or citations, especially Chapter 4: "Competition, Lateral >>> Inhibition, and Short-Term Memory", to weigh in. 
>>> >>> I wish the best circulation for your book, but even if the Regulatory >>> Feedback Model is in the book, that does not change the fundamental problem >>> if the book is not readily available. >>> >>> The same goes for Steve Grossberg's book; I cannot easily look inside. >>> With regards to Adaptive Resonance I don't subscribe to lateral inhibition >>> as a predominant mechanism, but I do believe a function such as vigilance >>> is very important during recognition and Adaptive Resonance is one of >>> a very few models that have it. The Regulatory Feedback model I have >>> developed (and Michael Spratling studies a similar model as well) is built >>> primarily using the vigilance type of connections and allows multiple >>> neurons to be evaluated at the same time and continuously during >>> recognition in order to determine which (single or multiple neurons >>> together) match the inputs the best without lateral inhibition. >>> >>> Unfortunately within conferences and talks predominated by the Adaptive >>> Resonance crowd I have experienced the familiar dismissiveness and did not >>> have an opportunity to give a proper talk. This goes back to the larger >>> issue of academic politics based on small self-selected committees, the >>> same issues that exist with the feedforward crowd, and pretty much all of >>> academia. >>> >>> Today's information age algorithms such as Google's can determine >>> relevance of information and ways to display them, but hegemony of the >>> journal systems and the small committee system of academia developed in the >>> middle ages (and their mutual synergies) block the use of more modern >>> methods in research. Thus we are stuck with this problem, which especially >>> affects those that are trying to introduce something new and >>> counterintuitive, and hence the results described in the two National >>> Bureau of Economic Research articles I cited in my previous message. 
>>> >>> Thomas, I am happy to have more discussions and/or start a different >>> thread. >>> >>> Sincerely, >>> Tsvi Achler MD/PhD >>> >>> >>> >>> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S >>> wrote: >>> >>>> Tsvi, >>>> >>>> While deep learning and feedforward networks have an outsize >>>> popularity, there are plenty of published sources that cover a much wider >>>> variety of networks, many of them more biologically based than deep >>>> learning. A treatment of a range of neural network approaches, going from >>>> simpler to more complex cognitive functions, is found in my textbook * >>>> Introduction to Neural and Cognitive Modeling* (3rd edition, >>>> Routledge, 2019). Also Steve Grossberg's book *Conscious Mind, >>>> Resonant Brain* (Oxford, 2021) emphasizes a variety of architectures >>>> with a strong biological basis. >>>> >>>> >>>> Best, >>>> >>>> >>>> Dan Levine >>>> ------------------------------ >>>> *From:* Connectionists >>>> on behalf of Tsvi Achler >>>> *Sent:* Saturday, October 30, 2021 3:13 AM >>>> *To:* Schmidhuber Juergen >>>> *Cc:* connectionists at cs.cmu.edu >>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. >>>> >>>> Since the title of the thread is Scientific Integrity, I want to point >>>> out some issues about trends in academia and then focus especially on >>>> the connectionist community. >>>> >>>> In general, analyzing impact factors etc., the most important progress >>>> gets silenced until the mainstream picks it up Impact Factors in >>>> novel research www.nber.org/.../working_papers/w22180/w22180.pdf >>>> and >>>> often this may take a generation >>>> https://www.nber.org/.../does-science-advance-one-funeral... >>>> >>>> . >>>> >>>> The connectionist field is stuck on feedforward networks and variants >>>> such as with inhibition of competitors (e.g. 
lateral inhibition), or other >>>> variants that are sometimes labeled as recurrent networks for learning time >>>> where the feedforward networks can be rewound in time. >>>> >>>> This stasis is specifically occurring with the popularity of deep >>>> learning. This is often portrayed as neurally plausible connectionism but >>>> requires an implausible amount of rehearsal and is not connectionist if >>>> this rehearsal is not implemented with neurons (see video link for further >>>> clarification). >>>> >>>> Models which have true feedback (e.g. back to their own inputs) cannot >>>> learn by backpropagation but there is plenty of evidence these types of >>>> connections exist in the brain and are used during recognition. Thus they >>>> get ignored: no talks in universities, no featuring in "premier" journals >>>> and no funding. >>>> >>>> But they are important and may negate the need for rehearsal as needed >>>> in feedforward methods. Thus they may be essential for moving connectionism >>>> forward. >>>> >>>> If the community is truly dedicated to brain-motivated algorithms, I >>>> recommend giving more time to networks other than feedforward networks. >>>> >>>> Video: >>>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 >>>> >>>> >>>> Sincerely, >>>> Tsvi Achler >>>> >>>> >>>> >>>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen >>>> wrote: >>>> >>>> Hi, fellow artificial neural network enthusiasts! >>>> >>>> The connectionists mailing list is perhaps the oldest mailing list on >>>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping >>>> that some of them - as well as their contemporaries - might be able to >>>> provide additional valuable insights into the history of the field. 
>>>> Following the great success of massive open online peer review (MOOR) >>>> for my 2015 survey of deep learning (now the most cited article ever >>>> published in the journal Neural Networks), I've decided to put forward >>>> another piece for MOOR. I want to thank the many experts who have already >>>> provided me with comments on it. Please send additional relevant references >>>> and suggestions for improvements for the following draft directly to me at >>>> juergen at idsia.ch: >>>> >>>> >>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>> >>>> >>>> The above is a point-for-point critique of factual errors in ACM's >>>> justification of the ACM A. M. Turing Award for deep learning and a >>>> critique of the Turing Lecture published by ACM in July 2021. This work can >>>> also be seen as a short history of deep learning, at least as far as ACM's >>>> errors and the Turing Lecture are concerned. >>>> >>>> I know that some view this as a controversial topic. However, it is the >>>> very nature of science to resolve controversies through facts. Credit >>>> assignment is as core to scientific history as it is to machine learning. >>>> My aim is to ensure that the true history of our field is preserved for >>>> posterity. >>>> >>>> Thank you all in advance for your help! >>>> >>>> Jürgen Schmidhuber >>>> >> >> -- >> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >> Computer Science and Engineering 0404 >> IF USING FEDEX INCLUDE THE FOLLOWING LINE: >> CSE Building, Room 4130 >> University of California San Diego - >> 9500 Gilman Drive # 0404 >> La Jolla, Ca. 
92093-0404 >> >> Email: gary at ucsd.edu >> Home page: http://www-cse.ucsd.edu/~gary/ >> Schedule: http://tinyurl.com/b7gxpwo >> >> Listen carefully, >> Neither the Vedas >> Nor the Qur'an >> Will teach you this: >> Put the bit in its mouth, >> The saddle on its back, >> Your foot in the stirrup, >> And ride your wild runaway mind >> All the way to heaven. >> >> -- Kabir >> > From anja.meunier at univie.ac.at Tue Nov 2 10:30:51 2021 From: anja.meunier at univie.ac.at (Anja Meunier) Date: Tue, 2 Nov 2021 15:30:51 +0100 Subject: Connectionists: BCI Un-Conference: Submission open Message-ID: Dear colleagues, registration and abstract submission for the 3rd BCI Un-Conference are now open [1]. Abstract submissions are text-only. The length and format of the submissions are up to the authors - use whatever format you consider best for attracting up-votes! Important dates: - Submission deadline: December 3rd, 2021 at 11:59 pm (CET) - Voting period: December 6th, 2021 - December 12th, 2021 at 11:59 pm (CET) - BCI-UC event: January 27th, 2022, from 03:00 pm to 09:00 pm (CET) We will invite the authors of the top-voted abstracts to present at the un-conference. The 3rd BCI-UC will feature keynotes by Cynthia Chestek (University of Michigan) [2] and Thorsten Zander (TU Brandenburg) [3]. BCI-UC is an online un-conference that provides rapid dissemination of novel research results in the BCI community. Participants can submit abstracts to apply for 20-minute presentation slots. Abstracts are not reviewed by a program committee. Rather, all registered participants can vote on which of the submitted abstracts they would like to see presented at the un-conference. Because the BCI-UC does not publish conference proceedings, submitted abstracts can report novel as well as already published work. Registration and attendance are free of charge. 
Presentations of the 1st and 2nd BCI-UC can be re-watched at [4] and [5]. All presentations will be streamed via Crowdcast [6] and can be accessed free of charge. Join us in supporting this novel community-driven dissemination of research results! The BCI-UC committee: Moritz Grosse-Wentrup Anja Meunier Philipp Raggam Jiachen Xu Links: [1] https://bciunconference.univie.ac.at/register-vote/ [2] https://scholar.google.com/citations?user=36sxAZEAAAAJ&hl=de [3] https://scholar.google.com/citations?hl=en&user=0E49HxYAAAAJ [4] https://bciunconference.univie.ac.at/past-events/1st-bci-uc/ [5] https://bciunconference.univie.ac.at/past-events/2nd-bci-uc/ [6] https://www.crowdcast.io/ From alessio.ferone at uniparthenope.it Tue Nov 2 09:56:36 2021 From: alessio.ferone at uniparthenope.it (ALESSIO FERONE) Date: Tue, 2 Nov 2021 13:56:36 +0000 Subject: Connectionists: [CfP] ICIAP2021 - Special Session: Computer Vision for Coastal and Marine Environment Monitoring Message-ID: ******Apologies for multiple posting****** _________________________________________ ICIAP2021 Special Session Computer Vision for Coastal and Marine Environment Monitoring https://www.iciap2021.org/specialsession/ _________________________________________ The coastal and marine environment represents a vital part of the world, resulting in a complex ecosystem tightly linked to many human activities. For this reason, monitoring coastal and marine ecosystems is of critical importance for gaining a better understanding of their complexity with the goal of protecting such a fundamental resource. Coastal and marine environmental monitoring aims to employ leading technologies and methodologies to monitor and evaluate the marine environment both near the coast and underwater. 
This monitoring can be performed either on site, using sensors for collecting data, or remotely through seafloor cabled observatories, AUVs or ROVs, resulting in a huge amount of data that require advanced intelligent methodologies to extract useful information and knowledge on environmental conditions. A large part of this data is represented by images and videos produced by fixed and PTZ cameras either on the coast, on the marine surface or underwater. For this reason, the analysis of such a volume of imagery data imposes a series of unique challenges, which need to be tackled by the computer vision community. The aim of the special session is to host recent research advances in the field of computer vision and image processing techniques applied to the monitoring of coastal and marine environment and to highlight research issues and still open questions. Full CfP at https://neptunia.uniparthenope.it/cfp/cv-cmem/ Important Dates: Paper Submission Deadline: January 17, 2022 Decision Notification: February 19, 2022 Camera Ready: March 6, 2022 Organizers: Angelo Ciaramella Sajid Javed Alessio Ferone From cognitivium at sciencebeam.com Tue Nov 2 10:46:20 2021 From: cognitivium at sciencebeam.com (Mary) Date: Tue, 2 Nov 2021 18:16:20 +0330 Subject: Connectionists: Neurofeedback and QEEG Workshop Message-ID: <202111021446.1A2EkMLB100476@scs-mx-02.andrew.cmu.edu> Dear Researchers, On behalf of the ScienceBeam company, a designer and manufacturer of electrophysiology products and an organizer of various electrophysiology and neuroscience workshops, we would like to invite you to join us for the second Neurofeedback and QEEG workshop in our office in Istanbul, Turkey. This will be a 2-day hands-on workshop. Because we intend our workshops to be completely practical, the workshop will be held in two groups, as semi-private workshops, and it has limited capacity. 
The first group will attend the workshop on November 18-19, and the second group will attend the workshop on November 20-21. This workshop, which will be led by ScienceBeam experts in both Neurofeedback and QEEG areas, provides not only a broad and up-to-date exposure to the current state of QEEG and Neurofeedback, but also provides an opportunity to do Neurofeedback treatment as well as QEEG recording and analysis during hands-on sessions. All the attendees who have little or broad experience in QEEG and Neurofeedback will find this meeting a comprehensive and engaging workshop that will include essential material to ensure a solid understanding of current QEEG and Neurofeedback concepts. We are delighted to invite you to join us for this fantastic meeting. This is an opportunity to get together with a group of experts/scientists/clinicians/researchers, as well as having a fancy stay at one of the most beautiful and lively cities in the whole world. So, even if you are not in Istanbul, do not miss this extraordinary opportunity and book your flight promptly. We are excited to see you! For more information regarding the workshop schedule and registration please visit the link below: https://sciencebeam.com/neurofeedback-and-qeeg-workshop-2/ Please click on the link below to see the workshop schedule: https://sciencebeam.com/wp-content/uploads/2021/11/Program-Schedule.pdf You are more than welcome to invite your friends and colleagues to attend this fantastic workshop with you (lower admission fee for group registration). We are planning for a fantastic meeting and gathering with researchers and clinicians from all over the world, as well as a fancy stay at one of the most beautiful and lively cities in the whole world. Notably, this is a great opportunity for clinicians and researchers who'd want to purchase the Neurofeedback or QEEG equipment, as well as learning all the related concepts. For them, the workshop would be free. 
Please bear in mind that the registration deadline is November 15th. If you have any questions regarding the workshop, do not hesitate to contact us (workshop at sciencebeam.com, or WhatsApp: 00905356498587). We hope to see you soon in Istanbul. Mary Reae Human Neuroscience Dept. Manager @ScienceBeam mary at sciencebeam.com www.sciencebeam.com From N.Cohen at leeds.ac.uk Tue Nov 2 17:05:54 2021 From: N.Cohen at leeds.ac.uk (Netta Cohen) Date: Tue, 2 Nov 2021 21:05:54 +0000 Subject: Connectionists: Faculty positions at the University of Leeds, England Message-ID: Dear connectionists, The School of Computing at the University of Leeds invites applications for a number of Lecturer posts at either Grade 8 (view advert) or Grade 7 (view advert). We are a highly-ranked academic department with a vibrant research culture and a commitment to excellence in our teaching. We are recruiting these posts in support of our current and planned growth in both fundamental and applied computer science. The positions are open-ended research and teaching academic positions, with start dates as early as January 2022, or as soon as possible thereafter. We are seeking candidates whose research demonstrably aligns with one or more of our existing research themes; however, we would particularly welcome applications from candidates with research interests in AI and robotics (including the interface to computational and cognitive neuroscience). We welcome applications from both academic and non-academic organizations, from future colleagues who wish to work part-time or flexibly, and we particularly encourage women and members of ethnic minorities or other under-represented groups to apply. All applications submitted by 23.59 (UK time) on 26 November, 2021 will receive full consideration. 
Best wishes, Netta ------------------------------------------- Netta Cohen Professor of Complex Systems School of Computing University of Leeds Leeds, UK From achler at gmail.com Tue Nov 2 22:08:14 2021 From: achler at gmail.com (Tsvi Achler) Date: Tue, 2 Nov 2021 19:08:14 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book, which describes an air-conditioner (AC) type of feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition. The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-a-vis its inputs. Thus for context, regulatory feedback is not a field-like inhibition; it is very directed, based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with changes in Homeostatic regulation in experiments occurring on a slower time scale). The regulatory feedback model describes the effect and role of those regulated connections in real time during recognition. I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. 
And I want to point out to folks that the system is based on politics, and that is why certain work is not cited as it should be, but even worse, these politics are here in the group today, and they continue to very strongly influence decisions in the connectionist community and hold us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > Tsvi - While I think Randy and Yuko's book > is actually somewhat better than > the online version (and buying choices on amazon start at $9.99), there > *is* an online version. > Randy & Yuko's models take into account feedback and inhibition. > > On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: > >> Daniel, >> >> Does your book include a discussion of Regulatory or Inhibitory Feedback >> published in several low impact journals between 2008 and 2014 (and in >> videos subsequently)? >> These are networks where the primary computation is inhibition back to >> the inputs that activated them and may be very counterintuitive given >> today's trends. You can almost think of them as the opposite of Hopfield >> networks. >> >> I would love to check inside the book but I don't have an academic budget >> that allows me access to it and that is a huge part of the problem with how >> information is shared and funding is allocated. I could not get access to >> any of the text or citations, especially Chapter 4: "Competition, Lateral >> Inhibition, and Short-Term Memory", to weigh in. >> >> I wish the best circulation for your book, but even if the Regulatory >> Feedback Model is in the book, that does not change the fundamental problem >> if the book is not readily available. >> >> The same goes for Steve Grossberg's book; I cannot easily look inside. >> With regards to Adaptive Resonance I don't subscribe to lateral inhibition >> as a predominant mechanism, but I do believe a function such as vigilance >> is very important during recognition and Adaptive Resonance is one of >> a very few models that have it. 
The Regulatory Feedback model I have >> developed (and Michael Spratling studies a similar model as well) is built >> primarily using the vigilance type of connections and allows multiple >> neurons to be evaluated at the same time and continuously during >> recognition in order to determine which (single or multiple neurons >> together) match the inputs the best without lateral inhibition. >> >> Unfortunately within conferences and talks predominated by the Adaptive >> Resonance crowd I have experienced the familiar dismissiveness and did not >> have an opportunity to give a proper talk. This goes back to the larger >> issue of academic politics based on small self-selected committees, the >> same issues that exist with the feedforward crowd, and pretty much all of >> academia. >> >> Today's information age algorithms such as Google's can determine >> relevance of information and ways to display them, but hegemony of the >> journal systems and the small committee system of academia developed in the >> middle ages (and their mutual synergies) block the use of more modern >> methods in research. Thus we are stuck with this problem, which especially >> affects those that are trying to introduce something new and >> counterintuitive, and hence the results described in the two National >> Bureau of Economic Research articles I cited in my previous message. >> >> Thomas, I am happy to have more discussions and/or start a different >> thread. >> >> Sincerely, >> Tsvi Achler MD/PhD >> >> >> >> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: >> >>> Tsvi, >>> >>> While deep learning and feedforward networks have an outsize popularity, >>> there are plenty of published sources that cover a much wider variety of >>> networks, many of them more biologically based than deep learning. 
A >>> treatment of a range of neural network approaches, going from simpler to >>> more complex cognitive functions, is found in my textbook * >>> Introduction to Neural and Cognitive Modeling* (3rd edition, Routledge, >>> 2019). Also Steve Grossberg's book *Conscious Mind, Resonant Brain* >>> (Oxford, 2021) emphasizes a variety of architectures with a strong >>> biological basis. >>> >>> >>> Best, >>> >>> >>> Dan Levine >>> ------------------------------ >>> *From:* Connectionists >>> on behalf of Tsvi Achler >>> *Sent:* Saturday, October 30, 2021 3:13 AM >>> *To:* Schmidhuber Juergen >>> *Cc:* connectionists at cs.cmu.edu >>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>> Lecture, etc. >>> >>> Since the title of the thread is Scientific Integrity, I want to point >>> out some issues about trends in academia and then focus especially on >>> the connectionist community. >>> >>> In general, analyzing impact factors etc., the most important progress gets >>> silenced until the mainstream picks it up Impact Factors in novel >>> research www.nber.org/.../working_papers/w22180/w22180.pdf >>> and >>> often this may take a generation >>> https://www.nber.org/.../does-science-advance-one-funeral... >>> >>> . >>> >>> The connectionist field is stuck on feedforward networks and variants >>> such as with inhibition of competitors (e.g. lateral inhibition), or other >>> variants that are sometimes labeled as recurrent networks for learning time >>> where the feedforward networks can be rewound in time. >>> >>> This stasis is specifically occurring with the popularity of deep >>> learning. This is often portrayed as neurally plausible connectionism but >>> requires an implausible amount of rehearsal and is not connectionist if >>> this rehearsal is not implemented with neurons (see video link for further >>> clarification). >>> >>> Models which have true feedback (e.g. 
back to their own inputs) cannot >>> learn by backpropagation but there is plenty of evidence these types of >>> connections exist in the brain and are used during recognition. Thus they >>> get ignored: no talks in universities, no featuring in "premier" journals >>> and no funding. >>> >>> But they are important and may negate the need for rehearsal as needed >>> in feedforward methods. Thus they may be essential for moving connectionism >>> forward. >>> >>> If the community is truly dedicated to brain-motivated algorithms, I >>> recommend giving more time to networks other than feedforward networks. >>> >>> Video: >>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 >>> >>> >>> Sincerely, >>> Tsvi Achler >>> >>> >>> >>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen >>> wrote: >>> >>> Hi, fellow artificial neural network enthusiasts! >>> >>> The connectionists mailing list is perhaps the oldest mailing list on >>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping >>> that some of them - as well as their contemporaries - might be able to >>> provide additional valuable insights into the history of the field. >>> >>> Following the great success of massive open online peer review (MOOR) >>> for my 2015 survey of deep learning (now the most cited article ever >>> published in the journal Neural Networks), I've decided to put forward >>> another piece for MOOR. I want to thank the many experts who have already >>> provided me with comments on it. Please send additional relevant references >>> and suggestions for improvements for the following draft directly to me at >>> juergen at idsia.ch: >>> >>> >>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>> >>> >>> The above is a point-for-point critique of factual errors in ACM's >>> justification of the ACM A. M. Turing Award for deep learning and a >>> critique of the Turing Lecture published by ACM in July 2021. 
This work can >>> also be seen as a short history of deep learning, at least as far as ACM's >>> errors and the Turing Lecture are concerned. >>> >>> I know that some view this as a controversial topic. However, it is the >>> very nature of science to resolve controversies through facts. Credit >>> assignment is as core to scientific history as it is to machine learning. >>> My aim is to ensure that the true history of our field is preserved for >>> posterity. >>> >>> Thank you all in advance for your help! >>> >>> Jürgen Schmidhuber >>> > > -- > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > Computer Science and Engineering 0404 > IF USING FEDEX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego - > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > Schedule: http://tinyurl.com/b7gxpwo > > Listen carefully, > Neither the Vedas > Nor the Qur'an > Will teach you this: > Put the bit in its mouth, > The saddle on its back, > Your foot in the stirrup, > And ride your wild runaway mind > All the way to heaven. > > -- Kabir > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gexarcha3 at gmail.com Wed Nov 3 06:10:28 2021 From: gexarcha3 at gmail.com (Georgios Exarchakis) Date: Wed, 3 Nov 2021 11:10:28 +0100 Subject: Connectionists: Research Scientist in FL Message-ID: The University Hospital Institute of Image-Guided Surgery (IHU Strasbourg) is searching for a scientist to conduct research in federated learning for medical data, to help develop generalizable and efficient ML algorithms under the scope of self-supervision, domain shift, improved data utility detection, and learning through noisy labelling. IHU has been at the forefront of research in surgical AI with its state-of-the-art facilities and multidisciplinary team of computer scientists and research surgeons.
The candidate will have the opportunity to work closely with researchers in the space of surgical AI, to mentor, and to shape research. The candidate will have the pleasure of living in Strasbourg, one of the most beautiful and vibrant cities in Europe. How to apply * Please email your resume to alexandros.karargyris at ihu-strasbourg.eu with the title "Research Scientist in FL" * Position is open until filled * Initially, 2-year contract with a possibility to extend or convert to a permanent position Qualifications * Strong background in machine learning * Proven experience in Federated Learning through publication(s) and/or code (e.g. GitHub) is a plus but not a requirement * Background in medical data (e.g., imaging, health records, genomics, etc.) is a plus but not a requirement * Python * TensorFlow, PyTorch, or Keras Responsibilities * Perform research in Federated Learning algorithms * Publish peer-reviewed papers * Participate in research meetings with top industry partners and contribute code to the open research community (e.g. MLCommons) * Provide mentorship About the institute The University Hospital Institute of Image-Guided Surgery (IHU Strasbourg) is a unique research facility offering a true multidisciplinary environment to innovate in the image-guided surgery domain. Surgeons and engineers work closely together on practical as well as moonshot research ideas. Promising ideas are funded and supported internally or through competitive research grants and partnership programs, while developed and validated ideas are translated to products through licensing and startup spin-offs. The institute has strong collaborations with academia and industry. Located in the heart of Strasbourg's historic hospital campus, the IHU Strasbourg is an international medical-surgical centre created in 2011, specialising in minimally invasive approaches (laparoscopy, flexible endoscopy, ultrasound, percutaneous surgery).
The IHU brings together care, research, training, and technology transfer activities in an exceptional setting for the benefit of patients. Surgeons and engineers work in close collaboration on applied and innovative research topics. Upstream topics (TRL 0 to 2) are funded and supported internally or through competitive research grants and partnership programmes. Mature topics (TRL 3 to 5) are transformed into products through licensing and spin-off companies. The Institute has strong collaborations with academia and industry. About the lab The research group CAMMA (Computational Analysis and Modelling of Medical Activities) numbers approximately 30 researchers with interdisciplinary backgrounds, led by Prof. Nicolas Padoy. CAMMA aims at developing new tools and methods based on computer vision, medical image analysis, and machine learning to perceive, model, analyze, and support clinician and staff activities in the operating room (OR) using the vast amount of digital data generated during surgeries. CAMMA is a joint group of ICube at the University of Strasbourg and the Institut Hospitalo-Universitaire of Strasbourg (IHU Strasbourg). Our offices are located on the campus of Strasbourg's University Hospital in the ultramodern facilities of IHU Strasbourg, within walking distance of the beautiful historic city center of Strasbourg. Due to its unique location and collaborations, the group has privileged access to multiple resources for high-performance computing, preclinical and clinical platforms for fast prototyping, and offers new and modern office space for its members. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulrich.reimer at ost.ch Wed Nov 3 06:32:47 2021 From: ulrich.reimer at ost.ch (Ulrich Reimer) Date: Wed, 3 Nov 2021 10:32:47 +0000 Subject: Connectionists: Trust in Medical AI Systems Message-ID: Dear colleagues, please find below the link to a questionnaire on "Trust in Medical AI Systems".
Filling in the questionnaire takes about 10 minutes. The insights gained from the questionnaire will form a major part of a research project. Participants will get the aggregated results. https://ww3.unipark.de/uc/digitaltrust_in_medical_AI/ Please feel free to further distribute the link. Best regards Ulrich Reimer --- Prof. Dr. habil. Ulrich Reimer Eastern Switzerland University of Applied Sciences Institute for Information and Process Management Rosenbergstrasse 59 CH-9001 St. Gallen Switzerland Email: ulrich.reimer at ost.ch Tel.: +41 58 257 1746 Web: www.ost.ch/ipm www.ulrichreimer.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhammer at techfak.uni-bielefeld.de Wed Nov 3 06:31:23 2021 From: bhammer at techfak.uni-bielefeld.de (Barbara Hammer) Date: Wed, 3 Nov 2021 11:31:23 +0100 Subject: Connectionists: Efficient and explainable AutoML in JAII Lecture Series Message-ID: <0f3de6bd-4e75-92b0-52d6-a55553484d01@techfak.uni-bielefeld.de> Dear connectionists, I would like to draw your attention to a talk on 'Efficient and explainable AutoML' on November 11, 4pm CET in the JAII Lecture Series, with the possibility of in-person or virtual attendance. The registration link can be found here: https://jaii.eu/ Best wishes Barbara Hammer -- Prof. Dr. Barbara Hammer Machine Learning Group, CITEC Bielefeld University D-33594 Bielefeld Phone: +49 521 / 106 12115 From gexarcha3 at gmail.com Wed Nov 3 06:08:47 2021 From: gexarcha3 at gmail.com (Georgios Exarchakis) Date: Wed, 3 Nov 2021 11:08:47 +0100 Subject: Connectionists: Systems Engineer/Architect for Federated Learning Message-ID: The University Hospital Institute of Image-Guided Surgery (IHU Strasbourg) is searching for a systems engineer/architect to lead the development of the Federated Learning (FL) infrastructure for its research purposes across its partners.
The ideal candidate will have the unique opportunity to work in state-of-the-art facilities within a multidisciplinary team of engineers and clinicians who shape next-generation European medical research. The candidate will have the pleasure of living in Strasbourg, one of the most beautiful and vibrant cities in Europe. Qualifications * Background in system design and architecture, distributed systems * Familiarity with Federated Learning (FL) or desire to learn FL * Preferable: knowledge of one of the FL frameworks, e.g. Nvidia Clara, OpenFL, TFF, PySyft, etc. * Python, Kubernetes, Docker, Cloud Computing (MLOps) Responsibilities * Lead development of the FL infrastructure for the consortium * Identify and adopt an open-source FL framework based on consortium needs * Support system integrity (stability, security) * Integrate FL system with data warehouse * Mentor and train junior team members How to apply * Please email your resume to alexandros.karargyris at ihu-strasbourg.eu with the title "System Engineer/Architect" * Position is open until filled * Initially, 2-year contract with a possibility to extend or convert to a permanent position About the project CLINNOVA is a European initiative of the "Grande Région" (http://www.granderegion.net/), which groups the French region Grand Est, the Belgian Federation Wallonia-Brussels and Ostbelgien, the German Saarland and Rhineland-Palatinate, as well as the Grand Duchy of Luxembourg. The CLINNOVA project aims to unlock the potential of Artificial Intelligence (AI) and data science in healthcare, with the ambition of establishing a standard, sovereign, open, interoperable, European model. The overall objective of CLINNOVA is to enable a data-driven healthcare environment for AI solutions, based both on infrastructure investment and on coordination between clinical stakeholders.
The initiative aims to create a federated infrastructure of prospective standardised multimodal medical Big Data (e.g., biobanking, imaging) between participating institutes, with a focus on autoimmune, inflammatory and cancer diseases. Research and development of AI algorithms on this amount of federated data is a unique and exciting opportunity from both a computer science and a clinical perspective. About the institute The University Hospital Institute of Image-Guided Surgery (IHU Strasbourg) is a unique research facility offering a true multidisciplinary environment to innovate in the image-guided surgery domain. Surgeons and engineers work closely together on practical as well as moonshot research ideas. Promising ideas are funded and supported internally or through competitive research grants and partnership programs, while developed and validated ideas are translated to products through licensing and startup spin-offs. The institute has strong collaborations with academia and industry. Located in the heart of Strasbourg's historic hospital campus, the IHU Strasbourg is an international medical-surgical centre created in 2011, specialising in minimally invasive approaches (laparoscopy, flexible endoscopy, ultrasound, percutaneous surgery). The IHU brings together care, research, training, and technology transfer activities in an exceptional setting for the benefit of patients. Surgeons and engineers work in close collaboration on applied and innovative research topics. Upstream topics (TRL 0 to 2) are funded and supported internally or through competitive research grants and partnership programmes. Mature topics (TRL 3 to 5) are transformed into products through licensing and spin-off companies. The Institute has strong collaborations with academia and industry. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From evomusart at gmail.com Wed Nov 3 15:36:45 2021 From: evomusart at gmail.com (EvoMUSART) Date: Wed, 3 Nov 2021 20:36:45 +0100 Subject: Connectionists: Extended submission deadline - EvoMUSART 2022 - Call for Papers Message-ID: ------------------------------------------------ Call for papers for the 11th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART) - Please distribute - Apologies for cross-posting ------------------------------------------------ The 11th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART) will take place on 20-22 April 2022, as part of the evo* event. EvoMUSART webpage: www.evostar.org/2022/evomusart/ *Extended submission deadline: 24 November 2021* *Conference: 20-22 April 2022* EvoMUSART is a multidisciplinary conference that brings together researchers who are working on the application of Artificial Neural Networks, Evolutionary Computation, Swarm Intelligence, Cellular Automata, Alife, and other Artificial Intelligence techniques in creative and artistic fields such as Visual Art, Music, Architecture, Video, Digital Games, Poetry, or Design. This conference gives researchers in the field the opportunity to promote, present and discuss ongoing work in the area. Submissions must be at most 16 pages long, in Springer LNCS format. Each submission must be anonymised for a double-blind review process. Accepted papers will be presented orally or as posters at the event and included in the EvoMUSART proceedings published by Springer Nature in a dedicated volume of the Lecture Notes in Computer Science series. In addition, an agreement has been reached with Entropy journal (IF 2.524; JCR Q2; ISSN 1099-4300) whereby it will publish a special issue of EvoMUSART every year. Entropy journal has already published the special issues entitled "Artificial Intelligence and Complexity in Art, Music, Games and Design"
for EvoMUSART 2020 (Volume 1) and 2021 (Volume 2). Authors of all papers accepted at EvoMUSART 2022 will be encouraged to submit to a new special issue of Entropy. Indicative topics include but are not limited to: * Systems that create drawings, images, animations, sculptures, poetry, text, designs, webpages, buildings, etc.; * Systems that create musical pieces, sounds, instruments, voices, sound effects, sound analysis, etc.; * Systems that create artefacts such as game content, architecture, furniture, based on aesthetic and/or functional criteria; * Systems that resort to artificial intelligence to perform the analysis of image, music, sound, sculpture, or some other types of artistic objects; * Systems in which artificial intelligence is used to promote the creativity of a human user; * Theories or models of computational aesthetics; * Computational models of emotional response, surprise, novelty; * Representation techniques for images, videos, music, etc.; * Surveys of the current state-of-the-art in the area; * New ways of integrating the user in the process (e.g. improvisation, co-creation, participation). Submission link: https://easychair.org/conferences/?conf=evo2022 More information on the submission process and the topics of EvoMUSART: www.evostar.org/2022/evomusart/ Flyer of EvoMUSART 2022: http://www.evostar.org/2022/flyers/evomusart Papers published in EvoMUSART: https://evomusart-index.dei.uc.pt We look forward to seeing you at EvoMUSART 2022! The EvoMUSART 2022 organisers Tiago Martins Nereida Rodríguez-Fernandez Sérgio Rebelo (publication chair) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From boubchir at ai.univ-paris8.fr Thu Nov 4 04:36:52 2021 From: boubchir at ai.univ-paris8.fr (Larbi Boubchir) Date: Thu, 4 Nov 2021 09:36:52 +0100 Subject: Connectionists: [CfP] [Deadline Approaching] International Workshop on Artificial Intelligence & Edge Computing (AIEC 2021) In-Reply-To: <30292b50-4021-6c33-59d0-1736598fe630@ai.univ-paris8.fr> References: <30292b50-4021-6c33-59d0-1736598fe630@ai.univ-paris8.fr> Message-ID: <8873ffd4-f761-a347-f23f-d09ff1090d07@ai.univ-paris8.fr> [Apologies if you got multiple copies of this invitation] ** *International Workshop on Artificial Intelligence & Edge Computing (AIEC 2021)* https://sites.google.com/view/waiec2021/ in conjunction with *The Sixth International Conference on Fog and Mobile Edge Computing (FMEC 2021)* Gandia, Spain. December 6-9, 2021 (virtual event) *AIEC 2021 CFP* Artificial Intelligence (AI) has become a popular and broad basis for the latest generation of software-oriented solutions. AI has attracted attention from academia, industry, and end users alike, receiving positive media coverage and substantial R&D funding, and has led to many widely beneficial applications, ranging from machine translation to medical image computing. Alongside these AI efforts, Edge Computing (EC) has become an essential way to overcome bottlenecks in emerging technology development, thanks to its benefits of minimizing data transmission, reducing service latency, and easing cloud computing pressure. The scope of EC is also very diverse, covering large application areas such as the smart grid and smart city, logistics and transportation, manufacturing, and healthcare.
Furthermore, EC provides significant gains: low latency enables close cooperation in sophisticated mobile applications, and traffic volume on the core network is reduced, since streaming data all the way to a distant data center is no longer necessary. As new communication technologies such as 5G lower latency and increase bandwidth, the border between edge infrastructure and mobile devices will blur further, and EC will become even more attractive. The Workshop on Artificial Intelligence & Edge Computing (AIEC) aims to bring together researchers and practitioners from both academia and industry who are working on Artificial Intelligence and Edge Computing, as well as their integration, to exchange research ideas and identify new research challenges in this emerging field. *Topics* The Workshop on Artificial Intelligence & Edge Computing (AIEC) calls for contributions that address fundamental research and solution issues in Artificial Intelligence and Edge Computing, including but not limited to the following: * Big Data mining at the Edge * Machine Learning at the Edge * Architectures of Edge AI for IoT * Security on the Edge * Resource-friendly Edge AI Model Design * Resource Management for Edge AI * Applications/services for Edge AI * Communication and Networking Protocols for Edge AI * Software Platforms for the Edge *Important Dates* * Submission Date: *15 November 2021* * Notification to Authors: *22 November 2021* * Camera-Ready Submission: *25 November 2021* *Submissions Guidelines and Proceedings* Papers selected for presentation will appear in the FMEC Proceedings, which will be published by the IEEE Computer Society and submitted to IEEE Xplore for inclusion. Papers must be 6 pages in IEEE format, 10pt font, using the IEEE 8.5" x 11" two-column format, single-spaced, A4 format.
All papers should be in PDF format and submitted electronically at the paper submission link. A full paper must not exceed the stated length (including all figures, tables and references). Submitted papers must present original unpublished research that is not currently under review for any other conference or journal. Papers not following these guidelines may be rejected without review. Submissions received after the due date, exceeding the length limit, or not appropriately structured may also not be considered. Authors may contact the Program Chair for further information or clarification. *Submission System* https://easychair.org/conferences/?conf=waiec2021 *Journal Special Issues* Selected papers from the workshop will be invited to submit an extended version to the following journal. Papers will be selected based on their reviewers' scores and appropriateness to the Journal's theme. All extended versions will undergo reviews and must represent original unpublished research work. Further details will be made available at a later stage. Please send any inquiry on AIEC 2021 to the Emerging Tech. Network Team at: emergingtechnetwork at gmail.com -- _____________________________________________________ Prof. Larbi Boubchir, /SMIEEE/ LIASD - University of Paris 8 2 rue de la Liberté, 93526 Saint-Denis, France Tel. (+33) 1 49 40 67 95 Email. larbi.boubchir at univ-paris8.fr http://www.ai.univ-paris8.fr/~boubchir/ _____________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gros at itp.uni-frankfurt.de Thu Nov 4 07:45:35 2021 From: gros at itp.uni-frankfurt.de (Claudius Gros) Date: Thu, 04 Nov 2021 12:45:35 +0100 Subject: Connectionists: Complex Systems PhD position at the Goethe University, Frankfurt Message-ID: PhD position at the Institute for Theoretical Physics, Goethe University Frankfurt Applications are invited for a fully funded PhD position at the Institute for Theoretical Physics, Goethe University Frankfurt, Germany Fields: neurosciences, complex systems theory Application deadline: Dec 15, 2021 Supervisor: Prof. Dr. Claudius Gros We are developing new models and generative principles for complex systems, using a range of toolsets from dynamical systems theory, game theory, and the neurosciences. Examples are self-generated gaits for animals and robotic systems, criticality in autonomous networks, as well as Covid-19 modeling. Several subjects are available for the announced PhD thesis, depending on the background of the successful candidate. The work will include analytical investigations and numerical simulations. The candidates should have a Diploma/Master in physics with an excellent academic track record and good computational skills. Experience or strong interest in the fields of complex systems, dynamical systems theory, game theory and/or artificial or biological cognitive systems is expected. The degree of scientific research experience is expected to be on the level of a German Diploma/Master. The appointment will start in spring 2022, for three years. Interested applicants should submit a curriculum vitae and a list of publications to the address below. Prof. Claudius Gros Institute for Theoretical Physics Goethe University Frankfurt Max-von-Laue-Str.
1 60438 Frankfurt am Main, Germany cgr at itp.uni-frankfurt.de http://www.itp.uni-frankfurt.de/~gros General information on the University of Frankfurt and its employment policy: With its around 46,000 students and 4,600 employees, the Goethe University in Frankfurt is the largest university in the state of Hessen and an internationally renowned, important regional employer. Numerous quality and performance oriented internal reforms have been initiated in the recent years. The reorganized campuses for natural sciences and humanities offer an ideal environment for research and education. Since 2008, the Goethe University is a foundation under public law and enjoys full administrative autonomy. The fixed-term employment of the academic staff is subject to the provisions of the Temporary Science Employment Law and the Hessian Higher Education Act. The University advocates gender equality and therefore strongly encourages women to apply. People with disabilities are given preference if equally qualified. -- ### ### Prof. Dr. Claudius Gros ### http://itp.uni-frankfurt.de/~gros ### ### Complex and Adaptive Dynamical Systems, A Primer ### A graduate-level textbook, Springer (2008/10/13/15) ### ### Life for barren exoplanets: The Genesis project ### https://link.springer.com/article/10.1007/s10509-016-2911-0 ### From mr.guang.yang at gmail.com Thu Nov 4 19:09:56 2021 From: mr.guang.yang at gmail.com (Guang Yang) Date: Thu, 4 Nov 2021 23:09:56 +0000 Subject: Connectionists: Job Opportunity: Research Associate @ NHLI, Imperial College London Message-ID: The Cardiovascular Magnetic Resonance (CMR) Unit of the Royal Brompton Hospital and National Heart and Lung Institute at Imperial College was established in 1984. It dedicates its research to the cardiovascular system and runs one of the largest cardiovascular clinical services in the world. 
Research ranges from basic physics and the development of artificial intelligence methods to clinical applications and involves a dedicated interdisciplinary team. The research programme in big data and artificial intelligence in health imaging, led by Dr Guang Yang, focuses on generating promising tools for assisting clinicians in application fields such as cardiovascular image analysis, cancer management, and innovation ecosystems for pandemic response. We are now looking for a postdoctoral researcher (an MRI expert with computing/AI knowledge) to join our group (https://www.yanglab.fyi/) at the National Heart and Lung Institute, Imperial College London as a Research Associate. The position is initially funded for 3 years, with potential extensions through a recently awarded UK Research and Innovation Future Leaders Fellowship. The successful candidate will join a multi-disciplinary team of physicists, computer scientists, AI specialists, cardiologists and technologists and work on MRI sequence development and image reconstruction under the supervision of Dr Guang Yang at his Smart Imaging Lab. While cardiac MRI has already been successfully applied in clinical research studies, it remains a time-consuming method that is only available in a handful of specialist research centres and is only applicable in healthy volunteers and highly cooperative patients. The successful candidate will develop, implement and test advanced AI methods improving the efficiency, clinical applicability and reliability of cardiac MR, helping to transform this novel research method into a clinically usable tool. The successful candidate will also hold an honorary contract with the Royal Brompton and Harefield NHS Foundation Trust in addition to the Imperial College employment.
The post also includes funded travel and short-term visiting scholar opportunities to work at the EPSRC Cambridge Mathematics of Information in Healthcare Hub (CMIH), the University of Cambridge, The National Institutes of Health, and Siemens Healthineers. More details can be found at https://www.imperial.ac.uk/jobs/description/MED02803/research-associate Candidate qualifications include: * Hold a PhD (or equivalent) in MRI, Computer Science, Medical Physics, Bioinformatics, Bioengineering or a closely related discipline, or equivalent research, industrial or commercial experience * Knowledge of MRI, MR physics, MR sequence design and implementation, artificial intelligence, medical image analysis, deep learning, image reconstruction * Knowledge of research methods and statistical procedures * Practical experience within a research environment and/or publication in relevant and refereed journals Applications are open until 15 December 2021. Application Process: Please apply through the link above or send your CV to g.yang at imperial.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Imperial College London NHLI.pdf Type: application/pdf Size: 148991 bytes Desc: not available URL: From mturner at flatironinstitute.org Thu Nov 4 14:47:19 2021 From: mturner at flatironinstitute.org (Matthew Turner) Date: Thu, 4 Nov 2021 14:47:19 -0400 Subject: Connectionists: Flatiron Research Fellows (Post-Docs), Center for Computational Neuroscience, Flatiron Institute Message-ID: Flatiron Research Fellows (Post-Docs), Center for Computational Neuroscience, Flatiron Institute New York, New York ORGANIZATIONAL OVERVIEW The Simons Foundation is a private foundation established in 1994 in New York City.
With assets of approximately $2 billion and an annual grants and programs budget of $230 million, the foundation is dedicated to advancing the frontiers of research in mathematics and the basic sciences. The Flatiron Institute is a major, new internal scientific unit of the Simons Foundation, focused on the computational aspects of a wide range of basic science. The mission of the Flatiron Institute is to advance scientific research through computational methods, including theory, modeling, simulation and data analysis. The Flatiron Institute currently consists of five centers: the Center for Computational Astrophysics (CCA); the Center for Computational Biology (CCB); the Center for Computational Quantum Physics (CCQ); the Center for Computational Mathematics (CCM); and the Center for Computational Neuroscience (CCN). The Institute staff is composed of research scientists at various stages of their careers from recent Ph.D.s through senior scientists, software engineers and support staff. We host a vigorous visitor program and interact regularly with scientists from neighboring institutions. POSITION SUMMARY Applications are invited for Flatiron Research Fellowships (FRF) at the Center for Computational Neuroscience. The CCN FRF program offers the opportunity for postdoctoral research in areas that have strong synergy with one or more of the existing research groups at CCN or other centers at the Flatiron Institute. CCN FRFs will be assigned a primary mentor from a CCN research group or project, though affiliations and collaborations with other research groups within CCN and throughout the Flatiron Institute are encouraged.
In addition to carrying out an independent research program, Flatiron Research Fellows are expected to: disseminate their results through scientific presentations, publications, and software release, collaborate with other members of the CCN or Flatiron Institute, and participate in the scientific life of the CCN and Flatiron Institute by attending seminars, colloquia, and group meetings. Flatiron Research Fellows may have the opportunity to organize workshops and to mentor graduate and undergraduate students. The mission of CCN is to develop theories, models, and computational methods that deepen our knowledge of brain function - both in health and in disease. CCN takes a "systems" neuroscience approach, building models that are motivated by fundamental principles, that are constrained by properties of neural circuits and responses, and that provide insights into perception, cognition and behavior. This cross-disciplinary approach not only leads to the design of new model-driven scientific experiments, but also encapsulates current functional descriptions of the brain that can spur the development of new engineered computational systems, especially in the realm of machine learning. CCN currently has research groups in computational vision, neural circuits and algorithms, neuroAI and geometry, and statistical analysis of neural data; interested candidates should review the CCN public website for specific information on CCN's research areas. Review of applications for positions starting between July and October 2022 will begin in mid-January 2022. Application Materials: - Cover letter (optional); - Curriculum Vitae with bibliography; - Research statement of no more than three pages describing past work and proposed research program. Applicants are encouraged to discuss the broad impact of the past and proposed research on computational neuroscience.
Applicants should also indicate the primary CCN group(s) with which they'd seek to conduct research, and any desired affiliation with other Flatiron Centers. - Three (3) letters of recommendation submitted confidentially by direct email to ccnjobs at simonsfoundation.org Selection Criteria: Applicants must have a PhD in a related field or expect to receive their PhD before the start of the appointment. Applications will be evaluated based on 1) past research accomplishments, 2) proposed research program, and 3) synergy of the applicant's expertise and research proposal topic with existing CCN staff and research programs. Education: - PhD in computational neuroscience or a relevant technical field such as electrical engineering, machine learning, statistics, physics, or applied math. Related Skills: - Flexible multi-disciplinary mindset; - Strong interest and experience in the scientific study of the brain; - Demonstrated abilities in analysis, software and algorithm development, modeling and/or scientific simulation; - Ability to do original and outstanding research in neuroscience; - Ability to work well independently as well as in a collaborative team environment. FRF positions are two-year appointments and are generally renewed for a third year, contingent on performance. FRFs receive a research budget and have access to the Flatiron Institute's powerful scientific computing resources. FRFs may be eligible for subsidized housing within walking distance of the CCN. THE SIMONS FOUNDATION'S DIVERSITY COMMITMENT Many of the greatest ideas and discoveries come from a diverse mix of minds, backgrounds and experiences, and we are committed to cultivating an inclusive work environment. The Simons Foundation actively seeks a diverse applicant pool and encourages candidates of all backgrounds to apply.
We provide equal opportunities to all employees and applicants for employment without regard to race, religion, color, age, sex, national origin, sexual orientation, gender identity, genetic disposition, neurodiversity, disability, veteran status or any other protected category under federal, state and local law. Application portal: https://simonsfoundation.wd1.myworkdayjobs.com/en-US/simonsfoundationcareers/job/162-Fifth-Avenue/Flatiron-Research-Fellow--Center-for-Computational-Neuroscience_R0000686 Matthew B. Turner Manager for Center Administration | Center for Computational Neuroscience Pronouns: he/him/his *FLATIRON INSTITUTE* 162 Fifth Avenue Suite 308 New York, NY 10010 917.363.1095 *mobile* *simonsfoundation.org/flatiron/ * -------------- next part -------------- An HTML attachment was scrubbed... URL: From samuel.kaski at manchester.ac.uk Thu Nov 4 16:04:42 2021 From: samuel.kaski at manchester.ac.uk (Samuel Kaski) Date: Thu, 4 Nov 2021 20:04:42 +0000 Subject: Connectionists: Research positions available in machine learning at all levels: Research Fellow, Postdoc, PhD student. Turing AI Fellowship, Univ Manchester, UK Message-ID: Still some positions available in my new research group funded by the Turing AI World-Leading Researcher Fellowship: Human-AI Research Teams: Steering AI in Experimental Design and Decision-Making. Positions are available at all stages; we seek to fill most positions now but leave some for future years as well: - Research Fellow - Postdoc - PhD Student The work involves probabilistic modelling in exciting new settings, and developing new methods for probabilistic machine learning and inference. 
Applicants with outstandingly strong expertise in one of the following topics are welcome, as are those with strong expertise in one and a keen interest in working with expert colleagues on the others: automatic experimental design, Bayesian inference, human-in-the-loop learning, advanced user modelling, machine teaching, privacy-preserving learning, reinforcement learning, inverse reinforcement learning, simulator-based inference, likelihood-free inference. There will be particularly good opportunities to join new work on collaborative modelling and decision-making with AI, and applications in drug design, synthetic biology, personalized medicine, and digital twins. The positions are at the University of Manchester, which has recently strengthened its position as a centre for research into AI fundamentals and impactful applications, featuring: - Brand-new ELLIS Unit Manchester (press release out any minute now...) - Partnership with the Alan Turing Institute - New Centre for Fundamentals of AI, with a number of excellent new faculty members joining - Institute for Data Science and AI, with >900 researchers - Excellent university with outstanding collaborators in other strong fields within walking distance on the same campus - Dual positions can be negotiated with research groups in cancer research, biotechnology, digital twins, and medicine and health, in academia, hospitals and companies. Get in touch. 
- Most livable city in the UK Links to detailed calls: Research Fellow: https://www.jobs.manchester.ac.uk/displayjob.aspx?jobid=20643 Postdoc: https://www.jobs.manchester.ac.uk/displayjob.aspx?jobid=20642 Doctoral student: https://www.cs.manchester.ac.uk/study/postgraduate-research/research-projects/description/?projectid=33392 https://www.cs.manchester.ac.uk/study/postgraduate-research/research-projects/description/?projectid=33371 https://www.cs.manchester.ac.uk/study/postgraduate-research/research-projects/description/?projectid=33360 More info on the Turing AI World-Leading Researcher Fellowships: https://www.ukri.org/news/global-leaders-named-as-turing-ai-world-leading-researcher-fellows/ https://www.manchester.ac.uk/discover/news/new-human-ai-research-teams-could-be-the-future-of-research-meeting-future-societal-challenges/ Get in touch to discuss further! Samuel Kaski From achler at gmail.com Thu Nov 4 12:45:58 2021 From: achler at gmail.com (Tsvi Achler) Date: Thu, 4 Nov 2021 09:45:58 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Lastly, feedforward methods are predominant in large part because they have financial backing from large companies with advertising and clout, like Google, and from the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly and then presenting them to a network to learn. As far as I know, no one is doing this in the community, so feedforward methods are only partially connectionist. 
By allowing popularity to predominate and choking off funds and the presentation of alternatives, we are cheating ourselves out of pursuing other, more rigorous brain-like methods. Sincerely, -Tsvi On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: > Gary- Thanks for the accessible online link to the book. > > I looked especially at the inhibitory feedback section of the book, which > describes an Air Conditioner (AC) type feedback. > It then describes a general field-like inhibition based on all activations > in the layer. It also describes the role of inhibition in sparsity and > feedforward inhibition. > > The feedback described in Regulatory Feedback is similar to the AC > feedback but occurs for each neuron individually, vis-a-vis its inputs. > Thus for context, regulatory feedback is not a field-like inhibition; it > is very directed, based on the neurons that are activated and their inputs. > This sort of regulation is also the foundation of Homeostatic Plasticity > findings (albeit with changes in Homeostatic regulation in experiments > occurring on a slower time scale). The regulatory feedback model describes > the effect and role in recognition of those regulated connections in real > time during recognition. > > I would be happy to discuss further and collaborate on writing about the > differences between the approaches for the next book or review. > > And I want to point out to folks that the system is based on politics, and > that is why certain work is not cited as it should be; but even worse, these > politics are here in the group today, and they continue to very > strongly influence decisions in the connectionist community and hold us > back. > > Sincerely, > -Tsvi > > On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > >> Tsvi - While I think Randy and Yuko's book >> is actually somewhat better than >> the online version (and buying choices on amazon start at $9.99), there >> *is* an online version. 
>> Randy & Yuko's models take into account feedback and inhibition. >> >> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: >> >>> Daniel, >>> >>> Does your book include a discussion of Regulatory or Inhibitory Feedback >>> published in several low-impact journals between 2008 and 2014 (and in >>> videos subsequently)? >>> These are networks where the primary computation is inhibition back to >>> the inputs that activated them, and they may be very counterintuitive given >>> today's trends. You can almost think of them as the opposite of Hopfield >>> networks. >>> >>> I would love to check inside the book but I don't have an academic budget >>> that allows me access to it, and that is a huge part of the problem with how >>> information is shared and funding is allocated. I could not get access to >>> any of the text or citations, especially Chapter 4: "Competition, Lateral >>> Inhibition, and Short-Term Memory", to weigh in. >>> >>> I wish the best circulation for your book, but even if the Regulatory >>> Feedback Model is in the book, that does not change the fundamental problem >>> if the book is not readily available. >>> >>> The same goes for Steve Grossberg's book; I cannot easily look inside. >>> With regards to Adaptive Resonance, I don't subscribe to lateral inhibition >>> as a predominant mechanism, but I do believe a function such as vigilance >>> is very important during recognition, and Adaptive Resonance is one of >>> a very few models that have it. The Regulatory Feedback model I have >>> developed (and Michael Spratling studies a similar model as well) is built >>> primarily using the vigilance type of connections and allows multiple >>> neurons to be evaluated at the same time and continuously during >>> recognition in order to determine which (single or multiple neurons >>> together) match the inputs the best without lateral inhibition. 
>>> >>> Unfortunately, within conferences and talks dominated by the Adaptive >>> Resonance crowd I have experienced the familiar dismissiveness and did not >>> have an opportunity to give a proper talk. This goes back to the larger >>> issue of academic politics based on small self-selected committees, the >>> same issues that exist with the feedforward crowd, and pretty much all of >>> academia. >>> >>> Today's information-age algorithms such as Google's can determine >>> the relevance of information and ways to display it, but the hegemony of the >>> journal systems and the small-committee system of academia developed in the >>> Middle Ages (and their mutual synergies) block the use of more modern >>> methods in research. Thus we are stuck with this problem, which especially >>> affects those who are trying to introduce something new and >>> counterintuitive, and hence the results described in the two National >>> Bureau of Economic Research articles I cited in my previous message. >>> >>> Thomas, I am happy to have more discussions and/or start a different >>> thread. >>> >>> Sincerely, >>> Tsvi Achler MD/PhD >>> >>> >>> >>> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S >>> wrote: >>> >>>> Tsvi, >>>> >>>> While deep learning and feedforward networks have an outsize >>>> popularity, there are plenty of published sources that cover a much wider >>>> variety of networks, many of them more biologically based than deep >>>> learning. A treatment of a range of neural network approaches, going from >>>> simpler to more complex cognitive functions, is found in my textbook * >>>> Introduction to Neural and Cognitive Modeling* (3rd edition, >>>> Routledge, 2019). Also Steve Grossberg's book *Conscious Mind, >>>> Resonant Brain* (Oxford, 2021) emphasizes a variety of architectures >>>> with a strong biological basis. 
>>>> >>>> Best, >>>> >>>> >>>> Dan Levine >>>> ------------------------------ >>>> *From:* Connectionists >>>> on behalf of Tsvi Achler >>>> *Sent:* Saturday, October 30, 2021 3:13 AM >>>> *To:* Schmidhuber Juergen >>>> *Cc:* connectionists at cs.cmu.edu >>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. >>>> >>>> Since the title of the thread is Scientific Integrity, I want to point >>>> out some issues about trends in academia, focusing especially on >>>> the connectionist community. >>>> >>>> In general, analyzing impact factors etc., the most important progress >>>> gets silenced until the mainstream picks it up: Impact Factors in >>>> novel research www.nber.org/.../working_papers/w22180/w22180.pdf >>>> and >>>> often this may take a generation >>>> https://www.nber.org/.../does-science-advance-one-funeral... >>>> >>>> . >>>> >>>> The connectionist field is stuck on feedforward networks and variants >>>> such as those with inhibition of competitors (e.g. lateral inhibition), or other >>>> variants that are sometimes labeled as recurrent networks for learning over time, >>>> where the feedforward networks can be rewound in time. >>>> >>>> This stasis is specifically occurring with the popularity of deep >>>> learning. This is often portrayed as neurally plausible connectionism but >>>> requires an implausible amount of rehearsal, and is not connectionist if >>>> this rehearsal is not implemented with neurons (see video link for further >>>> clarification). >>>> >>>> Models which have true feedback (e.g. back to their own inputs) cannot >>>> learn by backpropagation, but there is plenty of evidence these types of >>>> connections exist in the brain and are used during recognition. Thus they >>>> get ignored: no talks in universities, no featuring in "premier" journals, >>>> and no funding. >>>> >>>> But they are important and may negate the need for rehearsal as needed >>>> in feedforward methods. 
Thus they may be essential for moving connectionism >>>> forward. >>>> >>>> If the community is truly dedicated to brain-motivated algorithms, I >>>> recommend giving more time to networks other than feedforward networks. >>>> >>>> Video: >>>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 >>>> >>>> >>>> Sincerely, >>>> Tsvi Achler >>>> >>>> >>>> >>>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen >>>> wrote: >>>> >>>> Hi, fellow artificial neural network enthusiasts! >>>> >>>> The connectionists mailing list is perhaps the oldest mailing list on >>>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping >>>> that some of them - as well as their contemporaries - might be able to >>>> provide additional valuable insights into the history of the field. >>>> >>>> Following the great success of massive open online peer review (MOOR) >>>> for my 2015 survey of deep learning (now the most cited article ever >>>> published in the journal Neural Networks), I've decided to put forward >>>> another piece for MOOR. I want to thank the many experts who have already >>>> provided me with comments on it. Please send additional relevant references >>>> and suggestions for improvements for the following draft directly to me at >>>> juergen at idsia.ch: >>>> >>>> >>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>> >>>> >>>> The above is a point-for-point critique of factual errors in ACM's >>>> justification of the ACM A. M. Turing Award for deep learning and a >>>> critique of the Turing Lecture published by ACM in July 2021. This work can >>>> also be seen as a short history of deep learning, at least as far as ACM's >>>> errors and the Turing Lecture are concerned. >>>> >>>> I know that some view this as a controversial topic. However, it is the >>>> very nature of science to resolve controversies through facts. 
Credit >>>> assignment is as core to scientific history as it is to machine learning. >>>> My aim is to ensure that the true history of our field is preserved for >>>> posterity. >>>> >>>> Thank you all in advance for your help! >>>> >>>> Jürgen Schmidhuber >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >> >> -- >> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >> Computer Science and Engineering 0404 >> IF USING FEDEX INCLUDE THE FOLLOWING LINE: >> CSE Building, Room 4130 >> University of California San Diego - >> 9500 Gilman Drive # 0404 >> La Jolla, Ca. 92093-0404 >> >> Email: gary at ucsd.edu >> Home page: http://www-cse.ucsd.edu/~gary/ >> Schedule: http://tinyurl.com/b7gxpwo >> >> Listen carefully, >> Neither the Vedas >> Nor the Qur'an >> Will teach you this: >> Put the bit in its mouth, >> The saddle on its back, >> Your foot in the stirrup, >> And ride your wild runaway mind >> All the way to heaven. >> >> -- Kabir >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ioannakoroni at csd.auth.gr Fri Nov 5 02:26:38 2021 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Fri, 5 Nov 2021 08:26:38 +0200 Subject: Connectionists: =?utf-8?q?Live_e-Lecture_by_Prof=2E_Nicol=C3=B2_C?= =?utf-8?q?esa-Bianchi=3A_=E2=80=9CThe_power_of_cooperation_in_netw?= =?utf-8?q?orks_of_learning_agents=E2=80=9D=2C_9th_November_2021_17?= =?utf-8?q?=3A00-18=3A00_CET=2E_Upcoming_AIDA_AI_excellence_lecture?= =?utf-8?q?s?= Message-ID: <039c01d7d20e$175aa2f0$460fe8d0$@csd.auth.gr> Dear AI scientist/engineer/student/enthusiast, Prof. Nicolò 
Cesa-Bianchi (University of Milan, Italy), a prominent AI researcher internationally, will deliver the e-lecture: "The power of cooperation in networks of learning agents", on Tuesday 9th November 2021, 17:00-18:00 CET (8:00-9:00 am PST), (12:00 am-1:00 am CST); see details in: http://www.i-aida.org/event_cat/ai-lectures/ You can join for free using the zoom link: https://authgr.zoom.us/j/92061611776 & Passcode: 148148 The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, is very pleased to offer you top-quality scientific lectures on several current hot AI topics. Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST), (12:00 am-1:00 am CST). Attendance is free. Other upcoming lectures: 1. Prof. Cees Snoek (University of Amsterdam, Netherlands), 23rd November 2021, 17:00-18:00 CET. More lecture info in: https://www.i-aida.org/event_cat/ai-lectures/?type=future The lectures are disseminated through multiple channels and email lists (we apologize if you received it through various channels). If you want to stay informed on future lectures, you can register in the email lists AIDA email list and CVML email list. Best regards Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe -- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From hocine.cherifi at gmail.com Fri Nov 5 05:16:40 2021 From: hocine.cherifi at gmail.com (Hocine Cherifi) Date: Fri, 5 Nov 2021 10:16:40 +0100 Subject: Connectionists: Call for Participation COMPLEX NETWORKS 2021 November 30 -December 02, 2021 Message-ID: *Tenth International Conference on Complex Networks & Their Applications* http://www.complexnetworks.org COMPLEX NETWORKS 2021 proceeds as a hybrid event. 
*See the detailed program at: * https://easychair.org/smart-program/COMPLEXNETWORKS2021/ *Registration*: https://complexnetworks.org/registration/ *SPEAKERS * • Marc Barthélemy CEA France • Ginestra Bianconi Queen Mary University of London UK • João Gama University of Porto Portugal • Dirk Helbing ETH Zürich Switzerland • Yizhou Sun UCLA USA • Alessandro Vespignani Northeastern University USA *TUTORIALS (November 29, 2021)* • Elisabeth Lex Graz University of Technology Austria • Giovanni Petri ISI Foundation Italy Best regards, and looking forward to seeing you at COMPLEX NETWORKS 2021. Rosa M. Benito, Hocine Cherifi, Esteban Moro COMPLEX NETWORKS General Chairs Join us at COMPLEX NETWORKS 2021 Madrid Spain *-------------------------* Hocine CHERIFI University of Burgundy Franche-Comté Deputy Director LIB EA N° 7534 Editor in Chief Applied Network Science Editorial Board member PLOS One, IEEE ACCESS, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems, Complexity -------------- next part -------------- An HTML attachment was scrubbed... URL: From danko.nikolic at gmail.com Fri Nov 5 05:35:43 2021 From: danko.nikolic at gmail.com (Danko Nikolic) Date: Fri, 5 Nov 2021 10:35:43 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: This entire thread of discussion reminds me of this famous quote: "Academic Politics Are So Vicious Because the Stakes Are So Small" (there is no agreement on whom to attribute the quote to) For me personally, the message is: Take it easy. None of this is as big of a deal as it may seem at moments. Have more fun. Worry less. Danko Dr. Danko Nikolić 
www.danko-nikolic.com https://www.linkedin.com/in/danko-nikolic/ --- A progress usually starts with an insight --- On Fri, Nov 5, 2021 at 8:09 AM Tsvi Achler wrote: > Lastly Feedforward methods are predominant in a large part because they > have financial backing from large companies with advertising and clout like > Google and the self-driving craze that never fully materialized. > [...] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From massimo.srt at gmail.com Fri Nov 5 05:50:28 2021 From: massimo.srt at gmail.com (Massimo Sartori) Date: Fri, 5 Nov 2021 10:50:28 +0100 Subject: Connectionists: [jobs] Opening for an Assistant Professor in Medical Robotics at the University of Twente (women only) Message-ID: Would you like to make an impact with your research and inspire students? Work at a true campus university with a pleasant atmosphere? And immediately appoint your own PhD candidate? Then you will find this role an excellent fit. The Department of Biomechanical Engineering at the University of Twente invites female candidates for an assistant professor position to complement, extend and strengthen the department's competence in medical robots. About the Job As an assistant professor you will work in the Biomechanical Engineering (BE) department of the Faculty of Engineering Technology. The Department of Biomechanical Engineering consists of over 80 members and is engaged in a broad range of biomedical research topics. We focus on the design and control of medical robotic systems for a variety of clinically relevant applications. Our department has a strong focus on prostheses, wearable exoskeletons, artificial organs, surgical robots, and rehabilitation robots. For example, the research in our department ranges from the design and image-guided control of macro- and micro-scale surgical robots, to the development of neuromusculoskeletal models for the control of bionic limbs, to the design and real-time control of soft wearable exo-suits. Moreover, we also evaluate our medical robots in pre-clinical trials. In collaboration with industrial leaders we also conduct research with existing commercial medical robots to further improve the technology. We are part of the TechMed Centre, and have an extensive network of clinical institutions with whom we collaborate. We are also part of the MESA+ NanoLab, and have access to world-class cleanroom facilities. 
To learn more about the research within the Department of Biomechanical Engineering, check our website: https://www.utwente.nl/en/et/be/

We are looking for a promising female Assistant Professor who will work on topics within the broad area of medical robotics. The position is funded by a national program that strives for more female professors in engineering. We are looking for somebody who connects and strengthens our research and educational program on medical robotics, but who is also complementary to our department's expertise. The candidate is expected to participate in the teaching of courses offered by the department at Bachelor and Master level to the educational programs in technical medicine, industrial design, mechanical engineering, biomedical technology and the new cross-faculty master program Robotics. You will supervise students in their graduation assignments, and you will contribute towards the supervision of the PhD candidate(s). In this role you will have the opportunity to directly appoint your own PhD candidate. And in time, there are opportunities to grow into the role of Associate Professor. All your work takes place on the beautiful, green campus in Twente. Want to know more about the UT? Take a look here: green campus University of Twente

YOUR PROFILE
- PhD degree and expertise in a field relevant to the position.
- A passion for teaching; you hold a University Teaching Qualification for Dutch Universities or a similar qualification, or are willing to obtain one.
- Ability to supervise, train, and support junior researchers.
- Ability to define and pursue your own research direction, as evidenced by your track record and recommendations.
- Ability to design and deliver clear and engaging education.
- Ability to collaborate in research, educational and organizational projects.
- Good communication skills and an excellent command of English.
- If applicable, a commitment to learn Dutch in order to interact with students in Dutch.
- Interpersonal and organizational skills.
- Preferably, experience in writing research proposals and obtaining external funding.

OUR OFFER

We offer a full-time position for 1 year, with the prospect of a permanent position after a positive evaluation:
- A full-time position with the 30% tax ruling option and a pension scheme.
- A gross salary between € 3.807,- and € 5.922,- per month, depending on experience and qualifications.
- Additional benefits include a holiday allowance of 8% of the gross annual salary and a year-end bonus of 8.3%, with a generous annual leave plan.
- Professional and personal development programs.
- Well-equipped labs with professional lab technicians, such as the Neuromechanics, Surgical Robotics and Wearable Robotics labs.
- A start-up package that includes funding for the recruitment of one PhD candidate to help kickstart your research.
- Possibilities to collaborate with nearby clinical institutions.
- Proximity to Enschede, a mid-size city with a lively social scene, immersed in the nature of the Twente region.
- If you do not speak Dutch, we offer courses to learn the Dutch language.
- A family-friendly institution that offers parental leave (both paid and unpaid) and career support for partners.

INFORMATION AND APPLICATION

*Please note that this vacancy is only open to female scientists. This is part of the University of Twente's strategy to increase the proportion of women among its faculty and to create a working environment that is diverse, inclusive, and supportive of excellence in research and teaching.

For questions about this position, you can contact Prof. Herman van der Kooij (h.vanderkooij at utwente.nl).
You can submit your application (only via the web platform using the link below) before 30 November 2021:
- https://www.utwente.nl/en/organisation/careers/!/245/assistant-professor-in-medical-robotics-female-only

Applications must include the following documents:
- A video (2 minutes max) describing your scientific interests, why you applied for this position, and how you connect with and could strengthen our research and development in medical robotics.
- A cover letter (2 pages max) specifying how your experience and skills match the position, as well as summarizing your work. This letter should also include a research and teaching statement, reflecting your ambition and vision.
- A CV including English proficiency level, nationality, visa requirements, date of birth, experience overview, and publication list.
- Contact information for at least two academic references. A support letter will be requested only if your application is considered.

ABOUT THE ORGANIZATION

The Faculty of Engineering Technology (ET) engages in education and research in Mechanical Engineering, Civil Engineering and Industrial Design Engineering, with a view to enabling society and industry to innovate and create value using sound, efficient and sustainable technology. We are part of a 'people-first' university of technology, taking our place as an internationally leading centre for smart production, processes and devices in five domains: Health Technology, Maintenance, Smart Regions, Smart Industry and Sustainable Resources. Our faculty is home to some 1,800 Bachelor's and Master's students, 400 employees and 150 PhD candidates, and offers three degree programmes: Mechanical Engineering, Civil Engineering and Industrial Design Engineering. Our educational and research programmes are closely connected with UT research institutes: the MESA+ Institute, the TechMed Centre and the Digital Society Institute.
University of Twente (UT)

University of Twente (UT) has entered the new decade with an ambitious new vision, mission and strategy. As 'the ultimate people-first university of technology' we are rapidly expanding on our High Tech Human Touch philosophy and the unique role it affords us in society. Everything we do is aimed at maximum impact on people, society and connections through the sustainable utilisation of science and technology. We want to contribute to the development of a fair, digital and sustainable society through our open, inclusive and entrepreneurial attitude. This attitude permeates everything we do and is present in every one of UT's departments and faculties. Building on our rich legacy in merging technical and social sciences, we focus on five distinguishing research domains: Improving healthcare by personalised technologies; Creating intelligent manufacturing systems; Shaping our world with smart materials; Engineering our digital society; and Engineering for a resilient world.

As an employer, University of Twente offers jobs that matter. We equip you as a staff member to shape new opportunities both for yourself and for our society. With us, you will be part of a leading tech university that is changing our world for the better. We offer an open, inclusive and entrepreneurial climate, in which we encourage you to make healthy choices, for example, with our flexible, customisable conditions.

---
Massimo Sartori
Professor and Chair, Neuromechanical Engineering
Director, Neuromechanical Modelling and Engineering Lab
University of Twente & TechMed Center
Faculty of Engineering Technology
Department of Biomechanical Engineering
Building: Horsting; Room: W106; P.O. Box 217
7500 AE Enschede, The Netherlands
Personal: https://people.utwente.nl/m.sartori
Lab: https://bit.ly/NMLab
YouTube: https://bit.ly/NMLTube
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lmuller2 at uwo.ca Fri Nov 5 08:45:00 2021
From: lmuller2 at uwo.ca (Lyle Muller)
Date: Fri, 5 Nov 2021 12:45:00 +0000
Subject: Connectionists: Western-Fields Seminar Series | Alex Lubotzky
Message-ID: <0CB3ECBA-900D-46FA-8D4B-E8D49896D41C@uwo.ca>

The ninth talk in the 2021 Western-Fields Seminar Series in Networks, Random Graphs, and Neuroscience is next Thursday (11 November) at noon ET. Alex Lubotzky (http://www.ma.huji.ac.il/~alexlub) will give a talk titled "The C^3 problem: locally testable codes with constant rate and constant distance" (abstract below). Dr. Lubotzky is the Maurice and Clara Weil Chair in Mathematics at Hebrew University. Dr. Lubotzky completed his PhD under the supervision of Hillel Furstenberg (2020 Abel Prize) and has made foundational contributions to modern mathematics, ranging from group theory to number theory and graph theory. He is also one of the founders of the study of Ramanujan graphs.

This seminar series features monthly virtual talks from a diverse group of researchers across computational neuroscience, physics, and graph theory. We look forward to a talk from Jeannette Janssen (Dalhousie University) in December.

Registration link: https://zoom.us/meeting/register/tJYuf-GppzkjHt0W5HMDpME2UpUiE7ntO5JS

An error-correcting code is locally testable (LTC) if there is a random tester that reads only a constant number of bits of a given word and decides whether the word is in the code, or at least close to it. A long-standing problem asks if there exists such a code that also satisfies the golden standards of coding theory: constant rate and constant distance. Unlike the classical situation in coding theory, random codes are not LTC, so this problem is a challenge of a new kind. We construct such codes based on what we call (Ramanujan) Left/Right Cayley square complexes. These 2-dimensional objects seem to be of independent interest. The lecture will be self-contained. Joint work with I. Dinur, S. Evra, R. Livne and S.
Mozes

--
Lyle Muller
http://mullerlab.ca
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From akrawitz at uvic.ca Fri Nov 5 13:01:14 2021
From: akrawitz at uvic.ca (Adam Krawitz)
Date: Fri, 5 Nov 2021 17:01:14 +0000
Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To:
References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch>
Message-ID:

Tsvi,

I'm just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress:

1. If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach they will be pointed out, but that is all part of the scientific exchange. Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a "new" approach to modelling cognition). I'm sure others can provide more examples.

2. I'm much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else would be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively than (or at least as effectively as) the existing approaches, and they will come running. Examples of this include: Tesauro's TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.'s convolutional NN for the MNIST digits.
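[Editor's aside: the point above cites LeCun et al.'s convolutional network for MNIST. As a hedged illustration only, the core primitive those networks rest on is a 2-D convolution over the image; the toy image and the edge-detecting kernel below are illustrative choices for this sketch, not anything from the email or from LeCun et al.'s actual architecture.]

```python
import numpy as np

# Minimal sketch of the valid-mode 2-D convolution (cross-correlation)
# used as the layer primitive in LeCun-style convolutional networks.
# In a real convnet the kernel weights are LEARNED; here we hand-pick a
# vertical-edge detector purely to show what the operation computes.

def conv2d_valid(image, kernel):
    """Slide the kernel over the image and take dot products (no padding)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 4x4 image with a vertical dark-to-bright edge between columns 1 and 2.
img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.]])
edge_kernel = np.array([[-1., 1.],
                        [-1., 1.]])  # responds to left-dark/right-bright edges
response = conv2d_valid(img, edge_kernel)
# The 3x3 response peaks in the column where the edge sits.
```

Stacking such filters with nonlinearities and pooling, and learning the kernels by gradient descent, is the design choice that made the MNIST convnets effective.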
Clearly communicate the novel contribution of your approach and I think you will find a receptive audience.

Thanks,
Adam

From: Connectionists On Behalf Of Tsvi Achler
Sent: November 4, 2021 9:46 AM
To: gary at ucsd.edu
Cc: connectionists at cs.cmu.edu
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Lastly, feedforward methods are predominant in large part because they have financial backing from large companies with advertising and clout, like Google, and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly and then presenting them to a network to learn. As far as I know, no one in the community is doing this, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives, we are cheating ourselves out of pursuing other, more rigorous brain-like methods.

Sincerely,
-Tsvi

On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote:

Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book, which describes an air-conditioner (AC) type of feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition. The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-a-vis its inputs. Thus, for context, regulatory feedback is not a field-like inhibition; it is very directed, based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with homeostatic regulation operating on a slower time scale in experiments).
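[Editor's aside: as a minimal sketch of the per-neuron input regulation described above, here is one common formulation along the lines of Spratling's divisive input modulation, which the poster says is similar to his model. Each output neuron divisively inhibits its own inputs, and activations settle iteratively. The weight matrix, iteration count, and epsilon are illustrative assumptions, not taken from the thread.]

```python
import numpy as np

# Hedged sketch of regulatory-feedback-style recognition dynamics
# (divisive input modulation): outputs feed back onto the SAME inputs
# that drive them, and the network settles by reweighting each output
# by how well it explains the input, with no lateral inhibition term.

def regulatory_feedback(W, x, n_steps=50, eps=1e-9):
    """Settle output activations y for input pattern x.

    W : (n_outputs, n_inputs) non-negative weights
    x : (n_inputs,) non-negative input pattern
    """
    n_out = W.shape[0]
    y = np.ones(n_out) / n_out                 # start from uniform activation
    for _ in range(n_steps):
        fb = W.T @ y                           # feedback each input receives
        e = x / (fb + eps)                     # >1: under-explained input; <1: over-explained
        y = y * (W @ e) / (W.sum(axis=1) + eps)  # reweight outputs by explanatory fit
    return y

# Two overlapping patterns: output 0 uses inputs {0,1}, output 1 uses {1,2}.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
y = regulatory_feedback(W, np.array([1.0, 1.0, 0.0]))
# y settles with output 0 dominant: it alone fully explains the input,
# so output 1 is suppressed without any direct output-to-output inhibition.
```

The point of the toy example is the claimed contrast with field-like lateral inhibition: competition here emerges only through the shared input each neuron regulates.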
The regulatory feedback model describes the effect and role of those regulated connections in real time during recognition. I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks that the system is based on politics, and that is why certain work is not cited as it should be; but even worse, these politics are here in the group today, and they continue to very strongly influence decisions in the connectionist community and hold us back.

Sincerely,
-Tsvi

On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote:

Tsvi - While I think Randy and Yuko's book is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version. Randy & Yuko's models take into account feedback and inhibition.

On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote:

Daniel,

Does your book include a discussion of Regulatory or Inhibitory Feedback, published in several low-impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them, and they may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes for Steve Grossberg's book; I cannot easily look inside.
With regards to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time, and continuously, during recognition, in order to determine which neurons (singly or together) match the inputs best without lateral inhibition.

Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd, I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms, such as Google's, can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia developed in the middle ages (and their mutual synergies) block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message.

Thomas, I am happy to have more discussions and/or start a different thread.

Sincerely,
Tsvi Achler MD/PhD

On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote:

Tsvi,

While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning.
A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also, Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis.

Best,
Dan Levine

________________________________
From: Connectionists on behalf of Tsvi Achler
Sent: Saturday, October 30, 2021 3:13 AM
To: Schmidhuber Juergen
Cc: connectionists at cs.cmu.edu
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, focusing especially on the connectionist community. In general, analyses of impact factors and the like show that the most important progress gets silenced until the mainstream picks it up (Impact Factors in novel research: www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...).

The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or variants that are sometimes labeled as recurrent networks for learning over time, where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning. This is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence that these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals and no funding.
But they are important and may negate the need for rehearsal as required in feedforward methods. Thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks.

Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2

Sincerely,
Tsvi Achler

On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen wrote:

Hi, fellow artificial neural network enthusiasts!

The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field.

Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch:

https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html

The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned.

I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity.
Thank you all in advance for your help!

Jürgen Schmidhuber
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maanakg at gmail.com Fri Nov 5 11:41:53 2021
From: maanakg at gmail.com (Maanak Gupta)
Date: Fri, 5 Nov 2021 10:41:53 -0500
Subject: Connectionists: Third Call (FIRST ROUND): 27th ACM Symposium on Access Control Models and Technologies
Message-ID:

ACM SACMAT 2022
New York City, New York
-----------------------------------------------
| Hybrid Conference (Online + In-person) |
-----------------------------------------------

Call for Research Papers
==============================================================
Papers offering novel research contributions are solicited for submission. Accepted papers will be presented at the symposium and published by the ACM in the symposium proceedings. In addition to the regular research track, this year SACMAT will again host the special track -- "Blue Sky/Vision Track". Researchers are invited to submit papers describing promising new ideas and challenges of interest to the community, as well as access control needs emerging from other fields. We are particularly looking for potentially disruptive and new ideas which can shape the research agenda for the next 10 years. We also encourage submissions to the "Work-in-progress Track" to present ideas that may not have been completely developed and experimentally evaluated.
Topics of Interest
==============================================================
Submissions to the regular track covering any relevant area of access control are welcomed. Areas include, but are not limited to, the following:

* Systems:
  * Operating systems
  * Cloud systems and their security
  * Distributed systems
  * Fog and Edge-computing systems
  * Cyber-physical and Embedded systems
  * Mobile systems
  * Autonomous systems (e.g., UAV security, autonomous vehicles, etc.)
  * IoT systems (e.g., home-automation systems)
  * WWW
  * Design for resiliency
  * Designing systems with zero-trust architecture
* Network:
  * Network systems (e.g., Software-defined network, Network function virtualization)
  * Corporate and Military-grade Networks
  * Wireless and Cellular Networks
  * Opportunistic Network (e.g., delay-tolerant network, P2P)
  * Overlay Network
  * Satellite Network
* Privacy and Privacy-enhancing Technologies:
  * Mixers and Mixnets
  * Anonymous protocols (e.g., Tor)
  * Online social networks (OSN)
  * Anonymous communication and censorship resistance
  * Access control and identity management with privacy
  * Cryptographic tools for privacy
  * Data protection technologies
  * Attacks on Privacy and their defenses
* Authentication:
  * Password-based Authentication
  * Biometric-based Authentication
  * Location-based Authentication
  * Identity management
  * Usable authentication
* Mechanisms:
  * Blockchain Technologies
  * AI/ML Technologies
  * Cryptographic Technologies
  * Programming-language based Technologies
  * Hardware-security Technologies (e.g., Intel SGX, ARM TrustZone)
  * Economic models and game theory
  * Trust Management
  * Usable mechanisms
* Data Security:
  * Big data
  * Databases and data management
  * Data leakage prevention
  * Data protection on untrusted infrastructure
* Policies and Models:
  * Novel policy language design
  * New Access Control Models
  * Extension of policy languages
  * Extension of Models
  * Analysis of policy languages
  * Analysis of Models
  * Policy engineering and policy mining
  * Verification of policy languages
  * Efficient enforcement of policies
  * Usable access control policy

New in ACM SACMAT 2022
==============================================================
We are moving ACM SACMAT 2022 to two submission cycles. Authors submitting papers in the first submission cycle will have the opportunity to receive a major revision verdict, in addition to the usual accept and reject verdicts. Authors can decide to prepare a revised version of the paper and submit it to the second submission cycle for consideration. Major revision papers will be reviewed by the program committee members based on the criteria set forward by them in the first submission cycle.

Regular Track Paper Submission and Format
==============================================================
Papers must be written in English. Authors are required to use the ACM format for papers, using the two-column SIG Proceedings Template (the sigconf template for LaTeX) available at the following link: https://www.acm.org/publications/authors/submissions

The length of the paper in the proceedings format must not exceed twelve US letter pages formatted for 8.5" x 11" paper and be no more than 5MB in size. It is the responsibility of the authors to ensure that their submissions will print easily on simple default configurations. The submission must be anonymous, so information that might identify the authors - including author names, affiliations, acknowledgments, or obvious self-citations - must be excluded. It is the authors' responsibility to ensure that their anonymity is preserved when citing their work.

Submissions should be made to the EasyChair conference management system by the paper submission deadline of:
November 15th, 2021 (Submission Cycle 1)
February 18th, 2022 (Submission Cycle 2)

All submissions must contain a significant original contribution.
That is, submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal, conference, or workshop. In particular, simultaneous submission of the same work is not allowed. Wherever appropriate, relevant related work, including that of the authors, must be cited. Submissions that are not accepted as full papers may be invited to appear as short papers. At least one author from each accepted paper must register for the conference before the camera-ready deadline.

Blue Sky Track Paper Submission and Format
==============================================================
All submissions to this track should be in the same format as for the regular track, but the length must not exceed ten US letter pages, and anonymization of the submission is optional. Submissions to this track should be made to the EasyChair conference management system by the same deadline as for the regular track.

Work-in-progress Track Paper Submission and Format
==============================================================
Authors are invited to submit papers to the newly introduced work-in-progress track. This track is introduced for (junior) authors, ideally Ph.D. and Master's students, to obtain early, constructive feedback on their work. Submissions in this track should follow the same format as for the regular track papers, while limiting the total number of pages to six US letter pages. Papers submitted to this track should be anonymized and can be submitted to the EasyChair conference management system by the same deadline as for the regular track.

Call for Lightning Talks
==============================================================
Participants are invited to submit proposals for 5-minute lightning talks describing recently published results, work in progress, wild ideas, etc.
Lightning talks are a new feature of SACMAT, introduced this year to partially replace the informal sharing of ideas at in-person meetings. Submissions are expected by May 27, 2022. Notification of acceptance will be on June 3, 2022.

Call for Posters
==============================================================
SACMAT 2022 will include a poster session to promote discussion of ongoing projects among researchers in the field of access control and computer security. Posters can cover preliminary or exploratory work with interesting ideas, or research projects in the early stages with promising results, in all aspects of access control and computer security. Authors interested in displaying a poster must submit a poster abstract in the same format as for the regular track, but the length must not exceed three US letter pages, and the submission should not be anonymized. The title should start with "Poster:". Accepted poster abstracts will be included in the conference proceedings. Submissions should be emailed to the poster chair by April 15th, 2022. The subject line should include "SACMAT 2022 Poster:" followed by the poster title.

Call for Demos
==============================================================
A demonstration proposal should clearly describe (1) the overall architecture of the system or technology to be demonstrated, and (2) one or more demonstration scenarios that describe how the audience, interacting with the demonstration system or the demonstrator, will gain an understanding of the underlying technology. Submissions will be evaluated based on the motivation of the work behind the use of the system or technology to be demonstrated and its novelty. The subject line should include "SACMAT 2022 Demo:" followed by the demo title. Demonstration proposals should be in the same format as for the regular track, but the length must not exceed four US letter pages, and the submission should not be anonymized.
A two-page description of the demonstration will be included in the conference proceedings. Submissions should be emailed to the Demonstrations Chair by April 15th, 2022.

Financial Conflict of Interest (COI) Disclosure
==============================================================
In the interests of transparency, and to help readers form their own judgments of potential bias, ACM SACMAT requires authors and PC members to declare any competing financial and/or non-financial interests in relation to the work described.

Definition
-------------------------
For the purposes of this policy, competing interests are defined as financial and non-financial interests that could directly undermine, or be perceived to undermine, the objectivity, integrity, and value of a publication, through a potential influence on the judgments and actions of authors with regard to objective data presentation, analysis, and interpretation.

Financial competing interests include any of the following:

Funding: Research support (including salaries, equipment, supplies, and other expenses) by organizations that may gain or lose financially through this publication. A specific role for the funding provider in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript should be disclosed.

Employment: Recent (while engaged in the research project), present, or anticipated employment by any organization that may gain or lose financially through this publication.

Personal financial interests: Ownership of, or contractual interest in, stocks or shares of companies that may gain or lose financially through publication; consultation fees or other forms of remuneration (including reimbursements for attending symposia) from organizations that may gain or lose financially; patents or patent applications (awarded or pending) filed by the authors or their institutions whose value may be affected by publication.
For patents and patent applications, disclosure of the following information is requested: patent applicant (whether author or institution), name of the inventor(s), application number, status of the application, and the specific aspect of the manuscript covered by the patent application. It is difficult to specify a threshold at which a financial interest becomes significant, but note that many US universities require faculty members to disclose interests exceeding $10,000 or 5% equity in a company. Any such figure is necessarily arbitrary, so we offer as one possible practical alternative guideline: "Any undeclared competing financial interests that could embarrass you were they to become publicly known after your work was published." We do not consider diversified mutual funds or investment trusts to constitute a competing financial interest. Also, for employees in non-executive or leadership positions, we do not consider financial interest related to stocks or shares in their company to constitute a competing financial interest, as long as they are publishing under their company affiliation.

Non-financial competing interests: Non-financial competing interests can take different forms, including personal or professional relations with organizations and individuals. We encourage authors and PC members to declare any unpaid roles or relationships that might have a bearing on the publication process. Examples of non-financial competing interests include (but are not limited to):

* Unpaid membership in a government or non-governmental organization
* Unpaid membership in an advocacy or lobbying organization
* Unpaid advisory position in a commercial organization
* Writing or consulting for an educational company
* Acting as an expert witness

Conference Code of Conduct and Etiquette
==============================================================

ACM SACMAT will follow the ACM Policy Against Harassment at ACM Activities.
Please familiarize yourself with the ACM Policy Against Harassment (available at https://www.acm.org/special-interest-groups/volunteer-resources/officers-manual/policy-against-discrimination-and-harassment) and the guide to Reporting Unacceptable Behavior (available at https://www.acm.org/about-acm/reporting-unacceptable-behavior).

AUTHORS TAKE NOTE
==============================================================

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks before the first day of your conference. The official publication date affects the deadline for any patent filings related to published work. (For those rare conferences whose proceedings are published in the ACM Digital Library after the conference is over, the official publication date remains the first day of the conference.)

Important dates
==============================================================

**Note that these dates are currently tentative and subject to change.**

* Paper submission: November 15th, 2021 (Submission Cycle 1); February 18th, 2022 (Submission Cycle 2)
* Rebuttal: December 16th - December 20th, 2021 (Submission Cycle 1); March 24th - March 28th, 2022 (Submission Cycle 2)
* Notifications: January 14th, 2022 (Submission Cycle 1); April 8th, 2022 (Submission Cycle 2)
* Systems demo and poster submissions: April 15th, 2022
* Systems demo and poster notifications: April 22nd, 2022
* Panel proposal: March 18th, 2022
* Camera-ready paper submission: April 29th, 2022
* Conference date: June 8 - June 10, 2022

-------------- next part -------------- An HTML attachment was scrubbed...
URL: 

From yiz at soe.ucsc.edu Fri Nov 5 14:02:52 2021
From: yiz at soe.ucsc.edu (Yi Zhang)
Date: Fri, 5 Nov 2021 11:02:52 -0700
Subject: Connectionists: Tenured Associate or Full Professor of NLP Position at University of California Santa Cruz
Message-ID: 

University of California Santa Cruz has an opening for a tenured NLP Associate or Full Professor position. If you are interested in teaching, research, and living in Silicon Valley, California, you are welcome to apply or contact us. Thanks!

Apply link: https://recruit.ucsc.edu/JPF01170
Help contact: egregg at ucsc.edu
Anticipated start: July 1, 2022, with the academic year beginning in September 2022.

APPLICATION WINDOW

Open date: October 29th, 2021
Next review date: Friday, Jan 7, 2022 at 11:59pm (Pacific Time). Apply by this date to ensure full consideration by the committee.
Final date: Thursday, Jun 30, 2022 at 11:59pm (Pacific Time). Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.

POSITION DESCRIPTION

The Department of Computer Science and Engineering at the University of California, Santa Cruz (UCSC) invites applications for a tenured Associate Professor or Full Professor. We seek outstanding applicants with research and teaching expertise in all areas of Natural Language Processing (NLP). We are especially interested in candidates who have contributed to one or more application areas of NLP, including but not limited to information extraction, dialogue and interactive systems, semantics, syntax, information retrieval and text mining, question answering, language grounding, speech and multimodality, NLP for social good, and machine translation. The Department of Computer Science and Engineering is part of the Baskin School of Engineering at UC Santa Cruz. UC Santa Cruz is a member of the Association of American Universities (AAU), an association of the top research universities in the US.
Our school has nationally and internationally known researchers in many areas, including theoretical computer science, programming languages, security, distributed systems, storage systems, computer architectures, machine learning, natural language processing, vision, VLSI, and networking. The Baskin School of Engineering is home to six departments, contributing to the richness of its research. Nestled in a redwood forest above the city of Santa Cruz, our beautiful campus has a long history of embracing groundbreaking interdisciplinary work. Of the ten University of California campuses, UCSC is the nearest to Silicon Valley and has close research ties with the local computer industry. Our proximity to Silicon Valley, and our satellite campus there, afford opportunities and avenues for collaboration with researchers working in the many research and development labs in Silicon Valley, as well as with the other San Francisco Bay Area universities. The faculty member will contribute to the M.S. program in NLP located at the UCSC Silicon Valley Campus in Santa Clara, California, and will demonstrate an established record in research and publications; university teaching at the undergraduate and graduate level (or closely analogous activities); and extramural funding awards (or similar success in garnering support for research endeavors). We also value industrial experience or collaboration. We are especially interested in candidates who can contribute to the diversity and excellence of our academic community through their research, teaching, and service. We welcome candidates who understand the barriers facing women and minorities who are underrepresented in higher education careers (as evidenced by life experiences and educational background), and who have experience in equity and diversity with respect to teaching, mentoring, research, life experiences, or service towards building an equitable and diverse scholarly environment.
The primary office for this position is located in Santa Clara, due to the expectation of teaching and mentoring students in this location. Space for PhD students for this position is also located in Santa Clara. Graduate-level teaching duties will be mainly at the Santa Clara campus, with undergraduate courses to be taught at the Santa Cruz campus. The successful applicant will typically spend multiple days per week in Santa Clara and is also expected to spend, on average, one day per week on the Santa Cruz campus (more days will be required when teaching undergraduate courses on the Santa Cruz campus). The ability for on-demand transportation between Santa Clara and Santa Cruz, with or without accommodations, is essential. The chosen candidate will be expected to sign a statement representing that they are not the subject of any ongoing investigation or disciplinary proceeding at their current academic institution or place of employment, nor have they in the past ten years been formally disciplined at any academic institution/place of employment. In the event the candidate cannot make this representation, they will be expected to disclose in writing to the hiring Dean the circumstances surrounding any formal discipline that they have received, as well as any current or ongoing investigation or disciplinary process of which they are the subject. (Note that discipline includes a negotiated settlement agreement to resolve a matter related to substantiated misconduct.)

Department of Computer Science and Engineering: https://engineering.ucsc.edu/departments/computer-science-and-engineering

Associate or Full Professor of Natural Language Processing (JPF01170)

QUALIFICATIONS

Basic qualifications (required at time of application): A Ph.D. (or equivalent foreign degree) in computer science or a field relevant to the advertised position; a demonstrated record of research and publications in computer science; and a demonstrated record of teaching experience.
APPLICATION REQUIREMENTS

Document requirements:

* Statement of Contributions to Diversity, Equity, and Inclusion - Statement addressing your understanding of the barriers facing traditionally underrepresented groups and your past and/or future contributions to diversity, equity, and inclusion through teaching and professional or public service. Candidates are urged to review the guidelines on statements (see https://apo.ucsc.edu/diversity.html) before preparing their application. Initial screening of applicants will be based on the statement of contributions to diversity, equity, and inclusion.
* Curriculum Vitae - Your most recently updated C.V.
* Cover Letter - Letter of application that briefly summarizes your qualifications and interest in the position.
* Statement of Research
* Statement of Teaching

Reference requirements: 3-5 required (contact information only). Applicants must provide the names and contact information of their references. The hiring unit will request confidential letters** from the references of those applicants who are under serious consideration. Please note that your references, or dossier service, will submit their confidential letters directly to the UC Recruit System. **All letters will be treated as confidential per University of California policy and California state law. For any reference letter provided via a third party (i.e., dossier service, career center), direct the author to UCSC's confidentiality statement at http://apo.ucsc.edu/confstm.htm.

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From contact at sscc.fr Fri Nov 5 07:13:00 2021
From: contact at sscc.fr (SSCC)
Date: Fri, 5 Nov 2021 12:13:00 +0100
Subject: Connectionists: [SSCC-Updated deadline] Call for Papers (Symposium on Solutions for Smart Cities Challenges)
Message-ID: <005801d7d236$1fb8a320$5f29e960$@sscc.fr>

Symposium on Solutions for Smart Cities Challenges (SSCC 2021)
Gandia, Spain.
December 6-9, 2021 (Hybrid)
https://www.sscc.fr/sscc2021

The Internet of Things (IoT) is used to collect and exchange massive amounts of data. This technology holds immense potential for improving the quality of life, healthcare, manufacturing, transportation, etc. The use of IoT in smart buildings is of great importance and promises outcomes with a direct impact on our society. Researchers and industrial partners have developed several applications that leverage various enabling technologies for service enhancement. Many sectors in a smart city can benefit from enhanced data collection and effective analysis of the data gathered from smart building devices, which mainly consist of HVAC systems. However, the growing number of connected IoT devices requires a scalable and robust network; it also enlarges the attack surface of the devices and their connections, making them more exposed to internal and external attacks. In this context, the challenging issue is how to construct a secure IoT network and preserve its resiliency. SSCC 2021 invites submissions discussing the employment of smart solutions and approaches in smart cities. Topics of theoretical, empirical, or applied interest include, but are not limited to:

Safety, Security, and Resilience
. Smart networks for smart cities
. Security management in smart cities
. Security in distributed systems
. Modeling, analysis and detection of IoT attacks
. Data mining for cybersecurity in smart cities
. Decentralized architecture for smart cities
. Consensus protocols and applications

IoT & AI
. IoT indoor deployment
. IoT communication protocols
. Building information modeling (BIM)
. IoT-based HVAC control in smart buildings
. Artificial Intelligence in Cyber Physical Energy Systems
. Optimization for IoT and smart cities
. Dynamic scheduling for IoT deployment
. Autonomous and smart decisions

Edge and Cloud
. Cloud-Edge for IoT and smart cities .
Fog and Edge computing for smart cities
. Applications/services for Edge AI
. Software Platforms for Edge

Social aspects and applications
. Behavioral and Energy Consumption Analytics
. Indoor comfort
. Human factors and organizational resilience for distributed systems

Important Dates
. Paper Submission Date: 14 November, 2021
. Notification to Authors: 23 November, 2021
. Camera-Ready Submission: 30 November, 2021

Submission System: https://easychair.org/conferences/?conf=sscc2021

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From dominik.endres at uni-marburg.de Fri Nov 5 12:44:04 2021
From: dominik.endres at uni-marburg.de (Dominik Endres)
Date: Fri, 5 Nov 2021 17:44:04 +0100
Subject: Connectionists: Two Data Scientist positions at the Philipps-Universitaet Marburg, Germany
Message-ID: <1d1929c9-3af7-ff88-3d62-65d97fa63156@uni-marburg.de>

We are offering *two Data Scientist* (m/f/d) positions as a part of the cluster project *The Adaptive Mind* at the Philipps-Universität Marburg, Germany. The positions are associated with the Center of Mind, Brain and Behavior (CMBB) in Marburg. If you are interested in high-performance computing, machine learning and data management for large, interdisciplinary projects, then one of these jobs might be for you. Please contact Prof. Dominik Endres, dominik.endres at uni-marburg.de, for more information. The full job adverts can be found here:

https://www.uni-marburg.de/de/universitaet/administration/verwaltung/dezernat2/personalabteilung/bewerber/stellen/wissenschaftliche-stellen/ze-0111-tam-wmz-261121-engl.pdf
https://www.uni-marburg.de/de/universitaet/administration/verwaltung/dezernat2/personalabteilung/bewerber/stellen/wissenschaftliche-stellen/ze-0112-tam-wmz-2021-engl.pdf

The application deadline is Nov. 26th, 2021.

regards,
Dominik Endres

-- Prof. Dr.
Dominik Endres
Theoretische Kognitionswissenschaft - Theoretical Cognitive Science
FB 04 Psychologie
Tel: 06421-28 23818

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From marcello.pelillo at gmail.com Fri Nov 5 14:43:08 2021
From: marcello.pelillo at gmail.com (Marcello Pelillo)
Date: Fri, 5 Nov 2021 19:43:08 +0100
Subject: Connectionists: Frontiers in Computer Vision - Call for contributions and Special Issue proposals
Message-ID: 

Dear colleagues,

the *Computer Vision* section of *Frontiers in Computer Science* is undergoing a massive overhaul in terms of scope, editorial board, and editorial strategies:

https://www.frontiersin.org/journals/computer-science/sections/computer-vision

We welcome contributions in all relevant areas of computer vision, from both academia and industry. Among the distinguishing features of Frontiers' open-access journals are fast publication times and an innovative collaborative peer-review process (see here for details). We also particularly welcome *Special Issue proposals* ("Research Topics" in Frontiers' terminology) on cutting-edge themes:

https://www.frontiersin.org/about/research-topics

Here's a list of the most recent ones:

- Continual Unsupervised Learning in Computer Vision
- Synchronization in Computer Vision
- Sketching for Human Expressivity
- Deep Generative Models: Algorithms and Applications
- Human-Centered Visual Perception
- Attentive Models in Vision

If you have any questions about the journal, feel free to contact me or the editorial office.

Best regards
-mp

--
Marcello Pelillo, *FIEEE, FIAPR, FAAIA*
Professor of Computer Science
Ca' Foscari University of Venice, Italy
IEEE SMC Distinguished Lecturer
Specialty Chief Editor, *Computer Vision - Frontiers in Computer Science*

-------------- next part -------------- An HTML attachment was scrubbed...
URL: 

From coralie.gregoire at insa-lyon.fr Fri Nov 5 06:59:51 2021
From: coralie.gregoire at insa-lyon.fr (Coralie Gregoire)
Date: Fri, 5 Nov 2021 11:59:51 +0100 (CET)
Subject: Connectionists: [CFP EXTENDED DEADLINE] The ACM Web Conference 2022 - Tutorials Proposal - November 11th, 2021
Message-ID: <689378952.5373341.1636109991805.JavaMail.zimbra@insa-lyon.fr>

[Apologies for the cross-posting, this call is sent to numerous lists you may have subscribed to]

The ACM Web Conference 2022
[CFP] EXTENDED Deadline for Tutorial Proposals: November 11th, 2021 AoE

We invite tutorial proposals to be held at The Web Conference 2022 (formerly known as WWW). The conference will take place online, hosted by Lyon, France, on April 25-29, 2022.

------------------------------------------------------------

Call for Tutorial Proposals

*Important Dates*
- NEW Deadline: November 11th, 2021
- Notification: December 16, 2021
- Online material due: April 7, 2022

Tutorials chairs: (www2022-tutorials at easychair.org)
- Senjuti Basu Roy (New Jersey Institute of Technology, USA)
- Riccardo Tommasini (INSA Lyon, France)

We invite tutorial proposals on current and emerging topics related to the World Wide Web, broadly construed to include mobile and other Internet and online-enabled modes of interaction and communication. Tutorials are intended to provide a high-quality learning experience to conference attendees. It is expected that tutorials will address an audience with a varied range of interests and backgrounds: beginners, developers, designers, researchers, practitioners, users, lecturers, and representatives of governments and funding agencies who wish to learn new technologies.

*Organization Details for Tutorials*

The Web Conf 2022 welcomes two types of tutorials. Lecture-style tutorials will typically be 1.5 hours in duration, while hands-on tutorials can be either 1.5 hours or 3 hours long.

1.
A lecture-style tutorial (L) will cover the state-of-the-art research, development, and applications in a specific web computing and related area, and stimulate and facilitate future work. Tutorials on interdisciplinary directions, bridging scientific research and applied communities, novel and fast-growing directions, and significant applications are highly encouraged.

2. A hands-on tutorial (H) will feature in-depth hands-on training on cutting-edge systems and tools of relevance to the web conference community. These sessions are targeted at novice as well as moderately skilled users. The focus should be on providing hands-on experience to the attendees. Tutorials should introduce the motivation behind the tool, cover the associated fundamental concepts, work through examples, and demonstrate its application to relatable real-life use cases. The pace of the tutorial should be adequate for beginners, e.g., early-stage Ph.D. and master's students.

We welcome tutorials on the following topics, though this is not an exhaustive list:

- Recommender systems
- Scaling NLP systems
- Advertisement on the Web
- Responsible web computing
- Web search and mining
- Decentralized web data management, web-scale computing, and data integration
- Knowledge graphs (common-sense knowledge graphs; extraction, construction, and maintenance of knowledge graphs)
- Stream reasoning, Web velocity, and dynamic knowledge graphs
- Learning, reasoning, and inference on the Web
- Fact-checking in the context of misinformation and disinformation propagation
- Human-in-the-loop on the Web
- Web engineering
- Conversational AI
- Neurocomputing on the Web
- FAIR web data management

The Web Conference 2022 is also featuring special tracks on the Web for Good, on e-sports and online gaming, as well as on the History of the Web. Tutorials on topics related to these themes are highly encouraged. All tutorials will be part of the main conference technical program.
*Submission Guidelines*

Tutorial proposals should be in English and should contain no more than five (5) pages (according to the ACM format acmart.cls, using the "sigconf" option). Submissions must be in PDF and must be made through the EasyChair system at https://easychair.org/conferences/?conf=thewebconf2022 (select the Tutorial track). In addition, a video teaser of up to 3 minutes must be prepared as additional material of the submission (see also below). Proposals should follow this outline:

- General information: Title of the tutorial; organizers' and presenters' names, affiliations, contact information, and brief bios.

- Abstract: 1-2 paragraphs suitable for inclusion in the conference registration material.

- Topic and relevance: A description of the tutorial topic, providing a sense of both the scope of the tutorial and depth within the scope, and a statement on why the tutorial is important and timely, how it is relevant to The Web Conference, and why the presenters are qualified for a high-quality introduction of the topic.

- Style & duration: Please indicate whether this will be a lecture-style or hands-on tutorial. In the case of the latter, please indicate the equipment needs for participants (e.g., a pre-installed Jupyter notebook with specific packages). Please also indicate the proposed duration of the hands-on tutorial (either 1.5 hours or 3 hours, whereas lecture-style tutorials are 1.5 hours), together with a justification that a high-quality learning experience will be achieved within the chosen time period.

- Audience: A description of the intended audience, prerequisite knowledge, and the expected learning outcomes.

- Previous editions: If the tutorial was given before, where and when was it presented? Please give details on the number of attendees, and how the proposed tutorial differs from or builds on the previous ones. If possible, provide a link to the slides of the previous tutorial presentation.
- Tutorial materials: What tutorial materials will be provided to attendees? Are there any copyright issues?

- Video teaser: A video teaser, up to 3 minutes, is required at the time of submission. The video can be hosted on any video sharing platform (e.g. YouTube) or any file sharing service (e.g. WeTransfer, Dropbox), and the link to the video MUST be included in the proposal.

- Organization details: Tutorial organizers are required to provide a backup plan that overcomes the potential occurrence of technical problems, e.g., pre-recorded lectures or self-paced exercises. The tutorial presenter(s) will be responsible for making sure that the slides and any material needed for the tutorial are made available online in advance for attendees.

Review Process

The decision about acceptance or rejection of tutorial proposals will be made by the Tutorial Co-chairs, who may be supported by a small program committee, in consultation with the General and Program Committee Co-chairs, taking into account several factors including the timeliness of the topic, the topic's fit with respect to The Web Conference 2022, the coverage of the topic in other tracks of the conference, the capacity of the venue, and the expertise of the presenters.
You can reach the Tutorials Chairs at www2022-tutorials at easychair.org

============================================================
Contact us: contact at thewebconf.org
- Facebook: https://www.facebook.com/TheWebConf
- Twitter: https://twitter.com/TheWebConf
- LinkedIn: https://www.linkedin.com/showcase/18819430/admin/
- Website: https://www2022.thewebconf.org/
==============================================

From schockaerts1 at cardiff.ac.uk Fri Nov 5 11:28:29 2021
From: schockaerts1 at cardiff.ac.uk (Steven Schockaert)
Date: Fri, 5 Nov 2021 15:28:29 +0000
Subject: Connectionists: Postdoctoral position at Cardiff University
Message-ID: 

Location: Cardiff, UK
Deadline for applications: 25th November 2021
Start date: 1st April 2022 (or as soon as possible thereafter)
Duration: 24 months
Keywords: commonsense reasoning, representation learning

Details about the post

We are looking for a postdoctoral research associate to work on the EPSRC-funded project "Encyclopedic Lexical Representations for Natural Language Processing (ELEXIR)". The aim of this project is to learn vector space embeddings that capture fine-grained knowledge about concepts. Different from existing approaches, these representations will explicitly represent the properties of, and relationships between, concepts. Vectors in the proposed framework will thus intuitively play the role of facts, about which we can reason in a principled way. More details about this post can be found at:

https://krb-sjobs.brassring.com/TGnewUI/Search/home/HomeWithPreLoad?partnerid=30011&siteid=5460&PageType=JobDetails&jobid=1879241#jobDetails=1879241_5460

Background about the ELEXIR project

The field of Natural Language Processing (NLP) has made unprecedented progress over the last decade, but the extent to which NLP systems "understand" language is still remarkably limited. A key underlying problem is the need for a vast amount of world knowledge.
In this project, we focus on conceptual knowledge, and in particular on: (i) capturing what properties are associated with a given concept (e.g. lions are dangerous, boats can float); (ii) characterising how different concepts are related (e.g. brooms are used for cleaning, bees produce honey). Our proposed approach relies on the fact that Wikipedia contains a wealth of such knowledge. Unfortunately, however, important properties and relationships are often not explicitly mentioned in text, especially if they follow straightforwardly from other information for a human reader (e.g. if X is an animal that can fly then X probably has wings). Apart from learning to extract knowledge expressed in text, we thus also have to learn how to reason about conceptual knowledge. A central question is how conceptual knowledge should be represented and incorporated in language model architectures. Current NLP systems heavily rely on vector representations in which each concept is represented by a single vector. This approach has important theoretical limitations in terms of what knowledge can be captured, and it only allows for shallow forms of reasoning. In contrast, in symbolic AI, conceptual knowledge is typically represented using facts and rules. This enables powerful forms of reasoning, but symbolic representations are harder to learn and to use in neural networks. The solution we propose relies on a novel hybrid representation framework, which combines the main advantages of vector representations with those of symbolic methods. In particular, we will explicitly represent properties and relationships, as in symbolic frameworks, but these properties and relations will be encoded as vectors. Each concept will thus be associated with several property vectors, while pairs of related concepts will be associated with one or more relation vectors.
Our vectors will thus intuitively play the same role that facts play in symbolic frameworks, with associated neural network models then playing the role of rules.

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From ASIM.ROY at asu.edu Sat Nov 6 04:57:24 2021
From: ASIM.ROY at asu.edu (Asim Roy)
Date: Sat, 6 Nov 2021 08:57:24 +0000
Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: 
References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch>
Message-ID: 

Over a period of more than 25 years, I have had the opportunity to argue about the brain in both public forums and private discussions. And they included very well-known scholars such as Walter Freeman (UC-Berkeley), Horace Barlow (Cambridge; great-grandson of Charles Darwin), Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of whom are on this list. And many became good friends through these debates. We argued about many issues over the years, but the one that baffled me the most was the one about localist vs. distributed representation. Here's the issue. As far as I know, although all the Nobel prizes in the field of neurophysiology - from Hubel and Wiesel (simple and complex cells) and Moser and O'Keefe (grid and place cells) to the current one on the discovery of temperature- and touch-sensitive receptors and neurons - are about finding "meaning" in single cells or groups of dedicated cells, the distributed representation theory has yet to explain these findings of "meaning." Contrary to the assertion that the field is open-minded, I think most in this field are afraid to cross the red line. Horace Barlow was the exception.
He was perhaps the only neuroscientist who was willing to cross the red line and declare that "grandmother cells will be found." After a debate on this issue in 2012, which included Walter Freeman and others, Horace visited me in Phoenix at the age of 91 for further discussion. If the field is open-minded, I would love to hear how distributed representation is compatible with finding "meaning" in the activations of single cells or a dedicated group of cells.

Asim Roy
Professor, Arizona State University
Lifeboat Foundation Bios: Professor Asim Roy

From: Connectionists On Behalf Of Adam Krawitz
Sent: Friday, November 5, 2021 10:01 AM
To: connectionists at cs.cmu.edu
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Tsvi,

I'm just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress:

1. If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach they will be pointed out, but that is all part of the scientific exchange. Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a "new" approach to modelling cognition). I'm sure others can provide more examples.

2. I'm much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively than (or at least as effectively as) the existing approaches, and they will come running.
Examples of this include: Tesauro's TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.'s convolutional NN for the MNIST digits. Clearly communicate the novel contribution of your approach and I think you will find a receptive audience.

Thanks,
Adam

From: Connectionists > On Behalf Of Tsvi Achler
Sent: November 4, 2021 9:46 AM
To: gary at ucsd.edu
Cc: connectionists at cs.cmu.edu
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Lastly, feedforward methods are predominant in large part because they have financial backing from large companies with advertising and clout, like Google, and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly, and then presenting them to a network to learn. As far as I know, no one in the community is doing this, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives, we are cheating ourselves out of pursuing other, more rigorous brain-like methods.

Sincerely,
-Tsvi

On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler > wrote:

Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book, which describes an air-conditioner (AC) type of feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition. The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-a-vis its inputs. Thus, for context, regulatory feedback is not a field-like inhibition; it is very directed, based on the neurons that are activated and their inputs.
This sort of regulation is also the foundation of homeostatic plasticity findings (albeit with changes in homeostatic regulation occurring on a slower time scale in experiments). The regulatory feedback model describes the effect and role of those regulated connections in real time during recognition. I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks that the system is based on politics, and that is why certain work is not cited like it should be; but even worse, these politics are here in the group today, and they continue to very strongly influence decisions in the connectionist community and hold us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: Tsvi - While I think Randy and Yuko's book is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version. Randy & Yuko's models take into account feedback and inhibition. On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback, published in several low-impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them, and they may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in.
I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes for Steve Grossberg's book; I cannot easily look inside. With regards to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time, and continuously, during recognition in order to determine which neurons (singly or together) match the inputs the best without lateral inhibition. Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms such as Google's can determine the relevance of information and ways to display it, but the hegemony of the journal systems and the small-committee system of academia developed in the middle ages (and their mutual synergies) block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread.
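[Editor's note] For readers trying to place the model under discussion: the core computation Tsvi describes, in which each output neuron divisively regulates only its own inputs during recognition, with no lateral inhibition between outputs, can be sketched roughly as below. This is a generic reconstruction in the spirit of Achler's regulatory feedback and Spratling's divisive input modulation, not the exact published formulation; the weight matrix, iteration count, and update rule are illustrative assumptions.

```python
import numpy as np

def regulatory_feedback(W, x, steps=50, eps=1e-9):
    """Iteratively estimate output activations y for input x.

    Each output neuron divisively normalizes (inhibits) only the inputs
    that drive it; there is no lateral inhibition between outputs.
    W: (n_outputs, n_inputs) nonnegative weight matrix.
    """
    n_out, _ = W.shape
    y = np.ones(n_out)                 # all candidate neurons start active
    fan_in = W.sum(axis=1) + eps       # per-neuron total input weight
    for _ in range(steps):
        f = W.T @ y + eps              # feedback: total "use" of each input
        y = y * (W @ (x / f)) / fan_in # each neuron re-weighs its own inputs
    return y

# Two overlapping patterns: neuron 0 uses inputs {0,1}, neuron 1 uses {1,2}.
W = np.array([[1., 1., 0.],
              [0., 1., 1.]])
print(regulatory_feedback(W, np.array([1., 1., 0.])))  # neuron 0 wins
print(regulatory_feedback(W, np.array([1., 2., 1.])))  # both stay active
```

At the fixed point, the active outputs jointly account for the inputs, so which neurons "win" depends on the whole input pattern rather than on pairwise lateral competition; this is the sense in which multiple neurons can be evaluated together and continuously during recognition.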
Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: Tsvi, While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning. A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also, Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ________________________________ From: Connectionists on behalf of Tsvi Achler Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, and then focus especially on the connectionist community. In general, analyses of impact factors etc. show that the most important progress gets silenced until the mainstream picks it up (Impact Factors in novel research: www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled as recurrent networks for learning time, where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning. This is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see video link for further clarification).
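[Editor's note] The rehearsal scheme Tsvi objects to (store all patterns, mix them randomly, present them to the network) can be sketched in a few lines. The toy data, the single logistic unit, and the learning rate here are illustrative assumptions, not anything specified in the thread:

```python
import numpy as np

# A minimal sketch of "rehearsal": every pattern ever seen is stored,
# then the whole buffer is shuffled and replayed to the network each epoch.
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 4))                              # stored patterns
y = (X @ np.array([1., -1., 0.5, 0.]) > 0).astype(float)   # toy labels

w = np.zeros(4)                        # a single logistic unit trained by SGD
for epoch in range(20):
    for i in rng.permutation(len(X)):  # mix old and new patterns randomly...
        pred = 1.0 / (1.0 + np.exp(-X[i] @ w))
        w += 0.1 * (y[i] - pred) * X[i]  # ...and replay the entire buffer

acc = np.mean(((X @ w) > 0) == (y == 1))
print(acc)
```

The random interleaving is what prevents the network from catastrophically forgetting earlier patterns when new ones arrive; the biological-plausibility objection in the thread is precisely that nothing in the brain is known to store and shuffle raw patterns this way.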
Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals, and no funding. But they are important, may negate the need for the rehearsal required by feedforward methods, and thus may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned.
I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help! Jürgen Schmidhuber -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir From terry at salk.edu Fri Nov 5 17:42:24 2021 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 05 Nov 2021 14:42:24 -0700 Subject: Connectionists: NEURAL COMPUTATION - November 1, 2021 In-Reply-To: Message-ID: Neural Computation - Volume 33, Number 11 - November 1, 2021 available online for download now: http://www.mitpressjournals.org/toc/neco/33/11 http://cognet.mit.edu/content/neural-computation ----- Article Parametric UMAP Embeddings for Representation and Semi-supervised Learning Tim Sainburg, Leland McInnes, and Timothy Gentner Letters Replay in Deep Learning: Current Approaches and Missing Biological Elements Tyler Hayes, Giri P.
Krishnan, Maxim Bazhenov, Hava Siegelmann, Terrence Sejnowski, and Christopher Kanan Flexible Transmitter Network Shao-Qun Zhang, Zhi-Hua Zhou Dynamic Spatiotemporal Pattern Recognition With Recurrent Spiking Neural Network Yueming Wang, Jiangrong Shen, and Yueming Wang Completion of the Infeasible Actions of Others: Goal Inference by Dynamical Invariant Takuma Torii, Shohei Hidaka Temporal Variabilities Provide Additional Category-related Information in Object Category Decoding: A Systematic Comparison of Informative EEG Features Hamid Karimi-Rouzbahani, Mozhgan Shahmohammadi, Ehsan Vahab, Saeed Setayeshi, and Thomas Carlson Expansion of Information in the Binary Autoencoder With Random Binary Weights Viacheslav Osaulenko Asymptotic Input-output Relationship Predicts Electric Field Effect on Sublinear Dendritic Integration of AMPA Synapses Yaqin Fan, Xile Wei, Guosheng Yi, Meili Lu, Jiang Wang, and Bin Deng Task Agnostic Continual Learning Using Online Variational Bayes With Fixed-Point Updates Chen Zeno, Itay Golan, Elad Hoffer, and Daniel Soudry ----- ON-LINE -- http://www.mitpressjournals.org/neco MIT Press Journals, One Rogers Street, Cambridge, MA 02142-1209 Tel: (617) 253-2889 FAX: (617) 577-1545 journals-cs at mit.edu ----- From chriskanan at gmail.com Sat Nov 6 09:36:10 2021 From: chriskanan at gmail.com (Christopher Kanan) Date: Sat, 6 Nov 2021 09:36:10 -0400 Subject: Connectionists: Tenured Center/Dept. Director Job Opening in Imaging Science at the Rochester Institute of Technology (Areas include AI, Computer Vision, Human Vision, Remote Sensing, and More) Message-ID: The Rochester Institute of Technology (RIT) seeks a visionary and dynamic leader to serve as the Director of the Chester F. Carlson Center for Imaging Science (CIS). CIS offers BS, MS, and PhD degrees in imaging science, and faculty are engaged in a range of research areas, including machine learning, computer vision, remote sensing, human vision, virtual reality, optics, and more. 
CIS Director appointments are five-year renewable terms. The position is at the rank of tenured full professor. For more information about the Center and program, please see our annual report and visit our website. The application and full details are available here: https://apptrkr.com/2611364 Deadline for receipt of applications is January 10, 2022. On-campus interviews are expected to commence in the February-March time frame. It is expected that the new director will assume the position no later than July 1, 2022. Please contact Dr. Joel Kastner, CIS Director Search Committee Chair, via e-mail (jhkpci at cis.rit.edu), with any questions about the position. From david at irdta.eu Sat Nov 6 14:00:28 2021 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 6 Nov 2021 19:00:28 +0100 (CET) Subject: Connectionists: DeepLearn 2022 Summer: early registration December 11 Message-ID: <84133702.2025295.1636221628764@webmail.strato.com> ****************************************************************** 7th INTERNATIONAL GRAN CANARIA SCHOOL ON DEEP LEARNING DeepLearn 2022 Summer Las Palmas de Gran Canaria, Spain July 25-29, 2022 https://irdta.eu/deeplearn/2022su/ ***************** Co-organized by: University of Las Palmas de Gran Canaria Institute for Research Development, Training and Advice - IRDTA Brussels/London ****************************************************************** Early registration: December 11, 2021 ****************************************************************** SCOPE: DeepLearn 2022 Summer will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Bournemouth, and Guimarães.
Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and main challenges identified, through 24 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully online. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their careers will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2022 Summer is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.
VENUE: DeepLearn 2022 Summer will take place in Las Palmas de Gran Canaria, on the Atlantic Ocean, with a mild climate throughout the year, sandy beaches and a renowned carnival. The venue will be: Institución Ferial de Canarias Avenida de la Feria, 1 35012 Las Palmas de Gran Canaria https://www.infecar.es/index.php?option=com_k2&view=item&layout=item&id=360&Itemid=896 STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event. KEYNOTE SPEAKERS: Wahid Bhimji (Lawrence Berkeley National Laboratory), Deep Learning on Supercomputers for Fundamental Science Rich Caruana (Microsoft Research), Friends Don't Let Friends Deploy Black-box Models: The Importance of Interpretable Neural Nets in Machine Learning Kate Saenko (Boston University), Learning from Biased Data PROFESSORS AND COURSES: Tülay Adalı
(University of Maryland Baltimore County), [intermediate] Data Fusion Using Matrix and Tensor Factorizations Pierre Baldi (University of California Irvine), [intermediate/advanced] Deep Learning: From Theory to Applications in the Natural Sciences Arindam Banerjee (University of Illinois Urbana-Champaign), [intermediate/advanced] Deep Generative and Dynamical Models Mikhail Belkin (University of California San Diego), [intermediate/advanced] Modern Machine Learning and Deep Learning through the Prism of Interpolation Dumitru Erhan (Google), [intermediate/advanced] Visual Self-supervised Learning and World Models Arthur Gretton (University College London), [intermediate/advanced] Probability Divergences and Generative Models Phillip Isola (Massachusetts Institute of Technology), [intermediate] Deep Generative Models Mohit Iyyer (University of Massachusetts Amherst), [intermediate/advanced] Natural Language Generation Irwin King (Chinese University of Hong Kong), [introductory/intermediate] Introduction to Graph Neural Networks Vincent Lepetit (Paris Institute of Technology), [intermediate] AI and 3D Geometry for [Self-supervised] 3D Scene Understanding Yan Liu (University of Southern California), [introductory/intermediate] Deep Learning for Time Series Dimitris N. Metaxas (Rutgers, The State University of New Jersey), [intermediate/advanced] Model-based, Explainable, Semisupervised and Unsupervised Machine Learning for Dynamic Analytics in Computer Vision and Medical Image Analysis Sean Meyn (University of Florida), [introductory/intermediate] Reinforcement Learning: Fundamentals, and Roadmaps for Successful Design Louis-Philippe Morency (Carnegie Mellon University), [intermediate/advanced] Multimodal Machine Learning Wojciech Samek (Fraunhofer Heinrich Hertz Institute), [introductory/intermediate] Explainable AI: Concepts, Methods and Applications Clara I. 
Sánchez (University of Amsterdam), [introductory/intermediate] Mechanisms for Trustworthy AI in Medical Image Analysis and Healthcare Björn W. Schuller (Imperial College London), [introductory/intermediate] Deep Multimedia Processing Jonathon Shlens (Google), [introductory/intermediate] Introduction to Deep Learning in Computer Vision Johan Suykens (KU Leuven), [introductory/intermediate] Deep Learning, Neural Networks and Kernel Machines Csaba Szepesvári (University of Alberta), [intermediate/advanced] Tools and Techniques of Reinforcement Learning to Overcome Bellman's Curse of Dimensionality A. Murat Tekalp (Koç University), [intermediate/advanced] Deep Learning for Image/Video Restoration and Compression Alexandre Tkatchenko (University of Luxembourg), [introductory/intermediate] Machine Learning for Physics and Chemistry Li Xiong (Emory University), [introductory/intermediate] Differential Privacy and Certified Robustness for Deep Learning Ming Yuan (Columbia University), [intermediate/advanced] Low Rank Tensor Methods in High Dimensional Data Analysis OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by July 17, 2022. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022. EMPLOYER SESSION: Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts.
It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles sought, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022. ORGANIZING COMMITTEE: Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) David Silva (London, organization chair) REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2022su/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will be closed and the on-line registration tool disabled when the capacity of the venue is exhausted. It is highly recommended to register prior to the event. FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on-site and for online participation are the same. ACCOMMODATION: Accommodation suggestions will be available in due time at https://irdta.eu/deeplearn/2022su/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Cabildo de Gran Canaria Universidad de Las Palmas de Gran Canaria Institute for Research Development, Training and Advice - IRDTA, Brussels/London From george at cs.ucy.ac.cy Sat Nov 6 05:07:36 2021 From: george at cs.ucy.ac.cy (George A.
Papadopoulos) Date: Sat, 6 Nov 2021 11:07:36 +0200 Subject: Connectionists: 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (IEEE EAIS 2022): Fourth Call for Papers Message-ID: *** Fourth Call for Papers *** 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (IEEE EAIS 2022) May 25-27, 2022, Golden Bay Hotel 5*, Larnaca, Cyprus http://cyprusconferences.org/eais2022/ (Proceedings to be published by the IEEE Xplore Digital Library; Special Journal Issue with Evolving Systems, Springer) IEEE EAIS 2022 will provide a working and friendly atmosphere and will be a leading international forum focusing on the discussion of recent advances, the exchange of recent innovations and the outline of open important future challenges in the area of Evolving and Adaptive Intelligent Systems. Over the past decade, this area has emerged to play an important role on a broad international level in today's real-world applications, especially those with high complexity and dynamic changes. Its embedded modelling and learning methodologies are able to cope with real-time demands, changing operation conditions, varying environmental influences, human behaviours, knowledge expansion scenarios and drifts in online data streams. Conference Topics Basic Methodologies Evolving Soft Computing Techniques. Evolving Fuzzy Systems. Evolving Rule-Based Classifiers. Evolving Neuro-Fuzzy Systems. Adaptive Evolving Neural Networks. Online Genetic and Evolutionary Algorithms. Data Stream Mining. Incremental and Evolving Clustering. Adaptive Pattern Recognition. Incremental and Evolving ML Classifiers. Adaptive Statistical Techniques. Evolving Decision Systems. Big Data. Problems and Methodologies in Data Streams Stability, Robustness, Convergence in Evolving Systems. Online Feature Selection and Dimension Reduction. Online Active and Semi-supervised Learning. Online Complexity Reduction. Computational Aspects. Interpretability Issues.
Incremental Adaptive Ensemble Methods. Online Bagging and Boosting. Self-monitoring Evolving Systems. Human-Machine Interaction Issues. Hybrid Modelling, Transfer Learning. Reservoir Computing. Applications of EAIS Time Series Prediction. Data Stream Mining and Adaptive Knowledge Discovery. Robotics. Intelligent Transport and Advanced Manufacturing. Advanced Communications and Multimedia Applications. Bioinformatics and Medicine. Online Quality Control and Fault Diagnosis. Condition Monitoring Systems. Adaptive Evolving Controller Design. User Activities Recognition. Huge Database and Web Mining. Visual Inspection and Image Classification. Image Processing. Cloud Computing. Multiple Sensor Networks. Query Systems and Social Networks. Alternative Statistical and Machine Learning Approaches. Submissions Submitted papers should not exceed 8 pages plus at most 2 pages overlength. Submissions of full papers are accepted online through EasyChair (https://easychair.org/conferences/?conf=eais2022). The EAIS 2022 proceedings will be published on the IEEE Xplore Digital Library. Authors of selected papers will be invited to submit extended versions for possible inclusion in a special issue of Evolving Systems - An Interdisciplinary Journal for Advanced Science and Technology (Springer). Important Dates - Paper submission: January 10, 2022 - Notification of acceptance/rejection: February 19, 2022 - Camera ready submission: March 20, 2022 - Authors registration: March 20, 2022 - Conference Dates: May 25-27, 2022 Social Media FB: https://www.facebook.com/IEEEEAIS Twitter: https://twitter.com/IEEE_EAIS Linkedin: https://www.linkedin.com/events/2022ieeeconferenceonevolvingand6815560078674972672/ Organization Honorary Chairs - Dimitar Filev, Ford Motor Co., USA - Nikola Kasabov, Auckland University of Technology, New Zealand General Chairs - George A. Papadopoulos, University of Cyprus, Nicosia, Cyprus - Plamen Angelov, Lancaster University, UK Program Committee Chairs -
Giovanna Castellano, University of Bari, Italy - José A. Iglesias, Carlos III University of Madrid, Spain From danko.nikolic at gmail.com Sat Nov 6 08:15:01 2021 From: danko.nikolic at gmail.com (Danko Nikolic) Date: Sat, 6 Nov 2021 13:15:01 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Hi Adam, This point is well made. However, there are limits to how clear one can be: clarity sometimes requires time, effort and further work. For that reason, people need support before they are finished, i.e., before they are able to show things with clarity. I guess (but by no means am I sure) that this is where Tsvi is. He has something promising but still somewhat foggy, something that needs more work. Moreover, there is one more problem that may be haunting Tsvi (again, I am not sure): the more a new idea deviates from the currently accepted paradigm, the harder it is to achieve the clarity that you hope for. Radically new ideas have a harder time getting through people's minds. Much like you made a list of well-accepted examples fitting into the classical paradigm of brain and AI, one can make a list of examples in science where it took a long time until the ideas were accepted because they did not fit into the classical paradigm of the time. Sometimes it took many years, and occasionally the field recognized the significance of the ideas only after the death of the author. So, could these authors have made their points more clearly, like you advise Tsvi to do, and not waited for so long? We don't know for sure. My opinion is that it is not very likely that they could have done a lot more.
It would have been hard for them to write a paper that instantly changes people's minds. Maybe you can watch the video that Tsvi posted and see how clear it is to you. For me, it is pretty damn clear. Still, it is hard to wrap one's head around all the implications of that approach. This is my two cents. Danko Dr. Danko Nikoli? www.danko-nikolic.com https://www.linkedin.com/in/danko-nikolic/ --- A progress usually starts with an insight --- On Fri, Nov 5, 2021 at 10:16 PM Adam Krawitz wrote: > Tsvi, > > > > I?m just a lurker on this list, with no skin in the game, but perhaps that > gives me a more neutral perspective. In the spirit of progress: > > > > 1. If you have a neural network approach that you feel provides a new > and important perspective on cognitive processes, then write up a paper > making that argument clearly, and I think you will find that the community > is incredibly open to that. Yes, if they see holes in the approach they > will be pointed out, but that is all part of the scientific exchange. > Examples of this approach include: Elman (1990) Finding Structure in Time, > Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow > a Mind: Statistics, Structure, and Abstraction (not neural nets, but a > ?new? approach to modelling cognition). I?m sure others can provide more > examples. > 2. I?m much less familiar with how things work on the applied side, > but I have trouble believing that Google or anyone else will be dismissive > of a computational approach that actually works. Why would they? They just > want to solve problems efficiently. Demonstrate that your approach can > solve a problem more effectively (or at least as effectively) as the > existing approaches, and they will come running. Examples of this include: > Tesauro?s TD-Gammon, which was influential in demonstrating the power of > RL, and LeCun et al.?s convolutional NN for the MNIST digits. 
> > > > Clearly communicate the novel contribution of your approach and I think > you will find a receptive audience. > > > > Thanks, > > Adam > > > > > > *From:* Connectionists *On > Behalf Of *Tsvi Achler > *Sent:* November 4, 2021 9:46 AM > *To:* gary at ucsd.edu > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > > > Lastly Feedforward methods are predominant in a large part because they > have financial backing from large companies with advertising and clout like > Google and the self-driving craze that never fully materialized. > > > > Feedforward methods are not fully connectionist unless rehearsal for > learning is implemented with neurons. That means storing all patterns, > mixing them randomly and then presenting to a network to learn. As far as > I know, no one is doing this in the community, so feedforward methods are > only partially connectionist. By allowing popularity to predominate and > choking off funds and presentation of alternatives we are cheating > ourselves from pursuing other more rigorous brain-like methods. > > > > Sincerely, > > -Tsvi > > > > > > On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: > > Gary- Thanks for the accessible online link to the book. > > > > I looked especially at the inhibitory feedback section of the book which > describes an Air Conditioner AC type feedback. > > It then describes a general field-like inhibition based on all activations > in the layer. It also describes the role of inhibition in sparsity and > feedforward inhibition, > > > > The feedback described in Regulatory Feedback is similar to the AC > feedback but occurs for each neuron individually, vis-a-vis its inputs. > > Thus for context, regulatory feedback is not a field-like inhibition, it > is very directed based on the neurons that are activated and their inputs. 
> This sort of regulation is also the foundation of Homeostatic Plasticity > findings (albeit with changes in Homeostatic regulation in experiments > occurring in a slower time scale). The regulatory feedback model describes > the effect and role in recognition of those regulated connections in real > time during recognition. > > > > I would be happy to discuss further and collaborate on writing about the > differences between the approaches for the next book or review. > > > > And I want to point out to folks, that the system is based on politics and > that is why certain work is not cited like it should, but even worse these > politics are here in the group today and they continue to very > strongly influence decisions in the connectionist community and holds us > back. > > > > Sincerely, > > -Tsvi > > > > On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > > Tsvi - While I think Randy and Yuko's book > is actually somewhat better than > the online version (and buying choices on amazon start at $9.99), there > *is* an online version. > > Randy & Yuko's models take into account feedback and inhibition. > > > > On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: > > Daniel, > > > > Does your book include a discussion of Regulatory or Inhibitory Feedback > published in several low impact journals between 2008 and 2014 (and in > videos subsequently)? > > These are networks where the primary computation is inhibition back to the > inputs that activated them and may be very counterintuitive given today's > trends. You can almost think of them as the opposite of Hopfield networks. > > > > I would love to check inside the book but I dont have an academic budget > that allows me access to it and that is a huge part of the problem with how > information is shared and funding is allocated. I could not get access to > any of the text or citations especially Chapter 4: "Competition, Lateral > Inhibition, and Short-Term Memory", to weigh in. 
> > > > I wish the best circulation for your book, but even if the Regulatory > Feedback Model is in the book, that does not change the fundamental problem > if the book is not readily available. > > > > The same goes for Steve Grossberg's book: I cannot easily look inside. > With regard to Adaptive Resonance I don't subscribe to lateral inhibition > as a predominant mechanism, but I do believe a function such as vigilance > is very important during recognition and Adaptive Resonance is one of > a very few models that have it. The Regulatory Feedback model I have > developed (and Michael Spratling studies a similar model as well) is built > primarily using the vigilance type of connections and allows multiple > neurons to be evaluated at the same time and continuously during > recognition in order to determine which (single or multiple neurons > together) match the inputs the best without lateral inhibition. > > > > Unfortunately within conferences and talks predominated by the Adaptive > Resonance crowd I have experienced the familiar dismissiveness and did not > have an opportunity to give a proper talk. This goes back to the larger > issue of academic politics based on small self-selected committees, the > same issues that exist with the feedforward crowd, and pretty much all of > academia. > > > > Today's information age algorithms such as Google's can determine > relevance of information and ways to display it, but the hegemony of the > journal systems and the small committee system of academia developed in the > middle ages (and their mutual synergies) block the use of more modern > methods in research. Thus we are stuck with this problem, which especially > affects those who are trying to introduce something new and > counterintuitive, and hence the results described in the two National > Bureau of Economic Research articles I cited in my previous message. > > > > Thomas, I am happy to have more discussions and/or start a different > thread.
> > > > Sincerely, > > Tsvi Achler MD/PhD > > > > > > > > On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: > > Tsvi, > > > > While deep learning and feedforward networks have an outsize popularity, > there are plenty of published sources that cover a much wider variety of > networks, many of them more biologically based than deep learning. A > treatment of a range of neural network approaches, going from simpler to > more complex cognitive functions, is found in my textbook *Introduction > to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also > Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021) > emphasizes a variety of architectures with a strong biological basis. > > > > > > Best, > > > > > > Dan Levine > ------------------------------ > > *From:* Connectionists on > behalf of Tsvi Achler > *Sent:* Saturday, October 30, 2021 3:13 AM > *To:* Schmidhuber Juergen > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > > > Since the title of the thread is Scientific Integrity, I want to point out > some issues about trends in academia and then focus especially on the > connectionist community. > > > > In general, analyzing impact factors etc., the most important progress gets > silenced until the mainstream picks it up (Impact Factors in novel > research: www.nber.org/.../working_papers/w22180/w22180.pdf) and > often this may take a generation > (https://www.nber.org/.../does-science-advance-one-funeral...). > > > > The connectionist field is stuck on feedforward networks and variants such > as with inhibition of competitors (e.g. lateral inhibition), or other > variants that are sometimes labeled as recurrent networks for learning time > where the feedforward networks can be rewound in time. > > > > This stasis is specifically occurring with the popularity of deep > learning.
This is often portrayed as neurally plausible connectionism but > requires an implausible amount of rehearsal and is not connectionist if > this rehearsal is not implemented with neurons (see video link for further > clarification). > > > > Models which have true feedback (e.g. back to their own inputs) cannot > learn by backpropagation but there is plenty of evidence these types of > connections exist in the brain and are used during recognition. Thus they > get ignored: no talks in universities, no featuring in "premier" journals > and no funding. > > > > But they are important and may negate the need for rehearsal as needed in > feedforward methods. Thus they may be essential for moving connectionism > forward. > > > > If the community is truly dedicated to brain-motivated algorithms, I > recommend giving more time to networks other than feedforward networks. > > > > Video: > https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 > > > > > Sincerely, > > Tsvi Achler > > > > > > > > On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: > > Hi, fellow artificial neural network enthusiasts! > > The connectionists mailing list is perhaps the oldest mailing list on > ANNs, and many neural net pioneers are still subscribed to it. I am hoping > that some of them - as well as their contemporaries - might be able to > provide additional valuable insights into the history of the field. > > Following the great success of massive open online peer review (MOOR) for > my 2015 survey of deep learning (now the most cited article ever published > in the journal Neural Networks), I've decided to put forward another piece > for MOOR. I want to thank the many experts who have already provided me > with comments on it.
Please send additional relevant references and > suggestions for improvements for the following draft directly to me at > juergen at idsia.ch: > > > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html > > > The above is a point-for-point critique of factual errors in ACM's > justification of the ACM A. M. Turing Award for deep learning and a > critique of the Turing Lecture published by ACM in July 2021. This work can > also be seen as a short history of deep learning, at least as far as ACM's > errors and the Turing Lecture are concerned. > > I know that some view this as a controversial topic. However, it is the > very nature of science to resolve controversies through facts. Credit > assignment is as core to scientific history as it is to machine learning. > My aim is to ensure that the true history of our field is preserved for > posterity. > > Thank you all in advance for your help! > > Jürgen Schmidhuber > > > > > > > > > > > -- > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > Computer Science and Engineering 0404 > IF USING FEDEX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego - > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > Schedule: http://tinyurl.com/b7gxpwo > > > > *Listen carefully,* > *Neither the Vedas* > *Nor the Qur'an* > *Will teach you this:* > *Put the bit in its mouth,* > *The saddle on its back,* > *Your foot in the stirrup,* > *And ride your wild runaway mind* > *All the way to heaven.* > > *-- Kabir* > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From david at irdta.eu Sat Nov 6 13:56:44 2021 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 6 Nov 2021 18:56:44 +0100 (CET) Subject: Connectionists: DeepLearn 2022 Spring: early registration November 15 Message-ID: <75087521.2025139.1636221404636@webmail.strato.com> ****************************************************************** 6th INTERNATIONAL SCHOOL ON DEEP LEARNING DeepLearn 2022 Spring Guimarães, Portugal April 18-22, 2022 https://irdta.eu/deeplearn/2022sp/ ***************** Co-organized by: Algoritmi Center University of Minho, Guimarães Institute for Research Development, Training and Advice – IRDTA Brussels/London ****************************************************************** Early registration: November 15, 2021 ****************************************************************** SCOPE: DeepLearn 2022 Spring will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, and Bournemouth. Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and main challenges identified through 24 courses of four and a half hours each and 3 keynote lectures, which will tackle the most active and promising topics.
The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face to face interaction and networking will be main ingredients of the event. It will also be possible to participate fully live online. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2022 Spring is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators. VENUE: DeepLearn 2022 Spring will take place in Guimarães, in the north of Portugal, listed as a UNESCO World Heritage Site and often referred to as the birthplace of the country. The venue will be: Hotel de Guimarães Eduardo Manuel de Almeida 202 4810-440 Guimarães http://www.hotel-guimaraes.com/ STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face to face interaction and networking in this kind of research training event.
KEYNOTE SPEAKERS: Christopher Manning (Stanford University), Self-supervised and Naturally Supervised Learning Using Language Kate Smith-Miles (University of Melbourne), Stress-testing Algorithms via Instance Space Analysis Zhongming Zhao (University of Texas, Houston), Deep Learning Approaches for Predicting Virus-Host Interactions and Drug Response PROFESSORS AND COURSES: Eneko Agirre (University of the Basque Country), [introductory/intermediate] Natural Language Processing in the Pretrained Language Model Era Mohammed Bennamoun (University of Western Australia), [intermediate/advanced] Deep Learning for 3D Vision Altan Çakır (Istanbul Technical University), [introductory] Introduction to Deep Learning with Apache Spark Rylan Conway (Amazon), [introductory/intermediate] Deep Learning for Digital Assistants Jifeng Dai (SenseTime Research), [intermediate] AutoML for Generic Computer Vision Tasks Jianfeng Gao (Microsoft Research), [introductory/intermediate] An Introduction to Conversational Information Retrieval Daniel George (JPMorgan Chase), [introductory] An Introductory Course on Machine Learning and Deep Learning with Mathematica/Wolfram Language Bohyung Han (Seoul National University), [introductory/intermediate] Robust Deep Learning Lina J.
Karam (Lebanese American University), [introductory/intermediate] Deep Learning for Quality Robust Visual Recognition Xiaoming Liu (Michigan State University), [intermediate] Deep Learning for Trustworthy Biometrics Jennifer Ngadiuba (Fermi National Accelerator Laboratory), [intermediate] Ultra Low-latency and Low-area Machine Learning Inference at the Edge Lucila Ohno-Machado (University of California, San Diego), [introductory] Use of Predictive Models in Medicine and Biomedical Research Bhiksha Raj (Carnegie Mellon University), [introductory] Quantum Computing and Neural Networks Bart ter Haar Romeny (Eindhoven University of Technology), [intermediate] Deep Learning and Perceptual Grouping Kaushik Roy (Purdue University), [intermediate] Re-engineering Computing with Neuro-inspired Learning: Algorithms, Architecture, and Devices Walid Saad (Virginia Polytechnic Institute and State University), [intermediate/advanced] Machine Learning for Wireless Communications: Challenges and Opportunities Yvan Saeys (Ghent University), [introductory/intermediate] Interpreting Machine Learning Models Martin Schultz (Jülich Research Centre), [intermediate] Deep Learning for Air Quality, Weather and Climate Richa Singh (Indian Institute of Technology, Jodhpur), [introductory/intermediate] Trusted AI Sofia Vallecorsa (European Organization for Nuclear Research), [introductory/intermediate] Deep Generative Models for Science: Example Applications in Experimental Physics Michalis Vazirgiannis (École Polytechnique), [intermediate/advanced] Machine Learning with Graphs and Applications Guowei Wei (Michigan State University), [introductory/advanced] Integrating AI and Advanced Mathematics with Experimental Data for Forecasting Emerging SARS-CoV-2 Variants Xiaowei Xu (University of Arkansas, Little Rock), [intermediate/advanced] Deep Learning for NLP and Causal Inference Guoying Zhao (University of Oulu), [introductory/intermediate] Vision-based Emotion AI OPEN SESSION: An open session
will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by April 10, 2022. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by April 10, 2022. EMPLOYER SESSION: Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by April 10, 2022. ORGANIZING COMMITTEE: Dalila Durães (Braga, co-chair) José Machado (Braga, co-chair) Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) Paulo Novais (Braga, co-chair) David Silva (London, co-chair) REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2022sp/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first come first served basis. The registration period will be closed and the on-line registration tool disabled when the capacity of the venue is exhausted.
It is highly recommended to register prior to the event. FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. ACCOMMODATION: Accommodation suggestions are available at https://irdta.eu/deeplearn/2022sp/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Centro Algoritmi, University of Minho, Guimarães School of Engineering, University of Minho Intelligent Systems Associate Laboratory, University of Minho Rovira i Virgili University Municipality of Guimarães Institute for Research Development, Training and Advice – IRDTA, Brussels/London -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at irdta.eu Sat Nov 6 13:58:23 2021 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 6 Nov 2021 18:58:23 +0100 (CET) Subject: Connectionists: DeepLearn 2022 Winter: early registration November 26 Message-ID: <40556952.2025200.1636221503159@webmail.strato.com> ****************************************************************** 5th INTERNATIONAL SCHOOL ON DEEP LEARNING DeepLearn 2022 Winter Bournemouth, UK January 17-21, 2022 https://irdta.eu/deeplearn/2022wi/ *********** Co-organized by: Department of Computing and Informatics Bournemouth University Institute for Research Development, Training and Advice – IRDTA Brussels/London ****************************************************************** Early registration: November 26, 2021 ****************************************************************** SCOPE: DeepLearn 2022 Winter will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw and Las Palmas de Gran Canaria.
Deep learning is a branch of artificial intelligence covering a spectrum of current exciting research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of different environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and main challenges identified through 22 courses of four and a half hours each and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face to face interaction and networking will be main components of the event. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2022 Winter is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators. VENUE: DeepLearn 2022 Winter will take place in Bournemouth, a coastal resort town on the south coast of England.
The venue will be: Talbot Campus Bournemouth University https://www.bournemouth.ac.uk/about/contact-us/directions-maps/directions-our-talbot-campus STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers want to emphasize the importance of face to face interaction and networking in this kind of research training event. KEYNOTE SPEAKERS: Yi Ma (University of California, Berkeley), White-box Deep (Convolution) Networks from the Principle of Rate Reduction Daphna Weinshall (Hebrew University of Jerusalem), Curriculum Learning in Deep Networks Eric P. Xing (Carnegie Mellon University), It Is Time for Deep Learning to Understand Its Expense Bills PROFESSORS AND COURSES: Peter L. Bartlett (University of California, Berkeley), [intermediate/advanced] Deep Learning: A Statistical Viewpoint Joachim M.
Buhmann (Swiss Federal Institute of Technology, Zürich), [introductory/advanced] Model and Algorithm Validation for Data Science Matias Carrasco Kind (University of Illinois, Urbana-Champaign), [intermediate] Anomaly Detection Nitesh Chawla (University of Notre Dame), [introductory/intermediate] Graph Representation Learning Seungjin Choi (BARO AI Academy), [introductory/intermediate] Bayesian Optimization over Continuous, Discrete, or Hybrid Spaces Sumit Chopra (New York University), [intermediate] Deep Learning in Healthcare Rüdiger Dillmann (Karlsruhe Institute of Technology), [introductory/intermediate] Building Brains for Robots Marco Duarte (University of Massachusetts, Amherst), [introductory/intermediate] Explainable Machine Learning Charles Elkan (University of California, San Diego), [intermediate] AI and ML Applications in Finance and Retail João Gama (University of Porto), [introductory] Learning from Data Streams: Challenges, Issues, and Opportunities Claus Horn (Zurich University of Applied Sciences), [intermediate] Deep Learning for Biotechnology Nathalie Japkowicz (American University), [intermediate/advanced] Learning from Class Imbalances Gregor Kasieczka (University of Hamburg), [introductory/intermediate] Deep Learning Fundamental Physics: Rare Signals, Unsupervised Anomaly Detection, and Generative Models Karen Livescu (Toyota Technological Institute at Chicago), [intermediate/advanced] Speech Processing: Automatic Speech Recognition and beyond David McAllester (Toyota Technological Institute at Chicago), [intermediate/advanced] Information Theory for Deep Learning Dhabaleswar K. Panda (Ohio State University), [intermediate] Exploiting High-performance Computing for Deep Learning: Why and How? Fabio Roli (University of Cagliari), [introductory/intermediate] Adversarial Machine Learning Jude W.
Shavlik (University of Wisconsin, Madison), [introductory/intermediate] Advising, Explaining, Distilling, and Quantizing Deep Neural Networks Kunal Talwar (Apple), [introductory/intermediate] Foundations of Differentially Private Learning Tinne Tuytelaars (KU Leuven), [introductory/intermediate] Continual Learning in Deep Neural Networks Lyle Ungar (University of Pennsylvania), [intermediate] Natural Language Processing using Deep Learning Yu-Dong Zhang (University of Leicester), [introductory/intermediate] Convolutional Neural Networks and Their Applications to COVID-19 Diagnosis OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by January 9, 2022. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by January 9, 2022. EMPLOYER SESSION: Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by January 9, 2022. 
ORGANIZING COMMITTEE: Rashid Bakirov (Bournemouth, co-chair) Nan Jiang (Bournemouth, co-chair) Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) David Silva (London, co-chair) REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2022wi/registration/ The selection of up to 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first come first served basis. The registration period will be closed and the on-line registration tool disabled when the capacity of the venue is exhausted. It is highly recommended to register prior to the event. FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. ACCOMMODATION: Accommodation suggestions are available at https://irdta.eu/deeplearn/2022wi/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Bournemouth University Institute for Research Development, Training and Advice – IRDTA, Brussels/London -------------- next part -------------- An HTML attachment was scrubbed... URL: From rloosemore at susaro.com Sat Nov 6 10:06:59 2021 From: rloosemore at susaro.com (Richard Loosemore) Date: Sat, 6 Nov 2021 10:06:59 -0400 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: <30c7773c-2925-268b-9b96-8ba95938708f@susaro.com> Adam, 1) Tsvi Achler has already done the things you ask, many times over, so it behooves you to check for that before you tell him to do it. Instructing someone to "clearly communicate the novel contribution of your approach" when they have already done so is an insult. 2) The whole point of this discussion is that when someone "makes an argument clearly" the community is NOT "incredibly open to that." Quite the opposite: the community's attention is fickle, tribal, fad-driven, and fundamentally broken. 3) When you say that you "have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works," that truly boggles the mind. a) There is no precise definition for "actually works" -- there is no global measure of goodness in the space of approaches. b) Getting the attention of someone at e.g. Google is a non-trivial feat in itself: just ignoring outsiders is, for Google, a perfectly acceptable option. c) What do you suppose would be the reaction of an engineer at Google who gets handed a paper by their boss, and is asked "What do you think of this?" Suppose the paper describes an approach that is inimical to what that engineer has been doing their whole career. So much so, that if Google goes all-in on this new thing, the engineer's skillset will be devalued to junk status. What would the engineer do? They would say "I read it. It's just garbage." Best Richard Loosemore On 11/5/21 1:01 PM, Adam Krawitz wrote: > > Tsvi, > > I'm just a lurker on this list, with no skin in the game, but perhaps > that gives me a more neutral perspective. In the spirit of progress: > > 1.
If you have a neural network approach that you feel provides a new > and important perspective on cognitive processes, then write up a > paper making that argument clearly, and I think you will find that > the community is incredibly open to that. Yes, if they see holes > in the approach they will be pointed out, but that is all part of > the scientific exchange. Examples of this approach include: Elman > (1990) Finding Structure in Time, Kohonen (1990) The > Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: > Statistics, Structure, and Abstraction (not neural nets, but a > ?new? approach to modelling cognition). I?m sure others can > provide more examples. > 2. I?m much less familiar with how things work on the applied side, > but I have trouble believing that Google or anyone else will be > dismissive of a computational approach that actually works. Why > would they? They just want to solve problems efficiently. > Demonstrate that your approach can solve a problem more > effectively (or at least as effectively) as the existing > approaches, and they will come running. Examples of this include: > Tesauro?s TD-Gammon, which was influential in demonstrating the > power of RL, and LeCun et al.?s convolutional NN for the MNIST digits. > > Clearly communicate the novel contribution of your approach and I > think you will find a receptive audience. > > Thanks, > > Adam > > *From:*Connectionists > *On Behalf Of *Tsvi Achler > *Sent:* November 4, 2021 9:46 AM > *To:* gary at ucsd.edu > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > Lastly Feedforward methods are predominant in a large part because > they have financial backing from large companies with advertising and > clout like Google and the self-driving craze that?never fully > materialized. > > Feedforward methods are not fully connectionist unless rehearsal for > learning is implemented with neurons.? 
That means storing all > patterns, mixing them randomly and then presenting to a network to > learn. As far as I know, no one is doing this in the community, so > feedforward methods are only partially connectionist. By allowing > popularity to predominate and choking off funds and presentation of > alternatives we are cheating ourselves from pursuing other more > rigorous brain-like methods. > > Sincerely, > > -Tsvi > > On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: > > Gary- Thanks for the accessible online link to the book. > > I looked especially at the inhibitory feedback section of the book > which describes an Air Conditioner AC type feedback. > > It then describes a general field-like inhibition based on all > activations in the layer.? It also describes the role of inhibition in > sparsity?and feedforward inhibition, > > The feedback described in Regulatory Feedback is similar to the AC > feedback but occurs for each neuron individually, vis-a-vis its inputs. > > Thus for context, regulatory feedback is not a field-like inhibition, > it is very directed based on the neurons that are activated and their > inputs.? This sort of regulation is also the foundation of Homeostatic > Plasticity findings (albeit with changes in Homeostatic regulation in > experiments occurring in a slower time scale).? The regulatory > feedback model describes the effect and role in recognition of those > regulated connections in real time during recognition. > > I would be happy to discuss further and collaborate on writing about > the differences between the approaches for the next book or review. > > And I want to point out to folks, that the system is based on politics > and that is why certain work is not cited like it should, but even > worse these politics are here in the group today and they continue to > very strongly?influence decisions in the connectionist?community > and?holds us back. 
> > Sincerely, > > -Tsvi > > On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > > Tsvi - While I think Randy and Yuko's book > is actually somewhat better > than the online version (and buying choices on amazon start at $9.99), > there *is* an online version. > > Randy & Yuko's models take into account feedback and inhibition. > > On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: > > Daniel, > > Does your book include a discussion of Regulatory or Inhibitory > Feedback, published in several low-impact journals between 2008 and > 2014 (and in videos subsequently)? > > These are networks where the primary computation is inhibition > back to the inputs that activated them, and they may be very > counterintuitive given today's trends. You can almost think of > them as the opposite of Hopfield networks. > > I would love to check inside the book, but I don't have an academic > budget that allows me access to it, and that is a huge part of the > problem with how information is shared and funding is allocated. I > could not get access to any of the text or citations, > especially Chapter 4: "Competition, Lateral Inhibition, and > Short-Term Memory", to weigh in. > > I wish the best circulation for your book, but even if the > Regulatory Feedback Model is in the book, that does not change the > fundamental problem if the book is not readily available. > > The same goes with Steve Grossberg's book; I cannot easily look > inside. With regard to Adaptive Resonance, I don't subscribe to > lateral inhibition as a predominant mechanism, but I do believe a > function such as vigilance is very important during recognition, > and Adaptive Resonance is one of a very few models that have it.
> The Regulatory > Feedback model I have developed (and Michael > Spratling studies a similar model as well) is built primarily > using the vigilance type of connections and allows multiple > neurons to be evaluated at the same time and continuously during > recognition, in order to determine which (single or multiple > neurons together) match the inputs the best without lateral > inhibition. > > Unfortunately, within conferences and talks dominated by the > Adaptive Resonance crowd I have experienced the > familiar dismissiveness and did not have an opportunity to give a > proper talk. This goes back to the larger issue of academic > politics based on small self-selected committees, the same issues > that exist with the feedforward crowd, and pretty much all of > academia. > > Today's information-age algorithms such as Google's can determine > the relevance of information and ways to display it, but the hegemony of > the journal system and the small-committee system of academia, > developed in the Middle Ages (and their mutual synergies), block > the use of more modern methods in research. Thus we are stuck > with this problem, which especially affects those who are trying > to introduce something new and counterintuitive, and hence the > results described in the two National Bureau of Economic Research > articles I cited in my previous message. > > Thomas, I am happy to have more discussions and/or start a > different thread. > > Sincerely, > > Tsvi Achler MD/PhD > > On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S > wrote: > > Tsvi, > > While deep learning and feedforward networks have an outsize > popularity, there are plenty of published sources that cover a > much wider variety of networks, many of them more biologically > based than deep learning.
A treatment of a range of neural > network approaches, going from simpler to more complex > cognitive functions, is found in my textbook /Introduction to > Neural and Cognitive Modeling/ (3rd edition, Routledge, > 2019). Also, Steve Grossberg's book /Conscious Mind, Resonant > Brain/ (Oxford, 2021) emphasizes a variety of architectures > with a strong biological basis. > > Best, > > Dan Levine > > ------------------------------------------------------------------------ > > *From:* Connectionists > on behalf of > Tsvi Achler > *Sent:* Saturday, October 30, 2021 3:13 AM > *To:* Schmidhuber Juergen > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 > Turing Lecture, etc. > > Since the title of the thread is Scientific Integrity, I want > to point out some issues about trends in academia, focusing > especially on the connectionist community. > > In general, analyzing impact factors etc., the most important > progress gets silenced until the mainstream picks it up ("Impact > Factors in novel research", > www.nber.org/.../working_papers/w22180/w22180.pdf > ), and > often this may take a generation > (https://www.nber.org/.../does-science-advance-one-funeral... > ). > > The connectionist field is stuck on feedforward networks and > variants, such as those with inhibition of competitors (e.g. lateral > inhibition), or other variants that are sometimes labeled as > recurrent networks for learning time, where the feedforward > networks can be rewound in time. > > This stasis is specifically occurring with the popularity of > deep learning. This is often portrayed as neurally plausible > connectionism, but it requires an implausible amount of rehearsal > and is not connectionist if this rehearsal is not implemented > with neurons (see video link for further clarification). > > Models which have true feedback (e.g.
back to their own > inputs) cannot learn by backpropagation, but there is plenty of > evidence these types of connections exist in the brain and are > used during recognition. Thus they get ignored: no talks in > universities, no featuring in "premier" journals, and no funding. > > But they are important and may negate the need for rehearsal > as needed in feedforward methods. Thus they may be essential for > moving connectionism forward. > > If the community is truly dedicated to brain-motivated > algorithms, I recommend giving more time to networks other > than feedforward networks. > > Video: > https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 > > > Sincerely, > > Tsvi Achler > > On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: > > Hi, fellow artificial neural network enthusiasts! > > The connectionists mailing list is perhaps the oldest > mailing list on ANNs, and many neural net pioneers are > still subscribed to it. I am hoping that some of them - as > well as their contemporaries - might be able to provide > additional valuable insights into the history of the field. > > Following the great success of massive open online peer > review (MOOR) for my 2015 survey of deep learning (now the > most cited article ever published in the journal Neural > Networks), I've decided to put forward another piece for > MOOR. I want to thank the many experts who have already > provided me with comments on it. Please send additional > relevant references and suggestions for improvements for > the following draft directly to me at juergen at idsia.ch: > > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html > > > The above is a point-for-point critique of factual errors > in ACM's justification of the ACM A. M. Turing Award for > deep learning and a critique of the Turing Lecture > published by ACM in July 2021.
This work can also be seen > as a short history of deep learning, at least as far as > ACM's errors and the Turing Lecture are concerned. > > I know that some view this as a controversial topic. > However, it is the very nature of science to resolve > controversies through facts. Credit assignment is as core > to scientific history as it is to machine learning. My aim > is to ensure that the true history of our field is > preserved for posterity. > > Thank you all in advance for your help! > > Jürgen Schmidhuber > > > > > > > > > -- > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > Computer Science and Engineering 0404 > IF USING FEDEX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego - > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > Schedule: http://tinyurl.com/b7gxpwo > > Listen carefully, > Neither the Vedas > Nor the Qur'an > Will teach you this: > Put the bit in its mouth, > The saddle on its back, > Your foot in the stirrup, > And ride your wild runaway mind > All the way to heaven. > > -- Kabir > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doya at oist.jp Sun Nov 7 01:05:33 2021 From: doya at oist.jp (Kenji Doya) Date: Sun, 7 Nov 2021 06:05:33 +0000 Subject: Connectionists: Theoretical Sciences Visiting Program 2022 at OIST (Application by Nov. 30, 2021) Message-ID: <9842B306-E171-4DA2-B6EF-B4608A18AEDD@oist.jp> We are pleased to call for Long-Term Visiting Scholars in Theoretical Sciences at OIST as below. **** As part of an initiative to develop a vibrant program in theoretical sciences, OIST invites applications for Visiting Scholars in the Theoretical Sciences. The Okinawa Institute of Science and Technology (OIST) is a research institution with a Ph.D. program that attracts outstanding graduate students and faculty internationally.
Applicants should have a Ph.D. in a relevant field and be independent researchers, i.e., should have an academic standing equivalent to senior postdoc, junior group leader, or faculty member at a research-intensive institution. The position has a flexible term of 2 to 12 months taken continuously starting between April 2022 and March 2023 with a preference for longer-term visitors. Visitors through this program are encouraged to collaborate with one or more research units in OIST during their stay. The appointment may be taken while on leave or sabbatical from another university and comes with housing, research funds, and moving allowance. This program can provide a per diem as per OIST rules. In exceptional situations where the applicant does not receive any salary from grants or their institution at the time of their visit to OIST, successful candidates may be able to apply for support with salary, where this is necessary to facilitate the visit, and they can provide evidence of their need. No teaching duties come with the position. However, candidates are strongly encouraged to give a short series of lectures to students and researchers in the OIST community on a topic of their choosing, and we ask that they include details of this in their cover letter. OIST is an institution with no departments, aiming to eliminate barriers between people working in different fields. It provides a family-friendly working environment including the multilingual Child Development Center and has proactive policies designed to promote a culture of diversity. OIST is an equal opportunity, affirmative action educator, and employer. Further details can be found at https://www.oist.jp/. Qualifications Applicants should have a Ph.D. in a relevant field and be independent researchers, i.e., should have an academic standing equivalent to senior postdoc, junior group leader, or faculty member at a research-intensive institution. 
Application Instructions To apply, please go to our Interfolio page: https://apply.interfolio.com/96304 Applicants should submit a CV including a list of publications (limit 10 pages), and provide a brief description of their research to date and proposed research while visiting OIST (limit 2 pages). In your proposal, please explicitly mention OIST units you wish to collaborate with, and include your intended start date and length of stay (a date range is acceptable). All visits are subject to the availability of funding. To be considered for this year's funding, the applicant's intended stay should start no later than March 2023. Applications received by November 30, 2021 are guaranteed full consideration. If you have any questions, please contact the Faculty Affairs Office at tsvp at oist.jp ---- Kenji Doya Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University 1919-1 Tancha, Onna, Okinawa 904-0495, Japan Phone: +81-98-966-8594; Fax: +81-98-966-2891 https://groups.oist.jp/ncu From ASIM.ROY at asu.edu Sun Nov 7 03:12:47 2021 From: ASIM.ROY at asu.edu (Asim Roy) Date: Sun, 7 Nov 2021 08:12:47 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Hi Tsvi, You will find definitions of localist and distributed representation in this paper. I refer to more than one source, and they essentially mean the same thing. It's a fairly standard definition used in cognitive science and neuroscience. https://www.frontiersin.org/articles/10.3389/fpsyg.2012.00551/full This paper has some definitions of grandmother cells: https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00300/full Your argument about rehearsal is interesting.
Perhaps do a video explaining the difference between your method and the globalist method. Rehearsal is definitely computationally expensive. If there's a smarter way to do the same, that would definitely be of interest. Juergen, sorry that this discussion deviates from what you started. Improper credit assignment is indeed a problem in this field. I had this note about 10 years back from one of the SVM gurus: "Your RBF nets are very close to Support Vector Machines with kernels, so you were ahead of the curve. Now SVM are one of the most successful learning methodologies." The problem in this field shows up in many other forms, one being fear of dominant personalities and their theories. I have seen that firsthand in my private debates over the years. Best, Asim From: Tsvi Achler Sent: Saturday, November 6, 2021 10:23 PM To: Asim Roy Cc: Adam Krawitz ; connectionists ; Schmidhuber Juergen ; Levine, Daniel S ; Juyang Weng Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Hi Asim, I love your globalist-localist dichotomy as it refers to learning. In localist learning, only the neuron that is associated with the learning label needs to be modified to learn. Feedforward methods are globalist because if you want to modify a label you have to rehearse with all the old and the new patterns, and many weights may change anywhere within the network. Unfortunately, when I try to publish papers using this distinction I find globalist and localist can mean many things across related literatures. Do you have good references that nail down the definitions and history from the beginning of connectionism? I would love to put that up and clarify it within a general search, maybe even place it on Wikipedia. I find the localist-globalist distinction very important. Regulatory feedback learns in a localist way.
Regulatory feedback may also help address the criticism by those who subscribe to globalist methods and criticise the need of non-distributed single grandmother cells. Regulatory feedback can support a localist method that is also distributed. Due to regulatory feedback, if multiple neurons encode the same thing then they create a linearly dependent type of situation. You can create many neurons that encode the same thing (let's say grandmother) and get the same result as if you had one neuron. In this paradigm all of the neurons that encode the same thing can simply be summed up and they act as a single neuron. Thus any combination of neurons can be used in total to represent the same thing and if one neuron dies this does not affect the sum at all. This linearly dependent situation also yields interesting results especially when you put a regulatory feedback network into the paradigm of LTP and LTD experiments. Within LTP & LTD activation and induction, you can get changes where a different linearly dependent neuron takes priority, looking like synaptic change but occurring without any synaptic change. This can greatly change our understanding of brain recognition and design of experiments. Although I may not expect the community to immediately adopt this paradigm and results, I do expect it to be accepted as a possibility to be thought about and analyzed, and placed in literature where it will be read. Instead it is snuffed out. You won't be able to read it in a publication. For example, I had a conference paper on this linear dependence accepted a while ago, but since I did not have funding, I could not provide the $500 the conference demanded. As a pay-to-play paywall journal they did not care about the situation. Moreover as a long term non-funded researcher I am no longer willing to put my time and efforts into publishing work where it will not be read or where a cost to the author is demanded even if there is no funding. 
If the connectionist community and academia really want to support new ideas, they have to equitably provide funding, make publications free, or make things less political. If there are enough people interested and subscribed to view videos, I can make a video to show the dynamics that create this situation, just like I did for regulatory feedback. Sincerely -Tsvi On Sat, Nov 6, 2021, 1:57 AM Asim Roy > wrote: Over a period of more than 25 years, I have had the opportunity to argue about the brain in both public forums and private discussions. And they included very well-known scholars such as Walter Freeman (UC-Berkeley), Horace Barlow (Cambridge; great grandson of Charles Darwin), Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of whom are on this list. And many became good friends through these debates. We argued about many issues over the years, but the one that baffled me the most was the one about localist vs. distributed representation. Here's the issue. As far as I know, although all the Nobel prizes in the field of neurophysiology - from Hubel and Wiesel (simple and complex cells) and Moser and O'Keefe (grid and place cells) to the current one on discovery of temperature and touch sensitive receptors and neurons - are about finding "meaning" in single or a group of dedicated cells, the distributed representation theory has yet to explain these findings of "meaning." Contrary to the assertion that the field is open-minded, I think most in this field are afraid to cross the red line. Horace Barlow was the exception. He was perhaps the only neuroscientist who was willing to cross the red line and declare that "grandmother cells will be found." After a debate on this issue in 2012, which included Walter Freeman and others, Horace visited me in Phoenix at the age of 91 for further discussion.
If the field is open-minded, I would love to hear how distributed representation is compatible with finding "meaning" in the activations of single or a dedicated group of cells. Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy From: Connectionists > On Behalf Of Adam Krawitz Sent: Friday, November 5, 2021 10:01 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Tsvi, I'm just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress: 1. If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach they will be pointed out, but that is all part of the scientific exchange. Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a "new" approach to modelling cognition). I'm sure others can provide more examples. 2. I'm much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively than (or at least as effectively as) the existing approaches, and they will come running. Examples of this include: Tesauro's TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.'s convolutional NN for the MNIST digits. Clearly communicate the novel contribution of your approach and I think you will find a receptive audience.
Thanks, Adam From: Connectionists > On Behalf Of Tsvi Achler Sent: November 4, 2021 9:46 AM To: gary at ucsd.edu Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Lastly, feedforward methods are predominant in large part because they have financial backing from large companies with advertising and clout, like Google and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly, and then presenting them to a network to learn. As far as I know, no one is doing this in the community, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives, we are cheating ourselves out of pursuing other more rigorous brain-like methods. Sincerely, -Tsvi On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler > wrote: Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book, which describes an air-conditioner (AC) type of feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition. The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-a-vis its inputs. Thus, for context, regulatory feedback is not a field-like inhibition; it is very directed, based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with changes in Homeostatic regulation in experiments occurring on a slower time scale). The regulatory feedback model describes the effect and role in recognition of those regulated connections in real time during recognition.
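As a rough sketch of what the per-neuron, input-directed regulation described above might look like computationally (the update rule and all names below are my own paraphrase, closer in spirit to Spratling-style divisive input modulation than to any specific published equations): each input is divided by the total feedback it receives from the outputs it drives, and each output then rescales itself by its regulated inputs, so the inhibition is directed rather than field-like:

```python
import numpy as np

def regulatory_feedback(W, x, steps=100, eps=1e-9):
    """Toy per-neuron regulatory (divisive) feedback during recognition.
    W: (neurons x inputs), non-negative, each row summing to 1.
    Each input is divided by the total feedback it receives from active
    outputs; each output then rescales itself by its regulated inputs."""
    y = np.ones(W.shape[0])            # all candidate neurons start active
    for _ in range(steps):
        feedback = W.T @ y + eps       # feedback onto each individual input
        e = x / feedback               # directed per-input regulation
        y = y * (W @ e)                # each neuron driven by its own inputs
    return y

# Neuron 0 accounts for both features; neuron 1 accounts only for feature 2.
W = np.array([[0.5, 0.5],
              [0.0, 1.0]])
x = np.array([1.0, 1.0])
print(regulatory_feedback(W, x))  # neuron 0 wins; neuron 1 decays toward 0
```

At the fixed point, the remaining active neurons jointly account for the input, which is one way multiple neurons can be evaluated simultaneously and continuously during recognition without lateral inhibition.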
I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks that the system is based on politics, and that is why certain work is not cited as it should be; but even worse, these politics are here in the group today, and they continue to very strongly influence decisions in the connectionist community and hold us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu > wrote: Tsvi - While I think Randy and Yuko's book is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version. Randy & Yuko's models take into account feedback and inhibition. On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler > wrote: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback, published in several low-impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them, and they may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes with Steve Grossberg's book; I cannot easily look inside.
With regard to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of a very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time and continuously during recognition, in order to determine which (single or multiple neurons together) match the inputs the best without lateral inhibition. Unfortunately, within conferences and talks dominated by the Adaptive Resonance crowd I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms such as Google's can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia, developed in the Middle Ages (and their mutual synergies), block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread. Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S > wrote: Tsvi, While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning.
A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also, Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ________________________________ From: Connectionists > on behalf of Tsvi Achler > Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen > Cc: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, focusing especially on the connectionist community. In general, analyzing impact factors etc., the most important progress gets silenced until the mainstream picks it up ("Impact Factors in novel research", www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled as recurrent networks for learning time, where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning. This is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals, and no funding.
But they are important and may negate the need for rehearsal as needed in feedforward methods. Thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity.
Thank you all in advance for your help! Jürgen Schmidhuber -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir -------------- next part -------------- An HTML attachment was scrubbed... URL: From achler at gmail.com Sun Nov 7 01:23:08 2021 From: achler at gmail.com (Tsvi Achler) Date: Sat, 6 Nov 2021 22:23:08 -0700 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Hi Asim, I love your globalist-localist dichotomy as it refers to learning. In localist learning, only the neuron that is associated with the learning label needs to be modified to learn. Feedforward methods are globalist because if you want to modify a label you have to rehearse with all the old and the new patterns, and many weights may change anywhere within the network. Unfortunately, when I try to publish papers using this distinction I find globalist and localist can mean many things across related literatures. Do you have good references that nail down the definitions and history from the beginning of connectionism? I would love to put that up and clarify it within a general search, maybe even place it on Wikipedia. I find the localist-globalist distinction very important. Regulatory feedback learns in a localist way.
This is why it does not need the same rehearsal as feedforward networks. Regulatory feedback may also help address the criticism by those who subscribe to globalist methods and criticize the need for non-distributed single grandmother cells. Regulatory feedback can support a localist method that is also distributed. Due to regulatory feedback, if multiple neurons encode the same thing then they create a linearly dependent situation. You can create many neurons that encode the same thing (let's say grandmother) and get the same result as if you had one neuron. In this paradigm all of the neurons that encode the same thing can simply be summed up, and together they act as a single neuron. Thus any combination of neurons can be used in total to represent the same thing, and if one neuron dies this does not affect the sum at all. This linearly dependent situation also yields interesting results, especially when you put a regulatory feedback network into the paradigm of LTP and LTD experiments. Within LTP & LTD activation and induction protocols, you can get changes where a different linearly dependent neuron takes priority, looking like synaptic change but occurring without any synaptic change. This can greatly change our understanding of brain recognition and the design of experiments. Although I may not expect the community to immediately adopt this paradigm and these results, I do expect them to be accepted as a possibility to be thought about and analyzed, and placed in the literature where they will be read. Instead they are snuffed out. You won't be able to read about them in a publication. For example, I had a conference paper on this linear dependence accepted a while ago, but since I did not have funding, I could not provide the $500 the conference demanded. As a pay-to-play venue, they did not care about the situation.
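The summed-duplicates claim above can be checked with a small numerical sketch. The update rule below is a generic divisive-feedback iteration in the general spirit of regulatory feedback (and of Spratling-style divisive input modulation); it is an illustrative assumption, not the exact published model, and the patterns and names are invented for the example:

```python
import numpy as np

def divisive_feedback(W, x, steps=50, eps=1e-9):
    # Each output unit repeatedly rescales itself by how well its own
    # inputs are accounted for, after each input is divided by the total
    # feedback it receives from all active units.
    n_units, _ = W.shape
    y = np.full(n_units, 1.0 / n_units)     # uniform initial activations
    norm = W.sum(axis=1)                    # per-unit weight normalizer
    for _ in range(steps):
        q = y @ W + eps                     # feedback arriving at each input
        y = (y / norm) * (W @ (x / q))      # per-unit, input-directed update
    return y

cat = np.array([1., 1., 0., 0.])            # toy binary patterns
dog = np.array([0., 0., 1., 1.])
x = cat.copy()                              # present the "cat" input

yA = divisive_feedback(np.stack([cat, dog]), x)        # one unit per pattern
yB = divisive_feedback(np.stack([cat, cat, dog]), x)   # "cat" unit duplicated

# The two linearly dependent duplicates share the activation: their SUM
# matches the single unit's activation in the undivided network.
print(yA[0], yB[0] + yB[1])                 # approximately equal
```

Because the duplicates always receive the same multiplicative update, only their sum matters to the rest of the network, which is the sense in which a localist code can still be spread over redundant units.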
Moreover, as a long-term non-funded researcher I am no longer willing to put my time and efforts into publishing work where it will not be read or where a cost to the author is demanded even if there is no funding. If the connectionist community and academia really want to support new ideas, they have to equitably provide funding, make publications free, or make things less political. If there are enough people interested and subscribed to view videos I can make a video to show the dynamics that create this situation just like I did for regulatory feedback. Sincerely -Tsvi On Sat, Nov 6, 2021, 1:57 AM Asim Roy wrote: > Over a period of more than 25 years, I have had the opportunity to argue > about the brain in both public forums and private discussions. And they > included very well-known scholars such as Walter Freeman (UC-Berkeley), > Horace Barlow (Cambridge; great grandson of Charles Darwin), Jay McClelland > (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen > Institute), Teuvo Kohonen (Finland) and many others, some of whom are on > this list. And many became good friends through these debates. > > > > We argued about many issues over the years, but the one that baffled me > the most was the one about localist vs. distributed representation. Here's > the issue. As far as I know, although all the Nobel prizes in the field of > neurophysiology - from Hubel and Wiesel (simple and complex cells) and > Moser and O'Keefe (grid and place cells) to the current one on the discovery of > temperature and touch sensitive receptors and neurons - are about finding > "meaning" in single or a group of dedicated cells, the distributed > representation theory has yet to explain these findings of "meaning." > Contrary to the assertion that the field is open-minded, I think most in > this field are afraid to cross the red line. > > > > Horace Barlow was the exception.
He was perhaps the only neuroscientist > who was willing to cross the red line and declare that "grandmother cells > will be found." After a debate on this issue in 2012, which included Walter > Freeman and others, Horace visited me in Phoenix at the age of 91 for > further discussion. > > > > If the field is open-minded, I would love to hear how distributed > representation is compatible with finding "meaning" in the activations of > single or a dedicated group of cells. > > > > Asim Roy > > Professor, Arizona State University > > Lifeboat Foundation Bios: Professor Asim Roy > > > > > > > *From:* Connectionists *On > Behalf Of *Adam Krawitz > *Sent:* Friday, November 5, 2021 10:01 AM > *To:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > > > Tsvi, > > > > I'm just a lurker on this list, with no skin in the game, but perhaps that > gives me a more neutral perspective. In the spirit of progress: > > > > 1. If you have a neural network approach that you feel provides a new > and important perspective on cognitive processes, then write up a paper > making that argument clearly, and I think you will find that the community > is incredibly open to that. Yes, if they see holes in the approach they > will be pointed out, but that is all part of the scientific exchange. > Examples of this approach include: Elman (1990) Finding Structure in Time, > Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow > a Mind: Statistics, Structure, and Abstraction (not neural nets, but a > "new" approach to modelling cognition). I'm sure others can provide more > examples. > 2. I'm much less familiar with how things work on the applied side, > but I have trouble believing that Google or anyone else will be dismissive > of a computational approach that actually works. Why would they? They just > want to solve problems efficiently.
Demonstrate that your approach can > solve a problem more effectively than (or at least as effectively as) the > existing approaches, and they will come running. Examples of this include: > Tesauro's TD-Gammon, which was influential in demonstrating the power of > RL, and LeCun et al.'s convolutional NN for the MNIST digits. > > > > Clearly communicate the novel contribution of your approach and I think > you will find a receptive audience. > > > > Thanks, > > Adam > > > > > > *From:* Connectionists *On > Behalf Of *Tsvi Achler > *Sent:* November 4, 2021 9:46 AM > *To:* gary at ucsd.edu > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > > > Lastly, feedforward methods are predominant in large part because they > have financial backing from large companies with advertising and clout like > Google and the self-driving craze that never fully materialized. > > > > Feedforward methods are not fully connectionist unless rehearsal for > learning is implemented with neurons. That means storing all patterns, > mixing them randomly and then presenting them to a network to learn. As far as > I know, no one is doing this in the community, so feedforward methods are > only partially connectionist. By allowing popularity to predominate and > choking off funds and presentation of alternatives we are cheating > ourselves out of pursuing other more rigorous brain-like methods. > > > > Sincerely, > > -Tsvi > > > > > > On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: > > Gary- Thanks for the accessible online link to the book. > > > > I looked especially at the inhibitory feedback section of the book, which > describes an Air Conditioner (AC) type feedback. > > It then describes a general field-like inhibition based on all activations > in the layer.
It also describes the role of inhibition in sparsity and > feedforward inhibition. > > > > The feedback described in Regulatory Feedback is similar to the AC > feedback but occurs for each neuron individually, vis-a-vis its inputs. > > Thus for context, regulatory feedback is not a field-like inhibition; it > is very directed, based on the neurons that are activated and their inputs. > This sort of regulation is also the foundation of Homeostatic Plasticity > findings (albeit with changes in Homeostatic regulation in experiments > occurring on a slower time scale). The regulatory feedback model describes > the effect and role of those regulated connections in real > time during recognition. > > > > I would be happy to discuss further and collaborate on writing about the > differences between the approaches for the next book or review. > > > > And I want to point out to folks that the system is based on politics, and > that is why certain work is not cited as it should be; but even worse, these > politics are here in the group today and they continue to very > strongly influence decisions in the connectionist community and hold us > back. > > > > Sincerely, > > -Tsvi > > > > On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > > Tsvi - While I think Randy and Yuko's book > is > actually somewhat better than the online version (and buying choices on > amazon start at $9.99), there *is* an online version. > > > > Randy & Yuko's models take into account feedback and inhibition. > > > > On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: > > Daniel, > > > > Does your book include a discussion of Regulatory or Inhibitory Feedback > published in several low impact journals between 2008 and 2014 (and in > videos subsequently)? > > These are networks where the primary computation is inhibition back to the > inputs that activated them and may be very counterintuitive given today's > trends. You can almost think of them as the opposite of Hopfield networks.
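To make the contrast concrete for readers skimming the thread: in field-like lateral inhibition every unit in a layer feels the pooled activity, while in the directed feedback described above, inhibition is routed back only to the inputs that actually drive each unit. A minimal sketch with toy numbers (illustrative only, not code from any of the cited models):

```python
import numpy as np

W = np.array([[1., 1., 0.],    # unit 0 listens to inputs 0 and 1
              [0., 0., 1.]])   # unit 1 listens to input 2 only
y = np.array([0.8, 0.2])       # current output activations

# Field-like inhibition: one pooled value applied to the whole layer,
# regardless of which inputs drove which unit.
field = np.full_like(y, y.sum())

# Directed (regulatory-style) feedback: each INPUT is inhibited only by
# the units it actually feeds, via the transposed weights.
directed = W.T @ y

print(field)      # [1. 1.]
print(directed)   # [0.8 0.8 0.2]
```

The point is only that the inhibition pattern is input-specific rather than a single field value; homeostatic-style regulation would then adjust such per-input terms on a slower time scale.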
> > > > I would love to check inside the book but I don't have an academic budget > that allows me access to it, and that is a huge part of the problem with how > information is shared and funding is allocated. I could not get access to > any of the text or citations, especially Chapter 4: "Competition, Lateral > Inhibition, and Short-Term Memory", to weigh in. > > > > I wish the best circulation for your book, but even if the Regulatory > Feedback Model is in the book, that does not change the fundamental problem > if the book is not readily available. > > > > The same goes for Steve Grossberg's book; I cannot easily look inside. > With regards to Adaptive Resonance, I don't subscribe to lateral inhibition > as a predominant mechanism, but I do believe a function such as vigilance > is very important during recognition, and Adaptive Resonance is one of > a very few models that have it. The Regulatory Feedback model I have > developed (and Michael Spratling studies a similar model as well) is built > primarily using the vigilance type of connections and allows multiple > neurons to be evaluated at the same time and continuously during > recognition in order to determine which (single or multiple neurons > together) match the inputs the best, without lateral inhibition. > > > > Unfortunately, within conferences and talks predominated by the Adaptive > Resonance crowd I have experienced the familiar dismissiveness and did not > have an opportunity to give a proper talk. This goes back to the larger > issue of academic politics based on small self-selected committees, the > same issues that exist with the feedforward crowd, and pretty much all of > academia. > > > > Today's information age algorithms such as Google's can determine > relevance of information and ways to display it, but the hegemony of the > journal systems and the small committee system of academia developed in the > middle ages (and their mutual synergies) block the use of more modern > methods in research.
Thus we are stuck with this problem, which especially > affects those who are trying to introduce something new and > counterintuitive, and hence the results described in the two National > Bureau of Economic Research articles I cited in my previous message. > > > > Thomas, I am happy to have more discussions and/or start a different > thread. > > > > Sincerely, > > Tsvi Achler MD/PhD > > > > > > > > On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: > > Tsvi, > > > > While deep learning and feedforward networks have an outsize popularity, > there are plenty of published sources that cover a much wider variety of > networks, many of them more biologically based than deep learning. A > treatment of a range of neural network approaches, going from simpler to > more complex cognitive functions, is found in my textbook *Introduction > to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also > Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021) > emphasizes a variety of architectures with a strong biological basis. > > > > > > Best, > > > > > > Dan Levine > ------------------------------ > > *From:* Connectionists on > behalf of Tsvi Achler > *Sent:* Saturday, October 30, 2021 3:13 AM > *To:* Schmidhuber Juergen > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > > > Since the title of the thread is Scientific Integrity, I want to point out > some issues about trends in academia, especially focusing on the > connectionist community. > > > > In general, when you analyze impact factors etc., the most important progress gets > silenced until the mainstream picks it up (Impact Factors in novel > research: www.nber.org/.../working_papers/w22180/w22180.pdf > ), and > often this may take a generation > (https://www.nber.org/.../does-science-advance-one-funeral... > > ).
> > > > The connectionist field is stuck on feedforward networks and variants such > as those with inhibition of competitors (e.g. lateral inhibition), or other > variants that are sometimes labeled as recurrent networks for learning over time, > where the feedforward networks can be rewound in time. > > > > This stasis is specifically occurring with the popularity of deep > learning. This is often portrayed as neurally plausible connectionism but > requires an implausible amount of rehearsal and is not connectionist if > this rehearsal is not implemented with neurons (see video link for further > clarification). > > > > Models which have true feedback (e.g. back to their own inputs) cannot > learn by backpropagation, but there is plenty of evidence these types of > connections exist in the brain and are used during recognition. Thus they > get ignored: no talks in universities, no featuring in "premier" journals > and no funding. > > > > But they are important and may negate the need for rehearsal as needed in > feedforward methods. Thus they may be essential for moving connectionism > forward. > > > > If the community is truly dedicated to brain-motivated algorithms, I > recommend giving more time to networks other than feedforward networks. > > > > Video: > https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 > > > > > Sincerely, > > Tsvi Achler > > > > > > > > On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: > > Hi, fellow artificial neural network enthusiasts! > > The connectionists mailing list is perhaps the oldest mailing list on > ANNs, and many neural net pioneers are still subscribed to it. I am hoping > that some of them - as well as their contemporaries - might be able to > provide additional valuable insights into the history of the field.
> [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From danko.nikolic at gmail.com Sun Nov 7 03:13:37 2021 From: danko.nikolic at gmail.com (Danko Nikolic) Date: Sun, 7 Nov 2021 09:13:37 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <30c7773c-2925-268b-9b96-8ba95938708f@susaro.com> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <30c7773c-2925-268b-9b96-8ba95938708f@susaro.com> Message-ID: I agree with Richard. Would it make sense to have a conference, a journal, a special issue of a journal, or a book dedicated solely to ideas in neuroscience that challenge the establishment? These ideas would still need to be in agreement with the empirical data but, at the same time, they must be as much in disagreement with the current dominant paradigm(s) as possible. Moreover, would it make sense to rate the ideas, not based on how many other scientists like them, but how many other lifetime works they are likely to destroy (like the career of Richard's hypothetical engineer at Google)? Maybe something good could be born out of such an effort. But who is going to compile the list and edit the book? Who is willing to shoot themselves in the foot for the (potential) good of neuroscience? Regards, Dr. Danko Nikolić
www.danko-nikolic.com https://www.linkedin.com/in/danko-nikolic/ --- Progress usually starts with an insight --- On Sun, Nov 7, 2021 at 12:31 AM Richard Loosemore wrote: > > Adam, > > 1) Tsvi Achler has already done the things you ask, many times over, so it > behooves you to check for that before you tell him to do it. Instructing > someone to "clearly communicate the novel contribution of your approach" > when they have already done so is an insult. > > 2) The whole point of this discussion is that when someone "makes an > argument clearly" the community is NOT "incredibly open to that." Quite > the opposite: the community's attention is fickle, tribal, fad-driven, and > fundamentally broken. > > 3) When you say that you "have trouble believing that Google or anyone > else will be dismissive of a computational approach that actually works," > that truly boggles the mind. > > a) There is no precise definition of "actually works" -- there is no > global measure of goodness in the space of approaches. > > b) Getting the attention of someone at e.g. Google is a non-trivial > feat in itself: just ignoring outsiders is, for Google, a perfectly > acceptable option. > > c) What do you suppose would be the reaction of an engineer at Google > who gets handed a paper by their boss and is asked "What do you think of > this?" Suppose the paper describes an approach that is inimical to what > that engineer has been doing their whole career. So much so that if Google > goes all-in on this new thing, the engineer's skillset will be devalued to > junk status. What would the engineer do? They would say "I read it. It's > just garbage." > > Best > > Richard Loosemore > > > > On 11/5/21 1:01 PM, Adam Krawitz wrote: > > Tsvi, > > > > I'm just a lurker on this list, with no skin in the game, but perhaps that > gives me a more neutral perspective. In the spirit of progress: > > > > 1.
> [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From srodrigues at bcamath.org Sun Nov 7 03:44:51 2021 From: srodrigues at bcamath.org (Serafim Rodrigues) Date: Sun, 7 Nov 2021 09:44:51 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <30c7773c-2925-268b-9b96-8ba95938708f@susaro.com> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <30c7773c-2925-268b-9b96-8ba95938708f@susaro.com> Message-ID: The points made by Richard, Danko and Tsvi are rock solid to me! I fully second their points!
By the way Tsvi, thank you for proposing a novel idea to the community and, in fact, I went through your paper and video. I was not aware of your work and I found it very stimulating! Science needs these debates and more open-minded scientists... With many thanks Serafim On Sun, 7 Nov 2021 at 00:28, Richard Loosemore wrote: > > Adam, > > 1) Tsvi Achler has already done the things you ask, many times over, so it > behooves you to check for that before you tell him to do it. Instructing > someone to "clearly communicate the novel contribution of your approach" > when they have already done so is an insult. > > 2) The whole point of this discussion is that when someone "makes an > argument clearly" the community is NOT "incredibly open to that." Quite > the opposite: the community's attention is fickle, tribal, fad-driven, and > fundamentally broken. > > 3) When you say that you "have trouble believing that Google or anyone > else will be dismissive of a computational approach that actually works," > that truly boggles the mind. > > a) There is no precise definition for "actually works" -- there is no > global measure of goodness in the space of approaches. > > b) Getting the attention of someone at e.g. Google is a non-trivial > feat in itself: just ignoring outsiders is, for Google, a perfectly > acceptable option. > > c) What do you suppose would be the reaction of an engineer at Google > who gets handed a paper by their boss, and is asked "What do you think of > this?" Suppose the paper describes an approach that is inimical to what > that engineer has been doing their whole career. So much so, that if Google > goes all-in on this new thing, the engineer's skillset will be devalued to > junk status. What would the engineer do? They would say "I read it. It's > just garbage."
> > Best > > Richard Loosemore > > > > On 11/5/21 1:01 PM, Adam Krawitz wrote: > > Tsvi, > > > > I'm just a lurker on this list, with no skin in the game, but perhaps that > gives me a more neutral perspective. In the spirit of progress: > > > > 1. If you have a neural network approach that you feel provides a new > and important perspective on cognitive processes, then write up a paper > making that argument clearly, and I think you will find that the community > is incredibly open to that. Yes, if they see holes in the approach they > will be pointed out, but that is all part of the scientific exchange. > Examples of this approach include: Elman (1990) Finding Structure in Time, > Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow > a Mind: Statistics, Structure, and Abstraction (not neural nets, but a > 'new' approach to modelling cognition). I'm sure others can provide more > examples. > 2. I'm much less familiar with how things work on the applied side, > but I have trouble believing that Google or anyone else will be dismissive > of a computational approach that actually works. Why would they? They just > want to solve problems efficiently. Demonstrate that your approach can > solve a problem more effectively than (or at least as effectively as) the > existing approaches, and they will come running. Examples of this include: > Tesauro's TD-Gammon, which was influential in demonstrating the power of > RL, and LeCun et al.'s convolutional NN for the MNIST digits. > > > > Clearly communicate the novel contribution of your approach and I think > you will find a receptive audience. > > > > Thanks, > > Adam > > > > > > *From:* Connectionists > *On Behalf Of *Tsvi Achler > *Sent:* November 4, 2021 9:46 AM > *To:* gary at ucsd.edu > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc.
> > > > Lastly, feedforward methods are predominant in large part because they > have financial backing from large companies with advertising and clout, like > Google, and from the self-driving craze that never fully materialized. > > > > Feedforward methods are not fully connectionist unless rehearsal for > learning is implemented with neurons. That means storing all patterns, > mixing them randomly and then presenting them to a network to learn. As far as > I know, no one is doing this in the community, so feedforward methods are > only partially connectionist. By allowing popularity to predominate and > choking off funds and presentation of alternatives we are cheating > ourselves out of pursuing other more rigorous brain-like methods. > > > > Sincerely, > > -Tsvi > > > > > > On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: > > Gary- Thanks for the accessible online link to the book. > > > > I looked especially at the inhibitory feedback section of the book, which > describes an air-conditioner (AC) type of feedback. > > It then describes a general field-like inhibition based on all activations > in the layer. It also describes the role of inhibition in sparsity and > feedforward inhibition. > > > > The feedback described in Regulatory Feedback is similar to the AC > feedback but occurs for each neuron individually, vis-à-vis its inputs. > > Thus for context, regulatory feedback is not a field-like inhibition; it > is very directed, based on the neurons that are activated and their inputs. > This sort of regulation is also the foundation of Homeostatic Plasticity > findings (albeit with changes in Homeostatic regulation in experiments > occurring on a slower time scale). The regulatory feedback model describes > the effect and role of those regulated connections in real > time during recognition. > > > > I would be happy to discuss further and collaborate on writing about the > differences between the approaches for the next book or review.
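[Editor's note: the rehearsal scheme Tsvi describes above — store all patterns, mix them randomly, then re-present them to the network — can be sketched in a few lines. This is a minimal illustrative sketch under my own assumptions (the function name and the `model_update` callable are hypothetical), not anyone's published implementation:]

```python
import random

def train_with_rehearsal(model_update, old_patterns, new_patterns, epochs=1):
    """Interleaved rehearsal: keep every stored pattern, mix it with the
    new ones, shuffle, and re-present the whole buffer to the learner.

    `model_update` is any callable taking one (input, target) pair and
    performing a single learning step. Retaining `old_patterns` is what
    makes this 'rehearsal' rather than purely online learning -- and is
    the storage cost the post objects to.
    """
    buffer = list(old_patterns) + list(new_patterns)
    for _ in range(epochs):
        random.shuffle(buffer)      # mix old and new before presentation
        for x, y in buffer:
            model_update(x, y)
    return buffer                   # rehearsal requires keeping everything
```

The point of the sketch is simply that the shuffled replay buffer lives outside the network: nothing neural stores, mixes, or schedules the patterns.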
> > > > And I want to point out to folks that the system is based on politics, and > that is why certain work is not cited as it should be; even worse, these > politics are here in the group today, and they continue to very > strongly influence decisions in the connectionist community and hold us > back. > > > > Sincerely, > > -Tsvi > > > > On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > > Tsvi - While I think Randy and Yuko's book > is actually somewhat better than > the online version (and buying choices on amazon start at $9.99), there > *is* an online version. > > Randy & Yuko's models take into account feedback and inhibition. > > > > On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: > > Daniel, > > > > Does your book include a discussion of Regulatory or Inhibitory Feedback > published in several low-impact journals between 2008 and 2014 (and in > videos subsequently)? > > These are networks where the primary computation is inhibition back to the > inputs that activated them, and they may be very counterintuitive given today's > trends. You can almost think of them as the opposite of Hopfield networks. > > > > I would love to check inside the book but I don't have an academic budget > that allows me access to it, and that is a huge part of the problem with how > information is shared and funding is allocated. I could not get access to > any of the text or citations, especially Chapter 4: "Competition, Lateral > Inhibition, and Short-Term Memory", to weigh in. > > > > I wish the best circulation for your book, but even if the Regulatory > Feedback Model is in the book, that does not change the fundamental problem > if the book is not readily available. > > > > The same goes for Steve Grossberg's book; I cannot easily look inside.
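[Editor's note: to make "inhibition back to the inputs that activated them" concrete, here is a loose numerical sketch of that style of computation. It is my own illustrative reconstruction — iterative divisive regulation of inputs by output feedback — and should not be read as Achler's or Spratling's exact published equations:]

```python
import numpy as np

def regulatory_feedback(W, x, steps=50, eps=1e-9):
    # W[a, b]: non-negative weight from input b to output a (assumption).
    # Each output sends inhibitory feedback only to the inputs that
    # activate it; inputs are divided by the total feedback they receive,
    # and outputs are re-estimated from the regulated inputs. Competition
    # is mediated through shared inputs, not output-to-output inhibition.
    n_out = W.shape[0]
    y = np.ones(n_out) / n_out                 # start with uniform activity
    for _ in range(steps):
        feedback = W.T @ y                     # inhibition back to the inputs
        regulated = x / (feedback + eps)       # divisive regulation of inputs
        y = y * (W @ regulated) / (W.sum(axis=1) + eps)
    return y
```

With `W` the identity and only the first input active, the first output wins outright; with overlapping receptive fields, the outputs that jointly explain the inputs best dominate, again without any lateral inhibition between outputs.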
> With regards to Adaptive Resonance, I don't subscribe to lateral inhibition > as a predominant mechanism, but I do believe a function such as vigilance > is very important during recognition, and Adaptive Resonance is one of > a very few models that have it. The Regulatory Feedback model I have > developed (and Michael Spratling studies a similar model as well) is built > primarily using the vigilance type of connections and allows multiple > neurons to be evaluated at the same time and continuously during > recognition, in order to determine which neurons (single or multiple neurons > together) match the inputs the best without lateral inhibition. > > > > Unfortunately, within conferences and talks dominated by the Adaptive > Resonance crowd I have experienced the familiar dismissiveness and did not > have an opportunity to give a proper talk. This goes back to the larger > issue of academic politics based on small self-selected committees, the > same issues that exist with the feedforward crowd, and pretty much all of > academia. > > > > Today's information-age algorithms, such as Google's, can determine the > relevance of information and ways to display it, but the hegemony of the > journal system and the small-committee system of academia, developed in the > Middle Ages (and their mutual synergies), block the use of more modern > methods in research. Thus we are stuck with this problem, which especially > affects those who are trying to introduce something new and > counterintuitive, and hence the results described in the two National > Bureau of Economic Research articles I cited in my previous message. > > > > Thomas, I am happy to have more discussions and/or start a different > thread.
> > > > Sincerely, > > Tsvi Achler MD/PhD > > > > > > > > On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: > > Tsvi, > > > > While deep learning and feedforward networks have an outsize popularity, > there are plenty of published sources that cover a much wider variety of > networks, many of them more biologically based than deep learning. A > treatment of a range of neural network approaches, going from simpler to > more complex cognitive functions, is found in my textbook *Introduction > to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also, > Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021) > emphasizes a variety of architectures with a strong biological basis. > > > > > > Best, > > > > > > Dan Levine > ------------------------------ > > *From:* Connectionists on > behalf of Tsvi Achler > *Sent:* Saturday, October 30, 2021 3:13 AM > *To:* Schmidhuber Juergen > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > > > Since the title of the thread is Scientific Integrity, I want to point out > some issues about trends in academia, focusing especially on the > connectionist community. > > > > In general, when analyzing impact factors etc., the most important progress gets > silenced until the mainstream picks it up ("Impact Factors in novel > research": www.nber.org/.../working_papers/w22180/w22180.pdf), and > often this may take a generation > (https://www.nber.org/.../does-science-advance-one-funeral...). > > > > The connectionist field is stuck on feedforward networks and variants, such > as those with inhibition of competitors (e.g. lateral inhibition), or other > variants that are sometimes labeled as recurrent networks for learning over time, > where the feedforward networks can be rewound in time. > > > > This stasis is specifically occurring with the popularity of deep > learning.
This is often portrayed as neurally plausible connectionism but > requires an implausible amount of rehearsal and is not connectionist if > this rehearsal is not implemented with neurons (see video link for further > clarification). > > > > Sincerely, > > Tsvi Achler > > -- Serafim Rodrigues Group Leader *BCAM - *Basque Center for Applied Mathematics Alameda de Mazarredo, 14 E-48009 Bilbao, Basque Country - Spain Tel. +34 946 567 842 srodrigues at bcamath.org | www.bcamath.org/srodrigues *(**matematika mugaz bestalde)* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From barak at pearlmutter.net Sun Nov 7 12:57:54 2021 From: barak at pearlmutter.net (Barak A. Pearlmutter) Date: Sun, 7 Nov 2021 17:57:54 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <30c7773c-2925-268b-9b96-8ba95938708f@susaro.com> Message-ID: How can we hope to understand the brain when we cannot even understand each other? From pubconference at gmail.com Sun Nov 7 20:12:55 2021 From: pubconference at gmail.com (Pub Conference) Date: Sun, 7 Nov 2021 20:12:55 -0500 Subject: Connectionists: Call for paper-Neural Computing and Applications, Topical Collection on Interpretation of Deep Learning Message-ID: Neural Computing and Applications Topical Collection on Interpretation of Deep Learning: Prediction, Representation, Modeling and Utilization https://www.springer.com/journal/521/updates/19187658 Aims, Scope and Objective While Big Data offers great potential for revolutionizing all aspects of our society, harvesting valuable knowledge from Big Data is an extremely challenging task. The large-scale and rapidly growing information hidden in unprecedented volumes of non-traditional data requires the development of decision-making algorithms. Recent successes in machine learning, particularly deep learning, have led to breakthroughs in real-world applications such as autonomous driving, healthcare, cybersecurity, speech and image recognition, personalized news feeds, and financial markets. While these models may provide state-of-the-art, impressive prediction accuracies, they usually offer little insight into the inner workings of the model and how a decision is made.
Decision-makers cannot obtain human-intelligible explanations for the decisions of such models, which impedes their application in mission-critical areas. The situation is even worse in complex data analytics. It is, therefore, imperative to develop explainable computational intelligence models with excellent predictive accuracy that provide a safe, reliable, and scientific basis for decision-making. Numerous recent works have presented various endeavors on this issue but left many important questions unresolved. The first challenging problem is how to construct self-explanatory models, or how to improve the explicit understanding and explainability of a model, without loss of accuracy. In addition, high-dimensional or ultra-high-dimensional data are common in large and complex data analytics; in these cases, the construction of an interpretable model becomes quite difficult and complex. Further, the evaluation and quantification of a model's explainability still lack a consistent and clear description. Moreover, an auditable, repeatable, and reliable modeling process is crucial to decision-makers: they need explicit explanation and analysis of the intermediate features produced in a model, so the interpretation of intermediate processes is also required. Finally, the problem of efficient optimization remains open for explainable computational intelligence models. All of these raise essential issues for the development of explainable data analytics in computational intelligence. This Topical Collection aims to bring together original research articles and review articles that present the latest theoretical and technical advances in machine and deep learning models.
We hope that this Topical Collection will: 1) improve the understanding and explainability of machine learning and deep neural networks; 2) enhance the mathematical foundations of deep neural networks; and 3) increase the computational efficiency and stability of the machine and deep learning training process with new algorithms that will scale.

Potential topics include but are not limited to the following:

- Interpretability of deep learning models
- Quantifying or visualizing the interpretability of deep neural networks
- Neural network, fuzzy logic, and evolutionary-based interpretable control systems
- Supervised, unsupervised, and reinforcement learning
- Extracting understanding from large-scale and heterogeneous data
- Dimensionality reduction of large-scale and complex data and sparse modeling
- Stability improvement of deep neural network optimization
- Optimization methods for deep learning
- Privacy-preserving machine learning (e.g., federated machine learning, learning over encrypted data)
- Novel deep learning approaches in applications such as image/signal processing, business intelligence, games, healthcare, bioinformatics, and security

Guest Editors

- Nian Zhang (Lead Guest Editor), University of the District of Columbia, Washington, DC, USA, nzhang at udc.edu
- Jian Wang, China University of Petroleum (East China), Qingdao, China, wangjiannl at upc.edu.cn
- Leszek Rutkowski, Czestochowa University of Technology, Poland, leszek.rutkowski at pcz.pl

Important Dates

- Deadline for Submissions: March 31, 2022
- First Review Decision: May 31, 2022
- Revisions Due: June 30, 2022
- Deadline for 2nd Review: July 31, 2022
- Final Decisions: August 31, 2022
- Final Manuscript: September 30, 2022

Peer Review Process

All the papers will go through peer review, and will be reviewed by at least three reviewers.
A thorough check will be completed, and the guest editors will check any significant similarity between the manuscript under consideration and any published paper or submitted manuscripts of which they are aware. In such cases, the article will be directly rejected without proceeding further. Guest editors will make all reasonable effort to receive the reviewers' comments and recommendations on time. The submitted papers must provide original research that has not been published nor is currently under review by other venues. Previously published conference papers should be clearly identified by the authors at the submission stage, and an explanation should be provided about how such papers have been extended to be considered for this special issue (with at least 30% difference from the original works). Submission Guidelines Paper submissions for the special issue should strictly follow the submission format and guidelines ( https://www.springer.com/journal/521/submission-guidelines). Each manuscript should not exceed 16 pages in length (inclusive of figures and tables). Manuscripts must be submitted to the journal online system at https://www.editorialmanager.com/ncaa/default.aspx. Authors should select 'TC: Interpretation of Deep Learning' during the submission step 'Additional Information'. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gros at itp.uni-frankfurt.de Sun Nov 7 08:36:43 2021 From: gros at itp.uni-frankfurt.de (Claudius Gros) Date: Sun, 07 Nov 2021 14:36:43 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: Message-ID: <8906-6187d680-38d-6b9f5000@141956708> Hi Danko, everybody. An online workshop on 'non-conventional ideas in the neurosciences' sounds like a very good idea! It could be informal, and hence not too much work. Claudius On Sunday, November 07, 2021 09:13 CET, Danko Nikolic wrote: > I agree with Richard.
> > Would it make sense to have a conference, a journal, a special issue of a > journal, or a book dedicated solely to ideas in neuroscience that > challenge the establishment? These ideas would still need to be in > agreement with the empirical data though but, at the same time, they must > be as much in disagreement with the current dominant paradigm(s) as > possible. Moreover, would it make sense to rate the ideas, not based on how > many other scientists like them, but how many other lifetime works they are > likely to destroy (like the career of Roger's hypothetical engineer at > Google)? > > Maybe something good could get born out of such effort. > > But who is going to compile the list and edit the book? Who is willing to > shoot themselves in the foot for the (potential) good of neuroscience? > > Regards, > > > > Dr. Danko Nikoli? > www.danko-nikolic.com > https://www.linkedin.com/in/danko-nikolic/ > --- A progress usually starts with an insight --- > > > On Sun, Nov 7, 2021 at 12:31 AM Richard Loosemore > wrote: > > > > > Adam, > > > > 1) Tsvi Achler has already done the things you ask, many times over, so it > > behooves you to check for that before you tell him to do it. Instructing > > someone to "clearly communicate the novel contribution of your approach" > > when they have already done is is an insult. > > > > 2) The whole point of this discussion is that when someone "makes an > > argument clearly" the community is NOT "incredibly open to that." Quite > > the opposite: the community's attention is fickle, tribal, fad-driven, and > > fundamentally broken. > > > > 3) When you say that you "have trouble believing that Google or anyone > > else will be dismissive of a computational approach that actually works," > > that truly boggles the mind. > > > > a) There is no precise definition for "actually works" -- there is no > > global measure of goodness in the space of approaches. > > > > b) Getting the attention of someone at e.g. 
Google is a non-trivial > > feat in itself: just ignoring outsiders is, for Google, a perfectly > > acceptable option. > > > > c) What do you suppose would be the reaction of an engineer at Google > > who gets handed a paper by their boss, and is asked "What do you think of > > this?" Suppose the paper describes an approach that is inimicable to what > > that engineer has been doing their whole career. So much so, that if Google > > goes all-in on this new thing, the engineer's skillset will be devalued to > > junk status. What would the engineer do? They would say "I read it. It's > > just garbage." > > > > Best > > > > Richard Loosemore > > > > > > > > On 11/5/21 1:01 PM, Adam Krawitz wrote: > > > > Tsvi, > > > > > > > > I?m just a lurker on this list, with no skin in the game, but perhaps that > > gives me a more neutral perspective. In the spirit of progress: > > > > > > > > 1. If you have a neural network approach that you feel provides a new > > and important perspective on cognitive processes, then write up a paper > > making that argument clearly, and I think you will find that the community > > is incredibly open to that. Yes, if they see holes in the approach they > > will be pointed out, but that is all part of the scientific exchange. > > Examples of this approach include: Elman (1990) Finding Structure in Time, > > Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow > > a Mind: Statistics, Structure, and Abstraction (not neural nets, but a > > ?new? approach to modelling cognition). I?m sure others can provide more > > examples. > > 2. I?m much less familiar with how things work on the applied side, > > but I have trouble believing that Google or anyone else will be dismissive > > of a computational approach that actually works. Why would they? They just > > want to solve problems efficiently. 
Demonstrate that your approach can > > solve a problem more effectively (or at least as effectively) as the > > existing approaches, and they will come running. Examples of this include: > > Tesauro?s TD-Gammon, which was influential in demonstrating the power of > > RL, and LeCun et al.?s convolutional NN for the MNIST digits. > > > > > > > > Clearly communicate the novel contribution of your approach and I think > > you will find a receptive audience. > > > > > > > > Thanks, > > > > Adam > > > > > > > > > > > > *From:* Connectionists > > *On Behalf Of *Tsvi Achler > > *Sent:* November 4, 2021 9:46 AM > > *To:* gary at ucsd.edu > > *Cc:* connectionists at cs.cmu.edu > > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > > Lecture, etc. > > > > > > > > Lastly Feedforward methods are predominant in a large part because they > > have financial backing from large companies with advertising and clout like > > Google and the self-driving craze that never fully materialized. > > > > > > > > Feedforward methods are not fully connectionist unless rehearsal for > > learning is implemented with neurons. That means storing all patterns, > > mixing them randomly and then presenting to a network to learn. As far as > > I know, no one is doing this in the community, so feedforward methods are > > only partially connectionist. By allowing popularity to predominate and > > choking off funds and presentation of alternatives we are cheating > > ourselves from pursuing other more rigorous brain-like methods. > > > > > > > > Sincerely, > > > > -Tsvi > > > > > > > > > > > > On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: > > > > Gary- Thanks for the accessible online link to the book. > > > > > > > > I looked especially at the inhibitory feedback section of the book which > > describes an Air Conditioner AC type feedback. > > > > It then describes a general field-like inhibition based on all activations > > in the layer. 
It also describes the role of inhibition in sparsity and > > feedforward inhibition, > > > > > > > > The feedback described in Regulatory Feedback is similar to the AC > > feedback but occurs for each neuron individually, vis-a-vis its inputs. > > > > Thus for context, regulatory feedback is not a field-like inhibition, it > > is very directed based on the neurons that are activated and their inputs. > > This sort of regulation is also the foundation of Homeostatic Plasticity > > findings (albeit with changes in Homeostatic regulation in experiments > > occurring in a slower time scale). The regulatory feedback model describes > > the effect and role in recognition of those regulated connections in real > > time during recognition. > > > > > > > > I would be happy to discuss further and collaborate on writing about the > > differences between the approaches for the next book or review. > > > > > > > > And I want to point out to folks, that the system is based on politics and > > that is why certain work is not cited like it should, but even worse these > > politics are here in the group today and they continue to very > > strongly influence decisions in the connectionist community and holds us > > back. > > > > > > > > Sincerely, > > > > -Tsvi > > > > > > > > On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: > > > > Tsvi - While I think Randy and Yuko's book > > is actually somewhat better than > > the online version (and buying choices on amazon start at $9.99), there > > *is* an online version. > > > > Randy & Yuko's models take into account feedback and inhibition. > > > > > > > > On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: > > > > Daniel, > > > > > > > > Does your book include a discussion of Regulatory or Inhibitory Feedback > > published in several low impact journals between 2008 and 2014 (and in > > videos subsequently)? 
> > > > These are networks where the primary computation is inhibition back to the > > inputs that activated them and may be very counterintuitive given today's > > trends. You can almost think of them as the opposite of Hopfield networks. > > > > > > > > I would love to check inside the book but I dont have an academic budget > > that allows me access to it and that is a huge part of the problem with how > > information is shared and funding is allocated. I could not get access to > > any of the text or citations especially Chapter 4: "Competition, Lateral > > Inhibition, and Short-Term Memory", to weigh in. > > > > > > > > I wish the best circulation for your book, but even if the Regulatory > > Feedback Model is in the book, that does not change the fundamental problem > > if the book is not readily available. > > > > > > > > The same goes with Steve Grossberg's book, I cannot easily look inside. > > With regards to Adaptive Resonance I dont subscribe to lateral inhibition > > as a predominant mechanism, but I do believe a function such as vigilance > > is very important during recognition and Adaptive Resonance is one of > > a very few models that have it. The Regulatory Feedback model I have > > developed (and Michael Spratling studies a similar model as well) is built > > primarily using the vigilance type of connections and allows multiple > > neurons to be evaluated at the same time and continuously during > > recognition in order to determine which (single or multiple neurons > > together) match the inputs the best without lateral inhibition. > > > > > > > > Unfortunately within conferences and talks predominated by the Adaptive > > Resonance crowd I have experienced the familiar dismissiveness and did not > > have an opportunity to give a proper talk. This goes back to the larger > > issue of academic politics based on small self-selected committees, the > > same issues that exist with the feedforward crowd, and pretty much all of > > academia. 
> > > > > > > > Today's information age algorithms such as Google's can determine > > relevance of information and ways to display them, but hegemony of the > > journal systems and the small committee system of academia developed in the > > middle ages (and their mutual synergies) block the use of more modern > > methods in research. Thus we are stuck with this problem, which especially > > affects those that are trying to introduce something new and > > counterintuitive, and hence the results described in the two National > > Bureau of Economic Research articles I cited in my previous message. > > > > > > > > Thomas, I am happy to have more discussions and/or start a different > > thread. > > > > > > > > Sincerely, > > > > Tsvi Achler MD/PhD > > > > > > > > > > > > > > > > On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: > > > > Tsvi, > > > > > > > > While deep learning and feedforward networks have an outsize popularity, > > there are plenty of published sources that cover a much wider variety of > > networks, many of them more biologically based than deep learning. A > > treatment of a range of neural network approaches, going from simpler to > > more complex cognitive functions, is found in my textbook *Introduction > > to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also > > Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021) > > emphasizes a variety of architectures with a strong biological basis. > > > > > > > > > > > > Best, > > > > > > > > > > > > Dan Levine > > ------------------------------ > > > > *From:* Connectionists on > > behalf of Tsvi Achler > > *Sent:* Saturday, October 30, 2021 3:13 AM > > *To:* Schmidhuber Juergen > > *Cc:* connectionists at cs.cmu.edu > > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing > > Lecture, etc. 
> > > > > > > > Since the title of the thread is Scientific Integrity, I want to point out > > some issues about trends in academia, especially focusing on the > > connectionist community. > > > > > > > > In general, analyzing impact factors etc., the most important progress gets > > silenced until the mainstream picks it up Impact Factors in novel > > research www.nber.org/.../working_papers/w22180/w22180.pdf > > and > > often this may take a generation > > https://www.nber.org/.../does-science-advance-one-funeral... > > > > . > > > > > > > > The connectionist field is stuck on feedforward networks and variants such > > as with inhibition of competitors (e.g. lateral inhibition), or other > > variants that are sometimes labeled as recurrent networks for learning time > > where the feedforward networks can be rewound in time. > > > > > > > > This stasis is specifically occurring with the popularity of deep > > learning. This is often portrayed as neurally plausible connectionism but > > requires an implausible amount of rehearsal and is not connectionist if > > this rehearsal is not implemented with neurons (see video link for further > > clarification). > > > > > > > > Models which have true feedback (e.g. back to their own inputs) cannot > > learn by backpropagation but there is plenty of evidence these types of > > connections exist in the brain and are used during recognition. Thus they > > get ignored: no talks in universities, no featuring in "premier" journals > > and no funding. > > > > > > > > But they are important and may negate the need for rehearsal as needed in > > feedforward methods. Thus they may be essential for moving connectionism > > forward. > > > > > > > > If the community is truly dedicated to brain-motivated algorithms, I > > recommend giving more time to networks other than feedforward networks.
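The rehearsal the message criticizes can be made concrete. The sketch below is only an illustration of the procedure as described ("storing all patterns, mixing them randomly and then presenting to a network to learn"); the function name and the toy `net_update` callback are invented for this sketch and come from nothing in the thread:

```python
import random

def train_with_rehearsal(net_update, memory, new_example):
    """Learn one new example by replaying the entire stored history,
    shuffled together with it, through the network."""
    memory.append(new_example)      # store all patterns seen so far
    mixture = list(memory)
    random.shuffle(mixture)         # mix them randomly
    for x, y in mixture:            # present each to the network to learn
        net_update(x, y)
    return memory

# Toy usage: count presentations as examples arrive one at a time
presentations = []
memory = []
for example in [("a", 0), ("b", 1), ("c", 0)]:
    train_with_rehearsal(lambda x, y: presentations.append((x, y)),
                         memory, example)
```

Learning N patterns this way costs on the order of N^2 total presentations, which is the "implausible amount of rehearsal" the message objects to when no neural mechanism is proposed for storing and replaying the patterns.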
> > > > > > > > Video: > > https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 > > > > > > > > > > Sincerely, > > > > Tsvi Achler > > > > > > > > > > > > > > > > On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > > wrote: > > > > Hi, fellow artificial neural network enthusiasts! > > > > The connectionists mailing list is perhaps the oldest mailing list on > > ANNs, and many neural net pioneers are still subscribed to it. I am hoping > > that some of them - as well as their contemporaries - might be able to > > provide additional valuable insights into the history of the field. > > > > Following the great success of massive open online peer review (MOOR) for > > my 2015 survey of deep learning (now the most cited article ever published > > in the journal Neural Networks), I've decided to put forward another piece > > for MOOR. I want to thank the many experts who have already provided me > > with comments on it. Please send additional relevant references and > > suggestions for improvements for the following draft directly to me at > > juergen at idsia.ch: > > > > > > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html > > > > > > The above is a point-for-point critique of factual errors in ACM's > > justification of the ACM A. M. Turing Award for deep learning and a > > critique of the Turing Lecture published by ACM in July 2021. This work can > > also be seen as a short history of deep learning, at least as far as ACM's > > errors and the Turing Lecture are concerned. > > > > I know that some view this as a controversial topic. However, it is the > > very nature of science to resolve controversies through facts. Credit > > assignment is as core to scientific history as it is to machine learning. > > My aim is to ensure that the true history of our field is preserved for > > posterity. > > > > Thank you all in advance for your help! 
> > > > Jürgen Schmidhuber > > > > > > > > > > > > > > > > > > > > > > -- > > > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > > > Computer Science and Engineering 0404 > > IF USING FEDEX INCLUDE THE FOLLOWING LINE: > > CSE Building, Room 4130 > > University of California San Diego - > > 9500 Gilman Drive # 0404 > > La Jolla, Ca. 92093-0404 > > > > Email: gary at ucsd.edu > > Home page: http://www-cse.ucsd.edu/~gary/ > > > > Schedule: http://tinyurl.com/b7gxpwo > > > > > > > > *Listen carefully,* > > *Neither the Vedas* > > *Nor the Qur'an* > > *Will teach you this:* > > *Put the bit in its mouth,* > > *The saddle on its back,* > > *Your foot in the stirrup,* > > *And ride your wild runaway mind* > > *All the way to heaven.* > > > > *-- Kabir* > > > > > > -- ### ### Prof. Dr. Claudius Gros ### http://itp.uni-frankfurt.de/~gros ### ### Complex and Adaptive Dynamical Systems, A Primer ### A graduate-level textbook, Springer (2008/10/13/15) ### ### Life for barren exoplanets: The Genesis project ### https://link.springer.com/article/10.1007/s10509-016-2911-0 ### From ASIM.ROY at asu.edu Sun Nov 7 14:01:30 2021 From: ASIM.ROY at asu.edu (Asim Roy) Date: Sun, 7 Nov 2021 19:01:30 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Yoshua, I am indeed feeling that I can have the cake and eat it too. Accepting the fact that neural activations in the brain have "meaning and interpretation" is a huge step forward for the field. I would conjecture that it opens the door to new theories in cognitive and neuro sciences. You are definitely crossing the red line and that's great. Can we have references to some of your papers? By the way, I think I understand what you mean by disentangling.
There are probably simpler ways to disentangle and get to Explainable AI. But please send us the references. Best, Asim From: Yoshua Bengio Sent: Sunday, November 7, 2021 8:55 AM To: Asim Roy Cc: Adam Krawitz ; connectionists at cs.cmu.edu; Juyang Weng Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Asim, You can have your cake and eat it too with modular neural net architectures. You still have distributed representations but you have modular specialization. Many of my papers since 2019 are on this theme. It is consistent with the specialization seen in the brain, but keep in mind that there is a huge number of neurons there, and you still don't see single grandmother cells firing alone; they fire in a pattern that is meaningful both locally (in the same region/module) and globally (different modules cooperate and compete according to the Global Workspace Theory and Neural Workspace Theory which have inspired our work). Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concepts (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity). -- Yoshua I'm overwhelmed by emails, so I won't be able to respond quickly or directly. Please write to my assistant in case of time sensitive matter or if it entails scheduling: julie.mongeau at mila.quebec Le dim. 7 nov. 2021, à
01 h 46, Asim Roy > a écrit : Over a period of more than 25 years, I have had the opportunity to argue about the brain in both public forums and private discussions. And they included very well-known scholars such as Walter Freeman (UC-Berkeley), Horace Barlow (Cambridge; great grandson of Charles Darwin), Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of whom are on this list. And many became good friends through these debates. We argued about many issues over the years, but the one that baffled me the most was the one about localist vs. distributed representation. Here's the issue. As far as I know, although all the Nobel prizes in the field of neurophysiology, from Hubel and Wiesel (simple and complex cells) and Moser and O'Keefe (grid and place cells) to the current one on the discovery of temperature and touch sensitive receptors and neurons, are about finding "meaning" in single or a group of dedicated cells, the distributed representation theory has yet to explain these findings of "meaning." Contrary to the assertion that the field is open-minded, I think most in this field are afraid to cross the red line. Horace Barlow was the exception. He was perhaps the only neuroscientist who was willing to cross the red line and declare that "grandmother cells will be found." After a debate on this issue in 2012, which included Walter Freeman and others, Horace visited me in Phoenix at the age of 91 for further discussion. If the field is open-minded, I would love to hear how distributed representation is compatible with finding "meaning" in the activations of single or a dedicated group of cells.
Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy From: Connectionists > On Behalf Of Adam Krawitz Sent: Friday, November 5, 2021 10:01 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Tsvi, I'm just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress: 1. If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach they will be pointed out, but that is all part of the scientific exchange. Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a "new" approach to modelling cognition). I'm sure others can provide more examples. 2. I'm much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively than (or at least as effectively as) the existing approaches, and they will come running. Examples of this include: Tesauro's TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.'s convolutional NN for the MNIST digits. Clearly communicate the novel contribution of your approach and I think you will find a receptive audience.
Thanks, Adam From: Connectionists > On Behalf Of Tsvi Achler Sent: November 4, 2021 9:46 AM To: gary at ucsd.edu Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Lastly, feedforward methods are predominant in large part because they have financial backing from large companies with advertising and clout, like Google and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly and then presenting them to a network to learn. As far as I know, no one is doing this in the community, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives, we are cheating ourselves out of pursuing other more rigorous brain-like methods. Sincerely, -Tsvi On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler > wrote: Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book, which describes an air-conditioner (AC) type feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition. The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-à-vis its inputs. Thus for context, regulatory feedback is not a field-like inhibition; it is very directed, based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with changes in Homeostatic regulation in experiments occurring on a slower timescale). The regulatory feedback model describes the effect and role of those regulated connections in real time during recognition.
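One simplified reading of the per-neuron regulation described above can be sketched as follows. This is not Achler's exact published equations; the weight matrix `W`, the uniform initialization, the normalization by each neuron's total weight, and the iteration count are all assumptions made for this illustration. Each output neuron divides back into only the inputs that drive it, and activations settle over repeated cycles rather than in one feedforward pass:

```python
import numpy as np

def regulatory_feedback(W, x, n_iter=50, eps=1e-9):
    """Settle output activations y for input x under directed feedback.

    W: (n_outputs, n_inputs) nonnegative weights. Each output neuron
    feeds regulation back only to the inputs that activate it; there
    is no output-to-output (lateral) inhibition anywhere.
    """
    y = np.ones(W.shape[0])                 # start all candidates active
    for _ in range(n_iter):
        explained = W.T @ y                 # how strongly each input is already accounted for
        adjusted = x / (explained + eps)    # inputs shrink as they become explained
        y = y * (W @ adjusted) / (W.sum(axis=1) + eps)  # outputs update from their own inputs only
    return y

# Toy case: two candidates with disjoint inputs, input matches the first
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x = np.array([1.0, 0.0])
```

In this toy case the candidate whose weights match `x` settles near 1 while the other is driven toward 0, with the outputs interacting only through the shared input layer, which is the contrast with field-like lateral inhibition that the message is drawing.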
I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks that the system is based on politics, and that is why certain work is not cited as it should be; but even worse, these politics are here in the group today, and they continue to very strongly influence decisions in the connectionist community and hold us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu > wrote: Tsvi - While I think Randy and Yuko's book is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version. Randy & Yuko's models take into account feedback and inhibition. On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler > wrote: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback published in several low impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them and may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book but I don't have an academic budget that allows me access to it and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes for Steve Grossberg's book; I cannot easily look inside.
With regard to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of a very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time and continuously during recognition in order to determine which (single or multiple neurons together) match the inputs the best without lateral inhibition. Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia.
A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ________________________________ From: Connectionists > on behalf of Tsvi Achler > Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen > Cc: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, especially focusing on the connectionist community. In general, analyzing impact factors etc., the most important progress gets silenced until the mainstream picks it up Impact Factors in novel research www.nber.org/.../working_papers/w22180/w22180.pdf and often this may take a generation https://www.nber.org/.../does-science-advance-one-funeral... . The connectionist field is stuck on feedforward networks and variants such as with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled as recurrent networks for learning time where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning. This is often portrayed as neurally plausible connectionism but requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals and no funding.
But they are important and may negate the need for rehearsal as needed in feedforward methods. Thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity.
Thank you all in advance for your help! Jürgen Schmidhuber From ASIM.ROY at asu.edu Sun Nov 7 14:15:12 2021 From: ASIM.ROY at asu.edu (Asim Roy) Date: Sun, 7 Nov 2021 19:15:12 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: All, Amir Hussain, Editor-in-Chief, Cognitive Computation, is inviting us to do a special issue on the topics under discussion here. They could be short position papers summarizing ideas for moving forward in this field. He promised reviews within two weeks. If that works out, we could have the special issue published rather quickly. Please email me if you are interested. Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy From: Yoshua Bengio Sent: Sunday, November 7, 2021 8:55 AM To: Asim Roy Cc: Adam Krawitz ; connectionists at cs.cmu.edu; Juyang Weng Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Asim, You can have your cake and eat it too with modular neural net architectures. You still have distributed representations but you have modular specialization.
Many of my papers since 2019 are on this theme. It is consistent with the specialization seen in the brain, but keep in mind that there is a huge number of neurons there, and you still don't see single grand-mother cells firing alone, they fire in a pattern that is meaningful both locally (in the same region/module) and globally (different modules cooperate and compete according to the Global Workspace Theory and Neural Workspace Theory which have inspired our work). Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concept (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity). -- Yoshua I'm overwhelmed by emails, so I won't be able to respond quickly or directly. Please write to my assistant in case of time sensitive matter or if it entails scheduling: julie.mongeau at mila.quebec Le dim. 7 nov. 2021, ? 01 h 46, Asim Roy > a ?crit : Over a period of more than 25 years, I have had the opportunity to argue about the brain in both public forums and private discussions. And they included very well-known scholars such as Walter Freeman (UC-Berkeley), Horace Barlow (Cambridge; great grandson of Charles Darwin), Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of whom are on this list. And many became good friends through these debates. 
We argued about many issues over the years, but the one that baffled me the most was the one about localist vs. distributed representation. Here?s the issue. As far as I know, although all the Nobel prizes in the field of neurophysiology ? from Hubel and Wiesel (simple and complex cells) and Moser and O?Keefe (grid and place cells) to the current one on discovery of temperature and touch sensitive receptors and neurons - are about finding ?meaning? in single or a group of dedicated cells, the distributed representation theory has yet to explain these findings of ?meaning.? Contrary to the assertion that the field is open-minded, I think most in this field are afraid the to cross the red line. Horace Barlow was the exception. He was perhaps the only neuroscientist who was willing to cross the red line and declare that ?grandmother cells will be found.? After a debate on this issue in 2012, which included Walter Freeman and others, Horace visited me in Phoenix at the age of 91 for further discussion. If the field is open minded, would love to hear how distributed representation is compatible with finding ?meaning? in the activations of single or a dedicated group of cells. Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy From: Connectionists > On Behalf Of Adam Krawitz Sent: Friday, November 5, 2021 10:01 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Tsvi, I?m just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress: 1. If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach they will be pointed out, but that is all part of the scientific exchange. 
Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a ?new? approach to modelling cognition). I?m sure others can provide more examples. 2. I?m much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively (or at least as effectively) as the existing approaches, and they will come running. Examples of this include: Tesauro?s TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.?s convolutional NN for the MNIST digits. Clearly communicate the novel contribution of your approach and I think you will find a receptive audience. Thanks, Adam From: Connectionists > On Behalf Of Tsvi Achler Sent: November 4, 2021 9:46 AM To: gary at ucsd.edu Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Lastly Feedforward methods are predominant in a large part because they have financial backing from large companies with advertising and clout like Google and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly and then presenting to a network to learn. As far as I know, no one is doing this in the community, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives we are cheating ourselves from pursuing other more rigorous brain-like methods. 
Sincerely, -Tsvi On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler > wrote: Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book which describes an Air Conditioner AC type feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition, The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-a-vis its inputs. Thus for context, regulatory feedback is not a field-like inhibition, it is very directed based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with changes in Homeostatic regulation in experiments occurring in a slower time scale). The regulatory feedback model describes the effect and role in recognition of those regulated connections in real time during recognition. I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks, that the system is based on politics and that is why certain work is not cited like it should, but even worse these politics are here in the group today and they continue to very strongly influence decisions in the connectionist community and holds us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu > wrote: Tsvi - While I think Randy and Yuko's book is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version. Randy & Yuko's models take into account feedback and inhibition. On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler > wrote: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback published in several low impact journals between 2008 and 2014 (and in videos subsequently)? 
These are networks where the primary computation is inhibition back to the inputs that activated them and may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book but I dont have an academic budget that allows me access to it and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes with Steve Grossberg's book, I cannot easily look inside. With regards to Adaptive Resonance I dont subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition and Adaptive Resonance is one of a very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time and continuously during recognition in order to determine which (single or multiple neurons together) match the inputs the best without lateral inhibition. Unfortunately within conferences and talks predominated by the Adaptive Resonance crowd I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia. 
Today's information-age algorithms, such as Google's, can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia, developed in the Middle Ages (and their mutual synergies), blocks the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread. Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: Tsvi, While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning. A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ________________________________ From: Connectionists on behalf of Tsvi Achler Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia and then focus especially on the connectionist community.
In general, analyses of impact factors etc. show that the most important progress gets silenced until the mainstream picks it up (Impact Factors in novel research: www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or variants that are sometimes labeled recurrent networks for learning over time, where the feedforward network can be rewound in time. This stasis is specifically occurring with the popularity of deep learning. Deep learning is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see the video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence that these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals, and no funding. But they are important and may negate the need for the rehearsal required by feedforward methods. Thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field.
Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help! Jürgen Schmidhuber -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir -------------- next part -------------- An HTML attachment was scrubbed...
URL: From achler at gmail.com Sun Nov 7 07:29:33 2021 From: achler at gmail.com (Tsvi Achler) Date: Sun, 7 Nov 2021 04:29:33 -0800 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Thank you Richard and Danko for helping answer Adam's questions/comments. Adam's questions make sense given the outward narrative of academic departments, grant agencies and even companies. These narratives are meant to bring more funds to the institutions promulgating them, but ultimately misinform the public and naïve students. The reality is much different. Think of statements coming from academia as being as reliable as statements coming from politicians who are trying to get themselves promoted and elected. To address the statement "I think you will find that the community is incredibly open": from an academic computational-neuroscience perspective, I have found it is absolutely the opposite. Each field is siloed and unnecessarily competitive. Notice, for example, that when I talked about my approach I got 3 responses suggesting I am mistaken, that what I am doing is X, and that I should read Y. Each X and Y varied greatly (there is no way all of them could be true at once, and most likely the commenters have not even looked at the mechanism), yet any such review coming back on an article or grant application would reject the article or grant. I have written countless grants and articles where I purposely state several times, in the abstract and within the article, that the process of feedback in regulatory feedback occurs during recognition, not to adjust weights but to determine neuron activations.
The review comes back with: clearly the author does not understand how learning in feedforward networks works; they are wrong, and feedback occurs during learning to adjust weights. One such review will reject the work, and in a group of 3 to 4 reviewers there is very little chance that all of the reviewers who did not completely understand the novel mechanism will refrain from saying "this is wrong, this is X, read Y," and instead all say "this is a mechanism I am unfamiliar with, but it seems interesting, so publish it/award funding anyway." This is why novel research cannot be published or supported by academia. This is in huge contrast to the grant agencies and departments making decisions by committee, which go out of their way to explain how they are looking for novel and multidisciplinary research. In fact, I talked to an NSF director who said that they are conservative because of committee decisions. When I asked him why, then, they project the opposite message about novelty, he replied "this is operational," thus nonchalantly acknowledging that projecting deceiving messages is part of their core. Now, the business world is more open-minded and honest (your "Google" statements are actually referring to the business world). They do not care what type of mechanism it is, as long as it "works". But what they do care about is solving a business case; that is, solving a business problem and demonstrating that it makes or saves money. Often the value of a solution is not about the technology but about knowing how to address a certain business problem. One thing that they expressly do not want to invest in is a "science project". Thus a person who focused on brain science (neuroscience, cognitive psychology, etc.) is not particularly prepared to address business problems. Although not ideal for a scientist, I find it is at least a more open-minded, honest and forthcoming system, and indeed I am working on the business problem these days. Adam, I hope that answers some of your (and others') questions.
I have also created a video channel to help those from the outside understand how the problems of academia occur and possibly how they may be solved. See the "Updating Research" channel and playlist https://www.youtube.com/playlist?list=PLM3bZImI0fj3rM3ZrzSYbfozkf8m4102j Sincerely, -Tsvi On Fri, Nov 5, 2021 at 2:13 PM Adam Krawitz wrote: > Tsvi, > > > > I'm just a lurker on this list, with no skin in the game, but perhaps that > gives me a more neutral perspective. In the spirit of progress: > > > > 1. If you have a neural network approach that you feel provides a new > and important perspective on cognitive processes, then write up a paper > making that argument clearly, and I think you will find that the community > is incredibly open to that. Yes, if they see holes in the approach they > will be pointed out, but that is all part of the scientific exchange. > Examples of this approach include: Elman (1990) Finding Structure in Time, > Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow > a Mind: Statistics, Structure, and Abstraction (not neural nets, but a > "new" approach to modelling cognition). I'm sure others can provide more > examples. > 2. I'm much less familiar with how things work on the applied side, > but I have trouble believing that Google or anyone else will be dismissive > of a computational approach that actually works. Why would they? They just > want to solve problems efficiently. Demonstrate that your approach can > solve a problem more effectively (or at least as effectively) as the > existing approaches, and they will come running. Examples of this include: > Tesauro's TD-Gammon, which was influential in demonstrating the power of > RL, and LeCun et al.'s convolutional NN for the MNIST digits. > > > > Clearly communicate the novel contribution of your approach and I think > you will find a receptive audience.
> > > > Thanks, > > Adam > > > > > > *From:* Connectionists *On Behalf Of *Tsvi Achler > *Sent:* November 4, 2021 9:46 AM > *To:* gary at ucsd.edu > *Cc:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. > > > > Lastly, feedforward methods are predominant in large part because they > have financial backing from large companies with advertising and clout like > Google and the self-driving craze that never fully materialized. > > > > Feedforward methods are not fully connectionist unless rehearsal for > learning is implemented with neurons. That means storing all patterns, > mixing them randomly, and then presenting them to a network to learn. As far as > I know, no one is doing this in the community, so feedforward methods are > only partially connectionist. By allowing popularity to predominate and > choking off funds and presentation of alternatives, we are cheating > ourselves out of pursuing other, more rigorous brain-like methods. > > > > Sincerely, > > -Tsvi -------------- next part -------------- An HTML attachment was scrubbed... URL: From arbib at usc.edu Mon Nov 8 03:00:42 2021 From: arbib at usc.edu (Michael Arbib) Date: Mon, 8 Nov 2021 08:00:42 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <30c7773c-2925-268b-9b96-8ba95938708f@susaro.com> Message-ID: How can we hope to understand each other when we cannot even understand the brain? Or more accurately, the variations of brains in diverse bodies growing up in diverse societies .... A long way from the abstractions that have been at the core of this thread. Michael A. Arbib Adjunct Professor of Psychology, University of California at San Diego Former and Founding Coordinator, Advisory Council, Academy of Neuroscience for Architecture Emeritus at the University of Southern California: University Professor; Fletcher Jones Professor of Computer Science; Professor of Biological Sciences, Biomedical Engineering, Electrical Engineering, Neuroscience, & Psychology. When Brains Meet Buildings: A Conversation Between Neuroscience and Architecture was published by Oxford University Press in August 2021. https://global.oup.com/academic/product/when-brains-meet-buildings-9780190060954 -----Original Message----- From: Connectionists On Behalf Of Barak A. Pearlmutter Sent: Sunday, November 7, 2021 6:58 PM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. How can we hope to understand the brain when we cannot even understand each other? From wduch at umk.pl Mon Nov 8 03:49:29 2021 From: wduch at umk.pl (Wlodzislaw Duch) Date: Mon, 8 Nov 2021 09:49:29 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <30c7773c-2925-268b-9b96-8ba95938708f@susaro.com> Message-ID: <6a17f2b1-a4e7-6a54-f699-2f5eed3cdf1f@umk.pl> This is a very good point - little research has been done on complex aspects of brain dynamics that can explain human behavior. 
Each subculture has a different set of concepts and associations that determine the way they see the world. Neuroscientists cannot test it on rats; psychologists do behavioral tests but then confabulate to explain the results. As a first step towards understanding "us" and our brains, I have just published a theoretical paper trying to link meme transmission and the formation of conspiracy theories with attractor neural networks: Duch W. (2021). Memetics and Neural Models of Conspiracy Theories. Patterns, Cell Press. DOI: https://doi.org/10.1016/j.patter.2021.100353 Best, Wlodek Duch -- Prof.
Włodzisław Duch Fellow, International Neural Network Society Past President, European Neural Network Society Head, Neurocognitive Laboratory, CMIT NCU, Poland Google: Wlodzislaw Duch -------------- next part -------------- An HTML attachment was scrubbed... URL: From bengioy at iro.umontreal.ca Mon Nov 8 08:41:25 2021 From: bengioy at iro.umontreal.ca (bengioy at iro.umontreal.ca) Date: Mon, 08 Nov 2021 13:41:25 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: <009694e7d7f49e3c3add53640026f060@iro.umontreal.ca> (my first e-mail to connectionists bounced, sorry for duplicate sending, Asim) Here is a selection of relevant recent work to answer your request: *** Modular system-2 / Global Workspace Theory -inspired deep learning *** * Inductive Biases for Deep Learning of Higher-Level Cognition. https://arxiv.org/abs/2011.15091 * Compositional Attention: Disentangling Search and Retrieval. https://arxiv.org/abs/2110.09419 * Discrete-Valued Neural Communication. https://arxiv.org/abs/2107.02367 * A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning. https://arxiv.org/abs/2106.02097 * Coordination Among Neural Modules Through a Shared Global Workspace. https://arxiv.org/abs/2103.01197 * Neural Production Systems. https://arxiv.org/abs/2103.01937 *** Causal discovery with deep learning *** * A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms. https://arxiv.org/abs/1901.10912 * Learning Neural Causal Models with Active Interventions.
https://arxiv.org/abs/2109.02429 * Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning. https://arxiv.org/abs/2110.15796 * Toward Causal Representation Learning. https://ieeexplore.ieee.org/abstract/document/9363924 I am currently working on the merge of the above two threads of modularity and causality... November 7, 2021 2:01 PM, "Asim Roy" wrote: Yoshua, I am indeed feeling that I can have the cake and eat it too. Accepting the fact that neural activations in the brain have "meaning and interpretation" is a huge step forward for the field. I would conjecture that it opens the door to new theories in the cognitive and neural sciences. You are definitely crossing the red line, and that's great. Can we have references to some of your papers? By the way, I think I understand what you mean by disentangling. There are probably simpler ways to disentangle and get to Explainable AI. But please send us the references. Best, Asim From: Yoshua Bengio Sent: Sunday, November 7, 2021 8:55 AM To: Asim Roy Cc: Adam Krawitz ; connectionists at cs.cmu.edu ; Juyang Weng Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Asim, You can have your cake and eat it too with modular neural net architectures. You still have distributed representations, but you have modular specialization. Many of my papers since 2019 are on this theme. It is consistent with the specialization seen in the brain, but keep in mind that there is a huge number of neurons there, and you still don't see single grandmother cells firing alone; they fire in a pattern that is meaningful both locally (in the same region/module) and globally (different modules cooperate and compete according to the Global Workspace Theory and Neural Workspace Theory, which have inspired our work).
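For illustration, the cooperate-and-compete picture can be sketched in a few lines (an invented toy example with made-up names, not any specific architecture from the papers listed above): modules propose messages with scores, only the top-k scorers win the competition to write into a shared workspace, and the blended result is what gets broadcast back to all modules.

```python
import numpy as np

# Toy sketch (invented illustration, not a published model) of modules
# competing for a shared global workspace: each module proposes a message
# with a score, only the top-k scorers write, and the softmax-blended
# workspace state would then be broadcast back to every module.
def workspace_step(proposals, scores, k=2):
    idx = np.argsort(scores)[-k:]                 # winners of the competition
    w = np.exp(scores[idx] - scores[idx].max())
    w = w / w.sum()                               # softmax over the winners
    workspace = (w[:, None] * proposals[idx]).sum(axis=0)
    return workspace, set(idx.tolist())

rng = np.random.default_rng(0)
proposals = rng.normal(size=(5, 8))               # 5 modules, 8-dim messages
scores = np.array([0.1, 2.0, -1.0, 1.5, 0.3])
state, winners = workspace_step(proposals, scores)
# winners == {1, 3}: the two highest-scoring modules
```

Note that only the selection step is competitive; the messages themselves remain distributed vectors, which is one way to read "having the cake and eating it too."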
Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than that adopted in much of the recent work on unsupervised representation learning, but it shares the idea that the "right" abstract concepts (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity). -- Yoshua I'm overwhelmed by emails, so I won't be able to respond quickly or directly. Please write to my assistant in case of a time-sensitive matter or if it entails scheduling: julie.mongeau at mila.quebec On Sun, Nov 7, 2021 at 1:46 AM, Asim Roy wrote: Over a period of more than 25 years, I have had the opportunity to argue about the brain in both public forums and private discussions. They included very well-known scholars such as Walter Freeman (UC-Berkeley), Horace Barlow (Cambridge; great-grandson of Charles Darwin), Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of whom are on this list. And many became good friends through these debates. We argued about many issues over the years, but the one that baffled me the most was the one about localist vs. distributed representation. Here's the issue. As far as I know, although all the Nobel prizes in the field of neurophysiology - from Hubel and Wiesel (simple and complex cells) and Moser and O'Keefe (grid and place cells) to the current one on the discovery of temperature- and touch-sensitive receptors and neurons - are about finding "meaning"
in single cells or a group of dedicated cells, the distributed representation theory has yet to explain these findings of "meaning." Contrary to the assertion that the field is open-minded, I think most in this field are afraid to cross the red line. Horace Barlow was the exception. He was perhaps the only neuroscientist who was willing to cross the red line and declare that "grandmother cells will be found." After a debate on this issue in 2012, which included Walter Freeman and others, Horace visited me in Phoenix at the age of 91 for further discussion. If the field is open-minded, I would love to hear how distributed representation is compatible with finding "meaning" in the activations of single cells or a dedicated group of cells. Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy (https://lifeboat.com/ex/bios.asim.roy) From: Connectionists On Behalf Of Adam Krawitz Sent: Friday, November 5, 2021 10:01 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Tsvi, I'm just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress: * If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach, they will be pointed out, but that is all part of the scientific exchange. Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a "new" approach to modelling cognition).
I'm sure others can provide more examples. * I'm much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively than (or at least as effectively as) the existing approaches, and they will come running. Examples of this include: Tesauro's TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.'s convolutional NN for the MNIST digits. Clearly communicate the novel contribution of your approach and I think you will find a receptive audience. Thanks, Adam From: Connectionists On Behalf Of Tsvi Achler Sent: November 4, 2021 9:46 AM To: gary at ucsd.edu Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Lastly, feedforward methods are predominant in large part because they have financial backing from large companies with advertising and clout, like Google, and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly, and then presenting them to a network to learn. As far as I know, no one is doing this in the community, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives, we are cheating ourselves out of pursuing other, more rigorous brain-like methods. Sincerely, -Tsvi On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book, which describes an Air Conditioner (AC) type feedback.
It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition. The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-a-vis its inputs. Thus, for context, regulatory feedback is not a field-like inhibition; it is very directed, based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with changes in homeostatic regulation occurring in experiments on a slower time scale). The regulatory feedback model describes the effect and role of those regulated connections in real time during recognition. I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks that the system is based on politics, and that is why certain work is not cited as it should be; but even worse, these politics are here in the group today, and they continue to very strongly influence decisions in the connectionist community and hold us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: Tsvi - While I think Randy and Yuko's book (https://www.amazon.com/dp/0262650541/) is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version (https://compcogneuro.org/). Randy & Yuko's models take into account feedback and inhibition.
On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback, published in several low-impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them, and they may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes for Steve Grossberg's book; I cannot easily look inside. With regards to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time, and continuously, during recognition, in order to determine which neurons (single or multiple together) match the inputs best without lateral inhibition. Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd, I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk.
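[Editor's note] A minimal numerical sketch of the kind of recognition-by-feedback scheme described above. The update rule is one reading of the description in this thread (each output divisively inhibits only the inputs that drive it, and all candidate outputs are re-evaluated together), not Achler's published equations:

```python
import numpy as np

def regulatory_feedback(x, W, steps=25, eps=1e-9):
    """Recognition by iterative feedback to inputs: each candidate output
    divisively inhibits exactly the inputs that activate it (no lateral
    inhibition). W[j, i] = 1 if input i connects to output j. A sketch of
    the idea as described in the thread, not the published model."""
    y = np.ones(W.shape[0]) / W.shape[0]   # all candidates start active
    for _ in range(steps):
        fb = W.T @ y                       # total feedback onto each input
        f = x / np.maximum(fb, eps)        # inputs regulated by that feedback
        y = y * (W @ f) / W.sum(axis=1)    # all outputs re-evaluated together
    return y

# Two overlapping patterns: output 0 uses inputs {0, 1}; output 1 uses {1, 2}.
W = np.array([[1., 1., 0.],
              [0., 1., 1.]])
x = np.array([1., 1., 0.])                 # evidence matches pattern 0 only
y = regulatory_feedback(x, W)              # converges toward y[0] ~ 1, y[1] ~ 0
```

Note that both outputs share input 1, yet the iteration settles on the single output whose full input set is supported, which is the claimed contrast with winner-take-all lateral inhibition.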
This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms such as Google's can determine the relevance of information and ways to display it, but the hegemony of the journal systems and the small-committee system of academia developed in the Middle Ages (and their mutual synergies) block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread. Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: Tsvi, While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning. A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also, Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ------------------------------------ From: Connectionists on behalf of Tsvi Achler Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, and then focus especially on the connectionist community.
In general, when analyzing impact factors etc., the most important progress gets silenced until the mainstream picks it up (Impact Factors in novel research: https://www.nber.org/system/files/working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled as recurrent networks for learning over time, where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning.
This is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals, and no funding. But they are important and may negate the need for rehearsal as required in feedforward methods. Thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field.
Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help!
Jürgen Schmidhuber -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir -------------- next part -------------- An HTML attachment was scrubbed... URL: From poirazi at imbb.forth.gr Mon Nov 8 13:25:54 2021 From: poirazi at imbb.forth.gr (Yiota Poirazi) Date: Mon, 8 Nov 2021 20:25:54 +0200 Subject: Connectionists: Announcing DENDRITES 2022 Message-ID: Dear colleagues, We are pleased to announce that, after the cancellation of our 2020 meeting because of the COVID-19 pandemic, DENDRITES 2022 will take place in Heraklion, Crete on 23-26 May 2022. This is the 4th of a very successful series of meetings on the island of Crete dedicated to dendrites. The meeting will bring together scientific leaders from around the globe to present their theoretical and experimental work on dendrites. The meeting program is designed to facilitate discussions of new ideas and discoveries, in a relaxed atmosphere that emphasizes interaction.
DENDRITES 2022 EMBO Workshop on Dendritic Anatomy, Molecules and Function Heraklion, Crete, Greece 23-26 May 2022 http://meetings.embo.org/event/20-dendrites *Organizing committee:* Yiota Poirazi, IMBB-FORTH, GR Kristen Harris, University of Texas-Austin, US Michael Häusser, University College London, GB Matthew Larkum, Humboldt University of Berlin, DE *Registration and abstract submission will open soon at:* http://meetings.embo.org/event/20-dendrites Submissions of abstracts are due by *February 1st, 2022*. Notifications will be provided by February 28th, 2022. Potential attendees are strongly encouraged to submit an abstract, as presenters will have registration priority. For more information about the conference, please refer to our web site or send email to info at mitos.com.gr We look forward to seeing you at DENDRITES 2022! On behalf of the organizers, Yiota Poirazi -- Panayiota Poirazi, Ph.D. Research Director Institute of Molecular Biology and Biotechnology (IMBB) Foundation of Research and Technology-Hellas (FORTH) Vassilika Vouton, P.O. Box 1385, GR 70013, Heraklion, Crete, GREECE Tel: +30 2810 391139 Fax: +30 2810 391101 Email: poirazi at imbb.forth.gr Lab site: www.dendrites.gr -------------- next part -------------- From ASIM.ROY at asu.edu Tue Nov 9 04:21:38 2021 From: ASIM.ROY at asu.edu (Asim Roy) Date: Tue, 9 Nov 2021 09:21:38 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Ali, Yoshua is absolutely right with these statements: "Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the 'right' abstract concept (related to those we can name verbally) will be 'separated' (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity)." Abstractions are used all over the brain, from the lowest to the highest levels, in the form of multimodal cells. And there is plenty of evidence for these abstract, interpretable cells at higher levels of processing. That's what Yoshua is referring to. You really can't get these abstract, interpretable representations with the original definition of distributed representation. You have to modify that definition to include meaningful, interpretable units. That's the crux of the conflict with the original definitions. And what Yoshua is doing is modifying that definition. That's all there is to it. That modification opens the door to lots of new theories that would be consistent with findings in the brain. That "disentanglement" of concepts that Yoshua is referring to can be done in many other ways that also lead to symbolic AI models. What he is referring to is a symbolic form of representation.
Asim From: Ali Minai Sent: Tuesday, November 9, 2021 12:56 AM To: Asim Roy Cc: Yoshua Bengio ; connectionists at cs.cmu.edu; Juyang Weng Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Asim This is a perennial issue, but I don't think one should see "localized vs. distributed" as a dichotomy. Neurons (or groups of neurons) all over the brain are obviously tuned to specific things in specific contexts - place cells are an obvious example, as are the cells in orientation columns, and indeed, much of the early visual system. That's why we can decode place from hippocampal recordings in dreaming rats. But in many cases, the same cells are tuned to something else in another context (again, think of place cells). The notion of "holographically" distributed representations is a nice abstract idea, but I doubt if it applies anywhere in the brain. However, any object/concept, etc., is better represented by several units (neurons, columns, hypercolumns, whatever) than by a single one, and in a way that makes these representations both modular and redundant enough to ensure robustness. Two old but still very interesting papers in this regard are: Tsunoda K, Yamane Y, Nishizaki M, & Tanifuji M (2001) Complex objects are represented in macaque inferotemporal cortex by the combination of feature columns. Nat Neurosci 4:832-838. Wang G, Tanaka K, & Tanifuji M (1996) Optical imaging of functional organization in the monkey inferotemporal cortex. Science 272:1665-1668. If there's one big lesson biologists have learned in the last 50 years, it is the importance of modularity - not in the Fodorian sense, but in the sense of Herb Simon, Jordan Pollack, et al. The more we think of the brain as a complex system, the clearer the importance of modularity and synergy will become. If all other complex systems exploit multi-scale modularity and synergistic coordination to achieve all sorts of useful attributes, it's inconceivable that the brain does not.
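[Editor's note] The point above, that individually tuned cells and a distributed population readout coexist, can be illustrated with a toy decoding sketch (purely illustrative numbers, not any published model): each "place cell" carries local meaning via its tuning curve, yet position is recovered by reading the whole population at once.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "place cell" population on a 1-D track: each cell prefers one
# position (locally meaningful), but position is decoded from the
# whole population (a distributed readout). All numbers are assumptions.
n_cells = 50
centers = np.linspace(0.0, 1.0, n_cells)   # preferred place of each cell
width = 0.08                               # tuning-curve width

def rates(pos):
    """Gaussian tuning curves: firing rate of every cell at position pos."""
    return np.exp(-0.5 * ((pos - centers) / width) ** 2)

def decode(r):
    """Population-vector readout: rate-weighted mean of preferred places."""
    return (r * centers).sum() / r.sum()

true_pos = 0.37
r = rates(true_pos) + 0.05 * rng.random(n_cells)   # noisy "recording"
est = decode(r)                                    # lands near true_pos
```

No single cell's rate pins down the position; the estimate comes from the pattern across all fifty, which is the sense in which a code can be tuned cell-by-cell and still be distributed.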
Too much of AI - including neural networks - has been oblivious to biology, and too confident in the ability of abstractions to solve very complicated problems. The only way "real" AI will be achieved is by getting much closer to biology - not just in the superficial use of brain-like neural networks, but with much deeper attention to all aspects of the biological processes that still remain the only producers of intelligence known to us. Though modular representations can lead to better explainability to a limited degree (again, think of the dreaming rat), I am quite skeptical about "explainable AI" in general. Full explainability (e.g., including explainability of motivations) implies reductionistic analysis, but true intelligence will always be emergent and, by definition, quite full of surprises. The more "explainable" we try to make AI, the more we are squeezing out exactly that emergent creativity that is the hallmark of intelligence. Of course, such a demand is fine when we are just building machine learning-based tools for driving cars or detecting spam, but that is far from true general intelligence. We will only have built true AI when the thing we have created is as mysterious to us in its purposes as our cat or our teenage child :-). But that is a whole new debate. The special issue sounds like an interesting idea. Cheers Ali Ali A. Minai, Ph.D. Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Past-President (2015-2016) International Neural Network Society Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Sun, Nov 7, 2021 at 2:15 PM Asim Roy wrote: All, Amir Hussain, Editor-in-Chief of Cognitive Computation, is inviting us to do a special issue on the topics under discussion here.
They could be short position papers summarizing ideas for moving forward in this field. He promised reviews within two weeks. If that works out, we could have the special issue published rather quickly. Please email me if you are interested. Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy From: Yoshua Bengio > Sent: Sunday, November 7, 2021 8:55 AM To: Asim Roy > Cc: Adam Krawitz >; connectionists at cs.cmu.edu; Juyang Weng > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Asim, You can have your cake and eat it too with modular neural net architectures. You still have distributed representations but you have modular specialization. Many of my papers since 2019 are on this theme. It is consistent with the specialization seen in the brain, but keep in mind that there is a huge number of neurons there, and you still don't see single grand-mother cells firing alone, they fire in a pattern that is meaningful both locally (in the same region/module) and globally (different modules cooperate and compete according to the Global Workspace Theory and Neural Workspace Theory which have inspired our work). Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concept (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity). -- Yoshua I'm overwhelmed by emails, so I won't be able to respond quickly or directly. 
Please write to my assistant in case of time sensitive matter or if it entails scheduling: julie.mongeau at mila.quebec Le dim. 7 nov. 2021, ? 01 h 46, Asim Roy > a ?crit : Over a period of more than 25 years, I have had the opportunity to argue about the brain in both public forums and private discussions. And they included very well-known scholars such as Walter Freeman (UC-Berkeley), Horace Barlow (Cambridge; great grandson of Charles Darwin), Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of whom are on this list. And many became good friends through these debates. We argued about many issues over the years, but the one that baffled me the most was the one about localist vs. distributed representation. Here?s the issue. As far as I know, although all the Nobel prizes in the field of neurophysiology ? from Hubel and Wiesel (simple and complex cells) and Moser and O?Keefe (grid and place cells) to the current one on discovery of temperature and touch sensitive receptors and neurons - are about finding ?meaning? in single or a group of dedicated cells, the distributed representation theory has yet to explain these findings of ?meaning.? Contrary to the assertion that the field is open-minded, I think most in this field are afraid the to cross the red line. Horace Barlow was the exception. He was perhaps the only neuroscientist who was willing to cross the red line and declare that ?grandmother cells will be found.? After a debate on this issue in 2012, which included Walter Freeman and others, Horace visited me in Phoenix at the age of 91 for further discussion. If the field is open minded, would love to hear how distributed representation is compatible with finding ?meaning? in the activations of single or a dedicated group of cells. 
Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy From: Connectionists > On Behalf Of Adam Krawitz Sent: Friday, November 5, 2021 10:01 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Tsvi, I?m just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress: 1. If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach they will be pointed out, but that is all part of the scientific exchange. Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a ?new? approach to modelling cognition). I?m sure others can provide more examples. 2. I?m much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively (or at least as effectively) as the existing approaches, and they will come running. Examples of this include: Tesauro?s TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.?s convolutional NN for the MNIST digits. Clearly communicate the novel contribution of your approach and I think you will find a receptive audience. 
Thanks, Adam From: Connectionists > On Behalf Of Tsvi Achler Sent: November 4, 2021 9:46 AM To: gary at ucsd.edu Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Lastly Feedforward methods are predominant in a large part because they have financial backing from large companies with advertising and clout like Google and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly and then presenting to a network to learn. As far as I know, no one is doing this in the community, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives we are cheating ourselves from pursuing other more rigorous brain-like methods. Sincerely, -Tsvi On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler > wrote: Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book which describes an Air Conditioner AC type feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition, The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-a-vis its inputs. Thus for context, regulatory feedback is not a field-like inhibition, it is very directed based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with changes in Homeostatic regulation in experiments occurring in a slower time scale). The regulatory feedback model describes the effect and role in recognition of those regulated connections in real time during recognition. 
I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks, that the system is based on politics and that is why certain work is not cited like it should, but even worse these politics are here in the group today and they continue to very strongly influence decisions in the connectionist community and holds us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu > wrote: Tsvi - While I think Randy and Yuko's book is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version. Randy & Yuko's models take into account feedback and inhibition. On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler > wrote: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback published in several low impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them and may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book but I dont have an academic budget that allows me access to it and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes with Steve Grossberg's book, I cannot easily look inside. 
With regards to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of very few models that have it. The Regulatory Feedback model I have developed (Michael Spratling studies a similar model as well) is built primarily from the vigilance type of connections and allows multiple neurons to be evaluated at the same time, and continuously, during recognition in order to determine which neurons (single or multiple together) match the inputs best without lateral inhibition. Unfortunately, within conferences and talks dominated by the Adaptive Resonance crowd, I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small, self-selected committees: the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms, such as Google's, can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia, developed in the Middle Ages (and their mutual synergies), blocks the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread. Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S > wrote: Tsvi, While deep learning and feedforward networks have an outsized popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning.
A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also, Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ________________________________ From: Connectionists > on behalf of Tsvi Achler > Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen > Cc: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, focusing especially on the connectionist community. In general, when analyzing impact factors etc., the most important progress gets silenced until the mainstream picks it up (impact factors in novel research: www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled recurrent networks for learning time, where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning. This is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals and no funding.
But they are important and may negate the need for rehearsal required by feedforward methods. Thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen > wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity.
Thank you all in advance for your help! Jürgen Schmidhuber -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir

From pubconference at gmail.com Mon Nov 8 14:18:03 2021 From: pubconference at gmail.com (Pub Conference) Date: Mon, 8 Nov 2021 14:18:03 -0500 Subject: Connectionists: [Journal] IJAIT Call for Special Issue on Explainable Machine Learning in Methodologies and Applications Message-ID: Computational Intelligence Journal Special Issue on Explainable Machine Learning in Methodologies and Applications https://onlinelibrary.wiley.com/page/journal/14678640/homepage/special_issues.htm This special issue aims to bring together original research articles and review articles that will present the latest theoretical and technical advancements in machine and deep learning models. We hope that this Special Issue will: 1) improve the understanding and explainability of machine learning and deep neural networks; 2) enhance the mathematical foundations of deep neural networks; and 3) increase the computational efficiency and stability of the machine and deep learning training process with new algorithms that will scale. Potential topics include but are not limited to the following:
- Interpretability of deep learning models
- Quantifying or visualizing the interpretability of deep neural networks
- Neural networks, fuzzy logic, and evolutionary-based interpretable control systems
-
Supervised, unsupervised, and reinforcement learning
- Extracting understanding from large-scale and heterogeneous data
- Dimensionality reduction of large-scale and complex data, and sparse modeling
- Stability improvement of deep neural network optimization
- Optimization methods for deep learning
- Privacy-preserving machine learning (e.g., federated machine learning, learning over encrypted data)
- Novel deep learning approaches in the applications of image/signal processing, business intelligence, games, healthcare, bioinformatics, and security

Important Dates
Deadline for Submissions: March 31, 2022
First Review Decision: May 31, 2022
Revisions Due: June 30, 2022
Deadline for 2nd Review: July 31, 2022
Final Decisions: August 31, 2022
Final Manuscript: September 30, 2022

From alessio.ferone at uniparthenope.it Tue Nov 9 06:28:42 2021 From: alessio.ferone at uniparthenope.it (ALESSIO FERONE) Date: Tue, 9 Nov 2021 11:28:42 +0000 Subject: Connectionists: EAIS 2022 - Special Session on Active Learning for Concept and Feature Drift Detection Message-ID: <3E5C545F-9876-4D35-8AEC-891338DCF452@uniparthenope.it> ******Apologies for multiple posting****** _____________________________________________ EAIS 2022 Special Session on Active Learning for Concept and Feature Drift Detection https://sites.google.com/view/cisslabssfuzzieee/home https://cyprusconferences.org/eais2022/ _____________________________________________ In the Deep Learning (DL) hyperconnected era, classifiers consume labels and labelled data at unprecedented rates. While computational power is no longer an issue in model training, the time, cost, or human supervision required to produce high-quality labelled data seriously limits the maximum size of the available labelled datasets and hence the extensibility of DL in many challenging domains.
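[Editor's note: to make the label-cost point above concrete, here is a toy sketch of a learner that buys labels only when it is uncertain, plus occasional random queries so that an abrupt change in the underlying concept is eventually noticed. The one-dimensional stream, thresholds, and learning rate are all invented for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

def run_stream(n=2000, window=0.15, explore=0.05, lr=0.5):
    """Online logistic learner on a drifting 1-D stream.

    A label is requested only when the prediction is near 0.5 (uncertainty
    sampling) or, with small probability, at random (so drift gets noticed).
    """
    w, b, queried = 0.0, 0.0, 0
    for t in range(n):
        x = rng.uniform(-1, 1)
        boundary = 0.0 if t < n // 2 else 0.5      # abrupt concept drift halfway
        y = int(x > boundary)                      # true label, hidden until queried
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))     # current prediction
        if abs(p - 0.5) < window or rng.random() < explore:
            queried += 1                           # buy this label
            g = p - y                              # log-loss gradient
            w -= lr * g * x
            b -= lr * g
    return w, b, queried

w, b, queried = run_stream()
print(f"labels requested: {queried} / 2000; learned boundary ~ {-b / w:.2f}")
```

Only a fraction of the stream is ever labelled, and the learned boundary tracks the drifted concept; with uncertainty sampling alone (no random queries), confidently wrong regions after the drift might never be queried at all.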
To tackle the label-scarcity problem, Transfer Learning (TL) and Active Learning (AL) have both been exploited, the former using pre-trained models from a different domain and the latter choosing the best subset of instances to be labelled. Specifically, in the case of streaming data, labels may not be available for each instance of the stream (or may be very costly, or arrive too fast for a human expert), data may have a very short lifespan, batch methods are inapplicable, and issues due to concept and feature drift, or model switch, may prevent training from being effective. The aim of the special session is to host original papers and reviews on recent research advances and state-of-the-art methods in the fields of Computational Intelligence, Machine Learning, Data Mining and Distributed Computing methodologies concerning TL and AL techniques for concept and feature drift detection on streaming data. Relevant topics within this context include, but are not limited to:
- Computational Intelligence
- Machine Learning and Deep Learning
- Sparse Coding
- Data Mining
- Fuzzy and Neuro-Fuzzy Systems
- Probabilistic and statistical modelling
- Active Learning
- Transfer Learning
- Cost-Sensitive Learning
- Online Learning
- Concept Drift Detection
- Feature Drift Detection
- Online Feature Learning/Extraction/Selection

Important Dates:
Paper submission: January 10, 2022
Notification of acceptance/rejection: February 19, 2022
Camera-ready submission: March 20, 2022

Submission: Submitted papers should not exceed 8 pages, plus at most 2 overlength pages. Submissions of full papers are accepted online through the EasyChair system: https://cyprusconferences.org/eais2022/index.php/submission/

From tiako at ieee.org Tue Nov 9 18:05:46 2021 From: tiako at ieee.org (Pierre F.
Tiako) Date: Tue, 9 Nov 2021 17:05:46 -0600 Subject: Connectionists: Free Call for Participation 2021 OkIP Conf on Automated & Intelligent Systems, Nov 17, Online & Oklahoma City, USA Message-ID: --- Free Free Free Call for Participation ------------- 2021 OkIP International Conference on Automated and Intelligent Systems (CAIS) MNTC Conference Center, Oklahoma City, OK, USA & Online November 17, 2021 https://eventutor.com/e/CAIS001 ** Keynotes/Invited Talks "Machine Learning for Critical Systems Security" - Nancy R. Mead, Ph.D., Carnegie Mellon University, USA "Blockchain Technology and its implications in Business Applications and Healthcare IT" - Akhil Kumar, Ph.D., Penn State University, USA "Sustainable Energy Harvesting and Wireless Power Transfer Systems" - Manos M. Tentzeris, Ph.D., Georgia Institute of Technology, USA ** Tentative Program https://eventutor.com/event/4/page/40-programschedule ** Registration: Free, or with fees, entirely at your discretion. First register with Eventutor [1], then complete and email us the full registration form [2]. [1] https://eventutor.com/event/4/registrations/1/ [2] https://eventutor.com/event/4/attachments/6/58/2021_OkIP_Non_Authors_Registration_Form.pdf ** Next year's conference: October 3-6, 2022, MNTC Conference Center, Oklahoma City, OK, USA ** Sponsor Oklahoma International Publishing (OkIP) 1911 Linwood Blvd Suite 100 Oklahoma City OK 73106, USA Please feel free to contact us with any inquiry at: info at okipublishing.com Pierre Tiako General Chair

From juyang.weng at gmail.com Tue Nov 9 11:26:29 2021 From: juyang.weng at gmail.com (Juyang Weng) Date: Tue, 9 Nov 2021 11:26:29 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Dear Ali, I agree that "localized vs. distributed" as a dichotomy is too simple, as I discussed with Asim before. However, "importance of modularity" is just a slightly higher-level mistake from people who worked on "neuroscience" experiments, of the same nature as the "localized vs. distributed" dichotomy. "Localized vs. distributed" is too simple; "modularity" is also too simple to be true. Unfortunately, neuroscientists have spent time on many experiments whose answers have already been predicted by our Developmental Networks (DN), especially DN-2. However, there is one known model that is holistic in terms of general-purpose computing: the Universal Turing Machine, although it was not for brains to start with. Researchers in this challenging brain subject did not seem to pay sufficient attention to the emergent Universal Turing Machine (in our Developmental Networks) as a holistic model of an entire brain. If we look into how excitatory cells and inhibitory cells migrate and connect, as I explained in my NAI book, https://www.amazon.com/Natural-Artificial-Intelligence-Juyang-Weng/dp/0985875712 it is impossible to have brain modules as you stated. If we insist on discussing Brodmann areas, then Brodmann areas are primarily resource areas (like, though not exactly the same as, registers, cache, RAM, and disks), not functional areas. DN-1 has modules, but they are resource modules, not functional modules. DN-2 is more correct in that even the boundaries of resource modules are adaptive. Some lower brain areas synthesize specific types of neurotransmitters (as explained in my book above), e.g., serotonin, but such areas are still resource modules, not brain-function modules. A brain uses serotonin for many different functions.
In summary, no Brodmann areas should be assigned to any specific brain functions (like edges, but resources), including lower brain areas, such as raphe nuclei (that synthesize serotonin) and hippocampus (also resources, not functions). Your cited examples are well known and support my above fundamental view, backed up by DN as a full algorithmic model of the entire brain (not brain modules). That is, "place cells" are a misnomer. I am writing something sensitive. I hope this connectionist at cmu will not reject it. Best regards, -John On Tue, Nov 9, 2021 at 2:55 AM Ali Minai wrote: > Asim > > This is a perennial issue, but I don't think one should see "localized vs. > distributed" as a dichotomy. Neurons (or groups of neurons) all over the > brain are obviously tuned to specific things in specific contexts - place > cells are an obvious example, as are the cells in orientation columns, and > indeed, much of the early visual system. That's why we can decode place > from hippocampal recordings in dreaming rats. But in many cases, the same > cells are tuned to something else in another context (again, think of place > cells). The notion of "holographically" distributed representations is a > nice abstract idea but I doubt if it applies anywhere in the brain. > However, any object/concept, etc., is better represented by several units > (neurons, columns, hypercolumns, whatever) than by a single one, and in a > way that makes these representations both modular and redundant enough to > ensure robustness. Two old but still very interesting papers in this regard > are: > > Tsunoda K, Yamane Y, Nishizaki M, & Tanifuji M (2001) Complex objects are > represented in macaque inferotemporal cortex by the combination of > feature columns. Nat Neurosci 4:832-838. > > > Wang, G., Tanaka, K. & Tanifuji M (2001) Optical imaging of functional > organization in the monkey inferotemporal cortex. Science 272:1665-1668.
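[Editor's note: the place decoding mentioned above - reading position out of hippocampal population activity - can be illustrated with a toy population-vector readout. The tuning curves, cell count, and noise level below are invented for illustration only.]

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(0, 100, 30)   # each cell's preferred place on a 100 cm track
sigma = 8.0                         # place-field width

def rates(pos):
    """Noisy Gaussian place-field responses at a true position."""
    clean = np.exp(-(pos - centers) ** 2 / (2 * sigma ** 2))
    return np.clip(clean + 0.02 * rng.normal(size=centers.size), 0, None)

def decode(r):
    """Population-vector readout: rate-weighted mean of preferred places."""
    return float((r * centers).sum() / r.sum())

print(decode(rates(42.0)))   # close to the true position, 42
```

No single cell carries the position by itself, yet the population readout recovers it; this is the sense in which tuned ("localized") cells and a distributed population code coexist.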
> > > If there's one big lesson biologists have learned in the last 50 years, it > is the importance of modularity - not in the Fodorian sense but in the > sense of Herb Simon, Jordan Pollack, et al. The more we think of the brain > as a complex system, the clearer the importance of modularity and synergy > will become. If all other complex systems exploit multi-scale modularity > and synergistic coordination to achieve all sorts of useful attributes, > it's inconceivable that the brain does not. Too much of AI - including > neural networks - has been oblivious to biology, and too confident in the > ability of abstractions to solve very complicated problems. The only way > "real" AI will be achieved is by getting much closer to biology - not just > in the superficial use of brain-like neural networks, but with much deeper > attention to all aspects of the biological processes that still remain the > only producers of intelligence known to us. > > > Though modular representations can lead to better explainability to a > limited degree (again, think of the dreaming rat), I am quite skeptical > about "explainable AI" in general. Full explainability (e.g., including > explainability of motivations) implies reductionistic analysis, but true > intelligence will always be emergent and, by definition, quite full of > surprises. The more "explainable" we try to make AI, the more we are > squeezing out exactly that emergent creativity that is the hallmark of > intelligence. Of course, such a demand is fine when we are just building > machine learning-based tools for driving cars or detecting spam, but that > is far from true general intelligence. We will only have built true AI when > the thing we have created is as mysterious to us in its purposes as our cat > or our teenage child :-). But that is a whole new debate. > > > The special issue sounds like an interesting idea. > > > Cheers > > > Ali > > > > *Ali A.
Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > > Past-President (2015-2016) > International Neural Network Society > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > > On Sun, Nov 7, 2021 at 2:15 PM Asim Roy wrote: > >> All, >> >> >> >> Amir Hussain, Editor-in-Chief, Cognitive Computation, is inviting us to >> do a special issue on the topics under discussion here. They could be short >> position papers summarizing ideas for moving forward in this field. He >> promised reviews within two weeks. If that works out, we could have the >> special issue published rather quickly. >> >> >> >> Please email me if you are interested. >> >> >> >> Asim Roy >> >> Professor, Arizona State University >> >> Lifeboat Foundation Bios: Professor Asim Roy >> >> >> >> >> *From:* Yoshua Bengio >> *Sent:* Sunday, November 7, 2021 8:55 AM >> *To:* Asim Roy >> *Cc:* Adam Krawitz ; connectionists at cs.cmu.edu; Juyang >> Weng >> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >> Lecture, etc. >> >> >> >> Asim, >> >> >> >> You can have your cake and eat it too with modular neural net >> architectures. You still have distributed representations but you have >> modular specialization. Many of my papers since 2019 are on this theme. It >> is consistent with the specialization seen in the brain, but keep in mind >> that there is a huge number of neurons there, and you still don't see >> single grand-mother cells firing alone, they fire in a pattern that is >> meaningful both locally (in the same region/module) and globally (different >> modules cooperate and compete according to the Global Workspace Theory and >> Neural Workspace Theory which have inspired our work). 
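[Editor's note: the modular-specialization picture Bengio describes above - modules that compete for an input yet cooperate on the output, with distributed activity inside each module - can be illustrated with a toy gated mixture of modules. The sizes, gating rule, and top-k choice are assumptions for illustration, not the architecture from the papers mentioned.]

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_modules, k = 8, 4, 2                         # feature dim, modules, winners

W_gate = rng.normal(size=(n_modules, d))          # gate: decides who competes
W_mod = 0.1 * rng.normal(size=(n_modules, d, d))  # one expert transform per module

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    """Modules compete via the gate; the top-k winners cooperate on the
    output, while activity inside each module stays distributed."""
    scores = softmax(W_gate @ x)
    winners = np.argsort(scores)[-k:]                              # competition
    out = sum(scores[m] * np.tanh(W_mod[m] @ x) for m in winners)  # cooperation
    return out, winners

out, winners = forward(rng.normal(size=d))
print("active modules:", sorted(int(m) for m in winners))
```

Each input activates a distributed pattern inside a few specialized modules rather than a single "grandmother" unit, which is one concrete way to hold both views at once.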
Finally, our recent >> work on learning high-level 'system-2'-like representations and their >> causal dependencies seeks to learn 'interpretable' entities (with natural >> language) that will emerge at the highest levels of representation (not >> clear how distributed or local these will be, but much more local than in a >> traditional MLP). This is a different form of disentangling than adopted in >> much of the recent work on unsupervised representation learning but shares >> the idea that the "right" abstract concepts (related to those we can name >> verbally) will be "separated" (disentangled) from each other (which >> suggests that neuroscientists will have an easier time spotting them in >> neural activity). >> >> >> >> -- Yoshua >> >> >> >> *I'm overwhelmed by emails, so I won't be able to respond quickly or >> directly. Please write to my assistant in case of time-sensitive matter or >> if it entails scheduling: julie.mongeau at mila.quebec >> * >> >> >> >> On Sun., Nov. 7, 2021 at 01:46, Asim Roy wrote: >> >> Over a period of more than 25 years, I have had the opportunity to argue >> about the brain in both public forums and private discussions. And they >> included very well-known scholars such as Walter Freeman (UC-Berkeley), >> Horace Barlow (Cambridge; great grandson of Charles Darwin), Jay McClelland >> (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen >> Institute), Teuvo Kohonen (Finland) and many others, some of whom are on >> this list. And many became good friends through these debates. >> >> >> >> We argued about many issues over the years, but the one that baffled me >> the most was the one about localist vs. distributed representation. Here's >> the issue. As far as I know, although all the Nobel prizes in the field of >> neurophysiology -
from Hubel and Wiesel (simple and complex cells) and >> Moser and O'Keefe (grid and place cells) to the current one on discovery of >> temperature and touch sensitive receptors and neurons - are about finding >> "meaning" in single or a group of dedicated cells, the distributed >> representation theory has yet to explain these findings of "meaning." >> Contrary to the assertion that the field is open-minded, I think most in >> this field are afraid to cross the red line. >> >> >> >> Horace Barlow was the exception. He was perhaps the only neuroscientist >> who was willing to cross the red line and declare that "grandmother cells >> will be found." After a debate on this issue in 2012, which included Walter >> Freeman and others, Horace visited me in Phoenix at the age of 91 for >> further discussion. >> >> >> >> If the field is open-minded, I would love to hear how distributed >> representation is compatible with finding "meaning" in the activations of >> single or a dedicated group of cells. >> >> >> >> Asim Roy >> >> Professor, Arizona State University >> >> Lifeboat Foundation Bios: Professor Asim Roy >> >> >> >> *From:* Connectionists *On >> Behalf Of *Adam Krawitz >> *Sent:* Friday, November 5, 2021 10:01 AM >> *To:* connectionists at cs.cmu.edu >> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >> Lecture, etc. >> >> >> >> Tsvi, >> >> >> >> I'm just a lurker on this list, with no skin in the game, but perhaps >> that gives me a more neutral perspective. In the spirit of progress: >> >> >> >> 1. If you have a neural network approach that you feel provides a new >> and important perspective on cognitive processes, then write up a paper >> making that argument clearly, and I think you will find that the community >> is incredibly open to that. Yes, if they see holes in the approach they >> will be pointed out, but that is all part of the scientific exchange.
>> Examples of this approach include: Elman (1990) Finding Structure in Time, >> Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow >> a Mind: Statistics, Structure, and Abstraction (not neural nets, but a >> ?new? approach to modelling cognition). I?m sure others can provide more >> examples. >> 2. I?m much less familiar with how things work on the applied side, >> but I have trouble believing that Google or anyone else will be dismissive >> of a computational approach that actually works. Why would they? They just >> want to solve problems efficiently. Demonstrate that your approach can >> solve a problem more effectively (or at least as effectively) as the >> existing approaches, and they will come running. Examples of this include: >> Tesauro?s TD-Gammon, which was influential in demonstrating the power of >> RL, and LeCun et al.?s convolutional NN for the MNIST digits. >> >> >> >> Clearly communicate the novel contribution of your approach and I think >> you will find a receptive audience. >> >> >> >> Thanks, >> >> Adam >> >> >> >> >> >> *From:* Connectionists *On >> Behalf Of *Tsvi Achler >> *Sent:* November 4, 2021 9:46 AM >> *To:* gary at ucsd.edu >> *Cc:* connectionists at cs.cmu.edu >> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >> Lecture, etc. >> >> >> >> Lastly Feedforward methods are predominant in a large part because they >> have financial backing from large companies with advertising and clout like >> Google and the self-driving craze that never fully materialized. >> >> >> >> Feedforward methods are not fully connectionist unless rehearsal for >> learning is implemented with neurons. That means storing all patterns, >> mixing them randomly and then presenting to a network to learn. As far as >> I know, no one is doing this in the community, so feedforward methods are >> only partially connectionist. 
By allowing popularity to predominate and >> choking off funds and presentation of alternatives we are cheating >> ourselves from pursuing other more rigorous brain-like methods. >> >> >> >> Sincerely, >> >> -Tsvi >> >> >> >> >> >> On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: >> >> Gary- Thanks for the accessible online link to the book. >> >> >> >> I looked especially at the inhibitory feedback section of the book which >> describes an Air Conditioner AC type feedback. >> >> It then describes a general field-like inhibition based on all >> activations in the layer. It also describes the role of inhibition in >> sparsity and feedforward inhibition, >> >> >> >> The feedback described in Regulatory Feedback is similar to the AC >> feedback but occurs for each neuron individually, vis-a-vis its inputs. >> >> Thus for context, regulatory feedback is not a field-like inhibition, it >> is very directed based on the neurons that are activated and their inputs. >> This sort of regulation is also the foundation of Homeostatic Plasticity >> findings (albeit with changes in Homeostatic regulation in experiments >> occurring in a slower time scale). The regulatory feedback model describes >> the effect and role in recognition of those regulated connections in real >> time during recognition. >> >> >> >> I would be happy to discuss further and collaborate on writing about the >> differences between the approaches for the next book or review. >> >> >> >> And I want to point out to folks, that the system is based on politics >> and that is why certain work is not cited like it should, but even worse >> these politics are here in the group today and they continue to very >> strongly influence decisions in the connectionist community and holds us >> back. 
>> >> >> >> Sincerely, >> >> -Tsvi >> >> >> >> On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: >> >> Tsvi - While I think Randy and Yuko's book >> is >> actually somewhat better than the online version (and buying choices on >> amazon start at $9.99), there *is* an online version. >> >> >> >> Randy & Yuko's models take into account feedback and inhibition. >> >> >> >> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: >> >> Daniel, >> >> >> >> Does your book include a discussion of Regulatory or Inhibitory Feedback >> published in several low impact journals between 2008 and 2014 (and in >> videos subsequently)? >> >> These are networks where the primary computation is inhibition back to >> the inputs that activated them and may be very counterintuitive given >> today's trends. You can almost think of them as the opposite of Hopfield >> networks. >> >> >> >> I would love to check inside the book but I dont have an academic budget >> that allows me access to it and that is a huge part of the problem with how >> information is shared and funding is allocated. I could not get access to >> any of the text or citations especially Chapter 4: "Competition, Lateral >> Inhibition, and Short-Term Memory", to weigh in. >> >> >> >> I wish the best circulation for your book, but even if the Regulatory >> Feedback Model is in the book, that does not change the fundamental problem >> if the book is not readily available. >> >> >> >> The same goes with Steve Grossberg's book, I cannot easily look inside. >> With regards to Adaptive Resonance I dont subscribe to lateral inhibition >> as a predominant mechanism, but I do believe a function such as vigilance >> is very important during recognition and Adaptive Resonance is one of >> a very few models that have it. 
The Regulatory Feedback model I have >> developed (and Michael Spratling studies a similar model as well) is built >> primarily using the vigilance type of connections and allows multiple >> neurons to be evaluated at the same time and continuously during >> recognition in order to determine which (single or multiple neurons >> together) match the inputs the best without lateral inhibition. >> >> >> >> Unfortunately within conferences and talks predominated by the Adaptive >> Resonance crowd I have experienced the familiar dismissiveness and did not >> have an opportunity to give a proper talk. This goes back to the larger >> issue of academic politics based on small self-selected committees, the >> same issues that exist with the feedforward crowd, and pretty much all of >> academia. >> >> >> >> Today's information age algorithms such as Google's can determine >> relevance of information and ways to display them, but hegemony of the >> journal systems and the small committee system of academia developed in the >> middle ages (and their mutual synergies) block the use of more modern >> methods in research. Thus we are stuck with this problem, which especially >> affects those that are trying to introduce something new and >> counterintuitive, and hence the results described in the two National >> Bureau of Economic Research articles I cited in my previous message. >> >> >> >> Thomas, I am happy to have more discussions and/or start a different >> thread. >> >> >> >> Sincerely, >> >> Tsvi Achler MD/PhD >> >> >> >> >> >> >> >> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: >> >> Tsvi, >> >> >> >> While deep learning and feedforward networks have an outsize popularity, >> there are plenty of published sources that cover a much wider variety of >> networks, many of them more biologically based than deep learning. 
A >> treatment of a range of neural network approaches, going from simpler to >> more complex cognitive functions, is found in my textbook *Introduction >> to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also >> Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021) >> emphasizes a variety of architectures with a strong biological basis. >> >> >> >> >> >> Best, >> >> >> >> >> >> Dan Levine >> ------------------------------ >> >> *From:* Connectionists >> on behalf of Tsvi Achler >> *Sent:* Saturday, October 30, 2021 3:13 AM >> *To:* Schmidhuber Juergen >> *Cc:* connectionists at cs.cmu.edu >> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >> Lecture, etc. >> >> >> >> Since the title of the thread is Scientific Integrity, I want to point >> out some issues about trends in academia, focusing especially on >> the connectionist community. >> >> >> >> In general, analyzing impact factors etc. shows that the most important progress gets >> silenced until the mainstream picks it up (Impact Factors in novel >> research, www.nber.org/.../working_papers/w22180/w22180.pdf >> ) and >> often this may take a generation >> https://www.nber.org/.../does-science-advance-one-funeral... >> >> . >> >> >> >> The connectionist field is stuck on feedforward networks and variants >> such as with inhibition of competitors (e.g. lateral inhibition), or other >> variants that are sometimes labeled as recurrent networks for learning time, >> where the feedforward networks can be rewound in time. >> >> >> >> This stasis is specifically occurring with the popularity of deep >> learning. This is often portrayed as neurally plausible connectionism but >> requires an implausible amount of rehearsal and is not connectionist if >> this rehearsal is not implemented with neurons (see video link for further >> clarification). >> >> >> >> Models which have true feedback (e.g.
back to their own inputs) cannot >> learn by backpropagation, but there is plenty of evidence these types of >> connections exist in the brain and are used during recognition. Thus they >> get ignored: no talks in universities, no featuring in "premier" journals, >> and no funding. >> >> >> >> But they are important and may negate the need for rehearsal as needed in >> feedforward methods. Thus they may be essential for moving connectionism >> forward. >> >> >> >> If the community is truly dedicated to brain-motivated algorithms, I >> recommend giving more time to networks other than feedforward networks. >> >> >> >> Video: >> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 >> >> >> >> >> Sincerely, >> >> Tsvi Achler >> >> >> >> >> >> >> >> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen >> wrote: >> >> Hi, fellow artificial neural network enthusiasts! >> >> The connectionists mailing list is perhaps the oldest mailing list on >> ANNs, and many neural net pioneers are still subscribed to it. I am hoping >> that some of them - as well as their contemporaries - might be able to >> provide additional valuable insights into the history of the field. >> >> Following the great success of massive open online peer review (MOOR) for >> my 2015 survey of deep learning (now the most cited article ever published >> in the journal Neural Networks), I've decided to put forward another piece >> for MOOR. I want to thank the many experts who have already provided me >> with comments on it. Please send additional relevant references and >> suggestions for improvements for the following draft directly to me at >> juergen at idsia.ch: >> >> >> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >> >> >> The above is a point-for-point critique of factual errors in ACM's >> justification of the ACM A. M. Turing Award for deep learning and a >> critique of the Turing Lecture published by ACM in July 2021.
This work can >> also be seen as a short history of deep learning, at least as far as ACM's >> errors and the Turing Lecture are concerned. >> >> I know that some view this as a controversial topic. However, it is the >> very nature of science to resolve controversies through facts. Credit >> assignment is as core to scientific history as it is to machine learning. >> My aim is to ensure that the true history of our field is preserved for >> posterity. >> >> Thank you all in advance for your help! >> >> Jürgen Schmidhuber >> >> >> >> >> >> >> >> >> -- >> >> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >> >> Computer Science and Engineering 0404 >> IF USING FEDEX INCLUDE THE FOLLOWING LINE: >> CSE Building, Room 4130 >> University of California San Diego - >> 9500 Gilman Drive # 0404 >> La Jolla, Ca. 92093-0404 >> >> Email: gary at ucsd.edu >> Home page: http://www-cse.ucsd.edu/~gary/ >> >> >> Schedule: http://tinyurl.com/b7gxpwo >> >> >> >> >> *Listen carefully,* >> *Neither the Vedas* >> *Nor the Qur'an* >> *Will teach you this:* >> *Put the bit in its mouth,* >> *The saddle on its back,* >> *Your foot in the stirrup,* >> *And ride your wild runaway mind* >> *All the way to heaven.* >> >> *-- Kabir* >> >> -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolano0 at hotmail.com Wed Nov 10 01:47:57 2021 From: mmolano0 at hotmail.com (manuel molano) Date: Wed, 10 Nov 2021 06:47:57 +0000 Subject: Connectionists: Second session of the Brains Through Time Reading Club Message-ID: Dear all, We would like to bring to your attention the second session of the Brains Through Time Reading Club! It will be on the first of December (6pm CEST / 12 EST) and will include the participation of Maria Tosches, Luis Puelles and Paul Cisek. We will dedicate ~2 hours to present and discuss the main ideas of the second chapter of the book: The Origin of Vertebrates: Invertebrate Chordates and Cyclostomes. You can register HERE.
In case you missed the first session, you can watch it HERE or read a quick summary of the first chapter HERE. See you there! The Braining Club About the BTT reading club: As the name implies, the idea is to review the book Brains Through Time, by Georg Striedter and Glenn Northcutt. Brains Through Time is a masterful synthesis of much of what is known about brain evolution, and offers great insights to anyone interested in a broad understanding of how the brain produces behavior. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lfbaa at di.ubi.pt Wed Nov 10 04:24:35 2021 From: lfbaa at di.ubi.pt (Luís Alexandre) Date: Wed, 10 Nov 2021 09:24:35 +0000 (WET) Subject: Connectionists: IbPRIA 2022: request for distribution of CFP Message-ID: Subject: IbPRIA 2022, Aveiro, Portugal, May 3-6 Dear Colleagues, It is my pleasure to invite you to join us at the IbPRIA 2022 international conference that will be held in Aveiro (Portugal), May 4-6th, 2022. Tutorials (included in the registration fees) will be offered on the afternoon of May 3rd. IbPRIA is a single-track scientific event looking for new theoretical results, techniques and main applications on any aspect of pattern recognition and image analysis, being co-organised by the Portuguese APRP and Spanish AERFAI chapters of the IAPR (International Association for Pattern Recognition). For more details (invited speakers, tutorials, scientific programme, awards and important dates) please visit the IbPRIA webpage at http://www.ibpria.org/2022/ . All accepted papers will appear in the IbPRIA conference proceedings and will be published in the Springer Lecture Notes in Computer Science Series. In addition, a short list of presented papers will be invited to submit extended versions for possible publication in the Springer journal Pattern Analysis and Applications. Looking forward to seeing you in Aveiro! On behalf of the Local Organizing Committee, Armando J.
Pinho University of Aveiro, Portugal -- Armando J. Pinho Information Systems and Processing Group, IEETA / DETI University of Aveiro, 3810-193 Aveiro, Portugal http://wiki.ieeta.pt/wiki/index.php/Armando_J._Pinho -------------- next part -------------- A non-text attachment was scrubbed... Name: IbPRIA2022_Poster.pdf Type: application/pdf Size: 344822 bytes Desc: IbPRIA2022_Poster.pdf URL: From cactus-internship at tuebingen.mpg.de Wed Nov 10 09:56:14 2021 From: cactus-internship at tuebingen.mpg.de (Hannah Nonnenberg) Date: Wed, 10 Nov 2021 15:56:14 +0100 Subject: Connectionists: Paid Research Internships in AI and Brain Research (CaCTüS) Message-ID: <029f01d7d643$1e672cb0$5b358610$@tuebingen.mpg.de> Our new CaCTüS program offers paid research internships at the Max Planck Institutes in Tübingen (Germany) to students who face significant constraints in their pursuit of a career in AI or brain research. We would be very grateful if you could help us reach students who may benefit from our program. You can do that by distributing the call for applications in your professional networks, among your colleagues and any interested students, or by sharing our posts on social media. You may find details on the program and links to our social media posts below. We also attach a poster for physical distribution. We thank you in advance for any support you are willing to give us. If you have any questions, please feel free to contact us. Kind regards, Hannah Nonnenberg - on behalf of the CaCTüS Internship coordination team - About the CaCTüS Internship program cactus-internship.tuebingen.mpg.de The sciences of biological and artificial intelligence are rapidly growing research fields that need enthusiastic minds with a keen interest in solving challenging questions.
The Max Planck Institutes for Biological Cybernetics and Intelligent Systems offer up to 10 students at the Bachelor or Master level paid three-month internships during the summer of 2022. Successful applicants will work with top-level scientists on research projects spanning machine learning, electrical engineering, theoretical neuroscience, behavioral experiments and data analysis. The CaCTüS Internship is aimed at young scientists who are held back by personal, financial, regional or societal constraints to help them develop their research careers and gain access to first-class education. The program is designed to foster inclusion, diversity, equity and access to excellent scientific facilities. We specifically encourage applications from students living in low- and middle-income countries which are currently underrepresented in the Max Planck Society research community. Application deadline: 3rd January 2022 Social Media Links Twitter: https://twitter.com/MPICybernetics/status/1458390860240986115?s=20 Facebook: https://www.facebook.com/MPICybernetics Instagram: https://www.instagram.com/p/CWF9P_SqA0w/?utm_source=ig_web_copy_link LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:6864156559514071040 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cactus_poster.pdf Type: application/pdf Size: 254061 bytes Desc: not available URL: From jane.zurbruegg at ai.ethz.ch Wed Nov 10 07:21:49 2021 From: jane.zurbruegg at ai.ethz.ch (Zurbrügg Jane) Date: Wed, 10 Nov 2021 12:21:49 +0000 Subject: Connectionists: Call for ETH AI Center Fellowships Message-ID: Dear Members of the Connectionists Mail List, The ETH AI Center is the central hub for artificial intelligence at ETH Zürich – an interdisciplinary coalition with over 100 affiliated professors and research groups.
We would like to advertise our fully-funded PhD and Post-Doc Fellowship opportunities to your students. Consistently ranked amongst the best places to live in the world, Zürich hosts a vibrant ecosystem for research, industry, and start-ups in artificial intelligence and machine learning. The ETH AI Center Fellowship is one of the top programs for pursuing a PhD or Post-Doc in AI & Machine Learning worldwide. As a result, we target extraordinary students who aim to shape the future of AI and like to work in an interdisciplinary setting. Further information can be found at https://ai.ethz.ch/fellows2022. Application deadlines: * PhD Students: November 30, 2021 * Post-Docs: December 15, 2021 If you know somebody who could be interested in applying, we would be grateful if you could support us by forwarding them this message. Please let us know if you have any questions. Best regards, On behalf of the ETH AI Center, Jane Zurbrügg -- Jane Zurbrügg Student Assistant ETH AI Center ETH Zurich CAB E74 Department of Computer Science Universitätstrasse 6 8006 Zurich -------------- next part -------------- An HTML attachment was scrubbed... URL: From coralie.gregoire at insa-lyon.fr Wed Nov 10 12:54:52 2021 From: coralie.gregoire at insa-lyon.fr (Coralie Gregoire) Date: Wed, 10 Nov 2021 18:54:52 +0100 (CET) Subject: Connectionists: [CFP] The ACM Web Conference 2022 Special Tracks - NEW DEADLINES - History of The Web Message-ID: <762450032.322598.1636566892144.JavaMail.zimbra@insa-lyon.fr> [Apologies for the cross-posting, this call is sent to numerous lists you may have subscribed to] [CFP] The ACM Web Conference 2022 Special Tracks - NEW DEADLINES - History of The Web We invite contributions to the Special Tracks of The Web Conference 2022 (formerly known as WWW). The conference will take place online, France, on April 25-29, 2022.
*Important dates: NEW* - Abstract: December 2, 2021 - Full paper: December 9, 2021 - Acceptance notification: January 13, 2022 No rebuttal is foreseen. ------------------------------------------------------------ *Special track History of the Web* Track chairs: Dame Wendy Hall (University of Southampton, UK) and Luc Mariaux (École Centrale de Lyon, France (retired)) You can reach the track chairs at: www2022-history at easychair.org The World Wide Web was invented at CERN by Sir Tim Berners-Lee in 1989, and in 1993 CERN put the World Wide Web software in the public domain. In May 1994 Robert Cailliau organized the First International WWW Conference in Geneva, and following that event, in August 1994 he launched, with Joseph Hardin, the IW3C2, formally incorporated in May 1996 as a non-profit Association under Swiss law. In 2022 this conference will become The ACM Web Conference. The 2022 edition of this conference is therefore the 31st in the series and takes place on the 32nd anniversary of the Web. During this period, the Web and its applications have become widely available around the world and many new technologies have emerged. The evolution of the Web has been shaped by great scientific advances, but also by anecdotal events that have contributed to building the Web as we know it today. After more than thirty years, it is time to keep track of all these events, so we invite all those who participated in this collective adventure to share the information they have. We also invite those whose field of technical, sociological, or philosophical research concerns the evolution or the impact of the Web to submit their work. Three kinds of contributions are expected: - Research papers focussing on the history of the Web, - Papers explaining how the evolution of the Web has impacted our professional or private life, - Papers describing some anecdotal events related to the evolution of the Web.
All submissions will be peer-reviewed and evaluated on the basis of originality, relevance, quality, and technical, sociological, or historical contribution. *Submission guidelines* For the special tracks, submissions are limited to 8 content pages, which should include all figures and tables but exclude supplementary material and references. In addition, you can include 2 additional pages of supplementary material. The total number of pages with supplementary material and references must not exceed 12 pages. The papers must be formatted according to the instructions below. Submissions will be handled via Easychair, at https://easychair.org/conferences/?conf=thewebconf2022. *Formatting the submissions* Submissions must adhere to the ACM template and format published in the ACM guidelines at https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords. Please use the template in traditional double-column format to prepare your submissions. For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template. For Overleaf users, you may want to use https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sig-proceedings-template/bmvfhcdnxfty. Submissions for review must be in PDF format. They must be self-contained and written in English. *Author identity* The review process will be double-blind. The submitted document should not include any author names, affiliations, or other identifying information. This may include, but is not restricted to: acknowledgements, self-citations, references to prior work by the author(s), etc. You may explicitly refer in the paper to organisations that provided datasets, hosted experiments, or deployed solutions and tools. In other words, instead of saying that "we analysed the logs of a major search engine", the authors may name the search engine in question.
The reviewers will be informed that naming organisations in papers does not necessarily imply that the authors are currently affiliated with said organisation. *Publication policy* Accepted papers will require a further revision in order to meet the requirements and page limits of the camera-ready format required by ACM. Instructions for the preparation of the camera-ready versions of the papers will be provided after acceptance. All accepted papers will be published by ACM and will be available via the ACM Digital Library. To be included in the Proceedings, at least one author of each accepted paper must register for the conference and present the paper there. ============================================================ Contact us: contact at thewebconf.org - Facebook: https://www.facebook.com/TheWebConf - Twitter: https://twitter.com/TheWebConf - LinkedIn: https://www.linkedin.com/showcase/18819430/admin/ - Website: https://www2022.thewebconf.org/ ============================================== From djaeger at emory.edu Wed Nov 10 12:55:52 2021 From: djaeger at emory.edu (Jaeger, Dieter) Date: Wed, 10 Nov 2021 17:55:52 +0000 Subject: Connectionists: A Dendrites Virtual Workshop hosted at Emory University Dec. 7. Message-ID: Dear Colleagues: We are pleased to invite you to our virtual workshop entitled "Are dendrites necessary for cortical computation?? to be held on Tuesday, December 7, 10:00am - 1:00 pm EST. This workshop is in the series of Emory University Initiative in Theory And Modeling of Living Systems (TMLS, http://livingtheory.emory.edu) virtual workshops, exploring provocative questions on the interface of biology and physical sciences. See https://www.youtube.com/c/EmoryTMLS for recordings of previous workshops in the program. Abstract: Many network models for modeling cortical function are built with single compartment units using a variety of more or less simple integrate and fire dynamics. 
On the other hand, many biologically oriented research groups place great emphasis on the complex and detailed properties of active processing in pyramidal neuron dendrites. The question to be addressed in this workshop is whether essential computational properties of biological cortical networks rely on dendritic processing, and if so, which? Finally, could such properties be formalized in simpler ways than using full biophysical detail and substantially add to the function of more abstract network modeling? Six exciting speakers will present their thoughts on the matter, and the audience will be able to interact with them in real time, asking questions through the YouTube comments interface. Free registration is requested at: https://forms.gle/9HzNgrYbJPYBUfSw9 We expect to stream the workshop live at the TMLS YouTube channel https://www.youtube.com/c/EmoryTMLS and we will send information, updates/changes to all registered participants before the workshop begins. Confirmed Speakers and Titles: Dieter Jaeger, Emory University, Introduction Claudia Clopath, Imperial College London Brent Doiron, University of Chicago, "Cell-specific inhibitory plasticity in assembly formation" Bartlett Mel, University of Southern California, "How to optimize a dendrite for binary subunit pooling" Yiota Poirazi, IMBB-FORTH Crete, "Dendritic solutions to logical operations in models of human neurons" Andreas Stöckel, University of Waterloo, "Passive Dendrites as a Computational Resource in the Neural Engineering Framework" Xiao-Jing Wang, New York University, "Dendritic gating in the whole neocortex" Please feel free to forward this announcement to anyone that you think might be interested. Hope to see you there! Thank you, Dieter Jaeger (workshop organizer) Dieter Jaeger Professor, Department of Biology Emory University djaeger at emory.edu https://scholarblogs.emory.edu/jaegerlab/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From janet.hsiao at gmail.com Thu Nov 11 00:36:45 2021 From: janet.hsiao at gmail.com (Janet Hsiao) Date: Thu, 11 Nov 2021 13:36:45 +0800 Subject: Connectionists: Postdoctoral Position in Explainable AI at University of Hong Kong & Huawei HKRC Message-ID: *Postdoctoral Position: Explainable AI* *University of Hong Kong & **Huawei Hong Kong Research Center * Applicants are invited for appointment as a *Post-doctoral Fellow *to work with both the Attention Brain and Cognition Lab at the Department of Psychology, University of Hong Kong, and Huawei Hong Kong Research Center (HKRC), to commence as soon as possible for a period of 1 year, with the possibility of renewal. Applicants must have a Ph.D. degree in Computer Science, Cognitive Science, or related fields. Applicants should have excellent programming and algorithmic skills, be proficient in at least one of the mainstream programming languages including Matlab/Python etc., and be curious, self-motivated and willing to participate in R&D over innovative and interdisciplinary topics. Familiarity with topics in explainable AI and deep learning methods is a plus. The appointee will work with Dr. Janet Hsiao (Department of Psychology, University of Hong Kong) and Huawei HKRC on projects related to explainable AI. Information about the research in the lab can be obtained at http://abc.psy.hku.hk/. The partnering team from Huawei HKRC consists of researchers from various research units, whose research areas include deep learning framework, trustworthy AI software, fundamental AI theory and so on. For more information about the position, please contact Dr. Janet Hsiao at jhsiao at hku.hk. A highly competitive salary commensurate with qualifications and experience will be offered, in addition to annual leave and medical benefits. Applicants should send a completed application with a cover letter, an up-to-date C.V. 
including academic qualifications, research experience, publications, and three letters of reference to Dr. Janet Hsiao at jhsiao at hku.hk, with the subject line "Post-doctoral Position". *Review of applications will start immediately and continue until* *the position is filled*. We thank applicants for their interest, but advise that only candidates shortlisted for interviews will be notified of the application result. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgf at isep.ipp.pt Wed Nov 10 15:20:39 2021 From: cgf at isep.ipp.pt (Carlos) Date: Wed, 10 Nov 2021 20:20:39 +0000 Subject: Connectionists: CFP: International School and Conference on Network Science (NetSci-X 2022) Message-ID: ================================== International School and Conference on Network Science NetSci-X 2022 Porto, Portugal February 8-11, 2022 https://netscix.dcc.fc.up.pt/ ================================== Important Dates ----------------------------- Full Paper/Abstract Submission: November 26, 2021 (23:59:59 AoE) Author Notification: December 20, 2021 Keynote Speakers ----------------------------- Jure Leskovec, Stanford University, USA Jürgen Kurths, Humboldt University Berlin, Germany Manuela Veloso, JP Morgan AI Research & CMU, USA Stefano Boccaletti, Institute for Complex Systems, Florence, Italy Tijana Milenkovic, University of Notre Dame, USA Tiziana Di Matteo, King's College London, UK Tracks ----------------------------- We are now welcoming submissions to the Abstracts or Proceedings Track. All submissions will undergo a peer-review process. Abstracts Track: extended abstracts should not exceed 3 pages, including figures and references. Abstracts will be accepted for oral or poster presentation, and will appear in the book of abstracts only. Proceedings Track: full papers should have between 8 and 14 pages and follow the Springer Proceedings format.
Accepted full papers will be presented at the conference and published by Springer. Only previously unpublished, original submissions will be accepted. Description ----------------------------- NetSci-X is the Network Science Society's signature winter conference. It extends the popular NetSci conference series (https://netscisociety.net/events/netsci) to provide an additional forum for a growing community of academics and practitioners working on formal, computational, and application aspects of complex networks. The conference will be highly interdisciplinary, spanning the boundaries of traditional disciplines. Specific topics of interest include (but are not limited to): Models of Complex Networks Structural Network Properties Algorithms for Network Analysis Graph Mining Large-Scale Graph Analytics Epidemics Resilience and Robustness Community Structure Motifs and Subgraph Patterns Link Prediction Multilayer/Multiplex Networks Temporal and Spatial Networks Dynamics on and of Complex Networks Network Controllability Synchronization in Networks Percolation, Resilience, Phase Transitions Network Geometry Network Neuroscience Network Medicine Bioinformatics and Earth Sciences Applications Mobility and Urban Networks Computational Social Sciences Rumor and Viral Marketing Economics and Financial Networks Instructions for Submissions ----------------------- All papers and abstracts should be submitted electronically in PDF format. The website includes detailed information about the submission process. General Chairs Fernando Silva, University of Porto, Portugal José Mendes, University of Aveiro, Portugal Rosário Laureano, Lisbon University Institute (ISCTE), Portugal Program Chair Pedro Ribeiro, Universidade do Porto Main Contact for NetSci-X 2022 netscix at dcc.fc.up.pt Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel.
+351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt From juyang.weng at gmail.com Wed Nov 10 20:07:19 2021 From: juyang.weng at gmail.com (Juyang Weng) Date: Wed, 10 Nov 2021 20:07:19 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Dear Tsvi, I can see that communications among us all are EXTREMELY difficult for many reasons. The most important reason is probably that we do not agree on what a human brain does. However, without understanding Turing machines, one cannot see why the brain must at least do APFGP below. (1) We must not consider what the brain does to be "pattern recognition" (your term "work"), like many deep learning studies have been doing. (I am also guilty of the first deep learning for 3D, Cresceptron 1992 and 1997, which the three Turing Awardees 2018 failed to cite.) (2) The brain learns to automatically program from physical experience throughout its lifetime, or what I called Autonomous Programming For General Purposes (APFGP). See J. Weng, "Autonomous Programming for General Purposes: Theory," *International Journal of Humanoid Robotics*, vol. 17, no. 14, pp. 1-36, September 14, 2020. PDF file. (3) I have had the honor to look through your Regulatory Network. As I understand it, the top-down connections that many neuroscientists call "regulatory" are inaccurately named, because they do not understand Turing machines. Top-down projections are not "Regulatory" but "Contexts" in the sense of Turing machines. My NAI book explains the details with mathematical proofs. (4) Please accept my apologies for using such direct terms on this email list.
I do not mean disrespect to neuroscientists, but many neuroscientists do not understand what the brain does since they have not carefully understood Turing machines, especially the universal Turing machines. Have you learned universal Turing machines? Best regards, -John On Wed, Nov 10, 2021 at 6:34 PM Tsvi Achler wrote: > Dear Juyang & Ali > > Modularity and localized learning do not work with feedforward networks, > but they work within networks where neurons inhibit their own inputs, like > Regulatory Feedback. > Each neuron can be configured as an individual unit and the weights can be > the value of inputs present when a supervised signal is given. This > is localized learning, e.g. representing likelihood without distribution or > the original simple Hebbian type learning without normalization or > adjustment based on error. > > The key to regulatory feedback is that for each input a neuron receives it > sends an inhibitory projection linked to its activation to that same input > (e.g. by connecting back to inhibitory interneurons). > > Neurons in the regulatory feedback configuration are modular: you > can include them or not include them in the network, and they can even be > dynamically taken out at any time by artificially forcing a neuron activation > to be zero during recognition iterations. > > When recognizing, the iterative feedback interactions between neurons > cooperatively and competitively determine which neurons are the best fit > to the input. The iterations during recognition are the "glue" that allows > modular neurons to interact together to find the best result that includes > the contribution (and updated contributions) of all of the modular neurons. > Thus the results of recognition are distributed via potential contributions > of many neurons but the learning is localized.
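[Editorial note: the recognition dynamics described in the message above - each output neuron inhibiting the very inputs that drive it, with iterative settling deciding which outputs best explain the input - can be sketched numerically. The following is a minimal illustration in the spirit of Achler's regulatory feedback and Spratling's divisive input modulation, not the exact published model; the weight matrix, update rule, and iteration count are assumptions for demonstration.]

```python
import numpy as np

def regulatory_feedback(x, W, steps=50, eps=1e-9):
    """Iterative recognition with regulatory (input-inhibiting) feedback.

    x : nonnegative input activations, shape (n_inputs,)
    W : nonnegative weights, shape (n_outputs, n_inputs);
        W[j, i] > 0 means output j both draws on and feeds back to input i.
    Returns output activations after `steps` settling iterations.
    """
    y = np.ones(W.shape[0])    # start with all outputs equally active
    n = W.sum(axis=1)          # per-output normalizer (total fan-in)
    for _ in range(steps):
        fb = W.T @ y           # total feedback arriving at each input
        e = x / (fb + eps)     # inputs divided down by the feedback they receive
        y = y * (W @ e) / n    # outputs re-weighted by their adjusted inputs
    return y

# Two overlapping "modular" outputs: output 0 uses inputs {0,1}, output 1 uses {1,2}.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
y = regulatory_feedback(np.array([1.0, 1.0, 0.0]), W)
print(y)  # output 0 comes to dominate; output 1 is suppressed without lateral inhibition
```

Note how there is no direct output-to-output inhibition: output 1 loses only because output 0 "explains away" the shared input 1, which matches the message's claim that competition happens through the inputs. Adding a new output neuron is just adding a row to W, which is the modularity being argued for.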
> > I know initially this is very counterintuitive and foreign but I encourage > you to understand the mechanism and results through the videos and papers > before making statements that: modularity is not possible or "too simple". > It is just not possible or clear in feedforward networks. > > Two papers where I analyze the modularity are: > 1) Achler T., Non-Oscillatory Dynamics to Disambiguate Pattern Mixtures, > Chapter 4 in Relevance of the Time Domain to Neural Network Models, Eds: > Rao R, Cecchi G A, pp 57-74, Springer 2011 > which builds up a network by adding nodes, and also introduces potential > scenarios of linear dependence I described in my previous message > > 2) Achler T., Amir E., Input Feedback Networks: Classification and > Inference Based on Network Structure, Artificial General Intelligence, V1: > 15-26, IOS 2008. > an even older paper which discusses adding modular nodes ad infinitum and > their continued contribution to the network. This paper goes back to when > I used a binary weight version which I called Input Feedback Networks and > did not use the terms modular, but concepts and mathematics apply. > > I am happy to forward the pdf copies to those who ask for them. > Video playlist: > https://www.youtube.com/playlist?list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3 > > Sincerely, > -Tsvi > > > On Wed, Nov 10, 2021 at 12:41 AM Juyang Weng > wrote: > >> Dear Ali, >> >> I agree "localized vs. distributed" as a dichotomy is too simple, as I >> discussed with Asim before. >> >> However, "importance of modularity" is just a slightly higher level >> mistake from people who worked on "neuroscience" experiments, of the same >> nature as the "localized vs. distributed" dichotomy. "Localized vs. >> distributed" is too simple; "modularity" is also too simple to be true >> too. Unfortunately, neuroscientists have spent time on many experiments >> whose answers have already been predicted by our Developmental Networks >> (DN), especially DN-2. 
>> >> However, there is one known model that is holistic in terms >> of general-purpose computing. That is the Universal Turing machine, although >> it was not designed for brains to start with. Researchers in this challenging >> brain subject did not seem to pay sufficient attention to the emergent >> Universal Turing Machine (in our Developmental Networks) as a holistic >> model of an entire brain. If we look into how excitatory cells and >> inhibitory cells migrate and connect, as I explained in my NAI book, >> >> https://www.amazon.com/Natural-Artificial-Intelligence-Juyang-Weng/dp/0985875712 >> it is impossible to have brain modules as you stated. >> >> If we insist on discussing Brodmann areas, then Brodmann areas are >> primarily resource areas (like, though not exactly the same as, registers, cache, >> RAM, and disks), not functional areas. DN-1 has modules, but they are >> resource modules, not functional modules. DN-2 is more correct in that >> even the boundaries of resource modules are adaptive. Some lower brain >> areas synthesize specific types of neurotransmitters (as explained in my >> book above), e.g., serotonin, but such areas are still resource modules, >> not brain-function modules. A brain uses serotonin for many different >> functions. >> >> In summary, no Brodmann areas should be assigned to any specific brain >> functions (like edges, but resources), including lower brain areas, such as >> the raphe nuclei (which synthesize serotonin) and the hippocampus (also resources, >> not functions). Your cited examples are well known and support my above >> fundamental view, backed up by DN as a full algorithmic model of the entire >> brain (not brain modules). >> That is, "place cells" are a misnomer. >> >> I am writing something sensitive. I hope this connectionist at cmu will not >> reject it. >> >> Best regards, >> -John >> >> On Tue, Nov 9, 2021 at 2:55 AM Ali Minai wrote: >> >>> Asim >>> >>> This is a perennial issue, but I don't think one should see "localized >>> vs. 
distributed" as a dichotomy. Neurons (or groups of neurons) all over >>> the brain are obviously tuned to specific things in specific contexts - >>> place cells are an obvious example, as are the cells in orientation >>> columns, and indeed, much of the early visual system. That's why we can >>> decode place from hippocampal recordings in dreaming rats. But in many >>> cases, the same cells are tuned to something else in another context >>> (again, think of place cells). The notion of "holographically" distributed >>> representations is a nice abstract idea but I doubt if it applies anywhere >>> in the brain. However, any object/concept, etc., is better represented by >>> several units (neurons, columns, hypercolumns, whatever) than by a single >>> one, and in a way that makes these representations both modular and >>> redundant enough to ensure robustness. Two old but still very interesting >>> papers in this regard are: >>> >>> Tsunoda K, Yamane Y, Nishizaki M, & Tanifuji M (2001) Complex objects >>> are represented in macaque inferotemporal cortex by the combination of >>> feature columns. Nat Neurosci 4:832-838. >>> >>> >>> Wang, G., Tanaka, K. & Tanifuji M (2001) Optical imaging of functional >>> organization in the monkey inferotemporal cortex. Science 272:1665-1668. >>> >>> >>> If there's one big lesson biologists have learned in the last 50 years, >>> it is the importance of modularity - not in the Fodorian sense but in the >>> sense of Herb Simon, Jordan Pollack, et al. The more we think of the brain >>> as a complex system, the clearer the importance of modularity and synergy >>> will become. If all other complex systems exploit multi-scale modularity >>> and synergistic coordination to achieve all sorts of useful attributes, >>> it's inconceivable that the brain does not. Too much of AI - including >>> neural networks - has been oblivious to biology, and too confident in the >>> ability of abstractions to solve very complicated problems. 
The only way >>> "real" AI will be achieved is by getting much closer to biology - not just >>> in the superficial use of brain-like neural networks, but with much deeper >>> attention to all aspects of the biological processes that still remain the >>> only producers of intelligence known to us. >>> >>> >>> Though modular representations can lead to better explainability to a >>> limited degree (again, think of the dreaming rat), I am quite skeptical >>> about "explainable AI" in general. Full explainability (e.g., including >>> explainability of motivations) implies reductionistic analysis, but true >>> intelligence will always be emergent and, by definition, quite full of >>> surprises. The more "explainable" we try to make AI, the more we are >>> squeezing out exactly that emergent creativity that is the hallmark of >>> intelligence. Of course, such a demand is fine when we are just building >>> machine learning-based tools for driving cars or detecting spam, but that >>> is far from true general intelligence. We will only have built true AI when >>> the thing we have created is as mysterious to us in its purposes as our cat >>> or our teenage child :-). But that is a whole new debate. >>> >>> >>> The special issue sounds like an interesting idea. >>> >>> >>> Cheers >>> >>> >>> Ali >>> >>> >>> >>> *Ali A. 
Minai, Ph.D.* >>> Professor and Graduate Program Director >>> Complex Adaptive Systems Lab >>> Department of Electrical Engineering & Computer Science >>> 828 Rhodes Hall >>> University of Cincinnati >>> Cincinnati, OH 45221-0030 >>> >>> Past-President (2015-2016) >>> International Neural Network Society >>> >>> Phone: (513) 556-4783 >>> Fax: (513) 556-7326 >>> Email: Ali.Minai at uc.edu >>> minaiaa at gmail.com >>> >>> WWW: https://eecs.ceas.uc.edu/~aminai/ >>> >>> >>> >>> On Sun, Nov 7, 2021 at 2:15 PM Asim Roy wrote: >>> >>>> All, >>>> >>>> >>>> >>>> Amir Hussain, Editor-in-Chief, Cognitive Computation, is inviting us to >>>> do a special issue on the topics under discussion here. They could be short >>>> position papers summarizing ideas for moving forward in this field. He >>>> promised reviews within two weeks. If that works out, we could have the >>>> special issue published rather quickly. >>>> >>>> >>>> >>>> Please email me if you are interested. >>>> >>>> >>>> >>>> Asim Roy >>>> >>>> Professor, Arizona State University >>>> >>>> Lifeboat Foundation Bios: Professor Asim Roy >>>> >>>> >>>> >>>> >>>> *From:* Yoshua Bengio >>>> *Sent:* Sunday, November 7, 2021 8:55 AM >>>> *To:* Asim Roy >>>> *Cc:* Adam Krawitz ; connectionists at cs.cmu.edu; >>>> Juyang Weng >>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. >>>> >>>> >>>> >>>> Asim, >>>> >>>> >>>> >>>> You can have your cake and eat it too with modular neural net >>>> architectures. You still have distributed representations but you have >>>> modular specialization. Many of my papers since 2019 are on this theme. 
It >>>> is consistent with the specialization seen in the brain, but keep in mind >>>> that there is a huge number of neurons there, and you still don't see >>>> single grandmother cells firing alone; they fire in a pattern that is >>>> meaningful both locally (in the same region/module) and globally (different >>>> modules cooperate and compete according to the Global Workspace Theory and >>>> Neural Workspace Theory, which have inspired our work). Finally, our recent >>>> work on learning high-level 'system-2'-like representations and their >>>> causal dependencies seeks to learn 'interpretable' entities (with natural >>>> language) that will emerge at the highest levels of representation (not >>>> clear how distributed or local these will be, but much more local than in a >>>> traditional MLP). This is a different form of disentangling than adopted in >>>> much of the recent work on unsupervised representation learning, but it shares >>>> the idea that the "right" abstract concepts (related to those we can name >>>> verbally) will be "separated" (disentangled) from each other (which >>>> suggests that neuroscientists will have an easier time spotting them in >>>> neural activity). >>>> >>>> >>>> -- Yoshua >>>> >>>> >>>> *I'm overwhelmed by emails, so I won't be able to respond quickly or >>>> directly. Please write to my assistant in case of a time-sensitive matter or >>>> if it entails scheduling: julie.mongeau at mila.quebec >>>> * >>>> >>>> >>>> >>>> >>>> >>>> >>>> Le dim. 7 nov. 2021, à 01 h 46, Asim Roy a écrit : >>>> >>>> Over a period of more than 25 years, I have had the opportunity to >>>> argue about the brain in both public forums and private discussions. 
And >>>> they included very well-known scholars such as Walter Freeman >>>> (UC-Berkeley), Horace Barlow (Cambridge; great-grandson of Charles Darwin), >>>> Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof >>>> Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of >>>> whom are on this list. And many became good friends through these debates. >>>> >>>> >>>> >>>> We argued about many issues over the years, but the one that baffled me >>>> the most was the one about localist vs. distributed representation. Here's >>>> the issue. As far as I know, although all the Nobel prizes in the field of >>>> neurophysiology - from Hubel and Wiesel (simple and complex cells) and >>>> Moser and O'Keefe (grid and place cells) to the current one on the discovery of >>>> temperature- and touch-sensitive receptors and neurons - are about finding >>>> "meaning" in single cells or a group of dedicated cells, the distributed >>>> representation theory has yet to explain these findings of "meaning." >>>> Contrary to the assertion that the field is open-minded, I think most in >>>> this field are afraid to cross the red line. >>>> >>>> >>>> >>>> Horace Barlow was the exception. He was perhaps the only neuroscientist >>>> who was willing to cross the red line and declare that "grandmother cells >>>> will be found." After a debate on this issue in 2012, which included Walter >>>> Freeman and others, Horace visited me in Phoenix at the age of 91 for >>>> further discussion. >>>> >>>> >>>> >>>> If the field is open-minded, I would love to hear how distributed >>>> representation is compatible with finding "meaning" in the activations of >>>> single cells or a dedicated group of cells. 
>>>> >>>> >>>> Asim Roy >>>> >>>> Professor, Arizona State University >>>> >>>> Lifeboat Foundation Bios: Professor Asim Roy >>>> >>>> >>>> >>>> >>>> >>>> >>>> *From:* Connectionists *On >>>> Behalf Of *Adam Krawitz >>>> *Sent:* Friday, November 5, 2021 10:01 AM >>>> *To:* connectionists at cs.cmu.edu >>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. >>>> >>>> >>>> >>>> Tsvi, >>>> >>>> >>>> >>>> I'm just a lurker on this list, with no skin in the game, but perhaps >>>> that gives me a more neutral perspective. In the spirit of progress: >>>> >>>> >>>> >>>> 1. If you have a neural network approach that you feel provides a >>>> new and important perspective on cognitive processes, then write up a paper >>>> making that argument clearly, and I think you will find that the community >>>> is incredibly open to that. Yes, if they see holes in the approach they >>>> will be pointed out, but that is all part of the scientific exchange. >>>> Examples of this approach include: Elman (1990) Finding Structure in Time, >>>> Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow >>>> a Mind: Statistics, Structure, and Abstraction (not neural nets, but a >>>> "new" approach to modelling cognition). I'm sure others can provide more >>>> examples. >>>> 2. I'm much less familiar with how things work on the applied side, >>>> but I have trouble believing that Google or anyone else will be dismissive >>>> of a computational approach that actually works. Why would they? They just >>>> want to solve problems efficiently. Demonstrate that your approach can >>>> solve a problem more effectively than (or at least as effectively as) the >>>> existing approaches, and they will come running. Examples of this include: >>>> Tesauro's TD-Gammon, which was influential in demonstrating the power of >>>> RL, and LeCun et al.'s convolutional NN for the MNIST digits. 
>>>> >>>> >>>> Clearly communicate the novel contribution of your approach and I think >>>> you will find a receptive audience. >>>> >>>> >>>> >>>> Thanks, >>>> >>>> Adam >>>> >>>> >>>> >>>> >>>> >>>> *From:* Connectionists *On >>>> Behalf Of *Tsvi Achler >>>> *Sent:* November 4, 2021 9:46 AM >>>> *To:* gary at ucsd.edu >>>> *Cc:* connectionists at cs.cmu.edu >>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. >>>> >>>> >>>> >>>> Lastly, feedforward methods are predominant in large part because they >>>> have financial backing from large companies with advertising and clout, like >>>> Google, and from the self-driving craze that never fully materialized. >>>> >>>> >>>> >>>> Feedforward methods are not fully connectionist unless rehearsal for >>>> learning is implemented with neurons. That means storing all patterns, >>>> mixing them randomly, and then presenting them to a network to learn. As far as >>>> I know, no one is doing this in the community, so feedforward methods are >>>> only partially connectionist. By allowing popularity to predominate and >>>> choking off funds and presentation of alternatives, we are cheating >>>> ourselves out of pursuing other more rigorous brain-like methods. >>>> >>>> >>>> >>>> Sincerely, >>>> >>>> -Tsvi >>>> >>>> >>>> >>>> >>>> >>>> On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: >>>> >>>> Gary- Thanks for the accessible online link to the book. >>>> >>>> >>>> >>>> I looked especially at the inhibitory feedback section of the book, >>>> which describes an air-conditioner (AC) type of feedback. >>>> >>>> It then describes a general field-like inhibition based on all >>>> activations in the layer. It also describes the role of inhibition in >>>> sparsity and feedforward inhibition. >>>> >>>> >>>> >>>> The feedback described in Regulatory Feedback is similar to the AC >>>> feedback but occurs for each neuron individually, vis-a-vis its inputs. 
>>>> >>>> Thus for context, regulatory feedback is not a field-like inhibition; >>>> it is very directed, based on the neurons that are activated and their >>>> inputs. This sort of regulation is also the foundation of Homeostatic >>>> Plasticity findings (albeit with changes in homeostatic regulation in >>>> experiments occurring on a slower time scale). The regulatory feedback >>>> model describes the effect and role of those regulated >>>> connections in real time during recognition. >>>> >>>> >>>> >>>> I would be happy to discuss further and collaborate on writing about >>>> the differences between the approaches for the next book or review. >>>> >>>> >>>> >>>> And I want to point out to folks that the system is based on politics, >>>> and that is why certain work is not cited as it should be; but even worse, >>>> these politics are here in the group today, and they continue to very >>>> strongly influence decisions in the connectionist community and hold us >>>> back. >>>> >>>> >>>> >>>> Sincerely, >>>> >>>> -Tsvi >>>> >>>> >>>> >>>> On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu >>>> wrote: >>>> >>>> Tsvi - While I think Randy and Yuko's book >>>> is >>>> actually somewhat better than the online version (and buying choices on >>>> amazon start at $9.99), there *is* an online version. >>>> >>>> >>>> >>>> Randy & Yuko's models take into account feedback and inhibition. >>>> >>>> >>>> >>>> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: >>>> >>>> Daniel, >>>> >>>> >>>> >>>> Does your book include a discussion of Regulatory or Inhibitory >>>> Feedback, published in several low-impact journals between 2008 and 2014 >>>> (and in videos subsequently)? >>>> >>>> These are networks where the primary computation is inhibition back to >>>> the inputs that activated them, and they may be very counterintuitive given >>>> today's trends. You can almost think of them as the opposite of Hopfield >>>> networks. 
>>>> >>>> >>>> I would love to check inside the book, but I don't have an academic >>>> budget that allows me access to it, and that is a huge part of the problem >>>> with how information is shared and funding is allocated. I could not get >>>> access to any of the text or citations, especially Chapter 4: "Competition, >>>> Lateral Inhibition, and Short-Term Memory", to weigh in. >>>> >>>> >>>> >>>> I wish the best circulation for your book, but even if the Regulatory >>>> Feedback Model is in the book, that does not change the fundamental problem >>>> if the book is not readily available. >>>> >>>> >>>> >>>> The same goes for Steve Grossberg's book; I cannot easily look >>>> inside. With regard to Adaptive Resonance, I don't subscribe to lateral >>>> inhibition as a predominant mechanism, but I do believe a function such as >>>> vigilance is very important during recognition, and Adaptive Resonance is >>>> one of very few models that have it. The Regulatory Feedback model I >>>> have developed (and Michael Spratling studies a similar model as well) is >>>> built primarily using the vigilance type of connections and allows multiple >>>> neurons to be evaluated at the same time and continuously during >>>> recognition in order to determine which (single or multiple neurons >>>> together) match the inputs the best, without lateral inhibition. >>>> >>>> >>>> >>>> Unfortunately, within conferences and talks dominated by the Adaptive >>>> Resonance crowd I have experienced the familiar dismissiveness and did not >>>> have an opportunity to give a proper talk. This goes back to the larger >>>> issue of academic politics based on small self-selected committees, the >>>> same issues that exist with the feedforward crowd, and pretty much all of >>>> academia. 
>>>> >>>> >>>> >>>> Today's information age algorithms such as Google's can determine >>>> relevance of information and ways to display them, but hegemony of the >>>> journal systems and the small committee system of academia developed in the >>>> middle ages (and their mutual synergies) block the use of more modern >>>> methods in research. Thus we are stuck with this problem, which especially >>>> affects those that are trying to introduce something new and >>>> counterintuitive, and hence the results described in the two National >>>> Bureau of Economic Research articles I cited in my previous message. >>>> >>>> >>>> >>>> Thomas, I am happy to have more discussions and/or start a different >>>> thread. >>>> >>>> >>>> >>>> Sincerely, >>>> >>>> Tsvi Achler MD/PhD >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S >>>> wrote: >>>> >>>> Tsvi, >>>> >>>> >>>> >>>> While deep learning and feedforward networks have an outsize >>>> popularity, there are plenty of published sources that cover a much wider >>>> variety of networks, many of them more biologically based than deep >>>> learning. A treatment of a range of neural network approaches, going from >>>> simpler to more complex cognitive functions, is found in my textbook *Introduction >>>> to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). >>>> Also Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, >>>> 2021) emphasizes a variety of architectures with a strong biological basis. >>>> >>>> >>>> >>>> >>>> >>>> Best, >>>> >>>> >>>> >>>> >>>> >>>> Dan Levine >>>> ------------------------------ >>>> >>>> *From:* Connectionists >>>> on behalf of Tsvi Achler >>>> *Sent:* Saturday, October 30, 2021 3:13 AM >>>> *To:* Schmidhuber Juergen >>>> *Cc:* connectionists at cs.cmu.edu >>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>>> Lecture, etc. 
>>>> >>>> Since the title of the thread is Scientific Integrity, I want to point >>>> out some issues about trends in academia, focusing especially on >>>> the connectionist community. >>>> >>>> >>>> >>>> In general, analyses of impact factors etc. show that the most important progress >>>> gets silenced until the mainstream picks it up ("Impact Factors in >>>> novel research", www.nber.org/.../working_papers/w22180/w22180.pdf), >>>> and >>>> often this may take a generation >>>> (https://www.nber.org/.../does-science-advance-one-funeral...). >>>> >>>> >>>> >>>> The connectionist field is stuck on feedforward networks and variants, >>>> such as those with inhibition of competitors (e.g. lateral inhibition), or other >>>> variants that are sometimes labeled as recurrent networks for learning over time, >>>> where the feedforward networks can be rewound in time. >>>> >>>> >>>> >>>> This stasis is specifically occurring with the popularity of deep >>>> learning. This is often portrayed as neurally plausible connectionism but >>>> requires an implausible amount of rehearsal and is not connectionist if >>>> this rehearsal is not implemented with neurons (see the video link for further >>>> clarification). >>>> >>>> >>>> >>>> Models which have true feedback (e.g. back to their own inputs) cannot >>>> learn by backpropagation, but there is plenty of evidence these types of >>>> connections exist in the brain and are used during recognition. Thus they >>>> get ignored: no talks in universities, no featuring in "premier" journals, >>>> and no funding. >>>> >>>> >>>> >>>> But they are important and may negate the need for rehearsal as required >>>> in feedforward methods. Thus they may be essential for moving connectionism >>>> forward. >>>> >>>> >>>> >>>> If the community is truly dedicated to brain-motivated algorithms, I >>>> recommend giving more time to networks other than feedforward networks. 
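[Editor's note] The rehearsal requirement mentioned above can be made concrete with a toy sketch. This is entirely illustrative (a single logistic unit and invented patterns, not any published model): training only on new data drifts the unit's response to an old pattern, while storing all patterns and presenting them shuffled together preserves it.

```python
import numpy as np

def train_logistic(w, X, y, epochs=200, lr=0.5):
    # a single logistic ("feedforward") unit trained by plain gradient descent
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-xi @ w))
            w += lr * (yi - p) * xi
    return w

def predict(w, x):
    # probability the unit assigns to class 1 for input x
    return 1.0 / (1.0 + np.exp(-x @ w))

def rehearse(rng, old_X, old_y, new_X, new_y):
    # rehearsal as described: store ALL patterns, mix them randomly,
    # and present old and new data together
    X = np.vstack([old_X, new_X])
    y = np.concatenate([old_y, new_y])
    order = rng.permutation(len(y))
    return X[order], y[order]
```

For example, with old patterns [1,1,0] -> 1 and [0,1,1] -> 0, continuing training only on an overlapping new pattern [0,1,0] -> 1 pulls the response to the old class-0 pattern upward, whereas training once over the rehearsed, shuffled set satisfies all three patterns at once.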
>>>> >>>> >>>> >>>> Video: >>>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 >>>> >>>> >>>> >>>> >>>> Sincerely, >>>> >>>> Tsvi Achler >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen >>>> wrote: >>>> >>>> Hi, fellow artificial neural network enthusiasts! >>>> >>>> The connectionists mailing list is perhaps the oldest mailing list on >>>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping >>>> that some of them - as well as their contemporaries - might be able to >>>> provide additional valuable insights into the history of the field. >>>> >>>> Following the great success of massive open online peer review (MOOR) >>>> for my 2015 survey of deep learning (now the most cited article ever >>>> published in the journal Neural Networks), I've decided to put forward >>>> another piece for MOOR. I want to thank the many experts who have already >>>> provided me with comments on it. Please send additional relevant references >>>> and suggestions for improvements for the following draft directly to me at >>>> juergen at idsia.ch: >>>> >>>> >>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>> >>>> >>>> The above is a point-for-point critique of factual errors in ACM's >>>> justification of the ACM A. M. Turing Award for deep learning and a >>>> critique of the Turing Lecture published by ACM in July 2021. This work can >>>> also be seen as a short history of deep learning, at least as far as ACM's >>>> errors and the Turing Lecture are concerned. >>>> >>>> I know that some view this as a controversial topic. However, it is the >>>> very nature of science to resolve controversies through facts. Credit >>>> assignment is as core to scientific history as it is to machine learning. >>>> My aim is to ensure that the true history of our field is preserved for >>>> posterity. >>>> >>>> Thank you all in advance for your help! 
Jürgen Schmidhuber >>>> >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> >>>> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >>>> >>>> Computer Science and Engineering 0404 >>>> IF USING FEDEX INCLUDE THE FOLLOWING LINE: >>>> CSE Building, Room 4130 >>>> University of California San Diego >>>> - >>>> 9500 Gilman Drive # 0404 >>>> La Jolla, Ca. 92093-0404 >>>> >>>> Email: gary at ucsd.edu >>>> Home page: http://www-cse.ucsd.edu/~gary/ >>>> >>>> >>>> Schedule: http://tinyurl.com/b7gxpwo >>>> >>>> >>>> >>>> >>>> *Listen carefully,* >>>> *Neither the Vedas* >>>> *Nor the Qur'an* >>>> *Will teach you this:* >>>> *Put the bit in its mouth,* >>>> *The saddle on its back,* >>>> *Your foot in the stirrup,* >>>> *And ride your wild runaway mind* >>>> *All the way to heaven.* >>>> >>>> *-- Kabir* >>>> >>>> >> >> -- >> Juyang (John) Weng >> > -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From achler at gmail.com Wed Nov 10 18:33:42 2021 From: achler at gmail.com (Tsvi Achler) Date: Wed, 10 Nov 2021 15:33:42 -0800 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Dear Juyang & Ali Modularity and localized learning do not work with feedforward networks, but they work within networks where neurons inhibit their own inputs, like Regulatory Feedback. Each neuron can be configured as an individual unit, and the weights can be the values of the inputs present when a supervised signal is given. This is localized learning, e.g. representing likelihood without a distribution, or the original simple Hebbian type of learning without normalization or adjustment based on error. 
The key to regulatory feedback is that, for each input a neuron receives, it sends an inhibitory projection, linked to its activation, back to that same input (e.g. by connecting back to inhibitory interneurons). Neurons in the regulatory feedback configuration are modular: you can include them in the network or not, and they can even be dynamically taken out at any time by artificially forcing a neuron's activation to zero during recognition iterations. During recognition, the iterative feedback interactions between neurons cooperatively and competitively determine which neurons best fit the input. The iterations during recognition are the "glue" that allows modular neurons to interact to find the best result, one that includes the contributions (and updated contributions) of all of the modular neurons. Thus the results of recognition are distributed via the potential contributions of many neurons, but the learning is localized. I know initially this is very counterintuitive and foreign, but I encourage you to understand the mechanism and results through the videos and papers before making statements that modularity is not possible or is "too simple". It is just not possible or clear in feedforward networks. Two papers where I analyze the modularity are: 1) Achler T., Non-Oscillatory Dynamics to Disambiguate Pattern Mixtures, Chapter 4 in Relevance of the Time Domain to Neural Network Models, Eds: Rao R, Cecchi G A, pp 57-74, Springer 2011, which builds up a network by adding nodes, and also introduces the potential scenarios of linear dependence I described in my previous message 2) Achler T., Amir E., Input Feedback Networks: Classification and Inference Based on Network Structure, Artificial General Intelligence, V1: 15-26, IOS 2008, an even older paper which discusses adding modular nodes ad infinitum and their continued contribution to the network. 
This paper goes back to when I used a binary weight version which I called Input Feedback Networks and did not use the term modular, but the concepts and mathematics apply. I am happy to forward the PDF copies to those who ask for them. Video playlist: https://www.youtube.com/playlist?list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3 Sincerely, -Tsvi On Wed, Nov 10, 2021 at 12:41 AM Juyang Weng wrote: > Dear Ali, > > I agree "localized vs. distributed" as a dichotomy is too simple, as I > discussed with Asim before. > > However, "importance of modularity" is just a slightly higher level > mistake from people who worked on "neuroscience" experiments, of the same > nature as the "localized vs. distributed" dichotomy. "Localized vs. > distributed" is too simple; "modularity" is also too simple to be true. > Unfortunately, neuroscientists have spent time on many experiments > whose answers have already been predicted by our Developmental Networks > (DN), especially DN-2. > > However, there is one known model that is holistic in terms > of general-purpose computing. That is the Universal Turing machine, although > it was not designed for brains to start with. Researchers in this challenging > brain subject did not seem to pay sufficient attention to the emergent > Universal Turing Machine (in our Developmental Networks) as a holistic > model of an entire brain. If we look into how excitatory cells and > inhibitory cells migrate and connect, as I explained in my NAI book, > > https://www.amazon.com/Natural-Artificial-Intelligence-Juyang-Weng/dp/0985875712 > it is impossible to have brain modules as you stated. > > If we insist on discussing Brodmann areas, then Brodmann areas are > primarily resource areas (like, though not exactly the same as, registers, cache, > RAM, and disks), not functional areas. DN-1 has modules, but they are > resource modules, not functional modules. DN-2 is more correct in that > even the boundaries of resource modules are adaptive. 
Some lower brain > areas synthesize specific types of neurotransmitters (as explained in my > book above), e.g., serotonin, but such areas are still resource modules, > not brain-function modules. A brain uses serotonin for many different > functions. > > In summary, no Brodmann areas should be assigned to any specific brain > functions (like edges, but resources), including lower brain areas, such as > the raphe nuclei (which synthesize serotonin) and the hippocampus (also resources, > not functions). Your cited examples are well known and support my above > fundamental view, backed up by DN as a full algorithmic model of the entire > brain (not brain modules). > That is, "place cells" are a misnomer. > > I am writing something sensitive. I hope this connectionist at cmu will not > reject it. > > Best regards, > -John > > On Tue, Nov 9, 2021 at 2:55 AM Ali Minai wrote: > >> Asim >> >> This is a perennial issue, but I don't think one should see "localized >> vs. distributed" as a dichotomy. Neurons (or groups of neurons) all over >> the brain are obviously tuned to specific things in specific contexts - >> place cells are an obvious example, as are the cells in orientation >> columns, and indeed, much of the early visual system. That's why we can >> decode place from hippocampal recordings in dreaming rats. But in many >> cases, the same cells are tuned to something else in another context >> (again, think of place cells). The notion of "holographically" distributed >> representations is a nice abstract idea but I doubt if it applies anywhere >> in the brain. However, any object/concept, etc., is better represented by >> several units (neurons, columns, hypercolumns, whatever) than by a single >> one, and in a way that makes these representations both modular and >> redundant enough to ensure robustness. 
Two old but still very interesting >> papers in this regard are: >> >> Tsunoda K, Yamane Y, Nishizaki M, & Tanifuji M (2001) Complex objects >> are represented in macaque inferotemporal cortex by the combination of >> feature columns. Nat Neurosci 4:832-838. >> >> >> Wang, G., Tanaka, K. & Tanifuji M (2001) Optical imaging of functional >> organization in the monkey inferotemporal cortex. Science 272:1665-1668. >> >> >> If there's one big lesson biologists have learned in the last 50 years, >> it is the importance of modularity - not in the Fodorian sense but in the >> sense of Herb Simon, Jordan Pollack, et al. The more we think of the brain >> as a complex system, the clearer the importance of modularity and synergy >> will become. If all other complex systems exploit multi-scale modularity >> and synergistic coordination to achieve all sorts of useful attributes, >> it's inconceivable that the brain does not. Too much of AI - including >> neural networks - has been oblivious to biology, and too confident in the >> ability of abstractions to solve very complicated problems. The only way >> "real" AI will be achieved is by getting much closer to biology - not just >> in the superficial use of brain-like neural networks, but with much deeper >> attention to all aspects of the biological processes that still remain the >> only producers of intelligence known to us. >> >> >> Though modular representations can lead to better explainability to a >> limited degree (again, think of the dreaming rat), I am quite skeptical >> about "explainable AI" in general. Full explainability (e.g., including >> explainability of motivations) implies reductionistic analysis, but true >> intelligence will always be emergent and, by definition, quite full of >> surprises. The more "explainable" we try to make AI, the more we are >> squeezing out exactly that emergent creativity that is the hallmark of >> intelligence. 
Of course, such a demand is fine when we are just building >> machine learning-based tools for driving cars or detecting spam, but that >> is far from true general intelligence. We will only have built true AI when >> the thing we have created is as mysterious to us in its purposes as our cat >> or our teenage child :-). But that is a whole new debate. >> >> >> The special issue sounds like an interesting idea. >> >> >> Cheers >> >> >> Ali >> >> >> >> *Ali A. Minai, Ph.D.* >> Professor and Graduate Program Director >> Complex Adaptive Systems Lab >> Department of Electrical Engineering & Computer Science >> 828 Rhodes Hall >> University of Cincinnati >> Cincinnati, OH 45221-0030 >> >> Past-President (2015-2016) >> International Neural Network Society >> >> Phone: (513) 556-4783 >> Fax: (513) 556-7326 >> Email: Ali.Minai at uc.edu >> minaiaa at gmail.com >> >> WWW: https://eecs.ceas.uc.edu/~aminai/ >> >> >> On Sun, Nov 7, 2021 at 2:15 PM Asim Roy wrote: >> >>> All, >>> >>> >>> >>> Amir Hussain, Editor-in-Chief, Cognitive Computation, is inviting us to >>> do a special issue on the topics under discussion here. They could be short >>> position papers summarizing ideas for moving forward in this field. He >>> promised reviews within two weeks. If that works out, we could have the >>> special issue published rather quickly. >>> >>> >>> >>> Please email me if you are interested. >>> >>> >>> >>> Asim Roy >>> >>> Professor, Arizona State University >>> >>> Lifeboat Foundation Bios: Professor Asim Roy >>> >>> >>> >>> >>> *From:* Yoshua Bengio >>> *Sent:* Sunday, November 7, 2021 8:55 AM >>> *To:* Asim Roy >>> *Cc:* Adam Krawitz ; connectionists at cs.cmu.edu; >>> Juyang Weng >>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>> Lecture, etc. >>> >>> >>> >>> Asim, >>> >>> >>> >>> You can have your cake and eat it too with modular neural net >>> architectures. You still have distributed representations but you have >>> modular specialization. 
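The "cake and eat it too" combination just described can be pictured with a minimal mixture-of-experts-style sketch: a gating network provides modular specialization, while each module still carries a distributed code (all sizes and random weights are purely illustrative, not any published architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, h = 8, 3, 16                   # input dim, number of modules, units per module (hypothetical)
W_gate = rng.normal(size=(k, d))     # gating network: which modules specialize on this input
W_mod = rng.normal(size=(k, h, d))   # one weight matrix per expert module

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    gate = softmax(W_gate @ x)       # modular specialization: soft assignment over modules
    codes = np.tanh(W_mod @ x)       # shape (k, h): a distributed pattern inside every module
    return gate, (gate[:, None] * codes).sum(axis=0)

gate, y = forward(rng.normal(size=d))
```

No single unit here stands for the input on its own: activity is meaningful locally (the pattern within a module) and globally (which modules the gate selects), in the spirit of the workspace-style coordination described above.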
Many of my papers since 2019 are on this theme. It >>> is consistent with the specialization seen in the brain, but keep in mind >>> that there is a huge number of neurons there, and you still don't see >>> single grand-mother cells firing alone; they fire in a pattern that is >>> meaningful both locally (in the same region/module) and globally (different >>> modules cooperate and compete according to the Global Workspace Theory and >>> Neural Workspace Theory which have inspired our work). Finally, our recent >>> work on learning high-level 'system-2'-like representations and their >>> causal dependencies seeks to learn 'interpretable' entities (with natural >>> language) that will emerge at the highest levels of representation (not >>> clear how distributed or local these will be, but much more local than in a >>> traditional MLP). This is a different form of disentangling than adopted in >>> much of the recent work on unsupervised representation learning but shares >>> the idea that the "right" abstract concepts (related to those we can name >>> verbally) will be "separated" (disentangled) from each other (which >>> suggests that neuroscientists will have an easier time spotting them in >>> neural activity). >>> >>> >>> >>> -- Yoshua >>> >>> >>> >>> *I'm overwhelmed by emails, so I won't be able to respond quickly or >>> directly. Please write to my assistant in case of time sensitive matter or >>> if it entails scheduling: julie.mongeau at mila.quebec >>> * >>> >>> >>> >>> >>> >>> >>> >>> On Sun, Nov 7, 2021 at 01:46, Asim Roy wrote: >>> >>> Over a period of more than 25 years, I have had the opportunity to argue >>> about the brain in both public forums and private discussions.
And they >>> included very well-known scholars such as Walter Freeman (UC-Berkeley), >>> Horace Barlow (Cambridge; great grandson of Charles Darwin), Jay McClelland >>> (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen >>> Institute), Teuvo Kohonen (Finland) and many others, some of whom are on >>> this list. And many became good friends through these debates. >>> >>> >>> >>> We argued about many issues over the years, but the one that baffled me >>> the most was the one about localist vs. distributed representation. Here's >>> the issue. As far as I know, although all the Nobel prizes in the field of >>> neurophysiology - from Hubel and Wiesel (simple and complex cells) and >>> Moser and O'Keefe (grid and place cells) to the current one on discovery of >>> temperature- and touch-sensitive receptors and neurons - are about finding >>> "meaning" in single or a group of dedicated cells, the distributed >>> representation theory has yet to explain these findings of "meaning." >>> Contrary to the assertion that the field is open-minded, I think most in >>> this field are afraid to cross the red line. >>> >>> >>> >>> Horace Barlow was the exception. He was perhaps the only neuroscientist >>> who was willing to cross the red line and declare that "grandmother cells >>> will be found." After a debate on this issue in 2012, which included Walter >>> Freeman and others, Horace visited me in Phoenix at the age of 91 for >>> further discussion. >>> >>> >>> >>> If the field is open-minded, I would love to hear how distributed >>> representation is compatible with finding "meaning" in the activations of >>> single cells or a dedicated group of cells.
>>> >>> >>> >>> Asim Roy >>> >>> Professor, Arizona State University >>> >>> Lifeboat Foundation Bios: Professor Asim Roy >>> >>> >>> >>> >>> >>> >>> *From:* Connectionists *On >>> Behalf Of *Adam Krawitz >>> *Sent:* Friday, November 5, 2021 10:01 AM >>> *To:* connectionists at cs.cmu.edu >>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>> Lecture, etc. >>> >>> >>> >>> Tsvi, >>> >>> >>> >>> I'm just a lurker on this list, with no skin in the game, but perhaps >>> that gives me a more neutral perspective. In the spirit of progress: >>> >>> >>> >>> 1. If you have a neural network approach that you feel provides a >>> new and important perspective on cognitive processes, then write up a paper >>> making that argument clearly, and I think you will find that the community >>> is incredibly open to that. Yes, if they see holes in the approach they >>> will be pointed out, but that is all part of the scientific exchange. >>> Examples of this approach include: Elman (1990) Finding Structure in Time, >>> Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow >>> a Mind: Statistics, Structure, and Abstraction (not neural nets, but a >>> "new" approach to modelling cognition). I'm sure others can provide more >>> examples. >>> 2. I'm much less familiar with how things work on the applied side, >>> but I have trouble believing that Google or anyone else will be dismissive >>> of a computational approach that actually works. Why would they? They just >>> want to solve problems efficiently. Demonstrate that your approach can >>> solve a problem more effectively than (or at least as effectively as) the >>> existing approaches, and they will come running. Examples of this include: >>> Tesauro's TD-Gammon, which was influential in demonstrating the power of >>> RL, and LeCun et al.'s convolutional NN for the MNIST digits.
>>> >>> >>> >>> Clearly communicate the novel contribution of your approach and I think >>> you will find a receptive audience. >>> >>> >>> >>> Thanks, >>> >>> Adam >>> >>> >>> >>> >>> >>> *From:* Connectionists *On >>> Behalf Of *Tsvi Achler >>> *Sent:* November 4, 2021 9:46 AM >>> *To:* gary at ucsd.edu >>> *Cc:* connectionists at cs.cmu.edu >>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>> Lecture, etc. >>> >>> >>> >>> Lastly Feedforward methods are predominant in a large part because they >>> have financial backing from large companies with advertising and clout like >>> Google and the self-driving craze that never fully materialized. >>> >>> >>> >>> Feedforward methods are not fully connectionist unless rehearsal for >>> learning is implemented with neurons. That means storing all patterns, >>> mixing them randomly and then presenting to a network to learn. As far as >>> I know, no one is doing this in the community, so feedforward methods are >>> only partially connectionist. By allowing popularity to predominate and >>> choking off funds and presentation of alternatives we are cheating >>> ourselves from pursuing other more rigorous brain-like methods. >>> >>> >>> >>> Sincerely, >>> >>> -Tsvi >>> >>> >>> >>> >>> >>> On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: >>> >>> Gary- Thanks for the accessible online link to the book. >>> >>> >>> >>> I looked especially at the inhibitory feedback section of the book which >>> describes an Air Conditioner AC type feedback. >>> >>> It then describes a general field-like inhibition based on all >>> activations in the layer. It also describes the role of inhibition in >>> sparsity and feedforward inhibition, >>> >>> >>> >>> The feedback described in Regulatory Feedback is similar to the AC >>> feedback but occurs for each neuron individually, vis-a-vis its inputs. 
>>> >>> Thus for context, regulatory feedback is not a field-like inhibition; it >>> is very directed, based on the neurons that are activated and their inputs. >>> This sort of regulation is also the foundation of Homeostatic Plasticity >>> findings (albeit with changes in Homeostatic regulation in experiments >>> occurring on a slower time scale). The regulatory feedback model describes >>> the effect and role of those regulated connections in real >>> time during recognition. >>> >>> >>> >>> I would be happy to discuss further and collaborate on writing about the >>> differences between the approaches for the next book or review. >>> >>> >>> >>> And I want to point out to folks that the system is based on politics, >>> and that is why certain work is not cited as it should be; but even worse, >>> these politics are here in the group today and they continue to very >>> strongly influence decisions in the connectionist community and hold us >>> back. >>> >>> >>> >>> Sincerely, >>> >>> -Tsvi >>> >>> >>> >>> On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: >>> >>> Tsvi - While I think Randy and Yuko's book >>> is >>> actually somewhat better than the online version (and buying choices on >>> amazon start at $9.99), there *is* an online version. >>> >>> >>> >>> Randy & Yuko's models take into account feedback and inhibition. >>> >>> >>> >>> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: >>> >>> Daniel, >>> >>> >>> >>> Does your book include a discussion of Regulatory or Inhibitory Feedback >>> published in several low-impact journals between 2008 and 2014 (and in >>> videos subsequently)? >>> >>> These are networks where the primary computation is inhibition back to >>> the inputs that activated them and may be very counterintuitive given >>> today's trends. You can almost think of them as the opposite of Hopfield >>> networks.
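One minimal way such input-directed inhibitory feedback is often formalized (close in spirit to Spratling's divisive-input-modulation models; a sketch assuming uniform binary weights, not necessarily the exact published equations):

```python
import numpy as np

def regulatory_feedback(x, W, steps=50, eps=1e-9):
    # W[k, j] = 1 if output neuron k receives input j (binary weights assumed)
    y = np.ones(W.shape[0])              # start with all output neurons active
    for _ in range(steps):
        pred = W.T @ y                   # total feedback each input receives
        q = x / (pred + eps)             # inputs divisively inhibited by their own feedback
        y = y * (W @ q) / W.sum(axis=1)  # each output rescaled by its regulated inputs
    return y

# Non-overlapping case: each output explains one input, so the
# activities settle on the inputs themselves: [2., 3.]
y_demo = regulatory_feedback(np.array([2.0, 3.0]), np.eye(2))
```

Note there is no lateral inhibition term here: when outputs share inputs, they compete only indirectly, through the divisive regulation of the inputs they both draw on, which is the contrast with field-like inhibition drawn above.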
>>> >>> >>> >>> I would love to check inside the book but I don't have an academic budget >>> that allows me access to it, and that is a huge part of the problem with how >>> information is shared and funding is allocated. I could not get access to >>> any of the text or citations, especially Chapter 4: "Competition, Lateral >>> Inhibition, and Short-Term Memory", to weigh in. >>> >>> >>> >>> I wish the best circulation for your book, but even if the Regulatory >>> Feedback Model is in the book, that does not change the fundamental problem >>> if the book is not readily available. >>> >>> >>> >>> The same goes for Steve Grossberg's book; I cannot easily look inside. >>> With regards to Adaptive Resonance, I don't subscribe to lateral inhibition >>> as a predominant mechanism, but I do believe a function such as vigilance >>> is very important during recognition, and Adaptive Resonance is one of >>> a very few models that have it. The Regulatory Feedback model I have >>> developed (and Michael Spratling studies a similar model as well) is built >>> primarily using the vigilance type of connections and allows multiple >>> neurons to be evaluated at the same time and continuously during >>> recognition in order to determine which (single or multiple neurons >>> together) match the inputs the best without lateral inhibition. >>> >>> >>> >>> Unfortunately, within conferences and talks predominated by the Adaptive >>> Resonance crowd I have experienced the familiar dismissiveness and did not >>> have an opportunity to give a proper talk. This goes back to the larger >>> issue of academic politics based on small self-selected committees, the >>> same issues that exist with the feedforward crowd, and pretty much all of >>> academia.
>>> >>> >>> >>> Today's information age algorithms such as Google's can determine >>> relevance of information and ways to display them, but hegemony of the >>> journal systems and the small committee system of academia developed in the >>> middle ages (and their mutual synergies) block the use of more modern >>> methods in research. Thus we are stuck with this problem, which especially >>> affects those that are trying to introduce something new and >>> counterintuitive, and hence the results described in the two National >>> Bureau of Economic Research articles I cited in my previous message. >>> >>> >>> >>> Thomas, I am happy to have more discussions and/or start a different >>> thread. >>> >>> >>> >>> Sincerely, >>> >>> Tsvi Achler MD/PhD >>> >>> >>> >>> >>> >>> >>> >>> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S >>> wrote: >>> >>> Tsvi, >>> >>> >>> >>> While deep learning and feedforward networks have an outsize popularity, >>> there are plenty of published sources that cover a much wider variety of >>> networks, many of them more biologically based than deep learning. A >>> treatment of a range of neural network approaches, going from simpler to >>> more complex cognitive functions, is found in my textbook *Introduction >>> to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also >>> Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021) >>> emphasizes a variety of architectures with a strong biological basis. >>> >>> >>> >>> >>> >>> Best, >>> >>> >>> >>> >>> >>> Dan Levine >>> ------------------------------ >>> >>> *From:* Connectionists >>> on behalf of Tsvi Achler >>> *Sent:* Saturday, October 30, 2021 3:13 AM >>> *To:* Schmidhuber Juergen >>> *Cc:* connectionists at cs.cmu.edu >>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing >>> Lecture, etc. 
>>> >>> >>> >>> Since the title of the thread is Scientific Integrity, I want to point >>> out some issues about trends in academia, and then focus especially on >>> the connectionist community. >>> >>> >>> >>> In general, analyzing impact factors etc., the most important progress gets >>> silenced until the mainstream picks it up: Impact Factors in novel >>> research www.nber.org/.../working_papers/w22180/w22180.pdf >>> and >>> often this may take a generation >>> https://www.nber.org/.../does-science-advance-one-funeral... >>> >>> . >>> >>> >>> >>> The connectionist field is stuck on feedforward networks and variants >>> such as with inhibition of competitors (e.g. lateral inhibition), or other >>> variants that are sometimes labeled as recurrent networks for learning time, >>> where the feedforward networks can be rewound in time. >>> >>> >>> >>> This stasis is specifically occurring with the popularity of deep >>> learning. This is often portrayed as neurally plausible connectionism but >>> requires an implausible amount of rehearsal and is not connectionist if >>> this rehearsal is not implemented with neurons (see video link for further >>> clarification). >>> >>> >>> >>> Models which have true feedback (e.g. back to their own inputs) cannot >>> learn by backpropagation, but there is plenty of evidence these types of >>> connections exist in the brain and are used during recognition. Thus they >>> get ignored: no talks in universities, no featuring in "premier" journals >>> and no funding. >>> >>> >>> >>> But they are important and may negate the need for rehearsal as needed >>> in feedforward methods. Thus they may be essential for moving connectionism >>> forward. >>> >>> >>> >>> If the community is truly dedicated to brain-motivated algorithms, I >>> recommend giving more time to networks other than feedforward networks.
>>> >>> >>> >>> Video: >>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 >>> >>> >>> >>> >>> Sincerely, >>> >>> Tsvi Achler >>> >>> >>> >>> >>> >>> >>> >>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen >>> wrote: >>> >>> Hi, fellow artificial neural network enthusiasts! >>> >>> The connectionists mailing list is perhaps the oldest mailing list on >>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping >>> that some of them - as well as their contemporaries - might be able to >>> provide additional valuable insights into the history of the field. >>> >>> Following the great success of massive open online peer review (MOOR) >>> for my 2015 survey of deep learning (now the most cited article ever >>> published in the journal Neural Networks), I've decided to put forward >>> another piece for MOOR. I want to thank the many experts who have already >>> provided me with comments on it. Please send additional relevant references >>> and suggestions for improvements for the following draft directly to me at >>> juergen at idsia.ch: >>> >>> >>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>> >>> >>> The above is a point-for-point critique of factual errors in ACM's >>> justification of the ACM A. M. Turing Award for deep learning and a >>> critique of the Turing Lecture published by ACM in July 2021. This work can >>> also be seen as a short history of deep learning, at least as far as ACM's >>> errors and the Turing Lecture are concerned. >>> >>> I know that some view this as a controversial topic. However, it is the >>> very nature of science to resolve controversies through facts. Credit >>> assignment is as core to scientific history as it is to machine learning. >>> My aim is to ensure that the true history of our field is preserved for >>> posterity. >>> >>> Thank you all in advance for your help! 
>>> >>> Jürgen Schmidhuber >>> >>> >>> >>> >>> >>> >>> >>> >>> -- >>> >>> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >>> >>> Computer Science and Engineering 0404 >>> IF USING FEDEX INCLUDE THE FOLLOWING LINE: >>> CSE Building, Room 4130 >>> University of California San Diego - >>> 9500 Gilman Drive # 0404 >>> La Jolla, Ca. 92093-0404 >>> >>> Email: gary at ucsd.edu >>> Home page: http://www-cse.ucsd.edu/~gary/ >>> >>> >>> Schedule: http://tinyurl.com/b7gxpwo >>> >>> >>> >>> >>> *Listen carefully,* >>> *Neither the Vedas* >>> *Nor the Qur'an* >>> *Will teach you this:* >>> *Put the bit in its mouth,* >>> *The saddle on its back,* >>> *Your foot in the stirrup,* >>> *And ride your wild runaway mind* >>> *All the way to heaven.* >>> >>> *-- Kabir* >>> >>> > > -- > Juyang (John) Weng > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.bach at ucl.ac.uk Thu Nov 11 03:47:15 2021 From: d.bach at ucl.ac.uk (Dominik R. Bach) Date: Thu, 11 Nov 2021 08:47:15 +0000 Subject: Connectionists: Post doc in Cognitive-Computational/Theoretical Neuroscience at University College London UK (closing 16 Nov) In-Reply-To: References: <6c7bf4cd-f78e-9e77-e8de-b6d50a636f47@ucl.ac.uk> Message-ID: *Post doc in Cognitive-Computational Modelling/Theoretical Neuroscience at University College London: discrete and continuous human action control under threat* We are looking for a research fellow in an ERC-funded research project "Action selection under threat - the complex control of human defence" led by Dominik Bach (http://bachlab.org) at University College London in collaboration with Peter Dayan, Max-Planck-Institute for Biological Cybernetics in Tübingen (https://www.kyb.tuebingen.mpg.de/de/computational-neuroscience). The position will be based at Max-Planck UCL Centre for Computational Psychiatry (https://www.mps-ucl-centre.mpg.de/en) and Wellcome Centre for Human Neuroimaging (http://www.fil.ion.ucl.ac.uk/).
The overarching goal of the project is to understand the cognitive-computational control of human action selection under acute, immediate threat. We investigate this in an immersive virtual reality (VR) environment, in which people can move to avoid a large number of different threats. As part of this project, the candidate will build explicit theoretical and computational models of discrete and continuous action control and action updating under constraints of time pressure and unaffordable costs. These will be tested by the experimentalists in our interdisciplinary team, composed of VR experts, psychologists, and movement scientists. If desired, there is a possibility to get involved in the experimental work, including behavioural experiments, motion capture and OPM-MEG. Within the topical focus, the project offers unique freedom to explore and develop novel directions and formalisms at the interface of classical discrete-space decision models and continuous action control. We seek applicants with a track record of cognitive-computational modelling and a PhD in computational neuroscience, robotics with focus on action planning, computer science/mathematics/physics with a focus on decision science, or in a related area. We are looking for an individual who is strongly motivated to pursue an academic career and is excited by the opportunities for personal and career development this position can provide. The post is available from now with negotiable starting date. Funding is available for up to 3 years. Starting salary is on UCL grade 7, ranging from £36,770 to £44,388 per annum, inclusive of London Allowance, superannuable. More information, a full job description, and access to the UCL online application portal: https://bit.ly/3G6uQAg Please contact d.bach at ucl.ac.uk for any queries about the project or role. *Closing date: 16 November 2021* Interviews will be held remotely in December. Apologies for cross-posting.
-- ----------------------- Dominik R Bach MBBS PhD Professor for Cognitive-Computational and Clinical Neuroscience Max Planck UCL Centre for Computational Psychiatry and Ageing Research Wellcome Centre for Human Neuroimaging, University College London bachlab.org | @bachlab_cog -------------- next part -------------- An HTML attachment was scrubbed... URL: From ioannakoroni at csd.auth.gr Thu Nov 11 04:06:44 2021 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Thu, 11 Nov 2021 11:06:44 +0200 Subject: Connectionists: Live e-Lecture by Prof. Cees Snoek: "Real-World Learning", 23rd November 2021 17:00-18:00 CET. Upcoming AIDA AI excellence lectures References: <0b9801d7d632$7278d7b0$576a8710$@csd.auth.gr> <007301d7d671$dbcc3240$936496c0$@csd.auth.gr> Message-ID: <01c301d7d6db$73729530$5a57bf90$@csd.auth.gr> Dear AI scientist/engineer/student/enthusiast, Prof. Cees Snoek (University of Amsterdam, Netherlands), a prominent AI researcher internationally, will deliver the e-lecture: "Real-World Learning", on Tuesday 23rd November 2021 17:00-18:00 CET (8:00-9:00 am PST), (12:00 am-1:00 am CST), see details in: http://www.i-aida.org/event_cat/ai-lectures/ You can join for free using the zoom link: https://authgr.zoom.us/s/96903010605 & Passcode: 148148 The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, is very pleased to offer you top quality scientific lectures on several current hot AI topics. Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST), (12:00 am-1:00 am CST). Attendance is free. Other upcoming lectures: 1. Prof. Marios Polycarpou (University of Cyprus, Cyprus), 7th December 2021 17:00-18:00 CET. 2. Prof. Bernhard Rinner (Universität Klagenfurt, Austria), 11th January 2022 17:00-18:00 CET.
More lecture info in: https://www.i-aida.org/event_cat/ai-lectures/?type=future The lectures are disseminated through multiple channels and email lists (we apologize if you received it through various channels). If you want to stay informed on future lectures, you can register in the email lists AIDA email list and CVML email list. Best regards Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe -- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From marinella.petrocchi at iit.cnr.it Thu Nov 11 06:22:03 2021 From: marinella.petrocchi at iit.cnr.it (Marinella Petrocchi) Date: Thu, 11 Nov 2021 12:22:03 +0100 Subject: Connectionists: [CFP][ECIR 2022] ROMCIR 2022: 2nd Intl. Workshop on Reducing Online Misinformation through Credible Information Retrieval Message-ID: [Apologies for multiple postings] ******************************************************************************************************************** ROMCIR 2022: The 2nd International Workshop on Reducing Online Misinformation through Credible Information Retrieval Stavanger, Norway, April 10, 2022 Conference website: https://romcir2022.disco.unimib.it/ Submission link: https://easychair.org/conferences/?conf=romcir2022 ******************************************************************************************************************** ***AIM AND THEMES*** Within the ECIR 2022 conference (https://ecir2022.org/), the second edition of the ROMCIR workshop is particularly focused on discussing and addressing issues related to reducing misinformation through Information Retrieval solutions. Hence, the central topic of the workshop concerns providing users with access to credible and/or verified information, to mitigate the information disorder phenomenon. By "information disorder" we mean all forms of communication pollution.
This ranges from misinformation shared out of ignorance to the intentional sharing of false content. In this context, all approaches that can serve the assessment of the credibility of information circulating online, and in social media in particular, find their place. This topic is very broad, as it concerns different contents (e.g., Web pages, news, reviews, medical information, online accounts, etc.), different Web and social media platforms (e.g., microblogging platforms, social networking services, social question-answering systems, etc.), and different purposes (e.g., identifying false information, accessing information based on its credibility, retrieving credible information, etc.). For this reason, the themes of interest include, but are not limited to, the following: - Access to credible information - Bias detection - Bot/Spam/Troll detection - Computational fact-checking - Crowdsourcing for credibility - Deep fakes - Disinformation/Misinformation detection - Evaluation strategies to assess information credibility - Fake news detection - Fake reviews detection - Filter Bubbles and Echo chambers - Harassment/bullying - Hate-speech detection - Information polarization in online communities - Propaganda identification/analysis - Retrieval of credible information - Security, privacy and credibility - Sentiment/Emotional analysis - Stance detection - Trust and Reputation systems (to mitigate the effects of disinformation) - Understanding and guiding the societal reaction in the presence of disinformation Data-driven approaches in the IR field or related fields, supported by publicly available datasets, are more than welcome.
***CONTRIBUTIONS*** The workshop solicits the sending of two types of contributions relevant to the workshop and suitable to generate discussion: - Original, unpublished contributions (pre-prints submitted to ArXiv are eligible) that will be included in an open-access post-proceedings volume of CEUR Workshop Proceedings (http://ceur-ws.org/), indexed by both Scopus and DBLP. - Already published or preliminary work that will not be included in the post-proceedings volume. All submissions will undergo double-blind peer review by the program committee. Submissions are to be done electronically through the EasyChair at: https://easychair.org/conferences/?conf=romcir2022 ***SUBMISSION INSTRUCTIONS*** Submissions must be: - no more than 10 pages long (regular papers) - between 5 and 9 pages long (short papers) We recommend that authors use the new CEUR-ART style for writing papers to be published: - An Overleaf page for LaTeX users is available at: https://www.overleaf.com/read/gwhxnqcghhdt - An offline version with the style files including DOCX template files is available at: http://ceur-ws.org/Vol-XXX/CEURART.zip - The paper must contain, as the name of the conference: ROMCIR 2022: The 2nd Workshop on Reducing Online Misinformation through Credible Information Retrieval, held as part of ECIR 2022: the 44th European Conference on Information Retrieval, April 10-14, 2022, Stavanger, Norway - The title of the paper should follow the regular capitalization of English - Please, choose the single-column template - According to CEUR-WS policy, the papers will be published under a CC BY 4.0 license: https://creativecommons.org/licenses/by/4.0/deed.en If the paper is accepted, authors will be asked to sign (at pen) an author agreement with CEUR: - In case you do not employ Third-Party Material (TPM) in your draft, sign the document at http://ceur-ws.org/ceur-author-agreement-ccby-ntp.pdf?ver=2020-03-02 - If you do use TPM, the agreement can be found at 
http://ceur-ws.org/ceur-author-agreement-ccby-tp.pdf?ver=2020-03-02 Please submit an anonymized version of the submission (do not indicate the names of authors and institutions and cite your work in an impersonal way) ***IMPORTANT DATES*** - Abstract Submission Deadline: January 03, 2022 - Paper Submission Deadline: January 10, 2022 - Decision Notifications: February 11, 2022 - Workshop day: April 10, 2022 ***ORGANIZERS*** The following people contribute to the workshop in various capacities and roles: *Workshop Chairs* - Marinella Petrocchi (https://www.iit.cnr.it/en/marinella.petrocchi/), IIT-CNR, Pisa, Italy - Marco Viviani (https://ikr3.disco.unimib.it/people/marco-viviani/), University of Milano-Bicocca *Proceedings Chair* - Rishabh Upadhyay, University of Milano-Bicocca *Program Committee* - Rino Falcone, Institute of Cognitive Sciences and Technologies-CNR, Rome, Italy - Carlos A. Iglesias, Universidad Politécnica de Madrid, Madrid, Spain - Petr Knoth, The Open University, London, UK - Udo Kruschwitz, University of Regensburg, Regensburg, Germany - Yelena Mejova, ISI Foundation, Turin, Italy - Preslav Nakov, Qatar Computing Research Institute, HBKU, Doha, Qatar - Symeon Papadopoulos, Information Technologies Institute (ITI), Thessaloniki, Greece - Gabriella Pasi, University of Milano-Bicocca, Milan, Italy - Marinella Petrocchi, IIT - CNR -
Istituto di Informatica e Telematica, Pisa, Italy - Adrian Popescu, CEA LIST, Gif-sur-Yvette, France - Paolo Rosso, Universitat Politècnica de València, València, Spain - Fabio Saracco, IMT School for Advanced Studies, Lucca, Italy - Marco Viviani, University of Milano-Bicocca, Milan, Italy - Xinyi Zhou, Syracuse University, Syracuse, NY, USA - Arkaitz Zubiaga, Queen Mary University of London, London, UK -- Marinella Petrocchi Senior Researcher @Institute of Informatics and Telematics (IIT) National Research Council (CNR) Pisa (Italy) Mobile: +39 348 8260773 Skype: m_arinell_a Web: https://marinellapetrocchi.wixsite.com/mysite `Luck is a matter of geography' (Bandabardò) From Auke.Ijspeert at epfl.ch Thu Nov 11 07:01:07 2021 From: Auke.Ijspeert at epfl.ch (Auke Ijspeert) Date: Thu, 11 Nov 2021 13:01:07 +0100 Subject: Connectionists: ERC-funded Postdoc position in computational neuroscience and modelling of salamander locomotor circuits, EPFL, Lausanne, Switzerland Message-ID: <80a00224-e6d0-7d1e-5119-c79095491226@epfl.ch> The Biorobotics laboratory (Biorob, https://www.epfl.ch/labs/biorob/) at EPFL (Lausanne, Switzerland) has one open postdoc position in computational neuroscience and modeling of salamander locomotor circuits. The position is part of the Salamandra project, a Synergy grant funded by the ERC that started in September 2021, together with Prof. Andras Simon (Karolinska Institute, Sweden) and Prof. Dimitri Ryczko (U. Sherbrooke, Canada). See some info here: https://actu.epfl.ch/news/salamanders-provide-a-model-for-spinal-cord-rege-3/ The goals of the position are (i) to develop numerical models and simulations of the locomotor circuits in the spinal cord of the salamander, (ii) to analyze their dynamics when coupled to a simulated mechanical model of the body (interactions between central and peripheral mechanisms), and (iii) to investigate their regenerative properties after spinal cord lesions.
The models will likely be based on integrate-and-fire neuron models, but other models could also be investigated (e.g. rate-based). The position is fully funded for 2 years (and can be extended to 4 years in total). EPFL is one of the leading Institutes of Technology in Europe and offers extremely competitive salaries and research infrastructure. *Requirements:* Candidates should have a Ph.D. and a strong publication record in computational neuroscience. An ideal candidate would have prior experience in: * Numerical models of oscillatory (spinal) circuits * Movement and locomotion control * Integrate-and-fire neural models (or similar neuron models) Good management skills are a plus. Fluency in oral and written English is required. *How to apply for the position:* Postdoctoral applications should consist of a motivation letter (explaining why you are interested in the project, and why you feel qualified for it), a full CV, two or three relevant publications, and the email addresses of two or more referees. PDF files are preferred. Please send your application and any inquiry by email to auke.ijspeert at epfl.ch. Information concerning the type of research carried out by the lab can be found at https://www.epfl.ch/labs/biorob/ . *Deadline and starting date:* Applications are accepted starting now and will be processed as they arrive until the position is closed. The ideal starting date is *early 2022* (with some flexibility). *Contact:*
You should send your application and any inquiry by email to auke.ijspeert at epfl.ch -- ----------------------------------------------------------------- Prof Auke Jan Ijspeert Biorobotics Laboratory EPFL-STI-IBI-BIOROB, ME D1 1226, Station 9 EPFL, Ecole Polytechnique Fédérale de Lausanne CH 1015 Lausanne, Switzerland Office: ME D1 1226 Tel: +41 21 693 2658 Fax: +41 21 693 3705 www: http://biorob.epfl.ch Email: Auke.Ijspeert at epfl.ch ----------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver at roesler.co.uk Thu Nov 11 07:48:47 2021 From: oliver at roesler.co.uk (Oliver Roesler) Date: Thu, 11 Nov 2021 12:48:47 +0000 Subject: Connectionists: Survey on Robot Behavior Adaptation to Human Social Norms Message-ID: <510cc86b-0dd0-6dc4-154f-a80e6d3c28e5@roesler.co.uk> **Apologies for cross-posting** Dear All, Many thanks to everyone who has already completed our survey about Robot Behavior Adaptation to Human Social Norms. The goal of the survey is to compile an overview of how socially aware robot behavior, which conforms to social norms, will influence the perception of robots by the humans interacting with them. Additionally, the survey aims to determine which characteristics a good benchmark for the evaluation of social robot behavior, regarding its compliance with social norms, should have. If you haven't had the time yet to complete the survey and would like to participate, please access it here. The survey should take no more than 15 minutes. Thank you for your support! Best regards, Oliver -------------- next part -------------- An HTML attachment was scrubbed...
URL: From antonior at usp.br Thu Nov 11 09:16:47 2021 From: antonior at usp.br (Antonio Roque) Date: Thu, 11 Nov 2021 11:16:47 -0300 Subject: Connectionists: Postdoctoral fellowships at the Neuromathematics Center in Sao Paulo State, Brazil Message-ID: *Postdoctoral Fellowships in Acquisition, processing and quantitative analysis of neurobiological data* The Research, Innovation and Dissemination Center for Neuromathematics (NeuroMat), hosted by the University of São Paulo (USP), Brazil, and funded by the São Paulo Research Foundation (FAPESP), is offering three post-doctoral fellowships for recent PhDs with outstanding research potential. The fellowship will involve collaborations with research teams and laboratories associated with NeuroMat. The research to be developed by the post-doc fellows shall be strictly related to ongoing research lines developed by the NeuroMat team. The project may be developed at USP (São Paulo or Ribeirão Preto) or at UNICAMP (Campinas). We seek candidates capable of developing independent research in one of the research lines below. 1. Stochastic modeling of neurobiological data. 2. Acquisition, processing and quantitative analysis of neurobiological data. Candidates for the first research line are required to have a strong background in probability theory with emphasis on stochastic processes. Candidates for the second research line are required to have a strong background in neuroscience, with previous experience in neurophysiological data acquisition, processing and analysis, and knowledge of computer programming. The initial appointment is for one year, with a possible extension to up to three years, conditional on research progress. The fellowship is competitive at an international level, and fellows benefit from extra funds for travel and research expenses plus limited support for relocation expenses. Application Instructions: Applicants should complete and submit the application form.
The following documents and information are requested (please see the form for further details): - Summary of the CV, in the format required by FAPESP (see fapesp.br/en/6351 for instructions); - List of publications, with links to those available online; - A summary of the research plan for the next year, up to 5 pages in length. This document must explicitly state for which of the two aforementioned profiles the candidate has applied. It should also address how this research plan fits within the framework of the NeuroMat research program. Timetable: Candidates are encouraged to apply at their earliest convenience; applications will be accepted until December 12, 2021. Appointments are expected to start by February or March 2022. The initial period of the position lasts for 12 months, with possible renewals for up to three years. This opportunity is open to candidates of any nationality. The selected candidates will be awarded FAPESP Postdoctoral fellowships in the amount of BRL 7,373 monthly and a research contingency fund, equivalent to 15% of the annual value of the fellowship, which should be spent on items directly related to the research activity. More details on FAPESP's postdoctoral fellowships are available at http://www.fapesp.br/en/5427. -- Dr. Antonio C. Roque Professor Associado Departamento de Fisica FFCLRP, Universidade de Sao Paulo 14040-901 Ribeirao Preto-SP Brazil - Brasil E-mails: antonior at usp.br aroquesilva at gmail.com URL: www.sisne.org Tels: +55 16 3315-3768 (sala/office); +55 16 3315-3859 (lab) FAX: +55 16 3315-4887 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tomas.hromadka at gmail.com Thu Nov 11 18:45:18 2021 From: tomas.hromadka at gmail.com (Tomas Hromadka) Date: Fri, 12 Nov 2021 00:45:18 +0100 Subject: Connectionists: COSYNE 2022: Abstract submission closes soon Message-ID: <2bf63148-a82c-c875-0d2c-4181a539ea20@gmail.com> ==================================================== Computational and Systems Neuroscience 2022 (Cosyne) MAIN MEETING 17 - 20 March 2022 Lisbon, Portugal WORKSHOPS 21 - 22 March 2022 Cascais, Portugal www.cosyne.org ==================================================== ---------------------------------------------------- MEETING ANNOUNCEMENT ---------------------------------------------------- The annual Cosyne meeting provides an inclusive forum for the exchange of empirical and theoretical approaches to problems in systems neuroscience, in order to understand how neural systems function. The MAIN MEETING is single-track. A set of invited talks is selected by the Executive Committee, and additional talks and posters are selected by the Program Committee, based on submitted abstracts. The WORKSHOPS feature in-depth discussion of current topics of interest, in a small group setting. All abstract submissions will be reviewed double blind. The deadline for Abstract submission is 20 November 2021. Cosyne topics include but are not limited to: neural basis of behavior, sensory and motor systems, circuitry, learning, neural coding, natural scene statistics, dendritic computation, neural basis of persistent activity, nonlinear receptive field mapping, representations of time and sequence, reward systems, decision-making, synaptic plasticity, map formation and plasticity, population coding, attention, neuromodulation, and computation with spiking networks. We would like to foster increased participation from experimental groups as well as computational ones. Please circulate widely and encourage your students and postdocs to apply. 
IMPORTANT DATES Abstract submission is now open. Abstract submission deadline: 20 November 2021. When preparing an abstract, authors should be aware that not all abstracts can be accepted for the meeting. Abstracts will be selected based on the clarity with which they convey the substance, significance, and originality of the work to be presented. COSYNE SPEAKERS Eugenia Chiappe (Champalimaud Centre for the Unknown) Albert Compte (IDIBAPS, Barcelona) Sandeep Robert Datta (Harvard Medical School) André Fenton (New York University) Kate Jeffery (University College London) Ann Hermundstad (Janelia Research Campus, HHMI) Michael A. Long (New York University) Christian Machens (Champalimaud Centre for the Unknown) Asya Rolls (Technion - Israel Institute of Technology) Susanne Schreiber (Humboldt-Universität zu Berlin) Maryam Shanechi (University of Southern California) Scott Waddell (University of Oxford) Martha White (University of Alberta) ORGANIZING COMMITTEE General Chairs: Anne-Marie Oswald (U Pittsburgh) and Srdjan Ostojic (Ecole Normale Superieure Paris) Program Chairs: Laura Busse (LMU Munich) and Tim Vogels (IST Austria) Workshop Chairs: Anna Schapiro (U Penn) and Blake Richards (McGill) Tutorial Chair: Kanaka Rajan (Mount Sinai) DEIA Committee: Gabrielle Gutierrez (Columbia) and Stefano Recanatesi (U Washington) Undergraduate Travel Chairs: Angela Langdon (Princeton) and Sashank Pisupati (Princeton) Development Chair: Michael Long (NYU) Social Media Chair: Grace Lindsay (Columbia) Audio-Video Media Chair: Carlos Stein Brito (EPFL) Poster Design: Maja Bialon PROGRAM COMMITTEE Laura Busse (U Munich) Tim Vogels (IST Austria) Athena Akrami (UCL) Omri Barak (Technion) Brice Bathellier (Paris) Bing Brunton (U Washington) Yoram Burak (Hebrew University) SueYeon Chung (Columbia) Christine Constantinople (NYU) Victor de Lafuente (UNAM Mexico) Jan Drugowitsch (Harvard) Alexander Ecker (Göttingen) Tatiana Engel (Cold Spring Harbor) Annegret Falkner (Princeton) Kevin Franks
(Duke) Jens Kremkow (Berlin) Andrew Leifer (Princeton) Sukbin Lim (Shanghai) Scott Linderman (Stanford) Emilie Mace (MPI Neurobiology) Mackenzie Mathis (EPFL Lausanne) Ida Momennejad (Microsoft) Jill O'Reilly (Oxford) Il Memming Park (Stony Brook) Adrien Peyrache (McGill, Montréal) Yiota Poirazi (FORTH) Nathalie Rochefort (Edinburgh) Cristina Savin (NYU) Daniela Vallentin (MPI Ornithology) Brad Wyble (Penn State) EXECUTIVE COMMITTEE Stephanie Palmer (U Chicago) Zachary Mainen (Champalimaud) Alexandre Pouget (U Geneva) Anthony Zador (CSHL) CONTACT meeting [at] cosyne.org COSYNE MAILING LISTS Please consider adding yourself to Cosyne mailing lists (groups) to receive email updates with various Cosyne-related information and join in helpful discussions. See Cosyne.org -> Mailing lists for details. From oliver at roesler.co.uk Fri Nov 12 02:58:55 2021 From: oliver at roesler.co.uk (Oliver Roesler) Date: Fri, 12 Nov 2021 07:58:55 +0000 Subject: Connectionists: CFP Special Issue on Socially Acceptable Robot Behavior: Approaches for Learning, Adaptation and Evaluation Message-ID: <016222ae-ec9c-5841-1e0b-f3f536f70698@roesler.co.uk> *CALL FOR PAPERS* **Apologies for cross-posting** *Special Issue* on *Socially Acceptable Robot Behavior: Approaches for Learning, Adaptation and Evaluation* in Interaction Studies *I. Aim and Scope* A key factor for the acceptance of robots as regular partners in human-centered environments is the appropriateness and predictability of their behavior. Behavior in human-human interactions is governed by customary rules that define how people should behave in different situations, thereby shaping their expectations. Socially compliant behavior is usually rewarded by group acceptance, while non-compliant behavior might have consequences including isolation from a social group. Making robots able to understand human social norms allows for improving the naturalness and effectiveness of human-robot interaction and collaboration.
Since social norms can differ greatly between different cultures and social groups, it is essential that robots are able to learn and adapt their behavior based on feedback and observations from the environment. This special issue in Interaction Studies aims to attract the latest research on learning, producing, and evaluating human-aware robot behavior, thereby following the recent RO-MAN 2021 Workshop on Robot Behavior Adaptation to Human Social Norms (TSAR) in providing a venue to discuss the limitations of current approaches and future directions towards intelligent human-aware robot behaviors. *II. Submission* 1. Before submitting, please check the official journal guidelines. 2. For paper submission, please use the online submission system. 3. After logging into the submission system, please click on "Submit a manuscript" and select "Original article". 4. Please ensure that you select "Special Issue: Socially Acceptable Robot Behavior" under "General information". The primary list of topics includes (but is not limited to): * Human-human vs human-robot social norms * Influence of cultural and social background on robot behavior perception * Learning of socially accepted behavior * Behavior adaptation based on social feedback * Transfer learning of social norms experience * The role of robot appearance on applied social norms * Perception of socially normative robot behavior * Human-aware collaboration and navigation * Social norms and trust in human-robot interaction * Representation and modeling techniques for social norms * Metrics and evaluation criteria for socially compliant robot behavior *III. Timeline* 1. Deadline for paper submission: *January 31, 2022* 2. First notification for authors: *April 15, 2022* 3. Deadline for revised papers submission: *May 31, 2022* 4. Final notification for authors: *July 15, 2022* 5. Deadline for submission of camera-ready manuscripts: *August 15, 2022*
Please note that these deadlines are only indicative and that all submitted papers will be reviewed as soon as they are received. *IV. Guest Editors* 1. *Oliver Roesler* - Vrije Universiteit Brussel - Belgium 2. *Elahe Bagheri* - Vrije Universiteit Brussel - Belgium 3. *Amir Aly* - University of Plymouth - UK 4. *Silvia Rossi* - University of Naples Federico II - Italy 5. *Rachid Alami* - CNRS-LAAS - France -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis.e.baker.phd at gmail.com Thu Nov 11 12:06:59 2021 From: travis.e.baker.phd at gmail.com (Travis Baker) Date: Thu, 11 Nov 2021 12:06:59 -0500 Subject: Connectionists: PhD Studentships available at the Center for Molecular and Behavioral Neuroscience, Rutgers University Message-ID: Rutgers University in Newark invites students to apply to the Behavioral and Neural Sciences Graduate Program. Students in our program perform independent research in humans, non-human primates, and other animals, culminating in a Ph.D. in Neuroscience. The program's goal is to prepare students for positions in academic, biomedical, and industrial research settings. All students receive tuition and a generous stipend, regardless of nationality. Outstanding candidates will be considered for the prestigious Rutgers Presidential Fellowship. Cellular, systems, and cognitive neuroscience research. State-of-the-art facilities: • Two research-dedicated 3T MRI scanners • MRI-compatible EEG • Behavioral and EEG testing facilities • Confocal microscopes. Cutting-edge techniques: • Functional imaging • Two-photon laser microscopy • Eye-tracking • Calcium imaging • Robot-guided transcranial magnetic stimulation • Optogenetics • Virtual reality • Chemogenetics • Patch-clamp recording • Fiber photometry • Voltammetry • Multi-site unit/LFP recordings. Diverse research interests of our faculty: • Learning and memory • Movement control • Language development • Vision • Addiction • Emotions, stress, and anxiety •
Decision making • Executive control • Attention. While performing exciting research, our students learn how the brain works, develops, interacts with the environment, and is modified by experience in health and disease. How to Apply / Contact Apply: http://bns.rutgers.edu Deadline: Dec. 15th, 2021 Contact: Dr. Icnelia Huerta Ocampo icnelia.huerta at rutgers.edu Center for Molecular and Behavioral Neuroscience https://sasn.rutgers.edu/research/centers-institutes/center-molecular-and-behavioral-neuroscience-cmbn For more information about the program, please contact Travis Baker, PhD, at travis.e.baker at rutgers.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From ASIM.ROY at asu.edu Thu Nov 11 17:10:45 2021 From: ASIM.ROY at asu.edu (Asim Roy) Date: Thu, 11 Nov 2021 22:10:45 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: Yoshua, Thanks for sending this list. If I want to do a quick read, which of these articles best describes the "Disentanglement" motivation and mechanism? Asim From: Yoshua Bengio Sent: Sunday, November 7, 2021 5:45 PM To: Asim Roy Cc: Adam Krawitz ; connectionists at cs.cmu.edu; Juyang Weng Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Here is a selection: *** Modular system-2 / Global Workspace Theory -inspired deep learning *** * Inductive Biases for Deep Learning of Higher-Level Cognition. https://arxiv.org/abs/2011.15091 * Compositional Attention: Disentangling Search and Retrieval. https://arxiv.org/abs/2110.09419 * Discrete-Valued Neural Communication. https://arxiv.org/abs/2107.02367 * A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning.
https://arxiv.org/abs/2106.02097 * Coordination Among Neural Modules Through a Shared Global Workspace. https://arxiv.org/abs/2103.01197 * Neural Production Systems. https://arxiv.org/abs/2103.01937 *** Causal discovery with deep learning *** * A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms. https://arxiv.org/abs/1901.10912 * Learning Neural Causal Models with Active Interventions. https://arxiv.org/abs/2109.02429 * Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning. https://arxiv.org/abs/2110.15796 * Toward Causal Representation Learning. https://ieeexplore.ieee.org/abstract/document/9363924 I am currently working on the merge of the above two threads of modularity and causality... -- Yoshua I'm overwhelmed by emails, so I won't be able to respond quickly or directly. Please write to my assistant in case of a time-sensitive matter or if it entails scheduling: julie.mongeau at mila.quebec On Sun, Nov 7, 2021 at 2:01 PM, Asim Roy > wrote: Yoshua, I am indeed feeling that I can have the cake and eat it too. Accepting the fact that neural activations in the brain have "meaning and interpretation" is a huge step forward for the field. I would conjecture that it opens the door to new theories in the cognitive and neurosciences. You are definitely crossing the red line and that's great. Can we have references to some of your papers? By the way, I think I understand what you mean by disentangling. There are probably simpler ways to disentangle and get to Explainable AI. But please send us the references. Best, Asim From: Yoshua Bengio > Sent: Sunday, November 7, 2021 8:55 AM To: Asim Roy > Cc: Adam Krawitz >; connectionists at cs.cmu.edu; Juyang Weng > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Asim, You can have your cake and eat it too with modular neural net architectures. You still have distributed representations but you have modular specialization.
Many of my papers since 2019 are on this theme. It is consistent with the specialization seen in the brain, but keep in mind that there is a huge number of neurons there, and you still don't see single grandmother cells firing alone; they fire in a pattern that is meaningful both locally (in the same region/module) and globally (different modules cooperate and compete according to the Global Workspace Theory and Neural Workspace Theory which have inspired our work). Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concepts (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity). -- Yoshua I'm overwhelmed by emails, so I won't be able to respond quickly or directly. Please write to my assistant in case of a time-sensitive matter or if it entails scheduling: julie.mongeau at mila.quebec On Sun, Nov 7, 2021 at 1:46 AM, Asim Roy > wrote: Over a period of more than 25 years, I have had the opportunity to argue about the brain in both public forums and private discussions. And they included very well-known scholars such as Walter Freeman (UC-Berkeley), Horace Barlow (Cambridge; great grandson of Charles Darwin), Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of whom are on this list. And many became good friends through these debates.
We argued about many issues over the years, but the one that baffled me the most was the one about localist vs. distributed representation. Here's the issue. As far as I know, although all the Nobel prizes in the field of neurophysiology - from Hubel and Wiesel (simple and complex cells) and Moser and O'Keefe (grid and place cells) to the current one on the discovery of temperature- and touch-sensitive receptors and neurons - are about finding "meaning" in single or a group of dedicated cells, the distributed representation theory has yet to explain these findings of "meaning." Contrary to the assertion that the field is open-minded, I think most in this field are afraid to cross the red line. Horace Barlow was the exception. He was perhaps the only neuroscientist who was willing to cross the red line and declare that "grandmother cells will be found." After a debate on this issue in 2012, which included Walter Freeman and others, Horace visited me in Phoenix at the age of 91 for further discussion. If the field is open-minded, I would love to hear how distributed representation is compatible with finding "meaning" in the activations of single or a dedicated group of cells. Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy From: Connectionists > On Behalf Of Adam Krawitz Sent: Friday, November 5, 2021 10:01 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Tsvi, I'm just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress: 1. If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach they will be pointed out, but that is all part of the scientific exchange.
Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a "new" approach to modelling cognition). I'm sure others can provide more examples. 2. I'm much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively than (or at least as effectively as) the existing approaches, and they will come running. Examples of this include: Tesauro's TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.'s convolutional NN for the MNIST digits. Clearly communicate the novel contribution of your approach and I think you will find a receptive audience. Thanks, Adam From: Connectionists > On Behalf Of Tsvi Achler Sent: November 4, 2021 9:46 AM To: gary at ucsd.edu Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Lastly, feedforward methods are predominant in large part because they have financial backing from large companies with advertising and clout like Google and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly and then presenting them to a network to learn. As far as I know, no one is doing this in the community, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives we are cheating ourselves out of pursuing other more rigorous brain-like methods.
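The rehearsal procedure described above (store all patterns, mix them randomly, present them to the network) can be made concrete with a toy sketch. Everything here is an illustrative assumption of mine, not code from any cited work: a single logistic unit, synthetic two-cluster data, and plain SGD standing in for the network being trained on the shuffled pattern store:

```python
import numpy as np

rng = np.random.default_rng(1)

# The stored pattern set: two classes of patterns, imagined as having
# arrived at different times but all kept in the rehearsal store.
old_task = rng.normal([+2.0, 0.0], 1.0, (100, 2))   # class 0
new_task = rng.normal([-2.0, 0.0], 1.0, (100, 2))   # class 1
X = np.vstack([old_task, new_task])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
for epoch in range(200):
    order = rng.permutation(len(X))       # mix stored patterns randomly
    for i in order:                       # replay them to the network
        z = np.clip(X[i] @ w + b, -30, 30)
        p = 1 / (1 + np.exp(-z))          # logistic unit
        g = p - y[i]                      # gradient of log loss
        w -= 0.1 * g * X[i]
        b -= 0.1 * g

z = np.clip(X @ w + b, -30, 30)
pred = (1 / (1 + np.exp(-z)) > 0.5).astype(int)
print(f"accuracy after interleaved rehearsal: {(pred == y).mean():.2f}")
```

The point of the sketch is the procedure, not the model: the shuffled replay of the whole store is what lets a feedforward learner fit old and new patterns together, and storing every pattern is the cost that Achler argues makes the scheme implausible as connectionism unless the store itself is neural.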
Sincerely, -Tsvi On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler > wrote: Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book, which describes an air-conditioner (AC) type of feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition. The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-a-vis its inputs. Thus, for context, regulatory feedback is not a field-like inhibition; it is very directed, based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with changes in homeostatic regulation in experiments occurring on a slower time scale). The regulatory feedback model describes the effect and role of those regulated connections in real time during recognition. I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks that the system is based on politics, and that is why certain work is not cited as it should be; but even worse, these politics are here in the group today, and they continue to very strongly influence decisions in the connectionist community and hold us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu > wrote: Tsvi - While I think Randy and Yuko's book is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version. Randy & Yuko's models take into account feedback and inhibition. On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler > wrote: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback published in several low-impact journals between 2008 and 2014 (and in videos subsequently)?
These are networks where the primary computation is inhibition back to the inputs that activated them, and they may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in. I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes for Steve Grossberg's book; I cannot easily look inside. With regard to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of a very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections and allows multiple neurons to be evaluated at the same time and continuously during recognition, in order to determine which (single or multiple neurons together) match the inputs the best, without lateral inhibition. Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd, I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd, and pretty much all of academia.
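The directed, per-neuron regulation described above can be sketched in runnable form. This is my reading of the iterative update in Achler's and Spratling's papers (the exact formulation varies across publications), on a hypothetical two-output toy network: output A uses input 1 only, output B uses inputs 1 and 2:

```python
import numpy as np

# Binary weights, inputs x outputs: column 0 = output A, column 1 = output B.
W = np.array([[1.0, 1.0],   # input 1 feeds outputs A and B
              [0.0, 1.0]])  # input 2 feeds output B only
n = W.sum(axis=0)           # inputs per output: [1, 2]
eps = 1e-9                  # guard against division by zero

def recognize(x, steps=200):
    y = np.full(W.shape[1], 0.5)   # start with all outputs equally active
    for _ in range(steps):
        f = W @ y                  # feedback each input receives from outputs
        # Each output keeps only the input support left after feedback
        # regulation; no output inhibits another output directly.
        y = (y / n) * (W.T @ (x / (f + eps)))
    return y

print(recognize(np.array([1.0, 1.0])))  # B should dominate
print(recognize(np.array([1.0, 0.0])))  # A should dominate
```

With input pattern [1, 1], output B wins because it fully explains both inputs, and A is suppressed purely through the shared regulatory feedback on input 1, with no lateral connection between A and B; with input [1, 0], A wins instead. This illustrates the claim that multiple candidate neurons are evaluated continuously against the inputs during recognition.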
Today's information-age algorithms, such as Google's, can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia, developed in the Middle Ages (and their mutual synergies), block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread. Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S > wrote: Tsvi, While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning. A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ________________________________ From: Connectionists > on behalf of Tsvi Achler > Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen > Cc: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia and then focus especially on the connectionist community.
In general, analyzing impact factors etc., the most important progress gets silenced until the mainstream picks it up (Impact Factors in novel research: www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled as recurrent networks for learning over time, where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning. This is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see the video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals, and no funding. But they are important and may negate the need for rehearsal as required in feedforward methods; thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field.
Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help! Jürgen Schmidhuber -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir -------------- next part -------------- An HTML attachment was scrubbed...
URL: From giacomo.cabri at unimore.it Thu Nov 11 15:15:23 2021 From: giacomo.cabri at unimore.it (gcabri@unimore.it) Date: Thu, 11 Nov 2021 21:15:23 +0100 Subject: Connectionists: CfP: Special issue Autonomous, Context-Aware, Adaptive Digital Twins Message-ID: *Special issue: Autonomous, Context-Aware, Adaptive Digital Twins* Computers in Industry (Elsevier) Scope: Digital Twins are quickly becoming an important concept in the digital representation of manufacturing assets, products, and other resources. As comprehensive digital representations of physical assets, comprising their design and configuration, state, and behaviour, Digital Twins provide information about, and services based on, their physical counterpart's current condition, their history, and even their predicted future. As such, they can be considered the building blocks of a vision of future Digital Factories in which stakeholders collaborate via the information Digital Twins provide about physical assets in the factory and throughout the product lifecycle. Besides their potential to facilitate collaboration based on information about their physical counterparts, Digital Twins also hold promise to contribute to more flexible and resilient Digital Factories. To fulfil this promise, Digital Twins will need to evolve from today's expert-centric tools towards active entities that extend the capabilities of their physical counterparts. Required features include sensing and processing their environment and situation, pro-actively communicating with each other and with humans, taking their own decisions towards their own or cooperative goals, and adapting themselves and their physical counterparts to achieve those goals. That means future Digital Twins should be context-aware, autonomous, and adaptive. This Special Issue intends to highlight research contributing to the evolution of Digital Twins towards context-aware, autonomous, and adaptive building blocks of tomorrow's Digital Factories.
Contributions should address research gaps that need to be bridged to achieve that objective. Four interwoven research topics are especially relevant in this context: 1) interoperability, 2) modelling, 3) interaction, and 4) real-time data processing and decision-making. More details about this vision and the related research gaps are available in the state-of-the-art paper "Autonomous, Context-Aware, Adaptive Digital Twins - State of the Art and Roadmap" in Computers in Industry Volume 133 (December 2021), https://doi.org/10.1016/j.compind.2021.103508 . Contributions should address one or more of the four topics and deal with issues including, but not limited to, the following:
- Integration and interoperability of context information with Digital Twins
- Harmonisation of Digital Twins with the IoT paradigm
- Lifecycle-wide interoperability with Digital Twins
- Standards and protocols for interoperability of Digital Twins
- Context models for Digital Twins
- Context-aware integration and aggregation of Digital Twins
- Relation of agent and holon models to Digital Twins
- Granularity of Digital Twins
- Transition of business goals into adaptable plans
- Frameworks, reference models and integration with existing models, e.g. RAMI 4.0
- Models of interaction and cooperation between Digital Twins and humans
- Humans as part of the context of Digital Twins
- Borders between humans and autonomous Digital Twins
- Explainability and certification of Digital Twin models, simulations and predictions
- Real-time processing of context data by Digital Twins
- Time-constrained, near-optimal decision-making methods for Digital Twins
- Real-time handling of massive amounts of data by Digital Twins
- Methods for Digital Twins to handle incomplete and noisy data streams
Guest Editors: Giacomo Cabri, Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, Modena (Italy). Web page: http://personale.unimore.it/rubrica/dettaglio/gcabri Karl A.
Hribernik <hri at biba.uni-bremen.de>, BIBA - Bremer Institut für Produktion und Logistik GmbH, Bremen (Germany). Web page: https://www.biba.uni-bremen.de/en/institute/staff/homepage.html?nick=hri Federica Mandreoli, Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, Modena (Italy). Web page: http://personale.unimore.it/rubrica/dettaglio/fmandreoli Gregoris Mentzas, School of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece). Web page: https://www.ece.ntua.gr/en/staff/50 Paper submission: The submission process is organised in two stages: 1) extended abstract submission and 2) full paper submission. Extended abstract submission and its acceptance are compulsory for full paper submission. Extended abstracts must be no more than 3,000 characters (this count excludes the characters in tables, figures, and references). These are to be sent directly via email to the Guest Editors, not via the Elsevier submission webpage of the journal. Please include all four email addresses below: Giacomo Cabri, Karl A. Hribernik, Federica Mandreoli, Gregoris Mentzas. The guest editors will assess the appropriateness of the themes proposed and the results to be presented, as shown in the extended abstracts, and will invite selected authors to submit full papers. The authors of abstracts that are not selected will be notified accordingly. If invited, full papers must be prepared by following the Computers in Industry guide for authors, available at https://www.elsevier.com/journals/computers-in-industry/0166-3615/guide-for-authors , and submitted through the submission portal https://www.editorialmanager.com/comind/default1.aspx .
The dates for submission and notification are the following:
- extended abstract submission: December 8, 2021
- extended abstract acceptance notification: December 22, 2021
- full paper submission: March 31, 2022
- first round of review results: May 15, 2022
- revised papers due: June 30, 2022
For papers that need a second revision, the expected decision date is August 31, 2022. -- |----------------------------------------------------| | Prof. Giacomo Cabri - Ph.D., Full Professor | Rector's Delegate for Teaching | Dip. di Scienze Fisiche, Informatiche e Matematiche | Universita' di Modena e Reggio Emilia - Italia | e-mail giacomo.cabri at unimore.it | tel. +39-059-2058320 fax +39-059-2055216 |----------------------------------------------------| From calendarsites at insticc.org Fri Nov 12 08:01:43 2021 From: calendarsites at insticc.org (calendarsites at insticc.org) Date: Fri, 12 Nov 2021 13:01:43 -0000 Subject: Connectionists: [CFP] Last Call :: 11th Int. Conf. on Sensor Networks and Special Sessions Message-ID: <004601d7d7c5$72fb4d00$58f1e700$@insticc.org> Dear Colleague, It is our pleasure to invite you to submit your latest research results to the 11th International Conference on Sensor Networks (SENSORNETS 2022), which will be held online from February 7 to 8, 2022. Submissions are open until the 29th of November; please note this is a hard deadline and will not be extended. The conference registration fees have been strongly reduced, in order to give the community a unique and last opportunity to contribute and submit an original research paper to this conference. In recent years, the SENSORNETS proceedings have been fully indexed by SCOPUS and submitted to other well-known indexes such as Google Scholar, the DBLP Computer Science Bibliography, Semantic Scholar, Microsoft Academic, Engineering Index (EI), and Web of Science / Conference Proceedings Citation Index.
SENSORNETS 2022 is also welcoming submissions until the 26th of November to the following special sessions: * Special Session on Wireless Sensor Networks for Precise Agriculture - WSN4PA 2022, chaired by Davide Polese and Alessandro Checco: https://sensornets.scitevents.org/WSN4PA.aspx * Special Session on Energy-Aware Wireless Sensor Networks for IoT - EWSN-IoT 2022, chaired by Olfa Kanoun and Sabrine Kheriji: https://sensornets.scitevents.org/EWSN-IoT.aspx Looking forward to receiving your paper submission! Kind regards, Monica Saramago SENSORNETS Secretariat Web: https://sensornets.scitevents.org/ e-mail: sensornets.secretariat at insticc.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From bengioy at iro.umontreal.ca Fri Nov 12 18:52:53 2021 From: bengioy at iro.umontreal.ca (bengioy at iro.umontreal.ca) Date: Fri, 12 Nov 2021 23:52:53 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> Message-ID: <11e6cffb94ee21175fdfcf354c2258ee@iro.umontreal.ca> This one gives an overview (but is light on the causal part): * Inductive Biases for Deep Learning of Higher-Level Cognition. https://arxiv.org/abs/2011.15091 November 11, 2021 5:10 PM, "Asim Roy" wrote: Yoshua, Thanks for sending this list. If I want to do a quick read, which of these articles best describes the "Disentanglement" motivation and mechanism?
Asim From: Yoshua Bengio Sent: Sunday, November 7, 2021 5:45 PM To: Asim Roy Cc: Adam Krawitz; connectionists at cs.cmu.edu; Juyang Weng Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Here is a selection:
*** Modular system-2 / Global Workspace Theory-inspired deep learning ***
* Inductive Biases for Deep Learning of Higher-Level Cognition. https://arxiv.org/abs/2011.15091
* Compositional Attention: Disentangling Search and Retrieval. https://arxiv.org/abs/2110.09419
* Discrete-Valued Neural Communication. https://arxiv.org/abs/2107.02367
* A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning. https://arxiv.org/abs/2106.02097
* Coordination Among Neural Modules Through a Shared Global Workspace. https://arxiv.org/abs/2103.01197
* Neural Production Systems. https://arxiv.org/abs/2103.01937
*** Causal discovery with deep learning ***
* A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms.
https://arxiv.org/abs/1901.10912
* Learning Neural Causal Models with Active Interventions. https://arxiv.org/abs/2109.02429
* Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning. https://arxiv.org/abs/2110.15796
* Toward Causal Representation Learning. https://ieeexplore.ieee.org/abstract/document/9363924
I am currently working on the merge of the above two threads of modularity and causality... -- Yoshua I'm overwhelmed by emails, so I won't be able to respond quickly or directly. Please write to my assistant in case of a time-sensitive matter or if it entails scheduling: julie.mongeau at mila.quebec Le dim. 7 nov. 2021, à 14 h 01, Asim Roy a écrit : Yoshua, I am indeed feeling that I can have my cake and eat it too. Accepting the fact that neural activations in the brain have "meaning and interpretation" is a huge step forward for the field. I would conjecture that it opens the door to new theories in the cognitive and neuro sciences. You are definitely crossing the red line, and that's great. Can we have references to some of your papers? By the way, I think I understand what you mean by disentangling. There are probably simpler ways to disentangle and get to Explainable AI. But please send us the references.
Best, Asim From: Yoshua Bengio Sent: Sunday, November 7, 2021 8:55 AM To: Asim Roy Cc: Adam Krawitz; connectionists at cs.cmu.edu; Juyang Weng Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Asim, You can have your cake and eat it too with modular neural net architectures. You still have distributed representations, but you have modular specialization. Many of my papers since 2019 are on this theme. It is consistent with the specialization seen in the brain, but keep in mind that there is a huge number of neurons there, and you still don't see single grandmother cells firing alone; they fire in a pattern that is meaningful both locally (in the same region/module) and globally (different modules cooperate and compete according to the Global Workspace Theory and Neural Workspace Theory, which have inspired our work). Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning, but it shares the idea that the "right" abstract concepts (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity). -- Yoshua I'm overwhelmed by emails, so I won't be able to respond quickly or directly. Please write to my assistant in case of a time-sensitive matter or if it entails scheduling: julie.mongeau at mila.quebec Le dim. 7 nov. 2021, à
01 h 46, Asim Roy a écrit : Over a period of more than 25 years, I have had the opportunity to argue about the brain in both public forums and private discussions. And they included very well-known scholars such as Walter Freeman (UC-Berkeley), Horace Barlow (Cambridge; great-grandson of Charles Darwin), Jay McClelland (Stanford), Bernard Baars (Neuroscience Institute), Christof Koch (Allen Institute), Teuvo Kohonen (Finland) and many others, some of whom are on this list. And many became good friends through these debates. We argued about many issues over the years, but the one that baffled me the most was the one about localist vs. distributed representation. Here's the issue. As far as I know, although all the Nobel prizes in the field of neurophysiology - from Hubel and Wiesel (simple and complex cells) and Moser and O'Keefe (grid and place cells) to the current one on the discovery of temperature- and touch-sensitive receptors and neurons - are about finding "meaning" in a single cell or a group of dedicated cells, the distributed representation theory has yet to explain these findings of "meaning." Contrary to the assertion that the field is open-minded, I think most in this field are afraid to cross the red line. Horace Barlow was the exception. He was perhaps the only neuroscientist who was willing to cross the red line and declare that "grandmother cells will be found." After a debate on this issue in 2012, which included Walter Freeman and others, Horace visited me in Phoenix at the age of 91 for further discussion. If the field is open-minded, I would love to hear how distributed representation is compatible with finding "meaning" in the activations of a single cell or a dedicated group of cells.
Asim Roy Professor, Arizona State University Lifeboat Foundation Bios: Professor Asim Roy (https://lifeboat.com/ex/bios.asim.roy) From: Connectionists On Behalf Of Adam Krawitz Sent: Friday, November 5, 2021 10:01 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Tsvi, I'm just a lurker on this list, with no skin in the game, but perhaps that gives me a more neutral perspective. In the spirit of progress: * If you have a neural network approach that you feel provides a new and important perspective on cognitive processes, then write up a paper making that argument clearly, and I think you will find that the community is incredibly open to that. Yes, if they see holes in the approach they will be pointed out, but that is all part of the scientific exchange. Examples of this approach include: Elman (1990) Finding Structure in Time, Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow a Mind: Statistics, Structure, and Abstraction (not neural nets, but a "new" approach to modelling cognition). I'm sure others can provide more examples. * I'm much less familiar with how things work on the applied side, but I have trouble believing that Google or anyone else will be dismissive of a computational approach that actually works. Why would they? They just want to solve problems efficiently. Demonstrate that your approach can solve a problem more effectively than (or at least as effectively as) the existing approaches, and they will come running. Examples of this include: Tesauro's TD-Gammon, which was influential in demonstrating the power of RL, and LeCun et al.'s convolutional NN for the MNIST digits. Clearly communicate the novel contribution of your approach and I think you will find a receptive audience.
Thanks, Adam From: Connectionists On Behalf Of Tsvi Achler Sent: November 4, 2021 9:46 AM To: gary at ucsd.edu Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Lastly, feedforward methods are predominant in large part because they have financial backing from large companies with advertising and clout, like Google, and the self-driving craze that never fully materialized. Feedforward methods are not fully connectionist unless rehearsal for learning is implemented with neurons. That means storing all patterns, mixing them randomly, and then presenting them to a network to learn. As far as I know, no one is doing this in the community, so feedforward methods are only partially connectionist. By allowing popularity to predominate and choking off funds and presentation of alternatives, we are cheating ourselves out of pursuing other, more rigorous brain-like methods. Sincerely, -Tsvi On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler wrote: Gary- Thanks for the accessible online link to the book. I looked especially at the inhibitory feedback section of the book, which describes an Air Conditioner (AC) type feedback. It then describes a general field-like inhibition based on all activations in the layer. It also describes the role of inhibition in sparsity and feedforward inhibition. The feedback described in Regulatory Feedback is similar to the AC feedback but occurs for each neuron individually, vis-à-vis its inputs. Thus, for context, regulatory feedback is not a field-like inhibition; it is very directed, based on the neurons that are activated and their inputs. This sort of regulation is also the foundation of Homeostatic Plasticity findings (albeit with changes in homeostatic regulation in experiments occurring on a slower time scale).
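The rehearsal regime criticized earlier in this message (store every training pattern, mix the full set randomly, and present the patterns to the network to learn) can be written out explicitly. This is a generic illustrative sketch of that regime under my own assumptions; `net_update` is a hypothetical stand-in for any single-pattern learning step:

```python
import random

def train_with_rehearsal(net_update, patterns, epochs=10, seed=0):
    """Interleaved rehearsal: retain all (input, target) patterns,
    reshuffle the full set every epoch, and present them one at a
    time to the learner via net_update(x, t)."""
    rng = random.Random(seed)
    store = list(patterns)      # every pattern is kept, old and new alike
    for _ in range(epochs):
        rng.shuffle(store)      # mix them randomly
        for x, t in store:
            net_update(x, t)    # present to the network to learn
```

The point of the sketch is the storage requirement itself: the procedure needs a memory holding the entire pattern set outside the network, which is exactly the step the author argues is not implemented with neurons in feedforward practice.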
The regulatory feedback model describes the effect and role of those regulated connections in real time during recognition. I would be happy to discuss further and collaborate on writing about the differences between the approaches for the next book or review. And I want to point out to folks that the system is based on politics, and that is why certain work is not cited as it should be; but even worse, these politics are here in the group today, and they continue to very strongly influence decisions in the connectionist community and hold us back. Sincerely, -Tsvi On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu wrote: Tsvi - While I think Randy and Yuko's book (https://www.amazon.com/dp/0262650541/) is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there is an online version (https://compcogneuro.org/). Randy & Yuko's models take into account feedback and inhibition. On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler wrote: Daniel, Does your book include a discussion of Regulatory or Inhibitory Feedback, published in several low-impact journals between 2008 and 2014 (and in videos subsequently)? These are networks where the primary computation is inhibition back to the inputs that activated them, and they may be very counterintuitive given today's trends. You can almost think of them as the opposite of Hopfield networks. I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in.
I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. The same goes for Steve Grossberg's book; I cannot easily look inside. With regard to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily using the vigilance type of connections, and it allows multiple neurons to be evaluated at the same time, and continuously, during recognition, in order to determine which neurons (singly or together) match the inputs best, without lateral inhibition. Unfortunately, within conferences and talks predominated by the Adaptive Resonance crowd, I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk. This goes back to the larger issue of academic politics based on small self-selected committees: the same issues that exist with the feedforward crowd, and pretty much all of academia. Today's information-age algorithms, such as Google's, can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia developed in the Middle Ages (and their mutual synergies) block the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive, and hence the results described in the two National Bureau of Economic Research articles I cited in my previous message. Thomas, I am happy to have more discussions and/or start a different thread.
Sincerely, Tsvi Achler MD/PhD On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S wrote: Tsvi, While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning. A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, is found in my textbook Introduction to Neural and Cognitive Modeling (3rd edition, Routledge, 2019). Also, Steve Grossberg's book Conscious Mind, Resonant Brain (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis. Best, Dan Levine ------------------------------------ From: Connectionists on behalf of Tsvi Achler Sent: Saturday, October 30, 2021 3:13 AM To: Schmidhuber Juergen Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia, and then focus especially on the connectionist community.
In general, analyzing impact factors etc., the most important progress gets silenced until the mainstream picks it up (Impact Factors in novel research: www.nber.org/.../working_papers/w22180/w22180.pdf), and often this may take a generation (https://www.nber.org/.../does-science-advance-one-funeral...). The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled as recurrent networks for learning over time, where the feedforward networks can be rewound in time. This stasis is specifically occurring with the popularity of deep learning.
This is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see the video link for further clarification). Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in "premier" journals, and no funding. But they are important and may negate the need for rehearsal as required in feedforward methods; thus they may be essential for moving connectionism forward. If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks. Video: https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2 Sincerely, Tsvi Achler On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field.
Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help!
Jürgen Schmidhuber -- Gary Cottrell 858-534-6640 FAX: 858-534-7029 Computer Science and Engineering 0404 IF USING FEDEX INCLUDE THE FOLLOWING LINE: CSE Building, Room 4130 University of California San Diego - 9500 Gilman Drive # 0404 La Jolla, Ca. 92093-0404 Email: gary at ucsd.edu Home page: http://www-cse.ucsd.edu/~gary/ Schedule: http://tinyurl.com/b7gxpwo Listen carefully, Neither the Vedas Nor the Qur'an Will teach you this: Put the bit in its mouth, The saddle on its back, Your foot in the stirrup, And ride your wild runaway mind All the way to heaven. -- Kabir -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at cs.ucy.ac.cy Sat Nov 13 03:41:51 2021 From: george at cs.ucy.ac.cy (George A. Papadopoulos) Date: Sat, 13 Nov 2021 10:41:51 +0200 Subject: Connectionists: 9th European Conference on Service-Oriented and Cloud Computing (ESOCC 2022): Final Call for the Main Track and Third Call for Other Contributions Message-ID: *** Final Call for the Main Track and Third Call for Other Contributions *** 9th European Conference on Service-Oriented and Cloud Computing (ESOCC 2022) March 22-24, 2022, Lutherstadt Wittenberg, Germany https://www.esocc-conf.eu Scope Service-oriented and cloud computing have made a huge impact both on the software industry and on the research community. Today, service and cloud technologies are applied to build large-scale software landscapes as well as to provide single software services to end users.
Services today are independently developed and deployed as well as freely composed, and they can be implemented in a variety of technologies, quite an important fact from a business perspective. Similarly, cloud computing aims at enabling flexibility by offering a centralised sharing of resources. The industry's need for agile and flexible software and IT systems has made cloud computing the dominating paradigm for provisioning computational resources in a scalable, on-demand fashion. Nevertheless, service developers, providers, and integrators still need to create methods, tools and techniques to support cost-effective and secure development as well as use of dependable devices, platforms, services and service-oriented applications in the cloud. The European Conference on Service-Oriented and Cloud Computing (ESOCC) is the premier conference on advances in the state of the art and practice of service-oriented computing and cloud computing in Europe. The main objectives of this conference are to facilitate the exchange between researchers and practitioners in the areas of service-oriented computing and cloud computing, as well as to explore the new trends in those areas and foster future collaborations in Europe and beyond.

Tracks
- Main conference: three days of invited talks, panels, and presentations of selected research papers, including a dedicated day for satellite workshops.
- PhD Symposium: an opportunity for PhD students to present their research activities and perspectives, to critically discuss them with other PhD students and with established researchers in the area, hence getting fruitful feedback and advice on their research activities.
- Projects Track: a useful opportunity for researchers to disseminate the latest research developments in their projects and meet representatives of other consortia.

Details about all the tracks are available at the conference web site: https://www.esocc-conf.eu .
Topics of interest

ESOCC 2022 seeks original, high-quality papers related to all aspects of service-oriented and cloud computing. Specific topics of interest include but are not limited to:

- Service and Cloud Computing Models
  * Design patterns, guidelines and methodologies
  * Governance models
  * Architectural models
  * Requirements engineering
  * Formal Methods
  * Model-Driven Engineering
  * Quality models
  * Security, Privacy & Trust models
  * Self-Organising Service-Oriented and Cloud Architectures Models
  * Testing models
- Service and Cloud Computing Engineering
  * Service Discovery, Matchmaking, Negotiation and Selection
  * Monitoring and Analytics
  * Governance and management
  * Cloud Interoperability, Multi-Cloud, Cross-Cloud, Federated Cloud solutions
  * Frameworks & Methods for Building Service and Cloud based Applications
  * Cross-layer adaptation
  * Edge/Fog computing
  * Cloud, Service Orchestration & Management
  * Service Level Agreement Management
  * Service Evolution/Optimisation
  * Service & Cloud Testing and Simulation
  * QoS for Services and Clouds
  * Semantic Web Services
  * Service mining
  * Service & Cloud Standards
  * FaaS / Serverless computing
- Technologies
  * DevOps in the Cloud
  * Containerized services
  * Emerging Trends in Storage, Computation and Network Clouds
  * Microservices: Design, Analysis, Deployment and Management
  * Next Generation Services Middleware and Service Repositories
  * RESTful Services
  * Service and Cloud Middleware & Platforms
  * Blockchain for Services & Clouds
  * Services and Clouds with IoT
  * Fog Computing with Service and Cloud
- Business and Social aspects
  * Enterprise Architectures for Service and Cloud
  * Service-based Workflow Deployment & Life-cycle Management
  * Core Applications, e.g., Big Data, Commerce, Energy, Finance, Health, Scientific Computing, Smart Cities
  * Business Process as a Service - BPaaS
  * Service and Cloud Business Models
  * Service and Cloud Brokerage
  * Service and Cloud Marketplaces
  * Service and Cloud Cost & Pricing
  * Crowdsourcing Business Services
  * Social and Crowd-based Cloud
  * Energy issues in Cloud Computing
  * Sustainability issues

Submissions from industry are welcome (for example, use cases).

Submissions

ESOCC 2022 invites submissions in all the tracks:
- Regular research papers (15 pages including references)
- PhD Symposium (8 pages including references, authored by the PhD student with an indication of his/her supervisors' names)
- Projects Track (1 to 5 pages including references, describing an ongoing project)

We accept only original papers that have not been submitted for publication elsewhere. The papers must be formatted according to the LNCS proceedings guidelines and submitted to the EasyChair site at https://easychair.org/conferences/?conf=esocc2022 by selecting the right track. All accepted regular research papers are expected to be published in the main conference proceedings by Springer in the Lecture Notes in Computer Science (LNCS) series (http://www.springer.com/lncs). Accepted papers of the other tracks and the satellite workshops are expected to be published by Springer in the Communications in Computer and Information Science (CCIS) series (https://www.springer.com/series/7899). At least one author of each accepted paper is expected to register and present the work at the conference. A journal special issue is planned, and authors of selected accepted papers will be invited to submit extended versions of their articles.

Workshop Proposals

ESOCC 2022 also invites proposals for satellite workshops. More details about the proposal format and submission can be found at https://esocc-conf.eu/index.php/workshops/ .
Important Dates

Regular research & industrial papers:
- Paper submission: 21 November 2021
- Notifications: 7 January 2022
- Camera-ready versions due: 20 January 2022

Projects Track:
- Paper submission: 14 January 2022
- Paper notification: 25 February 2022
- Camera-ready version: 11 March 2022

PhD Symposium Track:
- Paper submission: 14 January 2022
- Paper notification: 25 February 2022
- Camera-ready version: 11 March 2022

Industrial Track:
- Paper submission: 14 January 2022
- Paper notification: 25 February 2022
- Camera-ready version: 11 March 2022

Satellite Workshops:
- Workshop proposal submission: 8 October 2021
- Workshop proposal notification: 15 October 2021
- Workshop paper submission: 14 January 2022
- Workshop paper notification: 25 February 2022
- Workshop camera-ready version: 11 March 2022

Organization

General Chair
- Wolf Zimmermann (Martin Luther University Halle-Wittenberg, Germany)

Programme Co-Chairs
- Fabrizio Montesi (University of Southern Denmark, Denmark)
- George A. Papadopoulos (University of Cyprus, Cyprus)

Industrial Track Chair
- Andreas Both (Anhalt University of Applied Science)

Projects Track Chair
- Damian Tamburri (Technical University Eindhoven)

Workshops Co-Chairs
- Guadalupe Ortiz (University of Cádiz, Spain)
- Christian Zirpins (Karlsruhe University of Applied Science)

PhD Symposium Co-Chairs
- Jacopo Soldani (University of Pisa)
- Massimo Villari (University of Messina)

-------------- next part -------------- An HTML attachment was scrubbed... URL: From kiigaya at gmail.com Fri Nov 12 09:25:48 2021 From: kiigaya at gmail.com (Kiyohito Iigaya) Date: Fri, 12 Nov 2021 09:25:48 -0500 Subject: Connectionists: Research Assistant in Computational Psychiatry lab at Columbia Message-ID: Research Assistant Position The Laboratory for Computational Psychiatry and Translational Neuroscience (PI: Kiyohito Iigaya) at Columbia University Irving Medical Center is seeking a full-time Research Assistant.
We are a new laboratory interested in advancing our understanding of fundamental neuroscience and translating the findings to clinical applications using computational methods. The ideal candidate will develop and perform online (e.g., Amazon Mechanical Turk) and in-person (e.g., fMRI) studies with human volunteers. We are particularly interested in a well-organized individual who has excellent programming skills. Our lab is located at the New York State Psychiatric Institute and the Department of Psychiatry at Columbia University Irving Medical Center in NYC. Click here to apply: https://opportunities.columbia.edu/en-us/job/520298/research-assistant Best wishes, Kyo Iigaya --------------------------------- Kiyohito Iigaya, Ph.D. Assistant Professor of Neurobiology (in Psychiatry) Columbia University Irving Medical Center ki2151 at columbia.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From victorfxtc at gmail.com Fri Nov 12 11:09:40 2021 From: victorfxtc at gmail.com (vic roc) Date: Fri, 12 Nov 2021 19:39:40 +0330 Subject: Connectionists: Statistical Analysis of coherence and PLV (phase-locking value) Message-ID: Dear cognitive neuroscientists, I hope you are well. I have two questions regarding the statistical analysis of PLV and coherence: 1. Can I directly interpret the "grand average PLV values" or "grand average coherence values" as indicators of connectivity? 2. Can I treat these PLV or coherence values as some sort of "raw data" to be analyzed and compared statistically across subjects? ---------------------------- More explanation about question 1: These two measures seem to be directionless correlation coefficients (like r-squared), right? If so, I guess I can directly interpret them? There is no p-value attached to them.
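To make the across-condition comparison in question 2 concrete, here is a minimal sketch (hypothetical; the numbers and variable names below are simulated, not real data) of a one-way ANOVA on the per-subject PLVs of a single electrode pair across three conditions:

```python
# Hypothetical sketch: per-subject PLVs for ONE electrode pair, compared
# across three conditions with a one-way ANOVA. All values are simulated.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# 30 subjects per condition; PLVs lie in [0, 1].
plv_cond_a = rng.normal(0.50, 0.05, 30).clip(0.0, 1.0)
plv_cond_b = rng.normal(0.55, 0.05, 30).clip(0.0, 1.0)
plv_cond_c = rng.normal(0.70, 0.05, 30).clip(0.0, 1.0)

f_stat, p_val = f_oneway(plv_cond_a, plv_cond_b, plv_cond_c)
print(f"F = {f_stat:.1f}, p = {p_val:.2e}")

# Caveat: repeating this test for every electrode pair inflates the
# family-wise error rate, so the per-pair p-values would need correction
# (e.g. Bonferroni or FDR) before interpretation.
```

A post hoc test (e.g. Tukey's HSD) would then localize which pairs of conditions differ; the sketch only shows that treating per-subject PLVs as the dependent variable is mechanically straightforward, while the multiple-comparisons burden across electrode pairs is the real statistical issue.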
Still, I can say, for example, that values above 0.7 are strong and those above 0.9 are excellent correlations. So can I look at the "grand average" PLVs (the average of the average PLVs of all subjects in a particular condition/group), find the grand-average PLVs that are above, say, 0.7 across all the subjects, and list the responsible pairs of electrodes (channels) as strongly correlated and thus as "strongly functionally connected"? I could do the same for those between 0.5 and 0.7, calling their electrodes (channels) moderately connected. Is this method possible? Please let me know about any other similar analyses I can do. ---------------------------- More explanation about question 2: Can I compare these PLV or coherence values across the different conditions (different interventions), using analyses such as ANOVA and post hoc tests? For example, let's say we have 3 conditions, each with 30 subjects, and each subject has many PLV values between all possible pairs of electrodes. The number of PLV values is the same for all subjects, since all of them have been recorded with the same EEG device. Can I compare each PLV value of all my 90 subjects (within 3 groups) across the 3 conditions (interventions), using ANOVA and Tukey or some other appropriate statistical test? That is, is it correct to treat these "correlation coefficients" named PLV or coherence as raw data and statistically analyze them? Thanks a lot in advance. Best, Vic -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen at idsia.ch Sun Nov 14 11:47:36 2021 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Sun, 14 Nov 2021 16:47:36 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> Message-ID: <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> Dear all, thanks for your public comments, and many additional private ones! So far nobody has challenged the accuracy of any of the statements in the draft report currently under massive open peer review: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html Nevertheless, some of the recent comments will trigger a few minor revisions in the near future. Here are a few answers to some of the public comments: Randall O'Reilly wrote: "I vaguely remember someone making an interesting case a while back that it is the *last* person to invent something that gets all the credit." Indeed, as I wrote in Science (2011, reference [NASC3] in the report): "As they say: Columbus did not become famous because he was the first to discover America, but because he was the last." Sure, some people sometimes assign the "inventor" title to the person who should truly be called the "popularizer." Frequently, this is precisely due to the popularizer packaging the work of others in such a way that it becomes easily digestible. But this is not to say that their receipt of the title is correct or that we shouldn't do our utmost to correct it; their receipt of such a title over the ones that actually deserve it is one of the most enduring issues in scientific history. As Stephen José Hanson wrote: "Well, to popularize is not to invent. Many of Juergen's concerns could be solved with some scholarship, such that authors look sometime before 2006 for other relevant references."
Randy also wrote: "Sometimes, it is not the basic equations etc that matter: it is the big picture vision." However, the same vision has almost always been there in the earlier work on neural nets. It's just that the work was ahead of its time. It's only in recent years that we have the datasets and the computational power to realize those big-picture visions. I think you would agree that simply scaling something up isn't the same as inventing it. If it were, then the name "Newton" would have little meaning to people nowadays. Jonathan D. Cohen wrote: " ...it is also worth noting that science is an *intrinsically social* endeavor, and therefore communication is a fundamental factor." Sure, but let's make sure that this cannot be used as a justification of plagiarism! See Sec. 5 of the report. Generally speaking, if B plagiarizes A but inspires C, whom should C cite? The answer is clear. Ponnuthurai Nagaratnam Suganthan wrote: "The name `deep learning' came about recently." Not so. See references in Sec. X of the report: the ancient term "deep learning" (explicitly mentioned by ACM) was actually first introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000). Tsvi Achler wrote: "Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in `premier' journals and no funding. [...] Lastly Feedforward methods are predominant in a large part because they have financial backing from large companies with advertising and clout like Google and the self-driving craze that never fully materialized." This is very misleading - see Sec. A, B, and C of the report which are about recurrent nets with feedback, especially LSTM, heavily used by Google and others, on your smartphone since 2015.
Recurrent NNs are general computers that can compute anything your laptop can compute, including any computable model with feedback "back to the inputs." My favorite proof from over 30 years ago: a little subnetwork can be used to build a NAND gate, and a big recurrent network of NAND gates can emulate the CPU of your laptop. (See also answers by Dan Levine, Gary Cottrell, and Juyang Weng.) However, as Asim Roy pointed out, this discussion deviates from the original topic of improper credit assignment. Please use another thread for this. Randy also wrote: "Should Newton be cited instead of Rumelhart et al, for backprop, as Steve suggested? Seriously, most of the math powering today's models is just calculus and the chain rule." This is so misleading in several ways - see Sec. XII of the report: "Some claim that `backpropagation is just the chain rule of Leibniz (1676) & L'Hopital (1696).' No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970" by Seppo Linnainmaa. Of course, the person to cite is Linnainmaa. Randy also wrote: "how little Einstein added to what was already established by Lorentz and others". Juyang already respectfully objected to this misleading statement. I agree with what Anand Ramamoorthy wrote: "Setting aside broader aspects of the social quality of the scientific enterprise, let's take a look at a simpler thing; individual duty. Each scientist has a duty to science (as an intellectual discipline) and the scientific community, to uphold fundamental principles informing the conduct of science. Credit should be given wherever it is due - it is a matter of duty, not preference or `strategic value' or boosting someone because they're a great populariser. ...
Crediting those who disseminate is fine and dandy, but should be for those precise contributions, AND the originators of an idea/method/body of work ought to be recognised - this is perhaps a bit difficult when the work is obscured by history, but not impossible. At any rate, if one has novel information of pertinence w.r.t original work, then the right action is crystal clear." See also Sec. 5 of the report: "As emphasized earlier:[DLC][HIN] `The inventor of an important method should get credit for inventing it. They may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it - but not for inventing it.' If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later, and correctly give credit in follow-up papers and presentations." I also agree with what Zhaoping Li wrote: "I would find it hard to enter a scientific community if it is not scholarly. Each of us can do our bit to be scholarly, to set an example, if not a warning, to the next generation." Randy also wrote: "Outside of a paper specifically on the history of a field, does it really make sense to "require" everyone to cite obscure old papers that you can't even get a PDF of on google scholar?" This sounds almost like a defense of plagiarism. That's what time stamps of patents and papers are for. A recurring point of the report is: the awardees did not cite the prior art - not even in later surveys written when the true origins of this work were well-known. Here I fully agree with what Marina Meila wrote: "Since credit is a form of currency in academia, let's look at the `hard currency' rewards of invention. Who gets them? The first company to create a new product usually fails. However, the interesting thing is that society (by this I mean the society most of us we work in) has found it necessary to counteract this, and we have patent laws to protect the rights of the inventors. 
The point is not whether patent laws are effective or not, it's the social norm they implement. That to protect invention one should pay attention to rewarding the original inventors, whether we get the `product' directly from them or not." Jürgen ************************* On 27 Oct 2021, at 10:52, Schmidhuber Juergen wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help!
Jürgen Schmidhuber From Francesco.Rea at iit.it Sun Nov 14 10:40:56 2021 From: Francesco.Rea at iit.it (Francesco Rea) Date: Sun, 14 Nov 2021 15:40:56 +0000 Subject: Connectionists: [journals] Special Issue on the topic “Cognitive Robotics in Social Applications” Message-ID: <5af44b4a238247ccb0968b75fe639e0c@iit.it> Dear colleague, We hope this email finds you well! We would like to kindly inform you about a Special Issue on the topic “Cognitive Robotics in Social Applications” of the open access journal “Electronics” (ISSN 2079-9292, IF 2.397), for which we are serving as Guest Editors. We are writing to inquire whether you would be interested in submitting a contribution to this Special Issue. The deadline for submitting the manuscript is 31 December 2021. Please find more details for this call and all the submission information at the following link: https://www.mdpi.com/journal/electronics/special_issues/cognitive_robots We hope you will contribute to this well-focused Special Issue, and we would be grateful if you could forward this information to friends and colleagues who might be interested in the topic. Best Regards, Prof. Dr. Dimitri Ognibene, Dr. Giovanni Pilato, Dr. Francesco Rea Guest Editors -------------- next part -------------- An HTML attachment was scrubbed... URL: From oreilly at ucdavis.edu Mon Nov 15 03:36:09 2021 From: oreilly at ucdavis.edu (Randall O'Reilly) Date: Mon, 15 Nov 2021 00:36:09 -0800 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> Message-ID: <6AC9BA06-DBF7-4FCA-87CE-0776DE9CC498@ucdavis.edu> Juergen, > Generally speaking, if B plagiarizes A but inspires C, whom should C cite? The answer is clear. Using the term plagiarize here implies a willful stealing of other people's ideas, and is a very serious allegation, as I'm sure you are aware. At least some of the issues you raised are clearly not of this form, involving obscure publications that the so-called plagiarizers almost certainly had no knowledge of. This is then a case of reinvention, which happens all the time and is still hard to avoid even with tools like google scholar available now (but not back when most of the relevant work was being done). You should be very careful not to confuse these two things, and only allege plagiarism when there is a very strong case to be made. In any case, consider this version: If B reinvents A but publishes a much more [comprehensive | clear | applied | accessible | modern] (whatever) version that becomes the main way in which many people C learn about the relevant idea, whom should C cite? For example, I cite Rumelhart et al (1986) for backprop, because that is how I and most other people in the modern field learned about this idea, and we know for a fact that they genuinely reinvented it and conveyed its implications in a very compelling way.
If I were writing a paper on the history of backprop, or some comprehensive review, then yes, it would be appropriate to cite older versions that had limited impact, being careful to characterize the relationship as one of reinvention. Referring to Rumelhart et al (1986) as "popularizers" is a gross mischaracterization of the intellectual origins and true significance of such a work. Many people in this discussion have used that term inappropriately as it applies to the relevant situations at hand here. > Randy also wrote: "how little Einstein added to what was already established by Lorentz and others". Juyang already respectfully objected to this misleading statement. I beg to differ -- this is a topic of extensive ongoing debate: https://en.wikipedia.org/wiki/Relativity_priority_dispute -- specifically with respect to special relativity, which is the case I was referring to, not general relativity, although it appears there are issues there too. - Randy From maria.kesa at gmail.com Mon Nov 15 05:11:44 2021 From: maria.kesa at gmail.com (Maria Kesa) Date: Mon, 15 Nov 2021 11:11:44 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: <6AC9BA06-DBF7-4FCA-87CE-0776DE9CC498@ucdavis.edu> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> <6AC9BA06-DBF7-4FCA-87CE-0776DE9CC498@ucdavis.edu> Message-ID: My personal take and you can all kiss my ass message https://fuckmyasspsychiatry.blogspot.com/2021/11/jurgen-schmidhuber-is-ethically-bankrupt.html All the very best, Maria Kesa On Mon, Nov 15, 2021 at 11:06 AM Randall O'Reilly wrote: > Juergen, > > > Generally speaking, if B plagiarizes A but inspires C, whom should C > cite? The answer is clear. > > Using the term plagiarize here implies a willful stealing of other > people's ideas, and is a very serious allegation as I'm sure you are > aware. At least some of the issues you raised are clearly not of this > form, involving obscure publications that almost certainly the so-called > plagiarizers had no knowledge of. This is then a case of reinvention, > which happens all the time is still hard to avoid even with tools like > google scholar available now (but not back when most of the relevant work > was being done). You should be very careful to not confuse these two > things, and only allege plagiarism when there is a very strong case to be > made. > > In any case, consider this version: > > If B reinvents A but publishes a much more [comprehensive | clear | > applied | accessible | modern] (whatever) version that becomes the main way > in which many people C learn about the relevant idea, whom should C cite? 
> > For example, I cite Rumelhart et al (1986) for backprop, because that is > how I and most other people in the modern field learned about this idea, > and we know for a fact that they genuinely reinvented it and conveyed its > implications in a very compelling way. If I might be writing a paper on > the history of backprop, or some comprehensive review, then yes it would be > appropriate to cite older versions that had limited impact, being careful > to characterize the relationship as one of reinvention. > > Referring to Rumelhart et al (1986) as "popularizers" is a gross > mischaracterization of the intellectual origins and true significance of > such a work. Many people in this discussion have used that term > inappropriately as it applies to the relevant situations at hand here. > > > Randy also wrote: "how little Einstein added to what was already > established by Lorentz and others". Juyang already respectfully objected to > this misleading statement. > > I beg to differ -- this is a topic of extensive ongoing debate: > https://en.wikipedia.org/wiki/Relativity_priority_dispute -- specifically > with respect to special relativity, which is the case I was referring to, > not general relativity, although it appears there are issues there too. > > - Randy > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barak at pearlmutter.net Mon Nov 15 09:21:33 2021 From: barak at pearlmutter.net (Barak A. Pearlmutter) Date: Mon, 15 Nov 2021 14:21:33 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. 
In-Reply-To: <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> Message-ID: One point of scientific propriety and writing that may be getting lost in the scrum here, and which has I think contributed substantially to the somewhat woeful state of credit assignment in the field, is the traditional idea of what a citation *means*. If a paper says "we use the Foo Transform (Smith, 1995)" that, traditionally, implies that the author has actually read Smith (1995) and it describes the Foo Transform as used in the work being presented. If the author was told that the Foo Transform was actually discovered by Barker (1980) but the author hasn't actually verified that by reading Barker (1980), then the author should NOT just cite Barker. If the author heard that Barker (1980) is the "right" citation for the Foo Transform, but they got the details of it that they're actually using from Smith (1995) then they're supposed to say so: "We use the Foo Transform as described in Smith (1995), attributed to Barker (1980) by someone I met in line for the toilet at NeurIPS 2019". This seemingly-antediluvian practice is to guard against people citing "Barker (1980)" as saying something that it actually doesn't say, proving a theorem that it doesn't, defining terms ("rate code", cough cough) in a fashion that is not consistent with Barker's actual definitions, etc. Iterated violations of this often manifest as repeated and successive simplification of an idea, a so-called game of telephone, until something not even true is sagely attributed to some old publication that doesn't actually say it. 
So if you want to cite, say, Seppo Linnainmaa for Reverse Mode Automatic Differentiation, you need to have actually read it yourself. Otherwise you need to do a bounce citation: "Linnainmaa (1982) described by Schmidhuber (2021) as exhibiting a Fortran implementation of Reverse Mode Automatic Differentiation" or something like that. This is also why it's considered fine to simply cite a textbook or survey paper: nobody could possibly mistake those as the original source, but they may well be where the author actually got it from. To bring this back to the present thread: I must confess that I have not actually read many of the old references Jürgen brings up. Certainly "X (1960) invented deep learning" is not enough to allow someone to cite them. It's not even enough for a bounce citation. What did they *actually* do? What is Jürgen saying they actually did? From marwen.belkaid at iit.it Mon Nov 15 10:28:04 2021 From: marwen.belkaid at iit.it (marwen Belkaid) Date: Mon, 15 Nov 2021 16:28:04 +0100 Subject: Connectionists: CFP - Special issue on "Human-like Behavior and Cognition in Robots" Message-ID: <611a17bd-93fa-6f28-a07b-f82ab780be1a@iit.it> Call for papers Special issue on "Human-like Behavior and Cognition in Robots" in the International Journal of Social Robotics _Submission deadline_: January 5, 2022; Research articles and Theoretical papers _More info_: https://www.springer.com/journal/12369/updates/19850712 *Description* This Special Issue is in continuation of the HBCR workshop on "Human-like Behavior and Cognition in Robots" organized at the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021). Submissions are welcomed from contributors who attended the workshop as well as from those who did not. Building robots capable of behaving in a human-like manner is a long-term goal in robotics.
It is becoming even more crucial with the growing number of applications in which robots are brought closer to humans, not only trained experts, but also inexperienced users, children, the elderly, or clinical populations. Current research from different disciplines contributes to this general endeavor in various ways: * by creating robots that mimic specific aspects of human behavior, * by designing brain-inspired cognitive architectures for robots, * by implementing embodied neural models driving robots' behavior, * by reproducing human motion dynamics on robots, * by investigating how humans perceive and interact with robots, dependent on the degree of the robots' human-likeness. This special issue thus welcomes research articles as well as theoretical articles from different areas of research (e.g., robotics, artificial intelligence, human-robot interaction, computational modeling of human cognition and behavior, psychology, cognitive neuroscience) addressing questions such as the following: * How to design robots with human-like behavior and cognition? * What are the best methods for examining human-like behavior and cognition? * What are the best approaches for implementing human-like behavior and cognition in robots? * How to manipulate, control and measure robots' degree of human-likeness? * Is autonomy a prerequisite for human-likeness? * How to best measure human reception of human-likeness of robots? * What is the link between perceived human-likeness and social attunement in human-robot interaction? * How can such human-like robots inform and enable human-centered research? * How can modeling human-like behavior in robots inform us about human cognition? * In what contexts and applications do we need human-like behavior or cognition? * And in what contexts is it not necessary?
*Guest editors* * Marwen Belkaid, Istituto Italiano di Tecnologia (Italy) * Giorgio Metta, Istituto Italiano di Tecnologia (Italy) * Tony Prescott, University of Sheffield (United Kingdom) * Agnieszka Wykowska, Istituto Italiano di Tecnologia (Italy) -- Dr Marwen BELKAID Istituto Italiano di Tecnologia Center for Human Technologies Via Enrico Melen, 83 16152 Genoa, Italy -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Mon Nov 15 12:14:31 2021 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Mon, 15 Nov 2021 12:14:31 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> Message-ID: <3268601b-397d-3c44-da5a-29b330bb5cf5@rubic.rutgers.edu> Barak, as usual turgid and yet clear. Exactly correct. I agree. Well, mostly. However, the Jurassic--*not antediluvian*--nature of citations does require one to *read*, as you so intimated in your citation "algorithm" -- citations are meant to be contextual and relevant. So it is easy to miss references in Cognitive Science.. as sometimes it's just not clear what is similar to what. So there is a matter of judgment here that folks could differ on.. Math and CS are a bit more obvious, as Calculus was invented by either Newton or Leibniz. The Bernoulli brothers (not to be confused with the Doobie brothers) invented probability theory, and Laplace, not the revered Bayes, invented Bayes' law (I know.. it sounds wrong). But there are phase transitions in science..
so it's often hard to decide where the citation path starts and where it ends. As Barak correctly points out, reviews of the literature and annual reviews are good targets to cite, because they claim to leap-frog from the past in an authoritative and comprehensive way. And so maybe you are tempted to be lazy and not read it. Read it. Mostly in academics, the stakes are so low that mostly no-one seems to care who invented the "one sided t-test", but someone did (Braver). But with $100M being tossed about on a daily basis, especially when the algorithms are being appropriated without acknowledgement and prizes are given out without the entire context, Juergen's concerns are more tangible. And writing a comprehensive review is valuable and generates relevant and useful discussion. So bravo to Juergen for an impressive historical exegesis. Where the problem begins is: where does the modern era begin--2012? 2006? 1984? 1961? 1946? an ancient Greek guy for Arithmetic?--well, this might be a matter of taste. I think it's safe to say we know when something is new because it actually works better (the context changed--there is a *change point* if you will), and the citation nuance between the present and past begins to melt away. Steve On 11/15/21 9:21 AM, Barak A. Pearlmutter wrote: > One point of scientific propriety and writing that may be getting lost > in the scrum here, and which has I think contributed substantially to > the somewhat woeful state of credit assignment in the field, is the > traditional idea of what a citation *means*. > > If a paper says "we use the Foo Transform (Smith, 1995)" that, > traditionally, implies that the author has actually read Smith (1995) > and it describes the Foo Transform as used in the work being > presented. If the author was told that the Foo Transform was actually > discovered by Barker (1980) but the author hasn't actually verified > that by reading Barker (1980), then the author should NOT just cite > Barker.
If the author heard > that Barker (1980) is the "right" citation > for the Foo Transform, but they got the details of it that they're > actually using from Smith (1995) then they're supposed to say so: "We > use the Foo Transform as described in Smith (1995), attributed to > Barker (1980) by someone I met in line for the toilet at NeurIPS > 2019". > > This seemingly-antediluvian practice is to guard against people citing > "Barker (1980)" as saying something that it actually doesn't say, > proving a theorem that it doesn't, defining terms ("rate code", cough > cough) in a fashion that is not consistent with Barker's actual > definitions, etc. Iterated violations of this often manifest as > repeated and successive simplification of an idea, a so-called game of > telephone, until something not even true is sagely attributed to some > old publication that doesn't actually say it. > > So if you want to cite, say, Seppo Linnainmaa for Reverse Mode > Automatic Differentiation, you need to have actually read it yourself. > Otherwise you need to do a bounce citation: "Linnainmaa (1982) > described by Schmidhuber (2021) as exhibiting a Fortran implementation > of Reverse Mode Automatic Differentiation" or something like that. > > This is also why it's considered fine to simply cite a textbook or > survey paper: nobody could possibly mistake those as the original > source, but they may well be where the author actually got it from. > > To bring this back to the present thread: I must confess that I have > not actually read many of the old references Jürgen brings up. > Certainly "X (1960) invented deep learning" is not enough to allow > someone to cite them. It's not even enough for a bounce citation. What > did they *actually* do? What is Jürgen saying they actually did? > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.png Type: image/png Size: 19957 bytes Desc: not available URL: From maanakg at gmail.com Mon Nov 15 11:54:56 2021 From: maanakg at gmail.com (Maanak Gupta) Date: Mon, 15 Nov 2021 10:54:56 -0600 Subject: Connectionists: Final Call (FIRST ROUND): 27th ACM Symposium on Access Control Models and Technologies Message-ID: ACM SACMAT 2022 New York City, New York ----------------------------------------------- | Hybrid Conference (Online + In-person) | ----------------------------------------------- Call for Research Papers ============================================================== Papers offering novel research contributions are solicited for submission. Accepted papers will be presented at the symposium and published by the ACM in the symposium proceedings. In addition to the regular research track, this year SACMAT will again host the special track -- "Blue Sky/Vision Track". Researchers are invited to submit papers describing promising new ideas and challenges of interest to the community as well as access control needs emerging from other fields. We are particularly looking for potentially disruptive and new ideas which can shape the research agenda for the next 10 years. We also encourage submissions to the "Work-in-progress Track" to present ideas that may have not been completely developed and experimentally evaluated. Topics of Interest ============================================================== Submissions to the regular track covering any relevant area of access control are welcomed.
Areas include, but are not limited to, the following: * Systems: * Operating systems * Cloud systems and their security * Distributed systems * Fog and Edge-computing systems * Cyber-physical and Embedded systems * Mobile systems * Autonomous systems (e.g., UAV security, autonomous vehicles, etc) * IoT systems (e.g., home-automation systems) * WWW * Design for resiliency * Designing systems with zero-trust architecture * Network: * Network systems (e.g., Software-defined network, Network function virtualization) * Corporate and Military-grade Networks * Wireless and Cellular Networks * Opportunistic Network (e.g., delay-tolerant network, P2P) * Overlay Network * Satellite Network * Privacy and Privacy-enhancing Technologies: * Mixers and Mixnets * Anonymous protocols (e.g., Tor) * Online social networks (OSN) * Anonymous communication and censorship resistance * Access control and identity management with privacy * Cryptographic tools for privacy * Data protection technologies * Attacks on Privacy and their defenses * Authentication: * Password-based Authentication * Biometric-based Authentication * Location-based Authentication * Identity management * Usable authentication * Mechanisms: * Blockchain Technologies * AI/ML Technologies * Cryptographic Technologies * Programming-language based Technologies * Hardware-security Technologies (e.g., Intel SGX, ARM TrustZone) * Economic models and game theory * Trust Management * Usable mechanisms * Data Security: * Big data * Databases and data management * Data leakage prevention * Data protection on untrusted infrastructure * Policies and Models: * Novel policy language design * New Access Control Models * Extension of policy languages * Extension of Models * Analysis of policy languages * Analysis of Models * Policy engineering and policy mining * Verification of policy languages * Efficient enforcement of policies * Usable access control policy New in ACM SACMAT 2022 
============================================================== We are moving ACM SACMAT 2022 to have two submission cycles. Authors submitting papers in the first submission cycle will have the opportunity to receive a major revision verdict in addition to the usual accept and reject verdicts. Authors can decide to prepare a revised version of the paper and submit it to the second submission cycle for consideration. Major revision papers will be reviewed by the program committee members based on the criteria set forward by them in the first submission cycle. Regular Track Paper Submission and Format ============================================================== Papers must be written in English. Authors are required to use the ACM format for papers, using the two-column SIG Proceedings Template (the sigconf template for LaTex) available in the following link: https://www.acm.org/publications/authors/submissions The length of the paper in the proceedings format must not exceed twelve US letter pages formatted for 8.5" x 11" paper and be no more than 5MB in size. It is the responsibility of the authors to ensure that their submissions will print easily on simple default configurations. The submission must be anonymous, so information that might identify the authors - including author names, affiliations, acknowledgments, or obvious self-citations - must be excluded. It is the authors' responsibility to ensure that their anonymity is preserved when citing their work. Submissions should be made to the EasyChair conference management system by the paper submission deadline of: November 15th, 2021 (Submission Cycle 1) February 18th, 2022 (Submission Cycle 2) Submission Link: https://easychair.org/conferences/?conf=acmsacmat2022 All submissions must contain a significant original contribution. That is, submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal, conference, or workshop.
In particular, simultaneous submission of the same work is not allowed. Wherever appropriate, relevant related work, including that of the authors, must be cited. Submissions that are not accepted as full papers may be invited to appear as short papers. At least one author from each accepted paper must register for the conference before the camera-ready deadline. Blue Sky Track Paper Submission and Format ============================================================== All submissions to this track should be in the same format as for the regular track, but the length must not exceed ten US letter pages, and the submissions are not required to be anonymized (optional). Submissions to this track should be submitted to the EasyChair conference management system by the same deadline as for the regular track. Work-in-progress Track Paper Submission and Format ============================================================== Authors are invited to submit papers in the newly introduced work-in-progress track. This track is introduced for (junior) authors, ideally, Ph.D. and Master's students, to obtain early, constructive feedback on their work. Submissions in this track should follow the same format as for the regular track papers while limiting the total number of pages to six US letter pages. Paper submitted in this track should be anonymized and can be submitted to the EasyChair conference management system by the same deadline as for the regular track. Call for Lightning Talk ============================================================== Participants are invited to submit proposals for 5-minute lightning talks describing recently published results, work in progress, wild ideas, etc. Lightning talks are a new feature of SACMAT, introduced this year to partially replace the informal sharing of ideas at in-person meetings. Submissions are expected by May 27, 2022. Notification of acceptance will be on June 3, 2022.
Call for Posters ============================================================== SACMAT 2022 will include a poster session to promote discussion of ongoing projects among researchers in the field of access control and computer security. Posters can cover preliminary or exploratory work with interesting ideas, or research projects in the early stages with promising results in all aspects of access control and computer security. Authors interested in displaying a poster must submit a poster abstract in the same format as for the regular track, but the length must not exceed three US letter pages, and the submission should not be anonymized. The title should start with "Poster:". Accepted poster abstracts will be included in the conference proceedings. Submissions should be emailed to the poster chair by Apr 15th, 2022. The subject line should include "SACMAT 2022 Poster:" followed by the poster title. Call for Demos ============================================================== A demonstration proposal should clearly describe (1) the overall architecture of the system or technology to be demonstrated, and (2) one or more demonstration scenarios that describe how the audience, interacting with the demonstration system or the demonstrator, will gain an understanding of the underlying technology. Submissions will be evaluated based on the motivation of the work behind the use of the system or technology to be demonstrated and its novelty. The subject line should include "SACMAT 2022 Demo:" followed by the demo title. Demonstration proposals should be in the same format as for the regular track, but the length must not exceed four US letter pages, and the submission should not be anonymized. A two-page description of the demonstration will be included in the conference proceedings. Submissions should be emailed to the Demonstrations Chair by Apr 15th, 2022. 
Financial Conflict of Interest (COI) Disclosure: ============================================================== In the interests of transparency and to help readers form their own judgments of potential bias, ACM SACMAT requires authors and PC members to declare any competing financial and/or non-financial interests in relation to the work described. Definition ------------------------- For the purposes of this policy, competing interests are defined as financial and non-financial interests that could directly undermine, or be perceived to undermine the objectivity, integrity, and value of a publication, through a potential influence on the judgments and actions of authors with regard to objective data presentation, analysis, and interpretation. Financial competing interests include any of the following: Funding: Research support (including salaries, equipment, supplies, and other expenses) by organizations that may gain or lose financially through this publication. A specific role for the funding provider in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript, should be disclosed. Employment: Recent (while engaged in the research project), present or anticipated employment by any organization that may gain or lose financially through this publication. Personal financial interests: Ownership or contractual interest in stocks or shares of companies that may gain or lose financially through publication; consultation fees or other forms of remuneration (including reimbursements for attending symposia) from organizations that may gain or lose financially; patents or patent applications (awarded or pending) filed by the authors or their institutions whose value may be affected by publication. 
For patents and patent applications, disclosure of the following information is requested: patent applicant (whether author or institution), name of the inventor(s), application number, the status of the application, specific aspect of manuscript covered in the patent application. It is difficult to specify a threshold at which a financial interest becomes significant, but note that many US universities require faculty members to disclose interests exceeding $10,000 or 5% equity in a company. Any such figure is necessarily arbitrary, so we offer as one possible practical alternative guideline: "Any undeclared competing financial interests that could embarrass you were they to become publicly known after your work was published." We do not consider diversified mutual funds or investment trusts to constitute a competing financial interest. Also, for employees in non-executive or leadership positions, we do not consider financial interest related to stocks or shares in their company to constitute a competing financial interest, as long as they are publishing under their company affiliation. Non-financial competing interests: Non-financial competing interests can take different forms, including personal or professional relations with organizations and individuals. We would encourage authors and PC members to declare any unpaid roles or relationships that might have a bearing on the publication process. Examples of non-financial competing interests include (but are not limited to): * Unpaid membership in a government or non-governmental organization * Unpaid membership in an advocacy or lobbying organization * Unpaid advisory position in a commercial organization * Writing or consulting for an educational company * Acting as an expert witness Conference Code of Conduct and Etiquette ============================================================== ACM SACMAT will follow the ACM Policy Against Harassment at ACM Activities. 
Please familiarize yourself with the ACM Policy Against Harassment (available at https://www.acm.org/special-interest-groups/volunteer-resources/officers-manual/policy-against-discrimination-and-harassment) and guide to Reporting Unacceptable Behavior (available at https://www.acm.org/about-acm/reporting-unacceptable-behavior). AUTHORS TAKE NOTE ============================================================== The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks before the first day of your conference. The official publication date affects the deadline for any patent filings related to published work. (For those rare conferences whose proceedings are published in the ACM Digital Library after the conference is over, the official publication date remains the first day of the conference.) Important dates ============================================================== **Note that these dates are currently only tentative and subject to change.** * Paper submission: November 15th, 2021 (Submission Cycle 1) February 18th, 2022 (Submission Cycle 2) * Rebuttal: December 16th - December 20th, 2021 (Submission Cycle 1) March 24th - March 28th, 2022 (Submission Cycle 2) * Notifications: January 14th, 2022 (Submission Cycle 1) April 8th, 2022 (Submission Cycle 2) * Systems demo and Poster submissions: April 15th, 2022 * Systems demo and Poster notifications: April 22nd, 2022 * Panel Proposal: March 18th, 2022 * Camera-ready paper submission: April 29th, 2022 * Conference date: June 8 - June 10, 2022 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabu.thampi at iiitmk.ac.in Mon Nov 15 23:34:58 2021 From: sabu.thampi at iiitmk.ac.in (Sabu M.
Thampi) Date: Tue, 16 Nov 2021 10:04:58 +0530 Subject: Connectionists: CFP - International Conference on Connected Systems & Intelligence (CSI'22) Message-ID: ** Apologies if you receive multiple copies of this invitation ** ** Please forward to anyone who might be interested ** ------------------------------------------------------------------------------------------------- International Conference on Connected Systems & Intelligence (CSI'22) August 31, September 1-2, 2022, Trivandrum, Kerala, India https://connected-systems.org/ Submission Deadline for Main Track: March 31, 2022 EDAS Submission Link: https://edas.info/N29009 Approved by IEEE and technically co-sponsored by IEEE Systems, Man, and Cybernetics Society ------------------------------------------------------------------------------------------------- Call for Papers: CSI'22 intends to bring together researchers, engineers, and practitioners from around the world to discuss their latest research findings, ideas, and applications in the field of connected systems and data intelligence. The conference solicits the submission of papers reporting significant and innovative research contributions in the field of connected systems and data intelligence. Papers should present original research that has been validated via analysis, simulation, or experimentation. All accepted and presented papers will be published in the conference proceedings and submitted to IEEE Xplore as well as other Abstracting and Indexing (A&I) databases. 
Major topics of interest to the conference include, but are not limited to: -- Internet of Things(IoT)/Internet of Everything (IoE)/Artificial IoT -- Autonomous Real-Time Systems -- Localization Techniques and Wireless Technologies -- Big Data Intelligence -- LTE, 5G, 6G, and beyond -- Blockchain and Industry 4.0 -- Mechatronics and Automation -- Cloud and Fog Computing -- Mobile P2P Networking -- Cognitive Cybersecurity -- NLP and Human-to-Machine Interaction -- Communication, Connectivity, and Networking -- Networked Control systems -- Computer Vision and Pattern Recognition -- Network Intelligence -- Connected Vehicles and Future ITS -- Satellite and Space Communications, Cognitive Radio Communications -- Context Awareness, Situation Awareness, Ambient Intelligence -- Security, Privacy, and Trust -- Cyber-Physical Systems -- Semantic Technologies, Collective Intelligence -- Circuits and Systems for AI -- Smart Embedded Systems and Robotics -- Crowd-sensing, Human-centric Sensing -- Social Networks, Mobile Computing -- Distributed Optimization, Game and Learning Algorithms -- Social Intelligence AI and Social Cognitive Systems -- Edge Intelligence and Industry 4.0 -- Smart Ubiquitous Computing and Smart Ubiquitous Networks -- Energy and Resource Management -- Machine-to-Machine Communications -- Wireless Smart Sensor Networks -- Heterogeneous Networks, Web of Things, Web of Everything -- Intelligent Optical Networks -- Human-in-the-loop Systems -- Industrial IoT and Digital Twins -- Mobility and Location-dependent Services -- Infrastructure, Devices, and Components -- Intelligent Signal Processing and Data Analysis -- Intelligent Video Analytics -- Virtual Reality and the Internet of Things -- Intelligent Multimedia Surveillance -- AI, Machine learning, and Cognitive Computing -- Evolutionary Computation Techniques and Applications -- Socio-Technical Systems -- Standards and Protocols -- Enabling Technologies for Connected Systems -- Results from Deployment 
Experiences/Social and Societal Impacts Important Dates ----------------- Full Paper Due: March 31, 2022 Notification of Acceptance: May 25, 2022 Final Version Due: July 31, 2022 Contact Us ----------- E-mail: csai.conference at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pubconference at gmail.com Mon Nov 15 22:48:22 2021 From: pubconference at gmail.com (Pub Conference) Date: Mon, 15 Nov 2021 22:48:22 -0500 Subject: Connectionists: Call for IEEE TNNLS Special Issue on "Stream Learning, " Submission Deadline: December 15, 2021 Message-ID: IEEE TNNLS Special Issue on "Stream Learning," Guest Editors: Jie Lu, University of Technology Sydney, Australia; Joao Gama, University of Porto, Portugal; Xin Yao, Southern University of Science and Technology, China; Leandro Minku, University of Birmingham, UK. Submission Deadline: December 15, 2021 [EXTENDED]. Website: https://cis.ieee.org/images/files/Publications/TNNLS/special-issues/One-Page_IEEE_Transactions_on_NNLS-SI-CFP-Update.pdf -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: One-Page_IEEE_Transactions_on_NNLS-SI-CFP-Update.pdf Type: application/pdf Size: 109969 bytes Desc: not available URL: From i.tetko at helmholtz-muenchen.de Mon Nov 15 10:59:32 2021 From: i.tetko at helmholtz-muenchen.de (Igor Tetko) Date: Mon, 15 Nov 2021 18:59:32 +0300 Subject: Connectionists: PhD position: Machine learning in reaction informatics Message-ID: <243FF573-6384-4515-94B6-3484C02CAA17@helmholtz-muenchen.de> PhD position: Machine learning in chemical reaction informatics Chemical synthesis is critical to further increase life quality by contributing to new medicine and new materials. The optimal synthesis can decrease its costs as well as the amount of produced chemical waste. 
The prediction of the direct reaction, i.e., which new chemical compound results from mixing a set of reactants, or of retro-synthesis, i.e., which compounds are the starting materials to make a given product, is the cornerstone of chemical synthesis. The fellow will develop new method(s) (based on the preliminary results [1,2]) to predict the outcome of reactions. The goal is to extend the published models by incorporating additional information about experiments (reagents, catalyst, solvent, temperature, etc.) and expert knowledge based on NLP approaches. Requirements: knowledge in NLP and deep learning methods, Python frameworks (PyTorch, Tensorflow, etc.); a knowledge of chemistry is desirable but not crucial Eligibility: see detailed rules at https://ai-dd.eu/esr-positions (briefly: not more than 4 years after MSc, MSc from recognised University) Relevant references: - Karpov P., Godin G., Tetko I.V.: A Transformer Model for Retrosynthesis. In: Artificial Neural Networks and Machine Learning - ICANN 2019: Workshop and Special Sessions: 17th - 19th September 2019; München. Springer International Publishing: 817-830. - Tetko I.V., Karpov P., Van Deursen R., Godin G.: State-of-the-art augmented NLP transformer models for direct and single-step retrosynthesis. Nat Comm 2020, 11(1):1-11. About: This position is announced within the Advanced machine learning for Innovative Drug Discovery (AIDD) network (http://ai-dd.eu), which is funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 956832. Multi-modal learning is a hot topic in AI research around the globe. With this project, you will work on applying this innovative approach for chemistry and drug discovery. The project is run in collaboration with 15 other PhD students, multiple academic groups and industry partners, creating great opportunities for networking and collaboration on cool science projects.
This position will be located at Helmholtz Zentrum in vibrant München and at Janssen Pharmaceutica in Belgium. The fellow will collaborate with several other positions such as QM models for reactivity prediction based on machine learning, academic PI Alexandre Tkatchenko; Prediction of outcome of chemical reactions using new neural network architectures, academic PI Jürgen Schmidhuber and others. See application details at https://ai-dd.eu/esr-positions See also LinkedIn announcement at https://www.linkedin.com/posts/dorota-herman-ab433033_phd-positions-activity-6866004030494150656-mW69 Dr. Igor V. Tetko Institute of Structural Biology Helmholtz Zentrum Muenchen (GmbH) German Research Center for Environmental Health Ingolstaedter Landstrasse 1, D-85764 Neuherberg, Germany AIDD: http://ai-dd.eu (coordinator) OCHEM http://ochem.eu Helmholtz Zentrum Muenchen Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH) Ingolstaedter Landstr. 1 85764 Neuherberg www.helmholtz-muenchen.de Aufsichtsratsvorsitzende: MinDir.in Prof. Dr. Veronika von Messling Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Kerstin Guenther Registergericht: Amtsgericht Muenchen HRB 6466 USt-IdNr: DE 129521671 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rpaudel142 at gmail.com Mon Nov 15 14:48:12 2021 From: rpaudel142 at gmail.com (Ramesh Paudel) Date: Mon, 15 Nov 2021 14:48:12 -0500 Subject: Connectionists: CFP - (SaT-CPS 2022) ACM Workshop on Secure and Trustworthy Cyber-Physical Systems Message-ID: Dear Colleagues, *** Please accept our apologies if you receive multiple copies of this CFP *** Please consider submitting to, and/or forwarding to the appropriate groups/personnel, the ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS 2022), which will be held in the Baltimore-Washington, DC area (or virtually) on April 26, 2022, in conjunction with the 12th ACM Conference on Data and Application Security and Privacy (CODASPY 2022). *** Paper submission deadline: December 30, 2021 *** *** Website: https://sites.google.com/view/sat-cps-2022/ *** SaT-CPS aims to provide a forum for researchers and practitioners from industry and academia interested in various areas of CPS security. SaT-CPS seeks novel submissions describing practical and theoretical solutions for cyber security challenges in CPS. Submissions can be from different application domains in CPS.
Example topics of interest are given below, but submissions are not limited to these:
- Secure CPS architectures
- Authentication mechanisms for CPS
- Access control for CPS
- Key management in CPS
- Attack detection for CPS
- Threat modeling for CPS
- Forensics for CPS
- Intrusion and anomaly detection for CPS
- Trusted-computing in CPS
- Energy-efficient and secure CPS
- Availability, recovery, and auditing for CPS
- Distributed secure solutions for CPS
- Metrics and risk assessment approaches
- Privacy and trust
- Blockchain for CPS security
- Data security and privacy for CPS
- Digital twins for CPS
- Wireless sensor network security
- CPS/IoT malware analysis
- CPS/IoT firmware analysis
- Economics of security and privacy
- Securing CPS in medical devices/systems
- Securing CPS in civil engineering systems/devices
- Physical layer security for CPS
- Security on heterogeneous CPS
- Securing CPS in automotive systems
- Securing CPS in aerospace systems
- Usability security and privacy of CPS
- Secure protocol design in CPS
- Vulnerability analysis of CPS
- Anonymization in CPS
- Embedded systems security
- Formal security methods in CPS
- Industrial control system security
- Securing Internet-of-Things
- Securing smart agriculture and related domains

The workshop is planned for one day, April 26, 2022, the last day of the conference.

Instructions for Paper Authors
All submissions must describe original research, neither published nor currently under review for another workshop, conference, or journal. All papers must be submitted electronically via the EasyChair system: https://easychair.org/conferences/?conf=acmsatcps2022

Full-length papers
Papers must be at most 10 pages in length in double-column ACM format (as specified at https://www.acm.org/publications/proceedings-template). Submission implies the willingness of at least one author to attend the workshop and present the paper. Accepted papers will be included in the ACM Digital Library.
The presenter must register for the workshop before the deadline for author registration.

Position papers and work-in-progress papers
We also invite short position papers and work-in-progress papers. Such papers can be up to 6 pages in length in double-column ACM format (as specified at https://www.acm.org/publications/proceedings-template), and must clearly state "Position Paper" or "Work in Progress," as appropriate, in the title of the paper. These papers will be reviewed, and accepted papers will be published in the conference proceedings.

Important Dates
Due date for full workshop submissions: December 30, 2021
Notification of acceptance to authors: February 10, 2022
Camera-ready of accepted papers: February 20, 2022
Workshop day: April 26, 2022

*------------------------------------------------------* *Ramesh Paudel, Ph.D.* Publicity and Web Co-Chair Research Scientist George Washington University Washington, DC. rpaudel42 at gwu.edu, https://rpaudel42.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.mathias at tu-dresden.de Mon Nov 15 16:46:38 2021 From: brian.mathias at tu-dresden.de (Brian Mathias) Date: Mon, 15 Nov 2021 21:46:38 +0000 Subject: Connectionists: Postdoc position in cognitive neuroscience of bilingual semantics at TU Dresden, Germany Message-ID: Dear List, A 3-year postdoctoral position is available in the Chair of Cognitive and Clinical Neuroscience at the Technical University Dresden, Germany. The goal of the position is to test models of bilingual semantics using behavioral, fMRI, and EEG methods. TU Dresden is one of eleven German Universities of Excellence and provides an outstanding scientific infrastructure and an ideal environment for interdisciplinary cooperation.
Experiments will be performed at the Neuroimaging Center (http://www.nic-tud.de), which is equipped with a research-only MRI machine (Siemens 3T Prisma), MRI-compatible EEG, eye-tracking and noise-cancellation headphones, and a neurostimulation (TMS/tDCS) unit. The application deadline is November 26 and the position starts as soon as possible. Further details on the position and how to apply can be found at: https://tud.link/ca6i Please contact Dr. Brian Mathias with questions about the position (brian.mathias at tu-dresden.de). Best wishes, Brian _______ Brian Mathias, PhD Research Associate Chair of Cognitive and Clinical Neuroscience Technical University Dresden 01187 Dresden, Germany Email: brian.mathias at tu-dresden.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From pubconference at gmail.com Mon Nov 15 22:52:40 2021 From: pubconference at gmail.com (Pub Conference) Date: Mon, 15 Nov 2021 22:52:40 -0500 Subject: Connectionists: [Journal] Call for IEEE TNNLS Special Issue on "Stream Learning, " Submission Deadline: December 15, 2021 Message-ID: IEEE TNNLS Special Issue on "Stream Learning," Guest Editors: Jie Lu, University of Technology Sydney, Australia; Joao Gama, University of Porto, Portugal; Xin Yao, Southern University of Science and Technology, China; Leandro Minku, University of Birmingham, UK. Submission Deadline: December 15, 2021 [EXTENDED]. Website: https://cis.ieee.org/images/files/Publications/TNNLS/special-issues/One-Page_IEEE_Transactions_on_NNLS-SI-CFP-Update.pdf -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: One-Page_IEEE_Transactions_on_NNLS-SI-CFP-Update.pdf Type: application/pdf Size: 109969 bytes Desc: not available URL: From r.jolivet at ucl.ac.uk Mon Nov 15 11:13:34 2021 From: r.jolivet at ucl.ac.uk (Jolivet, Renaud) Date: Mon, 15 Nov 2021 16:13:34 +0000 Subject: Connectionists: Fully funded PhD and postdoc positions in Computational Neurosciences at Maastricht University Message-ID: <011ECAE1-9F54-46E9-A557-DAB1EDAD58A3@ucl.ac.uk> Dear Colleagues, My department has one fully funded postdoc position and multiple PhD positions for Chinese students to apply for via the China Scholarship Council:

1. I am recruiting a postdoc to work on our European EIC Pathfinder Open grant IN-FET (https://cordis.europa.eu/project/id/862882) to develop a new type of brain-machine interface. I am looking for a postdoc with experience in the computational modelling of neuronal biophysics at the cellular and sub-cellular levels, and an interest in using that knowledge to develop equivalent neural network models. The IN-FET project aims at developing a new type of interface for neural tissue that works by modulating the extracellular concentrations of key ionic species. Experience with COMSOL is desired but not mandatory. Full-time work based in Maastricht, but with frequent trips to Northern Italy (Trieste region) anticipated. Attractive conditions; the position is funded for 27 months. Anticipated start date: January 2022.

2. Our department is looking to attract Chinese students willing to apply with us for PhD positions funded by the China Scholarship Council. Positions are funded for 4 years.
The following projects are available in our department:
* https://www.maastrichtuniversity.nl/file/2021fsebreueragingmetabolicmodelingdoc
* https://www.maastrichtuniversity.nl/file/2021fsejolivetmechanismsandroleofaxon-myelinstructuralplasticitydocx
* https://www.maastrichtuniversity.nl/file/2021fseadriaensthegeneticunderpinningsofhumantinnitusdocx

Another project is available to investigate the metabolic profile of different glial cell types. If interested, contact me at r.jolivet at maastrichtuniversity.nl or DM me at twitter.com/RenaudJolivet. Cheers, Renaud Prof. Renaud B. Jolivet Neural Engineering and Computation, Chair Maastricht University Organization for Computational Neurosciences, Board of Directors Initiative for Science in Europe, External Policy Advisor and Board Member Marie Curie Alumni Association, Vice-Chair of Policy +41798302129 (mobile) +31433881741 (office) r.jolivet at maastrichtuniversity.nl twitter.com/RenaudJolivet linkedin.com/in/renaud-jolivet-63b5534 scholar.google.ch/citations?user=9Ozwv7EAAAAJ&hl=en -------------- next part -------------- An HTML attachment was scrubbed... URL: From ASIM.ROY at asu.edu Tue Nov 16 02:17:20 2021 From: ASIM.ROY at asu.edu (Asim Roy) Date: Tue, 16 Nov 2021 07:17:20 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Message-ID: For some perspective, the history of awards and prizes is replete with similar stories. George Dantzig, the godfather of linear programming, didn't get the economics Nobel prize in 1975 along with Koopmans and Kantorovich. Even Koopmans and Kantorovich were surprised that Dantzig was not included. Here's a quote from Dantzig's profile: https://www.informs.org/Explore/History-of-O.R.-Excellence/Biographical-Profiles/Dantzig-George-B "In 1975 Tjalling Koopmans and Leonid Kantorovich were awarded the Nobel Prize in Economics for their contribution in resource allocation and linear programming.
Many professionals, Koopmans and Kantorovich included, were surprised at Dantzig's exclusion as an honoree. Most individuals familiar with the situation considered him to be just as worthy of the prize." I read somewhere that Kantorovich was hesitant about accepting the prize and called Kenneth Arrow, who had won the Nobel in 1972. If I remember correctly, Arrow's advice to Kantorovich was something like this: "Just take it. You can't do anything about Dantzig not getting it." And, by the way, both Dantzig and Arrow were at Stanford at that time. Here's a footnote from the same bio: " (Unbeknownst to Dantzig and most other operations researchers in the West, a similar method was derived eight years prior by Soviet mathematician Leonid V. Kantorovich)" Asim Roy Arizona State University -----Original Message----- From: Connectionists connectionists-bounces at mailman.srv.cs.cmu.edu On Behalf Of Schmidhuber Juergen Sent: Sunday, November 14, 2021 9:48 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Dear all, thanks for your public comments, and many additional private ones! So far nobody has challenged the accuracy of any of the statements in the draft report currently under massive open peer review: https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scientific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ!IaFBkZn1WoaP06s-6kQU-hsGXGLHSoT9wZdNR8Ut7P5YNKGE62JhlbvFXe5hs0s$ Nevertheless, some of the recent comments will trigger a few minor revisions in the near future. Here are a few answers to some of the public comments: Randall O'Reilly wrote: "I vaguely remember someone making an interesting case a while back that it is the *last* person to invent something that gets all the credit." Indeed, as I wrote in Science (2011, reference [NASC3] in the report): "As they say: Columbus did not become famous because he was the first to discover America, but because he was the last." 
Sure, some people sometimes assign the "inventor" title to the person that should be truly called the "popularizer." Frequently, this is precisely due to the popularizer packaging the work of others in such a way that it becomes easily digestible. But this is not to say that their receipt of the title is correct or that we shouldn't do our utmost to correct it; their receipt of such title over the ones that are actually deserving of it is one of the most enduring issues in scientific history. As Stephen José Hanson wrote: "Well, to popularize is not to invent. Many of Juergen's concerns could be solved with some scholarship, such that authors look sometime before 2006 for other relevant references." Randy also wrote: "Sometimes, it is not the basic equations etc that matter: it is the big picture vision." However, the same vision has almost always been there in the earlier work on neural nets. It's just that the work was ahead of its time. It's only in recent years that we have the datasets and the computational power to realize those big picture visions. I think you would agree that simply scaling something up isn't the same as inventing it. If it were, then the name "Newton" would have little meaning to people nowadays. Jonathan D. Cohen wrote: " ...it is also worth noting that science is an *intrinsically social* endeavor, and therefore communication is a fundamental factor." Sure, but let's make sure that this cannot be used as a justification of plagiarism! See Sec. 5 of the report. Generally speaking, if B plagiarizes A but inspires C, whom should C cite? The answer is clear. Ponnuthurai Nagaratnam Suganthan wrote: "The name `deep learning' came about recently." Not so. See references in Sec. X of the report: the ancient term "deep learning" (explicitly mentioned by ACM) was actually first introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000). Tsvi Achler wrote: "Models which have true feedback (e.g.
back to their own inputs) cannot learn by backpropagation but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in `premier' journals and no funding. [...] Lastly Feedforward methods are predominant in a large part because they have financial backing from large companies with advertising and clout like Google and the self-driving craze that never fully materialized." This is very misleading - see Sec. A, B, and C of the report which are about recurrent nets with feedback, especially LSTM, heavily used by Google and others, on your smartphone since 2015. Recurrent NNs are general computers that can compute anything your laptop can compute, including any computable model with feedback "back to the inputs." My favorite proof from over 30 years ago: a little subnetwork can be used to build a NAND gate, and a big recurrent network of NAND gates can emulate the CPU of your laptop. (See also answers by Dan Levine, Gary Cottrell, and Juyang Weng.) However, as Asim Roy pointed out, this discussion deviates from the original topic of improper credit assignment. Please use another thread for this. Randy also wrote: "Should Newton be cited instead of Rumelhart et al, for backprop, as Steve suggested? Seriously, most of the math powering today's models is just calculus and the chain rule." This is so misleading in several ways - see Sec. XII of the report: "Some claim that `backpropagation is just the chain rule of Leibniz (1676) & L'Hopital (1696).' No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970" by Seppo Linnainmaa. Of course, the person to cite is Linnainmaa. Randy also wrote: "how little Einstein added to what was already established by Lorentz and others".
Juyang already respectfully objected to this misleading statement. I agree with what Anand Ramamoorthy wrote: "Setting aside broader aspects of the social quality of the scientific enterprise, let's take a look at a simpler thing; individual duty. Each scientist has a duty to science (as an intellectual discipline) and the scientific community, to uphold fundamental principles informing the conduct of science. Credit should be given wherever it is due - it is a matter of duty, not preference or `strategic value' or boosting someone because they're a great populariser. ... Crediting those who disseminate is fine and dandy, but should be for those precise contributions, AND the originators of an idea/method/body of work ought to be recognised - this is perhaps a bit difficult when the work is obscured by history, but not impossible. At any rate, if one has novel information of pertinence w.r.t. original work, then the right action is crystal clear." See also Sec. 5 of the report: "As emphasized earlier:[DLC][HIN] `The inventor of an important method should get credit for inventing it. They may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it - but not for inventing it.' If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later, and correctly give credit in follow-up papers and presentations." I also agree with what Zhaoping Li wrote: "I would find it hard to enter a scientific community if it is not scholarly. Each of us can do our bit to be scholarly, to set an example, if not a warning, to the next generation." Randy also wrote: "Outside of a paper specifically on the history of a field, does it really make sense to "require" everyone to cite obscure old papers that you can't even get a PDF of on google scholar?" This sounds almost like a defense of plagiarism. That's what time stamps of patents and papers are for.
A recurring point of the report is: the awardees did not cite the prior art - not even in later surveys written when the true origins of this work were well-known. Here I fully agree with what Marina Meila wrote: "Since credit is a form of currency in academia, let's look at the `hard currency' rewards of invention. Who gets them? The first company to create a new product usually fails. However, the interesting thing is that society (by this I mean the society most of us work in) has found it necessary to counteract this, and we have patent laws to protect the rights of the inventors. The point is not whether patent laws are effective or not, it's the social norm they implement. That to protect invention one should pay attention to rewarding the original inventors, whether we get the `product' directly from them or not." Jürgen ************************* On 27 Oct 2021, at 10:52, Schmidhuber Juergen wrote: Hi, fellow artificial neural network enthusiasts! The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field. Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it.
Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch: https://urldefense.com/v3/__https://people.idsia.ch/*juergen/scientific-integrity-turing-award-deep-learning.html__;fg!!IKRxdwAv5BmarQ!IaFBkZn1WoaP06s-6kQU-hsGXGLHSoT9wZdNR8Ut7P5YNKGE62JhlbvFXe5hs0s$ The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned. I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity. Thank you all in advance for your help! Jürgen Schmidhuber -------------- next part -------------- An HTML attachment was scrubbed... URL: From cognitivium at sciencebeam.com Tue Nov 16 04:17:05 2021 From: cognitivium at sciencebeam.com (Mary) Date: Tue, 16 Nov 2021 12:47:05 +0330 Subject: Connectionists: Neurofeedback and QEEG workshop postponed to a week later Message-ID: <202111160917.1AG9H6UB056622@scs-mx-04.andrew.cmu.edu> Greetings, On behalf of ScienceBeam, organizer of various Neuroscience workshops, and manufacturer of clinical and research brain-related products, we would like to inform you that due to many requests from the participants, we have postponed the Neurofeedback and QEEG workshop in Istanbul, Turkey, to a week later, in order to give the interested clinicians/researchers more time to arrange their trips to attend this semi-private hands-on workshop. This two-day hands-on workshop will feature the launch of the latest EEG device introduced by the Applied Neuroscience Company.
This will go over the details of EEG recording and generating a qEEG report, as well as Neurofeedback therapy with various treatment protocols. At the end of this workshop, the participants will be able to record EEGs and generate a qEEG report on their own. They will be able to provide Neurofeedback training according to the QEEG report using various treatment protocols. The workshop, which is suitable for both clinicians and researchers, will be held on November 27-28, and the registration deadline won't be extended again. Please bear in mind that limited seats are available, and save your seat NOW: https://sciencebeam.com/neurofeedback-and-qeeg-workshop-2/ workshop schedule: https://sciencebeam.com/wp-content/uploads/2021/11/Program-Schedule.pdf If you require any further information, please do not hesitate to contact us: Email: workshop at sciencebeam.com WhatsApp: 00905356498587 Mary Reae Human Neuroscience Dept. Manager @ScienceBeam mary at sciencebeam.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From julia.trommershaeuser at esi-frankfurt.de Tue Nov 16 07:58:48 2021 From: julia.trommershaeuser at esi-frankfurt.de (Trommershaeuser, Julia) Date: Tue, 16 Nov 2021 12:58:48 +0000 Subject: Connectionists: data analyst, Fries Lab at the Ernst Strüngmann Institute Message-ID: <31948f40fb694a02b5936804eb6f8cbc@esi-frankfurt.de> The Fries Lab at the Ernst Strüngmann Institute (ESI) in Frankfurt is looking for an enthusiastic research assistant / data analyst, who is interested in analyzing complex brain data underlying psychological phenomena. The position is for two years initially, with the possibility of extension, and can start at any time in 2021 or 2022. This call will remain open until the position is filled. THE POSITION The Fries Lab investigates how cognitive processes like attention are coded in the brain.
One idea is that this happens via groups of neurons in different brain areas that synchronize their activity. You will help analyze brain data that assesses this idea. You will be jointly supervised by Dr. Marieke Schölvinck and Prof. Pascal Fries, and your contribution to the analyses will be recognized in the form of authorships on publications. For details on the institute and the lab, see: www.esi-frankfurt.de/pf YOU The position is open for candidates with a range of qualifications, starting from a BSc degree and including candidates with a PhD. Your salary is commensurate with your qualifications. The main criterion is a solid background in Matlab programming and a convincing interest in analyzing brain data. HOW TO APPLY Applications should include a motivation letter, a curriculum vitae, copies of university degrees, and the names and e-mail addresses of two references. Candidates without a PhD degree should additionally include lists of their university courses with obtained grades. Please send your application materials in electronic format to: marieke.scholvinck at esi-frankfurt.de. Equal opportunities and diversity are important to us! All candidates are equally welcome and encouraged to apply. The ESI offers a family-friendly work environment with affordable child-care in the immediate vicinity and free parking on the premises, and is easily accessible by public transport. Please consider our data protection regulations: https://www.esi-frankfurt.de/datenschutz We look forward to hearing from you! Dr. Julia Trommershäuser (Scientific Coordinator, Fries Lab) Ernst Strüngmann Institute (ESI) gGmbH for Neuroscience in Cooperation with Max Planck Society Deutschordenstr. 46, 60528 Frankfurt, Germany Web: www.esi-frankfurt.de Mail: julia.trommershaeuser at esi-frankfurt.de Tel: +49 (0)69 96769 501 Fax: +49 (0)69 96769 555 Sitz der Gesellschaft: Frankfurt am Main Registered at Local Court Frankfurt - HRB 84266 CEO: Prof.
David Poeppel, PhD -------------- next part -------------- An HTML attachment was scrubbed... URL: From julia.trommershaeuser at esi-frankfurt.de Tue Nov 16 08:00:06 2021 From: julia.trommershaeuser at esi-frankfurt.de (Trommershaeuser, Julia) Date: Tue, 16 Nov 2021 13:00:06 +0000 Subject: Connectionists: Scientific Software Developer, Fries Lab at the Ernst Strüngmann Institute Message-ID: <5e8801bfc78f4a1f903db7315f351a41@esi-frankfurt.de> The Fries Lab at the Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society in Frankfurt, Germany, recently started developing a novel software framework: Systems Neuroscience Computing in Python (SyNCoPy), https://github.com/esi-neuroscience/syncopy SyNCoPy is a fully open-source Python environment for neuronal data analysis. SyNCoPy is scalable and built for very large datasets. It is designed to excel in high-performance computing (HPC) environments by leveraging the parallelization and distributed-computing capabilities of Dask, a modern data analytics framework. It is compatible with the MATLAB toolbox FieldTrip. To pursue the further development of SyNCoPy, the Fries Lab is complementing the current SyNCoPy team with a Scientific Software Developer (f/m/d). The position is initially limited to a period of two years with the possibility of extension and can start as soon as possible.

Responsibilities:
- Implementation of novel analysis routines in SyNCoPy
- Optimization of concurrent/parallel computing routines in SyNCoPy
- Design of programming interfaces with external Python and MATLAB packages for SyNCoPy

Required:
- Intermediate experience with Python, specifically with the packages NumPy, SciPy and scikit-learn
- Knowledge of and passion for applied statistics, time series analysis and/or digital signal processing
- Strong interest and at least some experience in data visualization

Preferred:
- Practical know-how of Git
- Interest in open-source software development
- Some experience with MATLAB and/or Octave

Desired:
- Know-how of HPC cluster queuing systems (SLURM)
- Neuroscientific background

What We Offer:
- Room to grow and learn: from software design through mathematical problem solving to end-user support
- Exciting environment at an excellent research institute
- Small core development team with flat hierarchies
- Possibility of working remotely

The successful candidate will benefit from the mentoring of the existing SyNCoPy team of two experienced Research Software Engineers. An integral part of the position is the further development of skills in the above-mentioned areas through workshops and courses. The salary is in accordance with the TVöD (Bund). Interested candidates are invited to send a cover letter briefly explaining their professional background and programming experience, a CV, and, if possible, web links to relevant software projects they contributed to as one PDF file to hr-esi at esi-frankfurt.de with the subject line "SyNCoPy Fries Lab" before January 31st, 2022. Equal opportunities and diversity are important to us! All potential candidates are equally welcome and encouraged to apply. The ESI offers a family-friendly working environment with affordable childcare (children between 3 and 36 months) in the immediate vicinity, is easily accessible by public transport, and offers free parking on the institute premises. Please consider our data protection regulations: https://www.esi-frankfurt.de/datenschutz Dr. Julia Trommershäuser (Scientific Coordinator, Fries Lab) Ernst Strüngmann Institute (ESI) gGmbH for Neuroscience in Cooperation with Max Planck Society Deutschordenstr. 46, 60528 Frankfurt, Germany Web: www.esi-frankfurt.de Mail: julia.trommershaeuser at esi-frankfurt.de Tel: +49 (0)69 96769 501 Fax: +49 (0)69 96769 555 Sitz der Gesellschaft: Frankfurt am Main Registered at Local Court Frankfurt - HRB 84266 CEO: Prof.
David Poeppel, PhD -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.w.vandemeent at uva.nl Tue Nov 16 09:18:25 2021 From: j.w.vandemeent at uva.nl (Jan-Willem van de Meent) Date: Tue, 16 Nov 2021 14:18:25 +0000 Subject: Connectionists: Tenure-track Position at the University of Amsterdam Message-ID: Dear Colleagues, We have an opening for a tenure-track position at the University of Amsterdam, which will be affiliated with our ELLIS unit. The deadline for applications is December 13: https://vacatures.uva.nl/UvA/job/Tenure-Track-position-In-Machine-Learning/735935202/ We are inviting applications from excellent candidates whose research focuses on machine learning, including application areas of machine learning in the natural sciences. We strongly encourage applications from researchers from under-represented groups in computing and will prioritize such applications in our review process. If you have any questions about this position, please contact Max Welling and/or me, Jan-Willem van de Meent. With best wishes, Jan-Willem -- Jan-Willem van de Meent Associate Professor (UHD) Amsterdam Machine Learning Lab Institute of Informatics University of Amsterdam (UvA) https://jwvdm.github.io -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dayan at tue.mpg.de Tue Nov 16 16:10:02 2021 From: dayan at tue.mpg.de (Peter Dayan) Date: Tue, 16 Nov 2021 22:10:02 +0100 Subject: Connectionists: deadline: 30th Nov: MSc/PhD in Neuroscience in Tuebingen In-Reply-To: <20211014105046.wtbpasjz3dbrdz6v@tuebingen.mpg.de> References: <20211014105046.wtbpasjz3dbrdz6v@tuebingen.mpg.de> Message-ID: <20211116211002.c4m2vcdxbyahtfth@tuebingen.mpg.de> International Max Planck Research School: The Mechanisms of Mental Function and Dysfunction 5-Year combined MSc/PhD program The Max Planck Institute for Biological Cybernetics, the Hertie Institute for Clinical Brain Research and the University of Tübingen invite students from all over the world to apply for their interdisciplinary 5-year combined MSc/PhD program leading to a PhD in Neuroscience. Full funding is available for top-ranked applicants. We are seeking talented, curious and open-minded scientists with strong backgrounds in neuroscience, biomedical sciences, computational science, applied mathematics, statistics, artificial intelligence, or engineering. Successful candidates will possess a burning aspiration to shape the future of neuroscience and the ability to thrive in a fast-paced, interdisciplinary environment. The application deadline is 30th November 2021. Please visit: https://www.kyb.tuebingen.mpg.de/imprs-mmfd https://www.neuroschool-tuebingen.de/about-imprs/ for more details and information about applying. The MSc/PhD program is a collaboration between the Max Planck Institute for Biological Cybernetics, the Hertie Institute for Clinical Brain Research and the University of Tübingen. It is closely affiliated with the renowned Graduate Training Centre of Neuroscience, the centerpiece of neuroscience training in Tübingen. 
Students (who should have been awarded a Bachelor's degree by September 2022) will receive a broad interdisciplinary training in neuroscience, including expert teaching by internationally renowned scientists and individual and intensive mentoring. Potential research topics cover a variety of fields in systems neuroscience, cognitive and behavioral neuroscience, computational neuroscience, translational and clinical neuroscience as well as cellular and molecular neuroscience. Teaching and research are conducted in English. -- -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 4936 bytes Desc: not available URL: From juergen at idsia.ch Wed Nov 17 00:45:04 2021 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Wed, 17 Nov 2021 05:45:04 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <3268601b-397d-3c44-da5a-29b330bb5cf5@rubic.rutgers.edu> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> <3268601b-397d-3c44-da5a-29b330bb5cf5@rubic.rutgers.edu> Message-ID: <0659F820-64CD-4BF3-B7EC-727B7D146565@supsi.ch> In a mature field like math we'd never have such a discussion. It is well-known that plagiarism may be unintentional (e.g., https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism). However, since nobody can read the minds and intentions of others, science & tech came up with a _formal_ way of establishing priority: the time stamps of publications and patents. 
If you file your patent one day after your competitor, you are scooped, no matter whether you are the original inventor, a re-inventor, a great popularizer, or whatever. (In the present context, however, we are mostly talking about decades rather than days.) Randy wrote: "For example, I cite Rumelhart et al (1986) for backprop, because that is how I and most other people in the modern field learned about this idea, and we know for a fact that they genuinely reinvented it and conveyed its implications in a very compelling way. If I might be writing a paper on the history of backprop, or some comprehensive review, then yes it would be appropriate to cite older versions that had limited impact, being careful to characterize the relationship as one of reinvention." See Sec. XVII of the report: "The deontology of science requires: If one `re-invents' something that was already known, and only becomes aware of it later, one must at least clarify it later, and correctly give credit in all follow-up papers and presentations." In particular, along the lines of Randy's remarks on historic surveys: from a survey like the 2021 Turing lecture you'd expect correct credit assignment instead of additional attempts at getting credit for work done by others. See Sec. 2 of the report: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html#lbhacm I also agree with what Barak Pearlmutter wrote on the example of backpropagation: "So if you want to cite, say, Seppo Linnainmaa for Reverse Mode Automatic Differentiation, you need to have actually read it yourself. 
Otherwise you need to do a bounce citation: `Linnainmaa (1982) described by Schmidhuber (2021) as exhibiting a Fortran implementation of Reverse Mode Automatic Differentiation' or something like that." Indeed, you don't have to read Linnainmaa's original 1970 paper on what's now called backpropagation, you can read his 1976 journal paper (in English), or Griewank's 2012 paper "Who invented the reverse mode of differentiation?" in Documenta Mathematica, or other papers on this famous subject, some of them cited in my report, which also cites Werbos, who first applied the method to NNs in 1982 (but not yet in his 1974 thesis). The report continues: "By 1985, compute had become about 1,000 times cheaper than in 1970, and the first desktop computers had just become accessible in wealthier academic labs. Computational experiments then demonstrated that backpropagation can yield useful internal representations in hidden layers of NNs.[RUM] But this was essentially just an experimental analysis of a known method.[BP1-2] And the authors did not cite the prior art - not even in later surveys.[DL3,DL3a][DLC]" I find it interesting what Asim Roy wrote: "In 1975 Tjalling Koopmans and Leonid Kantorovich were awarded the Nobel Prize in Economics for their contribution in resource allocation and linear programming. Many professionals, Koopmans and Kantorovich included, were surprised at Dantzig's exclusion as an honoree. Most individuals familiar with the situation considered him to be just as worthy of the prize. [...] (Unbeknownst to Dantzig and most other operations researchers in the West, a similar method was derived eight years prior by Soviet mathematician Leonid V. Kantorovich)." Let's not forget, however, that there is no "Nobel Prize in Economics!" 
(See also this 2010 paper: https://people.idsia.ch/~juergen/nobelshare.html) Jürgen From zk240 at cam.ac.uk Tue Nov 16 20:01:23 2021 From: zk240 at cam.ac.uk (Zoe Kourtzi) Date: Wed, 17 Nov 2021 01:01:23 +0000 Subject: Connectionists: Postdoc positions in Computational Neuroimaging Message-ID: <02B9A10E-758B-4FF9-B696-3C49A0FB16EE@cam.ac.uk> 2x post-doctoral positions in Computational Neuroimaging at the Adaptive Brain Lab (http://www.abg.psychol.cam.ac.uk), University of Cambridge, UK. This is an opportunity to work with our cross-disciplinary team on a new Wellcome Trust-funded Collaborative Award that bridges work across species (humans, rodents) and scales (local circuits, global networks) to uncover the network and neurochemical mechanisms that support learning and brain plasticity. Successful applicants will be integrated into a diverse collaborative team of international experts and will receive cross-disciplinary training in innovative methodologies at the interface of neuroscience, neurotechnology and computational science. For details and to apply online, see: https://www.jobs.cam.ac.uk/job/32242/ For informal enquiries, please contact Prof Zoe Kourtzi (zk240 at cam.ac.uk) with a CV and a brief statement of background, skills and research interests. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juyang.weng at gmail.com Tue Nov 16 20:30:25 2021 From: juyang.weng at gmail.com (Juyang Weng) Date: Tue, 16 Nov 2021 20:30:25 -0500 Subject: Connectionists: Connectionists Digest, Vol 764, Issue 1 In-Reply-To: References: Message-ID: Dear Juergen, I respectfully waited until people have had enough time to respond to your plagiarism allegations. Many people are probably not aware of a much more severe problem than the plagiarism you correctly raised: I would like to point out that error-backprop is a major technical flaw in many types of neural networks (CNN, LSTM, etc.) 
buried in a protocol violation called Post-Selection Using Test Sets (PSUTS). See this IJCNN 2021 paper: J. Weng, "On Post Selections Using Test Sets (PSUTS) in AI", in Proc. International Joint Conference on Neural Networks, pp. 1-8, Shenzhen, China, July 18-22, 2021. PDF file. Those who do not agree with me, please respond. Best regards, -John ---------------------------------------------------------------------- Message: 1 Date: Sun, 14 Nov 2021 16:47:36 +0000 From: Schmidhuber Juergen To: "connectionists at cs.cmu.edu" Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. Message-ID: <532DC982-9F4B-41F8-9AB4-AD21314C6472 at supsi.ch> Content-Type: text/plain; charset="utf-8" Dear all, thanks for your public comments, and many additional private ones! So far nobody has challenged the accuracy of any of the statements in the draft report currently under massive open peer review: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html Nevertheless, some of the recent comments will trigger a few minor revisions in the near future. Here are a few answers to some of the public comments: Randall O'Reilly wrote: "I vaguely remember someone making an interesting case a while back that it is the *last* person to invent something that gets all the credit." Indeed, as I wrote in Science (2011, reference [NASC3] in the report): "As they say: Columbus did not become famous because he was the first to discover America, but because he was the last." Sure, some people sometimes assign the "inventor" title to the person that should be truly called the "popularizer." Frequently, this is precisely due to the popularizer packaging the work of others in such a way that it becomes easily digestible. 
But this is not to say that their receipt of the title is correct or that we shouldn't do our utmost to correct it; their receipt of such a title over the ones that are actually deserving of it is one of the most enduring issues in scientific history. As Stephen José Hanson wrote: "Well, to popularize is not to invent. Many of Juergen's concerns could be solved with some scholarship, such that authors look sometime before 2006 for other relevant references." Randy also wrote: "Sometimes, it is not the basic equations etc that matter: it is the big picture vision." However, the same vision has almost always been there in the earlier work on neural nets. It's just that the work was ahead of its time. It's only in recent years that we have the datasets and the computational power to realize those big picture visions. I think you would agree that simply scaling something up isn't the same as inventing it. If it were, then the name "Newton" would have little meaning to people nowadays. Jonathan D. Cohen wrote: "...it is also worth noting that science is an *intrinsically social* endeavor, and therefore communication is a fundamental factor." Sure, but let's make sure that this cannot be used as a justification of plagiarism! See Sec. 5 of the report. Generally speaking, if B plagiarizes A but inspires C, whom should C cite? The answer is clear. Ponnuthurai Nagaratnam Suganthan wrote: "The name `deep learning' came about recently." Not so. See references in Sec. X of the report: the ancient term "deep learning" (explicitly mentioned by ACM) was actually first introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000). Tsvi Achler wrote: "Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in `premier' journals and no funding. 
[...] Lastly Feedforward methods are predominant in a large part because they have financial backing from large companies with advertising and clout like Google and the self-driving craze that never fully materialized." This is very misleading - see Sec. A, B, and C of the report which are about recurrent nets with feedback, especially LSTM, heavily used by Google and others, on your smartphone since 2015. Recurrent NNs are general computers that can compute anything your laptop can compute, including any computable model with feedback "back to the inputs." My favorite proof from over 30 years ago: a little subnetwork can be used to build a NAND gate, and a big recurrent network of NAND gates can emulate the CPU of your laptop. (See also answers by Dan Levine, Gary Cottrell, and Juyang Weng.) However, as Asim Roy pointed out, this discussion deviates from the original topic of improper credit assignment. Please use another thread for this. Randy also wrote: "Should Newton be cited instead of Rumelhart et al, for backprop, as Steve suggested? Seriously, most of the math powering today's models is just calculus and the chain rule." This is so misleading in several ways - see Sec. XII of the report: "Some claim that `backpropagation is just the chain rule of Leibniz (1676) & L'Hopital (1696).' No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970" by Seppo Linnainmaa. Of course, the person to cite is Linnainmaa. Randy also wrote: "how little Einstein added to what was already established by Lorentz and others". Juyang already respectfully objected to this misleading statement. I agree with what Anand Ramamoorthy wrote: "Setting aside broader aspects of the social quality of the scientific enterprise, let's take a look at a simpler thing; individual duty. 
Each scientist has a duty to science (as an intellectual discipline) and the scientific community, to uphold fundamental principles informing the conduct of science. Credit should be given wherever it is due - it is a matter of duty, not preference or `strategic value' or boosting someone because they're a great populariser. ... Crediting those who disseminate is fine and dandy, but should be for those precise contributions, AND the originators of an idea/method/body of work ought to be recognised - this is perhaps a bit difficult when the work is obscured by history, but not impossible. At any rate, if one has novel information of pertinence w.r.t. original work, then the right action is crystal clear." See also Sec. 5 of the report: "As emphasized earlier:[DLC][HIN] `The inventor of an important method should get credit for inventing it. They may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it - but not for inventing it.' If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later, and correctly give credit in follow-up papers and presentations." I also agree with what Zhaoping Li wrote: "I would find it hard to enter a scientific community if it is not scholarly. Each of us can do our bit to be scholarly, to set an example, if not a warning, to the next generation." Randy also wrote: "Outside of a paper specifically on the history of a field, does it really make sense to "require" everyone to cite obscure old papers that you can't even get a PDF of on google scholar?" This sounds almost like a defense of plagiarism. That's what time stamps of patents and papers are for. A recurring point of the report is: the awardees did not cite the prior art - not even in later surveys written when the true origins of this work were well-known. 
Here I fully agree with what Marina Meila wrote: "Since credit is a form of currency in academia, let's look at the `hard currency' rewards of invention. Who gets them? The first company to create a new product usually fails. However, the interesting thing is that society (by this I mean the society most of us work in) has found it necessary to counteract this, and we have patent laws to protect the rights of the inventors. The point is not whether patent laws are effective or not, it's the social norm they implement. That to protect invention one should pay attention to rewarding the original inventors, whether we get the `product' directly from them or not." Jürgen On Mon, Nov 15, 2021 at 12:35 PM < connectionists-request at mailman.srv.cs.cmu.edu> wrote: > Send Connectionists mailing list submissions to > connectionists at mailman.srv.cs.cmu.edu > > To subscribe or unsubscribe via the World Wide Web, visit > https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists > or, via email, send a message with subject or body 'help' to > connectionists-request at mailman.srv.cs.cmu.edu > > You can reach the person managing the list at > connectionists-owner at mailman.srv.cs.cmu.edu > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Connectionists digest..." > > > Today's Topics: > > 1. Re: Scientific Integrity, the 2021 Turing Lecture, etc. > (Schmidhuber Juergen) > 2. [journals] Special Issue on the topic 'Cognitive Robotics > in Social Applications' (Francesco Rea) > 3. Re: Scientific Integrity, the 2021 Turing Lecture, etc. > (Randall O'Reilly) > 4. Re: Scientific Integrity, the 2021 Turing Lecture, etc. > (Maria Kesa) > 5. Re: Scientific Integrity, the 2021 Turing Lecture, etc. > (Barak A. Pearlmutter) > 6. CFP - Special issue on 'Human-like Behavior and Cognition in > Robots' 
(marwen Belkaid) > > > ---------------------------------------------------------------------- > > ************************* > > On 27 Oct 2021, at 10:52, Schmidhuber Juergen wrote: > > Hi, fellow artificial neural network enthusiasts! > > The connectionists mailing list is perhaps the oldest mailing list on > ANNs, and many neural net pioneers are still subscribed to it. I am hoping > that some of them - as well as their contemporaries - might be able to > provide additional valuable insights into the history of the field. > > Following the great success of massive open online peer review (MOOR) for > my 2015 survey of deep learning (now the most cited article ever published > in the journal Neural Networks), I've decided to put forward another piece > for MOOR. I want to thank the many experts who have already provided me > with comments on it. Please send additional relevant references and > suggestions for improvements for the following draft directly to me at > juergen at idsia.ch: > > > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html > > The above is a point-for-point critique of factual errors in ACM's > justification of the ACM A. M. Turing Award for deep learning and a > critique of the Turing Lecture published by ACM in July 2021. This work can > also be seen as a short history of deep learning, at least as far as ACM's > errors and the Turing Lecture are concerned. > > I know that some view this as a controversial topic. 
However, it is the > very nature of science to resolve controversies through facts. Credit > assignment is as core to scientific history as it is to machine learning. > My aim is to ensure that the true history of our field is preserved for > posterity. > > Thank you all in advance for your help! > > Jürgen Schmidhuber > > > > ------------------------------ > > Message: 2 > Date: Sun, 14 Nov 2021 15:40:56 +0000 > From: Francesco Rea > To: "connectionists at mailman.srv.cs.cmu.edu" > > Subject: Connectionists: [journals] Special Issue on the topic > 'Cognitive Robotics in Social Applications' > Message-ID: <5af44b4a238247ccb0968b75fe639e0c at iit.it> > Content-Type: text/plain; charset="windows-1252" > > Dear colleague, > > We hope this email finds you well! > > We would like to kindly inform you about a Special Issue on the topic > 'Cognitive Robotics in Social Applications' of the open access journal > 'Electronics' (ISSN 2079-9292, IF 2.397), for which we are serving as Guest > Editors. > > We are writing to inquire whether you would be interested in submitting a > contribution to this Special Issue. The deadline for submitting the > manuscript is 31 December 2021. > > Please find more details for this call and all the submission information > at the following link: > > https://www.mdpi.com/journal/electronics/special_issues/cognitive_robots > > We hope you will contribute to this well-focused Special Issue, and we > would be grateful if you could forward this information to friends and > colleagues who might be interested in the topic. > > Best Regards, > > Prof. Dr. Dimitri Ognibene, Dr. Giovanni Pilato, Dr. Francesco Rea > Guest Editors > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: < > http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20211114/4416c1d3/attachment-0001.html > > > > ------------------------------ > > Message: 3 > Date: Mon, 15 Nov 2021 00:36:09 -0800 > From: "Randall O'Reilly" > To: Schmidhuber Juergen > Cc: Connectionists Connectionists > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > Message-ID: <6AC9BA06-DBF7-4FCA-87CE-0776DE9CC498 at ucdavis.edu> > Content-Type: text/plain; charset=us-ascii > > Juergen, > > > Generally speaking, if B plagiarizes A but inspires C, whom should C > cite? The answer is clear. > > Using the term plagiarize here implies a willful stealing of other > people's ideas, and is a very serious allegation as I'm sure you are > aware. At least some of the issues you raised are clearly not of this > form, involving obscure publications that almost certainly the so-called > plagiarizers had no knowledge of. This is then a case of reinvention, > which happens all the time and is still hard to avoid even with tools like > google scholar available now (but not back when most of the relevant work > was being done). You should be very careful to not confuse these two > things, and only allege plagiarism when there is a very strong case to be > made. > > In any case, consider this version: > > If B reinvents A but publishes a much more [comprehensive | clear | > applied | accessible | modern] (whatever) version that becomes the main way > in which many people C learn about the relevant idea, whom should C cite? > > For example, I cite Rumelhart et al (1986) for backprop, because that is > how I and most other people in the modern field learned about this idea, > and we know for a fact that they genuinely reinvented it and conveyed its > implications in a very compelling way. 
If I might be writing a paper on > the history of backprop, or some comprehensive review, then yes it would be > appropriate to cite older versions that had limited impact, being careful > to characterize the relationship as one of reinvention. > > Referring to Rumelhart et al (1986) as "popularizers" is a gross > mischaracterization of the intellectual origins and true significance of > such a work. Many people in this discussion have used that term > inappropriately as it applies to the relevant situations at hand here. > > > Randy also wrote: "how little Einstein added to what was already > established by Lorentz and others". Juyang already respectfully objected to > this misleading statement. > > I beg to differ -- this is a topic of extensive ongoing debate: > https://en.wikipedia.org/wiki/Relativity_priority_dispute -- specifically > with respect to special relativity, which is the case I was referring to, > not general relativity, although it appears there are issues there too. > > - Randy > > > ------------------------------ > > Message: 4 > Date: Mon, 15 Nov 2021 11:11:44 +0100 > From: Maria Kesa > To: "Randall O'Reilly" > Cc: Connectionists Connectionists > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > Message-ID: > a8xwrC07FOSLC2R6wAG47Cw9V3aQx-az4cA at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > My personal take and you can all kiss my ass message > > https://fuckmyasspsychiatry.blogspot.com/2021/11/jurgen-schmidhuber-is-ethically-bankrupt.html > > All the very best, > Maria Kesa > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20211115/3bbd31b2/attachment-0001.html > > > > ------------------------------ > > Message: 5 > Date: Mon, 15 Nov 2021 14:21:33 +0000 > From: "Barak A. Pearlmutter" > To: "connectionists at cs.cmu.edu" > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > Message-ID: > W6TmfLZw at mail.gmail.com> > Content-Type: text/plain; charset="UTF-8" > > One point of scientific propriety and writing that may be getting lost > in the scrum here, and which has I think contributed substantially to > the somewhat woeful state of credit assignment in the field, is the > traditional idea of what a citation *means*. > > If a paper says "we use the Foo Transform (Smith, 1995)" that, > traditionally, implies that the author has actually read Smith (1995) > and it describes the Foo Transform as used in the work being > presented. If the author was told that the Foo Transform was actually > discovered by Barker (1980) but the author hasn't actually verified > that by reading Barker (1980), then the author should NOT just cite > Barker. If the author heard that Barker (1980) is the "right" citation > for the Foo Transform, but they got the details of it that they're > actually using from Smith (1995) then they're supposed to say so: "We > use the Foo Transform as described in Smith (1995), attributed to > Barker (1980) by someone I met in line for the toilet at NeurIPS > 2019". 
> > This seemingly-antediluvian practice is to guard against people citing > "Barker (1980)" as saying something that it actually doesn't say, > proving a theorem that it doesn't, defining terms ("rate code", cough > cough) in a fashion that is not consistent with Barker's actual > definitions, etc. Iterated violations of this often manifest as > repeated and successive simplification of an idea, a so-called game of > telephone, until something not even true is sagely attributed to some > old publication that doesn't actually say it. > > So if you want to cite, say, Seppo Linnainmaa for Reverse Mode > Automatic Differentiation, you need to have actually read it yourself. > Otherwise you need to do a bounce citation: "Linnainmaa (1982) > described by Schmidhuber (2021) as exhibiting a Fortran implementation > of Reverse Mode Automatic Differentiation" or something like that. > > This is also why it's considered fine to simply cite a textbook or > survey paper: nobody could possibly mistake those as the original > source, but they may well be where the author actually got it from. > > To bring this back to the present thread: I must confess that I have > not actually read many of the old references Jürgen brings up. > Certainly "X (1960) invented deep learning" is not enough to allow > someone to cite them. It's not even enough for a bounce citation. What > did they *actually* do? What is Jürgen saying they actually did? > > > > ------------------------------ > > Message: 6 > Date: Mon, 15 Nov 2021 16:28:04 +0100 > From: marwen Belkaid > To: > Subject: Connectionists: CFP - Special issue on "Human-like Behavior > and Cognition in Robots"
> Message-ID: <611a17bd-93fa-6f28-a07b-f82ab780be1a at iit.it> > Content-Type: text/plain; charset="utf-8"; Format="flowed" > > > Call for papers > > Special issue on "Human-like Behavior and Cognition in Robots" in the > International Journal of Social Robotics > > _Submission deadline_: January 5, 2022; Research articles and > Theoretical papers > > _More info_: https://www.springer.com/journal/12369/updates/19850712 > > > > *Description* > > This Special Issue is in continuation of the HBCR workshop organized at > the 2021 IEEE/RSJ International Conference on Intelligent Robots and > Systems (IROS 2021) on "Human-like Behavior and Cognition in Robots". > Submissions > are welcomed from contributors who attended the workshop as well as from > those who did not. > > Building robots capable of behaving in a human-like manner is a > long-term goal in robotics. It is becoming even more crucial with the > growing number of applications in which robots are brought closer to > humans, not only trained experts, but also inexperienced users, > children, the elderly, or clinical populations. > > Current research from different disciplines contributes to this general > endeavor in various ways: > > * by creating robots that mimic specific aspects of human behavior, > > * by designing brain-inspired cognitive architectures for robots, > > * by implementing embodied neural models driving robots' behavior, > > * by reproducing human motion dynamics on robots, > > * by investigating how humans perceive and interact with robots, > dependent on the degree of the robots' human-likeness.
> > This special issue thus welcomes research articles as well as > theoretical articles from different areas of research (e.g., robotics, > artificial intelligence, human-robot interaction, computational modeling > of human cognition and behavior, psychology, cognitive neuroscience) > addressing questions such as the following: > > * How to design robots with human-like behavior and cognition? > > * What are the best methods for examining human-like behavior and > cognition? > > * What are the best approaches for implementing human-like behavior > and cognition in robots? > > * How to manipulate, control and measure robots' degree of > human-likeness? > > * Is autonomy a prerequisite for human-likeness? > > * How to best measure human reception of human-likeness of robots? > > * What is the link between perceived human-likeness and social > attunement in human-robot interaction? > > * How can such human-like robots inform and enable human-centered > research? > > * How can modeling human-like behavior in robots inform us about human > cognition? > > * In what contexts and applications do we need human-like behavior or > cognition? > > * And in what contexts is it not necessary? > > > *Guest editors* > > * Marwen Belkaid, Istituto Italiano di Tecnologia (Italy) > > * Giorgio Metta, Istituto Italiano di Tecnologia (Italy) > > * Tony Prescott, University of Sheffield (United Kingdom) > > * Agnieszka Wykowska, Istituto Italiano di Tecnologia (Italy) > > > -- > Dr Marwen BELKAID > Istituto Italiano di Tecnologia > Center for Human Technologies > Via Enrico Melen, 83 > 16152 Genoa, Italy > > -------------- next part -------------- > An HTML attachment was scrubbed...
> URL: < > http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20211115/fbe82839/attachment-0001.html > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > Connectionists mailing list > Connectionists at mailman.srv.cs.cmu.edu > https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionists > > ------------------------------ > > End of Connectionists Digest, Vol 764, Issue 1 > ********************************************** > -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From juyang.weng at gmail.com Wed Nov 17 12:05:06 2021 From: juyang.weng at gmail.com (Juyang Weng) Date: Wed, 17 Nov 2021 12:05:06 -0500 Subject: Connectionists: Connectionists Digest, Vol 764, Issue 1 In-Reply-To: References: Message-ID: Dear Suganthan, thank you for your message about what you did in your neural network experiments. If I were you, I would not simply trust what advisees said to me, but I trust my understanding of the current severe lack of generalization power in shallow data fitting by CNNs and LSTMs (including adversarial learning), etc. As I stated in my Post-Selection paper in IJCNN 2021, we must demand transparency in the Post-Selection stage, such as reporting the distribution of performances of all networks that have been trained. Without presentation of such a distribution, we should not simply trust a terse statement like the one from your advisees. Our own experiments have shown a huge variation in such a distribution! Why? Error-backprop in neural networks is like "Chairman Mao Zedong did error-backprop in China". Chairman Mao may present only one lucky case, like a nuclear bomb, to brag about the Chinese planned economy. 
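The transparency that Weng demands above (reporting the distribution of performances over all trained networks, rather than only the luckiest one) can be illustrated with a small, self-contained simulation. This sketch is not from any of the posts; the accuracy numbers and the `train_and_evaluate` stand-in are invented purely for illustration of the post-selection effect.

```python
# Illustrative toy simulation of "Post-Selection Using Test Sets":
# reporting only the best of many trained networks inflates the
# apparent test accuracy relative to the full distribution.
# All quantities here are made up; nothing is measured from real models.
import random
import statistics

random.seed(0)

def train_and_evaluate():
    # Stand-in for training one network: a hypothetical "true" skill
    # of 0.80, plus run-to-run variation from initialization and noise.
    return random.gauss(0.80, 0.05)

# Train many candidate networks, as in a typical tuning loop.
accuracies = [train_and_evaluate() for _ in range(30)]

# Transparent report: the whole distribution across all trained networks.
print(f"mean = {statistics.mean(accuracies):.3f}")
print(f"std  = {statistics.stdev(accuracies):.3f}")

# Post-selected report: only the luckiest network, which sits above the mean.
print(f"best = {max(accuracies):.3f}")
```

The gap between `best` and `mean` is exactly the quantity that stays hidden when only the selected network's test accuracy is published.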
For those who would like to get a more intuitive explanation, watch this YouTube video: https://youtu.be/VpsufMtia14 The neural network community needs a fundamental cultural change: citation integrity and transparency of all networks trained. Best regards, -John On Wed, Nov 17, 2021 at 5:20 AM Ponnuthurai Nagaratnam Suganthan < EPNSugan at ntu.edu.sg> wrote: > Dear John, > My understanding is that we use the test set only once to determine test > accuracy and that is the last step. We just report test results and end. > I'm not aware of any selection afterwards. If anyone does anything like > that, that'd be cheating. > Best Regards > Suganthan > > ------------------------------ > *From:* Connectionists on > behalf of Juyang Weng > *Sent:* Wednesday, 17 November 2021 9:30 am > *To:* Post Connectionists > *Subject:* Re: Connectionists: Connectionists Digest, Vol 764, Issue 1 > > Dear Juergen, > > I respectfully waited till people have had enough time to respond to your > plagiarism allegations. > > Many people probably are not aware of a much more severe problem than the > plagiarism you correctly raised: > > I would like to raise here that error-backprop is a major technical flaw > in many types of neural networks (CNN, LSTM, etc.) buried in a protocol > violation called Post-Selection Using Test Sets (PSUTS). > See this IJCNN 2021 paper: > J. Weng, "On Post Selections Using Test Sets (PSUTS) in AI", in Proc. > International Joint Conference on Neural Networks, pp. 1-8, Shenzhen, > China, July 18-22, 2021. PDF file > . > > Those who do not agree with me please respond. > > Best regards, > -John > ---------------------------------------------------------------------- > > Message: 1 > Date: Sun, 14 Nov 2021 16:47:36 +0000 > From: Schmidhuber Juergen > To: "connectionists at cs.cmu.edu" > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc.
> Message-ID: <532DC982-9F4B-41F8-9AB4-AD21314C6472 at supsi.ch> > Content-Type: text/plain; charset="utf-8" > > Dear all, thanks for your public comments, and many additional private > ones! > > So far nobody has challenged the accuracy of any of the statements in the > draft report currently under massive open peer review: > > > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html > > Nevertheless, some of the recent comments will trigger a few minor > revisions in the near future. > > Here are a few answers to some of the public comments: > > Randall O'Reilly wrote: "I vaguely remember someone making an interesting > case a while back that it is the *last* person to invent something that > gets all the credit." Indeed, as I wrote in Science (2011, reference > [NASC3] in the report): "As they say: Columbus did not become famous > because he was the first to discover America, but because he was the last." > Sure, some people sometimes assign the "inventor" title to the person that > should be truly called the "popularizer." Frequently, this is precisely due > to the popularizer packaging the work of others in such a way that it > becomes easily digestible. But this is not to say that their receipt of the > title is correct or that we shouldn't do our utmost to correct it; their > receipt of such title over the ones that are actually deserving of it is > one of the most enduring issues in scientific history. > > As Stephen José Hanson wrote: "Well, to popularize is not to invent. Many > of Juergen's concerns could be solved with some scholarship, such that > authors look sometime before 2006 for other relevant references." > > Randy also wrote: "Sometimes, it is not the basic equations etc that > matter: it is the big picture vision." However, the same vision has almost > always been there in the earlier work on neural nets. It's just that the > work was ahead of its time.
It's only in recent years that we have the > datasets and the computational power to realize those big picture visions. > I think you would agree that simply scaling something up isn't the same as > inventing it. If it were, then the name "Newton" would have little meaning > to people nowadays. > > Jonathan D. Cohen wrote: " ...it is also worth noting that science is an > *intrinsically social* endeavor, and therefore communication is a > fundamental factor." Sure, but let's make sure that this cannot be used as > a justification of plagiarism! See Sec. 5 of the report. > > Generally speaking, if B plagiarizes A but inspires C, whom should C cite? > The answer is clear. > > Ponnuthurai Nagaratnam Suganthan wrote: "The name `deep learning' came > about recently." Not so. See references in Sec. X of the report: the > ancient term "deep learning" (explicitly mentioned by ACM) was actually > first introduced to Machine Learning by Dechter (1986), and to NNs by > Aizenberg et al (2000). > > Tsvi Achler wrote: "Models which have true feedback (e.g. back to their > own inputs) cannot learn by backpropagation but there is plenty of evidence > these types of connections exist in the brain and are used during > recognition. Thus they get ignored: no talks in universities, no featuring > in `premier' journals and no funding. [...] Lastly Feedforward methods are > predominant in a large part because they have financial backing from large > companies with advertising and clout like Google and the self-driving craze > that never fully materialized." This is very misleading - see Sec. A, B, > and C of the report which are about recurrent nets with feedback, > especially LSTM, heavily used by Google and others, on your smartphone > since 2015. Recurrent NNs are general computers that can compute anything > your laptop can compute, including any computable model with feedback "back > to the inputs."
My favorite proof from over 30 years ago: a little > subnetwork can be used to build a NAND gate, and > a big recurrent network of NAND gates can emulate the CPU of your > laptop. (See also answers by Dan Levine, Gary Cottrell, and Juyang Weng.) > However, as Asim Roy pointed out, this discussion deviates from the > original topic of improper credit assignment. Please use another thread for > this. > > Randy also wrote: "Should Newton be cited instead of Rumelhart et al, for > backprop, as Steve suggested? Seriously, most of the math powering today's > models is just calculus and the chain rule." This is so misleading in > several ways - see Sec. XII of the report: "Some claim that > `backpropagation is just the chain rule of Leibniz (1676) & L'Hopital > (1696).' No, it is the efficient way of applying the chain rule to big > networks with differentiable nodes (there are also many inefficient ways of > doing this). It was not published until 1970" by Seppo Linnainmaa. Of > course, the person to cite is Linnainmaa. > > Randy also wrote: "how little Einstein added to what was already > established by Lorentz and others". Juyang already respectfully objected to > this misleading statement. > > I agree with what Anand Ramamoorthy wrote: "Setting aside broader aspects > of the social quality of the scientific enterprise, let's take a look at a > simpler thing; individual duty. Each scientist has a duty to science (as an > intellectual discipline) and the scientific community, to uphold > fundamental principles informing the conduct of science. Credit should be > given wherever it is due - it is a matter of duty, not preference or > `strategic value' or boosting someone because they're a great populariser. > ...
Crediting those who disseminate is fine and dandy, but should be for > those precise contributions, AND the originators of an idea/method/body of > work ought to be recognised - this is perhaps a bit difficult when the work > is obscured by history, but not impossible. At any rate, if one has novel > information of pertinence w.r.t original work, then the right action is > crystal clear." > > See also Sec. 5 of the report: "As emphasized earlier:[DLC][HIN] `The > inventor of an important method should get credit for inventing it. They > may not always be the one who popularizes it. Then the popularizer should > get credit for popularizing it - but not for inventing it.' If one > "re-invents" something that was already known, and only becomes aware of it > later, one must at least clarify it later, and correctly give credit in > follow-up papers and presentations." > > I also agree with what Zhaoping Li wrote: "I would find it hard to enter a > scientific community if it is not scholarly. Each of us can do our bit to > be scholarly, to set an example, if not a warning, to the next generation." > > Randy also wrote: "Outside of a paper specifically on the history of a > field, does it really make sense to "require" everyone to cite obscure old > papers that you can't even get a PDF of on google scholar?" This sounds > almost like a defense of plagiarism. That's what time stamps of patents and > papers are for. A recurring point of the report is: the awardees did not > cite the prior art - not even in later surveys written when the true > origins of this work were well-known. > > Here I fully agree with what Marina Meila wrote: "Since credit is a form > of currency in academia, let's look at the `hard currency' rewards of > invention. Who gets them? The first company to create a new product usually > fails. 
However, the interesting thing is that society (by this I mean the > society most of us work in) has found it necessary to counteract this, > and we have patent laws to protect the rights of the inventors. The point > is not whether patent laws are effective or not, it's the social norm they > implement: that to protect invention one should pay attention to rewarding > the original inventors, whether we get the `product' directly from them or > not." > > Jürgen > -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From achler at gmail.com Wed Nov 17 18:56:12 2021 From: achler at gmail.com (Tsvi Achler) Date: Wed, 17 Nov 2021 15:56:12 -0800 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <0659F820-64CD-4BF3-B7EC-727B7D146565@supsi.ch> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> <3268601b-397d-3c44-da5a-29b330bb5cf5@rubic.rutgers.edu> <0659F820-64CD-4BF3-B7EC-727B7D146565@supsi.ch> Message-ID: Ultimately this controversy and others around corruption in academia arise from the fact that academia is governed by self-selected committees (a system established in the Middle Ages), so that putting oneself in a position to govern becomes more important than research. The issue with the ACM is the tip of a greater problem that includes a high rate of non-replicability, early innovators being ignored and not properly cited, and where popularization gets priority over novelty. Statements like "they were ahead of their time" are a way of saying the academic egos at the time would not accept it.
Those that were ultimately able to popularize new ideas may simply have had the fortune of being alive at the right time. Moreover this continues to happen today and in this group. I think in order to really change things the root cause needs to change. Otherwise some may perceive this work as just an exercise of academics jostling for governing positions, and ultimately having very little to do with moving research forward. The National Bureau of Economics quantified mechanisms of how politics in academia affect the ability to publish and popularize within two articles that I referenced in a previous message. Moreover I think even more costly than crediting the wrong people, as in the ACM case, is the inhibition of novel ideas, and in that vein it seems no one is immune. I noticed Jürgen dismissed my claim of a novel approach by saying it must be an RNN, even though I specifically wrote that those in the field simply assume it is a recurrent network and that this is part of the difficulty of presenting novel ideas in today's environment. But admittedly discussing my approach on this thread is off topic, and furthermore I have been specifically asked by Jürgen not to write about it, so I respectfully will not elaborate here. However, a discussion of how to avoid this problem today would greatly tie this article together and make it relevant to the present day. We live in the information age, where companies such as Google, Wikipedia and others have demonstrated for us alternate ways to rate, digest, present (and even fund) information through less political methods. Thus I think without a discussion about the continuing situation today, or a commitment to modify current trajectories, this article may become just another snapshot of the academic egos of the time, and we will need a new article discussing what went wrong for the next phase of research that will eventually materialize. Sincerely, -Tsvi PS.
Also see my video list on ideas to improve Academia for more details https://www.youtube.com/playlist?list=PLM3bZImI0fj3rM3ZrzSYbfozkf8m4102j On Tue, Nov 16, 2021 at 11:53 PM Schmidhuber Juergen wrote: > In a mature field like math we'd never have such a discussion. > > It is well-known that plagiarism may be unintentional (e.g., > https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism). > However, since nobody can read the minds and intentions of others, science > & tech came up with a _formal_ way of establishing priority: the time > stamps of publications and patents. If you file your patent one day after > your competitor, you are scooped, no matter whether you are the original > inventor, a re-inventor, a great popularizer, or whatever. (In the present > context, however, we are mostly talking about decades rather than days.) > > Randy wrote: "For example, I cite Rumelhart et al (1986) for backprop, > because that is how I and most other people in the modern field learned > about this idea, and we know for a fact that they genuinely reinvented it > and conveyed its implications in a very compelling way. If I might be > writing a paper on the history of backprop, or some comprehensive review, > then yes it would be appropriate to cite older versions that had limited > impact, being careful to characterize the relationship as one of > reinvention." > > See Sec. XVII of the report: "The deontology of science requires: If one > `re-invents' something that was already known, and only becomes aware of it > later, one must at least clarify it later, and correctly give credit in all > follow-up papers and presentations." > > In particular, along the lines of Randy's remarks on historic surveys: > from a survey like the 2021 Turing lecture you'd expect correct credit > assignment instead of additional attempts at getting credit for work done > by others. See Sec.
2 of the report: > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html#lbhacm > > I also agree with what Barak Pearlmutter wrote on the example of > backpropagation: "So if you want to cite, say, Seppo Linnainmaa for Reverse > Mode Automatic Differentiation, you need to have actually read it yourself. > Otherwise you need to do a bounce citation: `Linnainmaa (1982) described by > Schmidhuber (2021) as exhibiting a Fortran implementation of Reverse Mode > Automatic Differentiation' or something like that.'" Indeed, you don't have > to read Linnainmaa's original 1970 paper on what's now called > backpropagation, you can read his 1976 journal paper (in English), or > Griewank's 2012 paper "Who invented the reverse mode of differentiation?" > in Documenta Mathematica, or other papers on this famous subject, some of > them cited in my report, which also cites Werbos, who first applied the > method to NNs in 1982 (but not yet in his 1974 thesis). The report > continues: "By 1985, compute had become about 1,000 times cheaper than in > 1970, and the first desktop computers had just become > accessible in wealthier academic labs. Computational experiments then > demonstrated that backpropagation can yield useful internal representations > in hidden layers of NNs.[RUM] But this was essentially just an experimental > analysis of a known method.[BP1-2] And the authors did not cite the prior > art - not even in later surveys.[DL3,DL3a][DLC]" > > I find it interesting what Asim Roy wrote: "In 1975 Tjalling Koopmans and > Leonid Kantorovich were awarded the Nobel Prize in Economics for their > contribution in resource allocation and linear programming. Many > professionals, Koopmans and Kantorovich included, were surprised at > Dantzig's exclusion as an honoree. Most individuals familiar with the > situation considered him to be just as worthy of the prize. [...]
> (Unbeknownst to Dantzig and most other operations researchers in the West, > a similar method was derived eight years prior by Soviet mathematician > Leonid V. Kantorovich)." Let's not forget, however, that there is no "Nobel > Prize in Economics!" (See also this 2010 paper: > https://people.idsia.ch/~juergen/nobelshare.html) > > Jürgen > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juyang.weng at gmail.com Wed Nov 17 15:19:44 2021 From: juyang.weng at gmail.com (Juyang Weng) Date: Wed, 17 Nov 2021 15:19:44 -0500 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. (Maria Kesa) Message-ID: "My personal take and you can all kiss my ass message" This message should be blocked by the moderator of connectionists at cmu. Please remove it from the archive. I would like to thank the moderator of connectionists at cmu for his willingness to keep transparency. This is a personal attack, and many sentences in the link provided also amount to violations of Robert's Rules of Order, which ban personal attacks in democratic discussions. I would like to review personal attacks: Personal attack - Wiktionary https://en.wiktionary.org/wiki/personal_attack Basically, it is a personal attack on an arguer that brings the individual's personal circumstances, trustworthiness, or character into question. Argue about the issue on the floor, instead of the personal circumstances of an arguer. Is "Maria Kesa" a real name? -John ---- Message: 4 Date: Mon, 15 Nov 2021 11:11:44 +0100 From: Maria Kesa To: "Randall O'Reilly" Cc: Connectionists Connectionists Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From blink.yu at mdpi.com Thu Nov 18 01:33:53 2021 From: blink.yu at mdpi.com (Mr.
Blink Yu) Date: Thu, 18 Nov 2021 14:33:53 +0800 Subject: Connectionists: Call for Paper: [Computers] Special Issue--"Survey in Deep Learning for IoT Applications" Message-ID: [Apologies if you receive multiple copies of this message] ==================================== Special Issue: "Survey in Deep Learning for IoT Applications" Deadline for manuscript submissions: 31 December 2022 Website: https://www.mdpi.com/journal/computers/special_issues/DL_iot Guest Editors: Dr. Rytis Maskeliunas, Prof. Dr. Robertas Damaševičius, University of Technology, Kaunas, Lithuania. ============= Dear Colleagues, In recent years, methods and dedicated communication channels of the Internet of Things (IoT) have been developed to detect and collect all kinds of information to deliver a variety of advanced services and applications, generating huge amounts of data, constantly received from millions of IoT sensors deployed around the world. The techniques behind deep learning now play an important role in desktop and mobile applications and are now entering the resource-constrained IoT sector, enabling the development of more advanced IoT applications, with proven results in a variety of areas already, including image recognition, medical data analysis, information retrieval, language recognition, natural language processing, indoor location, autonomous vehicles, smart cities, sustainability, pollution, bioeconomy, etc.
This Special Issue focuses on the research and application of the Internet of Things, with a focus on multimodal signal processing, sensor extraction, data visualization and understanding, and other related topics. It aims to answer questions such as: which deep neural network structures can efficiently process and integrate multimodal sensor input data for various IoT applications; how to adapt current designs and develop new ones to help reduce the resource cost of running deep learning models for efficient deployment on IoT devices; how to correctly calculate reliability measurements in deep learning predictions for IoT applications within limited and constrained calculation requirements; and how to reduce the need for labeled IoT signal data under operational limitations, among other key areas. Keywords Internet of Things Deep learning Data fusion Multimodal signal processing Data processing and visualization Prof. Dr. Robertas Damaševičius Dr. Rytis Maskeliunas Guest Editors ============ -- Mr. Blink Yu Managing Editor E-Mail: blink.yu at mdpi.com Skype: live:c91693ac8277e1f0 -- MDPI Wuhan Office No.6 Jingan Road, 430064 Wuhan, China http://www.mdpi.com -- Disclaimer: MDPI recognizes the importance of data privacy and protection. We treat personal data in line with the General Data Protection Regulation (GDPR) and with what the community expects of us. The information contained in this message is confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this message in error, please notify me and delete this message from your system. You may not copy this message in its entirety or in part, or disclose its contents to anyone.
From jaime at unex.es Wed Nov 17 18:52:17 2021 From: jaime at unex.es (=?utf-8?B?SmFpbWUgR2Fsw6FuLUppbcOpbmV6?=) Date: Thu, 18 Nov 2021 00:52:17 +0100 Subject: Connectionists: [CFP] NOMS 2022 Workshop on Intelligence Provisioning for Network and Service Management in Softwarized Networks (IPSN) Message-ID: *[Please accept our apologies if you receive multiple copies of this announcement]* * * * * * * * * * CALL FOR PAPERS 1st International Workshop on Intelligence Provisioning for Network and Service Management in Softwarized Networks (IPSN) in conjunction with IEEE/IFIP NOMS 2022 https://ipsn2022.spilab.es/ * * * * * * * * * Thanks to the rapid growth in network bandwidth and connectivity, networks and distributed systems have become critical infrastructures that underpin much of today's Internet services. However, networks are highly complex, dynamic and time-varying systems, such that the statistical properties of networks and network traffic cannot be easily modeled. Moreover, the trend towards highly integrated networks with diverse underlying access technologies, supporting multiple vertical industries simultaneously, has made network management operations increasingly complex. With the advent of Artificial Intelligence (AI) and Machine Learning (ML) techniques, along with the flexibility and programmability provided by so-called softwarized networks and their enablers, Software-Defined Networking (SDN) and Network Function Virtualization (NFV), the challenges associated with managing forthcoming networks can be tackled by combining AI/ML techniques with softwarized networks.
The main goal of the IPSN workshop is to present state-of-the-art research results and experience reports in the area of AI/ML for network management in softwarized networks, addressing topics such as artificial intelligence techniques and models for network and service management in softwarized networks; smart service orchestration; training at the constrained edge; dynamic Service Function Chaining; intent- and policy-based management; centralized vs. distributed control of SDN/NFV-based networks; analytics and big data approaches; and knowledge creation and decision making. This workshop offers a timely venue for researchers and industry partners to present and discuss their latest results in the application of network intelligence to the management of softwarized networks. Topics of interest include, but are not limited to: - Data-driven management of software-defined networks - Deep and reinforcement learning for networking and communications - Experiences and best practices using AI/ML in operational networks - Fault-tolerant network protocols exploiting AI/ML methods - Implications and challenges brought by computer networks to machine learning theory and algorithms - Innovative architectures and infrastructures for intelligent networks - Intelligent energy-aware/green softwarized networks - Intent & policy-based management for intelligent networks - Methodologies for network problem diagnosis, anomaly detection and prediction - Network security based on AI/ML techniques in softwarized networks - Open-source networking optimization tools for AI/ML applications - Protocol design and optimization using AI/ML in softwarized networks - Reliability, robustness and safety based on AI/ML techniques - Routing optimization based on flow prediction in softwarized networks - Self-learning and adaptive networking protocols and algorithms for softwarized networks - AI/ML for network management and orchestration in softwarized networks - AI/ML for network slicing
optimization in softwarized networks - AI/ML for service placement and dynamic Service Function Chaining in softwarized networks - AI/ML for C-RAN resource management and medium access control - AI/ML for multimedia networking in softwarized networks - AI/ML support for ultra-low latency applications in softwarized networks ** IMPORTANT DATES ** Paper Submission: January 9, 2022 Acceptance Notification: February 13, 2022 Camera-Ready Submission: February 27, 2022 Registration Deadline: March 1, 2022 Jaime Galán-Jiménez Assistant Professor University of Extremadura (Spain) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tuokall at gmail.com Thu Nov 18 03:43:25 2021 From: tuokall at gmail.com (Tuomo Kalliokoski) Date: Thu, 18 Nov 2021 10:43:25 +0200 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. (Maria Kesa) In-Reply-To: References: Message-ID: Hi, My two cents: with a little bit of googling, my assumption is that there is a real Maria Kesa working in this field and a fake Maria Kesa who is pushing that kind of stuff. Or she has mental issues which need medical attention. Tuomo Kalliokoski ps. Many great scientists have had mental issues, so that is not a thing to hold against her, but she should have support from her peers. I've had my share of depression, even though I don't count myself among the greats. On Thu, Nov 18, 2021 at 10:06 AM Juyang Weng wrote: > "My personal take and you can all kiss my ass message" > This message should be blocked by the moderator of connectionists at cmu. > Please remove it from the archive. I would like to thank the moderator of > connectionists at cmu for his willingness to keep the transparency. > > This is a personal attack, and so are many sentences in the link provided, > amounting to violations of Robert's Rules of Order, which ban personal > attacks in democratic discussions. > > I would like to review personal attacks: > Personal attack - Wiktionary > https://en.wiktionary.org › wiki › personal_attack > Basically, it is a personal attack on an arguer that brings the > individual's personal circumstances, trustworthiness, or character into > question. > > Argue about the issue on the floor, instead of the personal circumstances > of an arguer. > > Is "Maria Kesa" a real name? > > -John > ---- > Message: 4 > Date: Mon, 15 Nov 2021 11:11:44 +0100 > From: Maria Kesa > To: "Randall O'Reilly" > Cc: Connectionists Connectionists > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing > Lecture, etc. > > -- > Juyang (John) Weng > -- "Life is too serious to be taken seriously" "Elämä
on liian vakavaa otettavaksi vakavasti" -------------- next part -------------- An HTML attachment was scrubbed... URL: From q.huys at ucl.ac.uk Thu Nov 18 05:31:14 2021 From: q.huys at ucl.ac.uk (Quentin Huys) Date: Thu, 18 Nov 2021 10:31:14 +0000 Subject: Connectionists: postdoc positions in computational psychiatry at UCL Message-ID: <20211118103114.q2dff3eyisaz5utx@Qd> (Apologies for cross-posting) Two postdoctoral positions available at the UCL Applied Computational Psychiatry Lab (www.acplab.org, PI Quentin Huys). The Wellcome Trust-funded project will combine cognitive probes with computational modelling and MEG imaging to better understand the algorithmic structure of maladaptive thinking patterns in depression, and how they relate to antidepressants, relapse and serotonin. These projects are an exciting opportunity to apply advanced computational and neuroimaging methods to clinically relevant problems. The Applied Computational Psychiatry lab is situated within the UCL Max Planck Centre for Computational Psychiatry and Ageing Research and the Division of Psychiatry. 
Official job advertisement is here: https://atsv7.wcn.co.uk/search_engine/jobs.cgi?amNvZGU9MTg4MDA2NyZ2dF90ZW1wbGF0ZT05NjUmb3duZXI9NTA0MTE3OCZvd25lcnR5cGU9ZmFpciZicmFuZF9pZD0wJmpvYl9yZWZfY29kZT0xODgwMDY3JnBvc3RpbmdfY29kZT02MzQ%3D&jcode=1880067&vt_template=965&owner=5041178&ownertype=fair&brand_id=0&job_ref_code=1880067&posting_code=634 From adrien.fois at loria.fr Thu Nov 18 06:54:31 2021 From: adrien.fois at loria.fr (Adrien Fois) Date: Thu, 18 Nov 2021 12:54:31 +0100 (CET) Subject: Connectionists: Internship - Unsupervised learning of optic flow with spiking neural networks Message-ID: <422215657.8491619.1637236471208.JavaMail.zimbra@loria.fr> Internship - Unsupervised learning of optic flow with spiking neural networks Supervisors: Bernard Girau and Adrien Fois Lab and team: LORIA (France), BISCUIT Contacts: [ mailto:bernard.girau at loria.fr | bernard.girau at loria.fr ], [ mailto:adrien.fois at loria.fr | adrien.fois at loria.fr ] Start: position open until filled Duration: 5-6 months Motivation and context Spiking neurons are considered the third generation of artificial neural models. These neural models take bio-mimicry a step further than their predecessors by communicating - in the manner of biological neurons - with spikes produced in time. A new dimension - the temporal dimension - thus allows information to be transmitted and processed on the fly, asynchronously. To take full advantage of their computing power and very low energy consumption, these spiking neuron models can be emulated directly in hardware. This is what Intel and IBM have done with their Loihi and TrueNorth neuromorphic processors, respectively. Loihi 2 integrates one million spiking neurons and 120 million programmable synapses. In the same bio-inspired vein, event-based cameras are gaining popularity.
Event-based cameras such as the DVS (Dynamic Vision Sensor) work analogously to the retina, transmitting information as a spike only when a local change in brightness - at the pixel level - is detected. This asynchronous processing of visual information brings great advantages: 1) a sampling speed nearly a million times faster than standard cameras, 2) a latency of one microsecond, and 3) a dynamic range of 130 dB (standard cameras have only 60 dB). All this with significantly lower power consumption than standard cameras. When an organism equipped with a visual system moves through its environment, or observes a moving object while remaining static, it perceives a relative motion between itself and its environment. This motion appears to it in the form of spatio-temporal patterns, called optical flow. Estimating the optical flow is an essential task for the organism: this information allows it to better estimate its own movement and thus to navigate its environment. These problems also carry over to autonomous robotics and drones. This internship, which will last at least 5 months, is at the crossroads of these different fields. Goals and Objectives The goal is to use an event-driven camera and to process its data with a spiking neural network equipped with unsupervised learning rules. The intended application is optical flow estimation.
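[Editor's note: the change-driven output described above can be illustrated with a short sketch. This is not code from the project; the function name, the contrast threshold, and the (t, y, x, polarity) event layout are all illustrative assumptions, mimicking only the general DVS principle of emitting an event when a pixel's log-brightness drifts past a threshold.]

```python
import numpy as np

def dvs_events(frames, threshold=0.2):
    """Simulate DVS-style events from a sequence of intensity frames.

    Emits (t, y, x, polarity) whenever a pixel's log-brightness has
    changed by more than `threshold` since that pixel's last event,
    mimicking the asynchronous, change-driven output of an event camera.
    """
    eps = 1e-6                          # avoid log(0)
    ref = np.log(frames[0] + eps)       # per-pixel reference log-brightness
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        logf = np.log(frame + eps)
        diff = logf - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
            ref[y, x] = logf[y, x]      # reset reference at this pixel
    return events
```

A bright spot moving across the array then yields a sparse spatio-temporal stream of ON/OFF events, which is the kind of input a spiking network with an STDP-type rule would consume.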
This will include: - carrying out a bibliographical study on methods for unsupervised optical flow estimation with spiking neural networks - adapting a learning rule of the STDP type, developed within the team, to the targeted application - integrating this rule into spiking neural networks - proposing adaptations compatible with a hardware implementation on a neuromorphic processor - implementing and testing the architecture with TensorFlow A background in computer science (with a foundation in artificial intelligence) or computational neuroscience is expected, as well as strong programming skills. The internship will take place in France at the LORIA laboratory, in the BISCUIT team. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sanjay.ankur at gmail.com Thu Nov 18 07:05:11 2021 From: sanjay.ankur at gmail.com (Ankur Sinha) Date: Thu, 18 Nov 2021 12:05:11 +0000 Subject: Connectionists: Next INCF/OCNS Software WG Dev session: Software Citation Principles: November 22, 2021, 1600 UTC (Daniel S Katz, Neil Chue Hong) Message-ID: <20211118120511.ua6oywmvxgemlnp5@ankur.workstation> Dear all, Apologies for the cross posts. In the next dev session of the INCF/OCNS Software Working Group, Daniel S Katz and Neil Chue Hong will discuss principles of citing research software. https://ocns.github.io/SoftwareWG/2021/11/01/dev-session-daniel-s-katz-neil-chue-hong-software-citation-principles.html Where: Zoom: https://ucl.zoom.us/j/94578141033?pwd=SlZBcEluT2svUWhseGFHMUVLWFB0UT09 When: November 22, 2021, 1600 UTC The abstract for the talk is below: ----------------------------------- Today, software is a key part of almost all research, and the development and maintenance of this software is done by human beings, often in academia. It's essential that these activities be seen as a valued element of scholarly work so that the people who do it have career paths that encourage them to continue to do it.
One means of accomplishing this is to use the existing scholarly citation system. The FORCE11 Software Citation working group[1] developed a set of Software Citation Principles in 2016[2], and since then, the FORCE11 Software Citation Implementation working group[3] has been working with many types of stakeholders, including publishers, repositories, and standards bodies, to implement the principles. We'll talk about this work, the changes we've seen, what still needs to be done, and how the INCF community can participate. [1] https://www.force11.org/group/software-citation-working-group [2] https://www.force11.org/software-citation-principles [3] https://www.force11.org/group/software-citation-implementation-working-group The session is open to all, but requires you to log in to Zoom to limit spam. Please do forward this to your colleagues. On behalf of the INCF/OCNS Software working group, -- Thanks, Regards, Ankur Sinha (He / Him / His) Research Fellow at the Silver Lab | http://silverlab.org/ Department of Neuroscience, Physiology, & Pharmacology University College London, London, UK Time zone: Europe/London From fabio.bellavia at unifi.it Thu Nov 18 07:43:00 2021 From: fabio.bellavia at unifi.it (Fabio Bellavia) Date: Thu, 18 Nov 2021 13:43:00 +0100 Subject: Connectionists: [CfP] International workshop on "Fine Art Pattern Extraction and Recognition (FAPER 2022)" at ICIAP 2021 Message-ID: <966c1ffc-9e9a-4b50-d3a7-9432a3ab443d@unifi.it> Call for Papers -- FAPER 2022 ---===== Apologies for cross-postings =====--- Please distribute this call to interested parties ________________________________________________________________________ International Workshop on Fine Art Pattern Extraction and Recognition F A P E R 2 0 2 2 in conjunction with the 21st International Conference on Image Analysis and Processing (ICIAP 2021)
Lecce, Italy, May 23-27, 2022 >>> https://sites.google.com/view/faper2022 <<< *** Submission deadline: March 1, 2022 *** -> Submission link: https://easychair.org/conferences/?conf=faper2022 <- [[[ both virtual and in-person attendance ]]] ________________________________________________________________________ === Aim & Scope === Cultural heritage, especially fine arts, plays an invaluable role in the cultural, historical and economic growth of our societies. Fine arts are primarily developed for aesthetic purposes and are mainly expressed through painting, sculpture and architecture. In recent years, thanks to technological improvements and drastic cost reductions, a large-scale digitization effort has been made, leading to an increasing availability of large digitized fine art collections. This availability, coupled with recent advances in pattern recognition and computer vision, has opened new opportunities, especially for researchers in these fields, to assist the art community with automatic tools to further analyze and understand fine arts. Among other benefits, a deeper understanding of fine arts has the potential to make them more accessible to a wider population, both in terms of enjoyment and creation, thus supporting the spread of culture. Following the success of the first edition, organized in conjunction with ICPR 2020, the aim of the workshop is to provide an international forum for those wishing to present advancements in the state of the art, innovative research, ongoing projects, and academic and industrial reports on the application of visual pattern extraction and recognition for a better understanding and appreciation of fine arts. The workshop solicits contributions from diverse areas such as pattern recognition, computer vision, artificial intelligence and image processing.
=== Topics === Topics of interest include, but are not limited to: - Application of machine learning and deep learning to cultural heritage and digital humanities - Computer vision and multimedia data processing for fine arts - Generative adversarial networks for artistic data - Augmented and virtual reality for cultural heritage - 3D reconstruction of historical artifacts - Point cloud segmentation and classification for cultural heritage - Historical document analysis - Content-based retrieval in the art domain - Speech, audio and music analysis from historical archives - Digitally enriched museum visits - Smart interactive experiences in cultural sites - Projects, products or prototypes for cultural heritage restoration, preservation and enjoyment - Visual question answering and artwork captioning - Art history and computer vision === Invited speaker === Eva Cetinic (Digital Visual Studies, University of Zurich, Switzerland) - "Beyond Similarity: From Stylistic Concepts to Computational Metrics" Dr. Eva Cetinic is currently working as a postdoctoral fellow at the Center for Digital Visual Studies at the University of Zurich. She previously worked as a postdoc in Digital Humanities and Machine Learning at the Department of Computer Science, Durham University, and as a postdoctoral researcher and professional associate at the Ruđer Bošković Institute in Zagreb. She obtained her Ph.D. in Computer Science from the Faculty of Electrical Engineering and Computing, University of Zagreb, in 2019 with the thesis titled "Computational detection of stylistic properties of paintings based on high-level image feature analysis". Besides being generally interested in the interdisciplinary field of digital humanities, her specific interests focus on studying new research methodologies rooted in the intersection of artificial intelligence and art history.
Particularly, she is interested in exploring deep learning techniques for computational image understanding and multi-modal reasoning in the context of visual art. === Workshop modality === The workshop will be held in hybrid form; both virtual and in-person participation will be allowed. === Submission guidelines === Accepted manuscripts will be included in the ICIAP 2021 proceedings, which will be published by Springer in the Lecture Notes in Computer Science (LNCS) series. Authors of selected papers will be invited to extend and improve their contributions for a Special Issue of IET Image Processing. Please follow the guidelines provided by Springer when preparing your contribution. The maximum length is 10 pages + 2 pages for references. Each contribution will be reviewed on the basis of originality, significance, clarity, soundness, relevance and technical content. Once accepted, the presence of at least one author at the event and the oral presentation of the paper are expected. Please submit your manuscript through EasyChair: https://easychair.org/conferences/?conf=faper2022 === Important Dates === - Workshop submission deadline: March 1, 2022 - Author notification: March 10, 2022 - Camera-ready submission and registration: March 15, 2022 - Finalized workshop program: TBA - Workshop day: TBA === Organizing committee === Gennaro Vessio (University of Bari, Italy) Giovanna Castellano (University of Bari, Italy) Fabio Bellavia (University of Palermo, Italy) Sinem Aslan (University of Venice, Italy | Ege University, Turkey) === Venue === The workshop will be hosted at Convitto Palmieri, located in Piazzetta Giosuè Carducci, Lecce, Italy ____________________________________________________ Contacts: gennaro.vessio at uniba.it giovanna.castellano at uniba.it fabio.bellavia at unipa.it
sinem.aslan at unive.it Workshop: https://sites.google.com/view/faper2022 ICIAP2021: https://www.iciap2021.org/ From ioannakoroni at csd.auth.gr Thu Nov 18 08:31:47 2021 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Thu, 18 Nov 2021 15:31:47 +0200 Subject: Connectionists: Live e-Lecture by Prof. Cees Snoek: "Real-World Learning", 23rd November 2021 17:00-18:00 CET. Upcoming AIDA AI excellence lectures Message-ID: <033401d7dc80$a3440eb0$e9cc2c10$@csd.auth.gr> Dear AI scientist/engineer/student/enthusiast, Prof. Cees Snoek (University of Amsterdam, Netherlands), a prominent AI researcher internationally, will deliver the e-lecture "Real-World Learning" on Tuesday, 23rd November 2021, 17:00-18:00 CET (8:00-9:00 am PST, 12:00 am-1:00 am CST); see details at: http://www.i-aida.org/event_cat/ai-lectures/ You can join for free using the Zoom link: https://authgr.zoom.us/s/96903010605 & Passcode: 148148 The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, is very pleased to offer you top-quality scientific lectures on several current hot AI topics. Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST, 12:00 am-1:00 am CST). Attendance is free. Other upcoming lectures: 1. Prof. Marios Polycarpou (University of Cyprus, Cyprus), 7th December 2021, 17:00-18:00 CET. 2. Prof. Bernhard Rinner (Universität Klagenfurt, Austria), 11th January 2022, 17:00-18:00 CET. More lecture info at: https://www.i-aida.org/event_cat/ai-lectures/?type=future The lectures are disseminated through multiple channels and email lists (we apologize if you received this through various channels). If you want to stay informed about future lectures, you can register in the AIDA email list and the CVML email list. Best regards Profs. M. Chetouani, P. Flach, B. O'Sullivan, I.
Pitas, N. Sebe -- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurenz.wiskott at rub.de Thu Nov 18 09:08:26 2021 From: laurenz.wiskott at rub.de (Laurenz Wiskott) Date: Thu, 18 Nov 2021 15:08:26 +0100 Subject: Connectionists: [jobs] Position as Scientist/Project Manager in "Human Centered AI" in Bochum, Germany (good command of German required) Message-ID: <20211118140826.GQ2979@curry> Dear Connectionists, I would like to draw your attention to an interesting job announcement within the "Human Centered AI Network" (humAIne) in Bochum, Germany. It is a full-time position combining science and project management, with a very broad base in different disciplines and many industry contacts. There is considerable freedom to define your own scientific profile, from the work sciences to machine learning and artificial intelligence and more. A good command of German is required, though. https://www.stellenwerk-bochum.de/jobboerse/wiss-ma-e13-3983-stdwoche-bochum-211115-503071 Best regards, Laurenz Wiskott. __________________________________________________________________________ Prof. Dr. Laurenz Wiskott room: NB 3/29 Institut für Neuroinformatik phone: +49 234 32-27997 Fakultät für Informatik fax: +49 234 32-14210 Ruhr-Universität Bochum laurenz.wiskott at rub.de D-44780 Bochum, Germany https://www.ini.rub.de/PEOPLE/wiskott/ __________________________________________________________________________ From danko.nikolic at gmail.com Thu Nov 18 10:17:24 2021 From: danko.nikolic at gmail.com (Danko Nikolic) Date: Thu, 18 Nov 2021 16:17:24 +0100 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> <3268601b-397d-3c44-da5a-29b330bb5cf5@rubic.rutgers.edu> <0659F820-64CD-4BF3-B7EC-727B7D146565@supsi.ch> Message-ID: I have watched Tsvi's YouTube videos, the entire list. I recommend that everyone watch them. Mind-opening. Danko On Thu, 18 Nov 2021, 08:39 Tsvi Achler wrote: > Ultimately, this controversy and others around corruption in academia > arise from the fact that academia is governed by self-selected committees > (a system established in the Middle Ages), so that putting oneself in a > position to govern becomes more important than research. > > The issue with the ACM is the tip of a greater problem that includes a > high rate of non-replicability, early innovators being ignored and not > properly cited, and popularization getting priority over novelty. > > Statements like "they were ahead of their time" are a way of saying the > academic egos of the time would not accept it. Those who were ultimately > able to popularize new ideas may simply have had the fortune of being alive > at the right time. > > Moreover, this continues to happen today, and in this group. > > I think in order to really change things the root cause needs to change. > Otherwise some may perceive this work as just an exercise of > academics jostling for governing positions, ultimately having very > little to do with moving research forward. > > The National Bureau of Economic Research quantified mechanisms by which politics in > academia affects the ability to publish and popularize, in two articles > that I referenced in a previous message.
> > Moreover, I think even more costly than crediting the wrong people, as in > the ACM case, is the inhibition of novel ideas, and in that vein it seems no > one is immune. > > I noticed Jürgen dismissed my claim of a novel approach by saying it must > be an RNN, even though I specifically wrote that those in the field simply > assume it is a recurrent network and that this is part of the difficulty of > presenting novel ideas in today's environment. > > But admittedly, discussing my approach on this thread is off topic, and > furthermore I have been specifically asked by Jürgen not to write about it, > so I respectfully will not elaborate here. > > However, a discussion of how to avoid this problem today would greatly tie > this article together and make it relevant to the present day. We live in > the information age, where companies such as Google, Wikipedia and others > have demonstrated for us alternate ways to rate, digest, present > (and even fund) information through less political methods. > > Thus I think that without a discussion of the continuing situation today, or a > commitment to modify current trajectories, this article may become just > another snapshot of the academic egos of the time, and we will need a new > article discussing what went wrong for the next phase of research that will > eventually materialize. > > Sincerely, > > -Tsvi > > PS. Also see my video list on ideas to improve academia for more details: > https://www.youtube.com/playlist?list=PLM3bZImI0fj3rM3ZrzSYbfozkf8m4102j > > On Tue, Nov 16, 2021 at 11:53 PM Schmidhuber Juergen > wrote: >> In a mature field like math we'd never have such a discussion. >> >> It is well known that plagiarism may be unintentional (e.g., >> https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism). >> However, since nobody can read the minds and intentions of others, science >> & tech came up with a _formal_ way of establishing priority: the time >> stamps of publications and patents.
If you file your patent one day after >> your competitor, you are scooped, no matter whether you are the original >> inventor, a re-inventor, a great popularizer, or whatever. (In the present >> context, however, we are mostly talking about decades rather than days.) >> >> Randy wrote: "For example, I cite Rumelhart et al (1986) for backprop, >> because that is how I and most other people in the modern field learned >> about this idea, and we know for a fact that they genuinely reinvented it >> and conveyed its implications in a very compelling way. If I might be >> writing a paper on the history of backprop, or some comprehensive review, >> then yes it would be appropriate to cite older versions that had limited >> impact, being careful to characterize the relationship as one of >> reinvention." >> >> See Sec. XVII of the report: "The deontology of science requires: If one >> `re-invents' something that was already known, and only becomes aware of it >> later, one must at least clarify it later, and correctly give credit in all >> follow-up papers and presentations." >> >> In particular, along the lines of Randy's remarks on historic surveys: >> from a survey like the 2021 Turing lecture you'd expect correct credit >> assignment instead of additional attempts at getting credit for work done >> by others. See Sec. 2 of the report: >> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html#lbhacm >> >> I also agree with what Barak Pearlmutter wrote on the example of >> backpropagation: "So if you want to cite, say, Seppo Linnainmaa for Reverse >> Mode Automatic Differentiation, you need to have actually read it yourself. 
>> Otherwise you need to do a bounce citation: `Linnainmaa (1982) described by >> Schmidhuber (2021) as exhibiting a Fortran implementation of Reverse Mode >> Automatic Differentiation' or something like that." Indeed, you don't have >> to read Linnainmaa's original 1970 paper on what's now called >> backpropagation, you can read his 1976 journal paper (in English), or >> Griewank's 2012 paper "Who invented the reverse mode of differentiation?" >> in Documenta Mathematica, or other papers on this famous subject, some of >> them cited in my report, which also cites Werbos, who first applied the >> method to NNs in 1982 (but not yet in his 1974 thesis). The report >> continues: "By 1985, compute had become about 1,000 times cheaper than in >> 1970, and the first desktop computers had just become >> accessible in wealthier academic labs. Computational experiments then >> demonstrated that backpropagation can yield useful internal representations >> in hidden layers of NNs.[RUM] But this was essentially just an experimental >> analysis of a known method.[BP1-2] And the authors did not cite the prior >> art - not even in later surveys.[DL3,DL3a][DLC]" >> >> I find it interesting what Asim Roy wrote: "In 1975 Tjalling Koopmans and >> Leonid Kantorovich were awarded the Nobel Prize in Economics for their >> contribution in resource allocation and linear programming. Many >> professionals, Koopmans and Kantorovich included, were surprised at >> Dantzig's exclusion as an honoree. Most individuals familiar with the >> situation considered him to be just as worthy of the prize. [...] >> (Unbeknownst to Dantzig and most other operations researchers in the West, >> a similar method was derived eight years prior by Soviet mathematician >> Leonid V. Kantorovich)." Let's not forget, however, that there is no "Nobel >> Prize in Economics!"
(See also this 2010 paper: >> https://people.idsia.ch/~juergen/nobelshare.html) >> >> Jürgen >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaser.amd at gmail.com Thu Nov 18 11:19:57 2021 From: yaser.amd at gmail.com (Yaser Jararweh) Date: Thu, 18 Nov 2021 11:19:57 -0500 Subject: Connectionists: Call for Track Proposals for the Information Processing & Management Conference (IP&MC2022), Due: 15 December 2021. Message-ID: [Apologies if you got multiple copies of this posting] Information Processing & Management Conference (IP&MC2022) Call for Track Proposals *20-23 October 2022 | Xiamen, China* IP&MC2022 is an innovative experiment in academic research and publishing. The Conference will seamlessly integrate with the journal Information Processing & Management, alongside offering an array of other activities. IP&MC2022 offers researchers the advantages of conference feedback and journal dissemination of their research in a single seamless stream. The conference will be organized in Tracks, which will also be Special Issues in the journal, Information Processing & Management. *Proposals for Conference Tracks Due: 15 December 2021* Submission Guidelines for Conference Tracks: https://www.elsevier.com/events/conferences/information-processing-and-management-conference/submit-proposal IP&MC2022 website: https://www.elsevier.com/events/conferences/information-processing-and-management-conference IP&MC2022 infographic of the conference-to-journal article flow: https://www.elsevier.com/__data/assets/pdf_file/0003/1211934/IPMC2022Timeline10Oct2022.pdf Please share with colleagues who may be interested. *IP&M has an impact factor of 6.222, a CiteScore of 8.6, and an h-index of 101.* Thanks much! Hope to see you in Xiamen! :-) Conference and Program Chairs * Conference Chair: Jim Jansen, Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar *
Program Chair: Prasenjit Mitra, Pennsylvania State University College of Information Science and Technology, USA If you have any questions about IP&MC, please send an email to Dr. Jim Jansen at: jjansen at acm.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwang at cse.ohio-state.edu Thu Nov 18 14:27:57 2021 From: dwang at cse.ohio-state.edu (Wang, Deliang) Date: Thu, 18 Nov 2021 19:27:57 +0000 Subject: Connectionists: Announcing Neural Networks Best Paper Award Message-ID: We are pleased to announce the recipient of the 2019 Best Paper Award: "Continual lifelong learning with neural networks: A review" by German Parisi, Ronald Kemker, Jose Part, Christopher Kanan, and Stefan Wermter, published in Neural Networks, volume 113, pp. 54-71, May 2019. More details about the award are given below: https://www.journals.elsevier.com/neural-networks/announcements/announcement-of-the-neural-networks-2019-best-paper-award The paper has open access: https://doi.org/10.1016/j.neunet.2019.01.012 Kenji Doya and DeLiang Wang Co-Editors-in-Chief Neural Networks -------------- next part -------------- An HTML attachment was scrubbed... URL: From huiyu_kang at fudan.edu.cn Fri Nov 19 01:55:41 2021 From: huiyu_kang at fudan.edu.cn (huiyu_kang at fudan.edu.cn) Date: Fri, 19 Nov 2021 14:55:41 +0800 Subject: Connectionists: Assistant/Associate Professor Positions in Neural Engineering and Neuro-technology Message-ID: <003b01d7dd12$77ca1080$675e3180$@fudan.edu.cn> Assistant/Associate Professor Positions in Neural Engineering and Neuro-technology About Neural and Intelligence Engineering Center, Fudan University Fudan University is one of the top five universities in China, ranked in the top 60 worldwide in the Times Higher Education World University Rankings. ISTBI is a leading institute in brain-inspired artificial intelligence, neuroimaging, biomedical big data and neural engineering in China.
The Neural and Intelligence Engineering Center conducts interdisciplinary research across neuromodulation, intelligent information and electronic engineering, and neurophysiology. More information can be found at https://www.fudan.edu.cn/en/ and https://istbi.fudan.edu.cn/lnen/. Research Areas: The positions are supported by the Shanghai Government Key Project of "Brain and Brain-inspired Intelligence", funded with 840 million RMB. Research Area 1. Neural Engineering Research fields include brain-computer interfaces, neural signal processing and modelling, neuromodulation, and relevant research in signal processing, artificial intelligence, electronics and medical devices. Research Area 2. Digital Healthcare Research fields include remote healthcare, wearable technology and medical devices, cognitive and behavioural intervention, brain diseases, and relevant research in big data analysis, human-machine interaction, cognitive neuroscience, clinical neurophysiology, and neuro-rehabilitation. Qualifications: * Strong ability to conduct high-quality independent research * Research experience in one of the above areas, or in neural signal processing, machine learning, biomedical electronics, or the neurophysiology of movement disorders, pain, depression, or sleep disorders. Employment Benefits: * A competitive salary and housing allowance, with active assistance in applying for a talent apartment. * Sufficient research start-up funds and research assistants, according to discipline orientation and work needs. * Various kinds of insurance and housing fund provided by the university in accordance with the relevant policies of the State and Shanghai Municipality. Application Procedure:
* A cover letter explaining the candidate's background, qualifications, and research interests * CV summarizing education, positions, academic or industrial work, and scientific publications * Send the CV and cover letter to shouyan at fudan.edu.cn and niec?_istbi at fudan.edu.cn with the e-mail subject Faculty Application -------------- next part -------------- An HTML attachment was scrubbed... URL: From dleeds at fordham.edu Thu Nov 18 21:56:16 2021 From: dleeds at fordham.edu (Daniel Leeds) Date: Thu, 18 Nov 2021 21:56:16 -0500 Subject: Connectionists: Faculty Position in Computational Neuroscience at Fordham University Message-ID: Fordham University invites applications for two tenure-track Assistant Professor positions in the Department of Computer and Information Science (CIS) to start in Fall 2022. We welcome candidates in the areas of computational neuroscience, artificial intelligence, bioinformatics, data science, theoretical computer science, and cybersecurity to apply. The positions require a Ph.D. in Computer Science or related fields, a commitment to teaching excellence, good communication skills, and demonstrated research potential with the ability to attract external research funding. Applications can be electronically submitted to Interfolio Scholar Services through the following links. The following are required: (1) Cover letter with qualifications, (2) Curriculum vitae, (3) Research Statement, (4) Teaching Statement, (5) Sample scholarship, and (6) At least three letters of recommendation. For candidates in computational neuroscience, bioinformatics and related fields, apply at: http://apply.interfolio.com/57950, and contact Dr. Daniel Leeds (dleeds at fordham.edu) for inquiries. For candidates in data science, AI, theoretical computer science and cybersecurity, apply at: http://apply.interfolio.com/98699, and contact Dr. Yanjun Li (yli at fordham.edu) for inquiries.
Applications will be accepted until the position is filled; however, it is recommended that you submit your application by March 1st, 2022. The Department of Computer and Information Sciences at Fordham University offers undergraduate programs, Master of Science programs in Computer Science, Data Science, and Cybersecurity, and (starting Fall 2022) a Ph.D. in Computer Science. The department actively participates in joint instruction and research through the interdisciplinary integrative neuroscience program, in collaboration with the Biology, Natural Sciences, and Psychology departments. For information about the department, visit http://www.cis.fordham.edu. Fordham University is proud to be one of only 13 institutions specifically designated in Ambassador Luce's bequest to receive funding in perpetuity to support women in STEM; this support includes opportunities for Fordham students as well as funding for a Clare Boothe Luce Professorship. The finalist for this faculty position may also be eligible for further consideration for a Clare Boothe Luce Professorship for beginning tenure-track faculty. Fordham is an independent, Catholic University in the Jesuit tradition in New York City committed to excellence through diversity. Fordham is an equal opportunity employer, and we especially encourage women, people of color, veterans and people with disabilities to apply.
From huiyu_kang at fudan.edu.cn Fri Nov 19 01:59:21 2021 From: huiyu_kang at fudan.edu.cn (huiyu_kang at fudan.edu.cn) Date: Fri, 19 Nov 2021 14:59:21 +0800 Subject: Connectionists: Postdoc positions in neuromodulation, digital therapeutics, e-health, cognitive neuroscience and clinical neurophysiology of neurological and psychiatric disorders Message-ID: <005a01d7dd12$fb205930$f1610b90$@fudan.edu.cn> Postdoc positions in neuromodulation, digital therapeutics, e-health, cognitive neuroscience and clinical neurophysiology of neurological and psychiatric disorders Areas of Research * Biomedical Engineering * Electronic Engineering * Neurological Diseases About Neural and Intelligence Engineering Center, Fudan University Fudan University is one of the top five universities in China, ranked in the top 60 worldwide in the Times Higher Education World University Rankings. ISTBI is a leading institute in brain-inspired artificial intelligence, neuroimaging, biomedical big data and neural engineering in China. The Neural and Intelligence Engineering Center conducts interdisciplinary research across neuromodulation, intelligent information and electronic engineering, and neurophysiology. More information can be found at https://www.fudan.edu.cn/en/ and https://istbi.fudan.edu.cn/lnen/. Research Areas: The positions are supported by the Shanghai Government Key Project of "Brain and Brain-inspired Intelligence", funded with 840 million RMB. Research Area 1. Neural Engineering Research fields include brain-computer interfaces, neural signal processing and modelling, neuromodulation, and relevant research in signal processing, artificial intelligence, electronics and medical devices. Research Area 2.
Digital Healthcare Research fields include remote healthcare, wearable technology and medical devices, cognitive and behavioural intervention, brain diseases, and relevant research in big data analysis, human-machine interaction, cognitive neuroscience, clinical neurophysiology, and neuro-rehabilitation. Qualifications: * Strong ability to conduct high-quality independent research * Research experience in one of the above projects, or in neural signal processing, machine learning, biomedical electronics, or the neurophysiology of movement disorders, pain, depression, or sleep disorders. * Under 35 years old Employment Benefits: * Internationally competitive salary, university-owned accommodation, Hukou and children's schooling according to university regulations. * Relevant research and technological platforms to support applications for various talent projects and funds. * A pleasant working environment that provides first-class research facilities and space for career development. * Funding for outstanding scholars to attend academic events at home and abroad. * Postdoctoral researchers with excellent on-the-job performance could be recommended for open university-employed posts. Application Procedure: * A cover letter explaining the candidate's background, qualifications, and research interests * CV summarizing education, positions, academic or industrial work, and scientific publications * Send the CV and cover letter to shouyan at fudan.edu.cn and niec?_istbi at fudan.edu.cn with the e-mail subject Post-doc Application -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bei.xiao at gmail.com Thu Nov 18 15:37:05 2021 From: bei.xiao at gmail.com (bei.xiao at gmail.com) Date: Thu, 18 Nov 2021 15:37:05 -0500 Subject: Connectionists: Open Rank Faculty Position in CS at American University (still accepting applications) Message-ID: *Position Announcement: Open Rank* *Department of Computer Science* *American University* The Department of Computer Science in the College of Arts and Sciences at American University invites applications for one *full-time, open-rank, tenure-line position beginning August 1, 2022.* Members of typically marginalized groups, including but not restricted to Women, African American/Black, Hispanic/Latino, and Native American/Alaska Native, are welcome and strongly encouraged to apply. Applicants should have a PhD or an anticipated PhD completion by August 2022 in Computer Science or related fields. Depending on experience and qualifications, the appointee to this position may be recommended for tenure at the time of hiring. Candidates can apply at the assistant, associate or full professor level, and we welcome applications from both academic and non-academic organizations. The Department of Computer Science is a small but exciting department with a growing student population and strong research achievements. At last count, the student population in the department comprised 37.2% female, 6.6% black or African American, 7.2% Hispanic or Latino and 4.7% multiracial students. American University has identified Computer Science as one of its targets for growth. Computer Science also falls within several areas of strategic focus identified by the university president in her strategic plan, including Data Science. Along with the Department of Mathematics and Statistics, the Department of Physics, the Game Lab, and the Entrepreneurship and Innovation Incubator, the Department of Computer Science is located in the new Don Myers Technology & Innovation Building.
Computer Science currently offers an undergraduate and a Master's program with four different tracks (Applied, Data Science, Game, and Cybersecurity). A combined Ph.D. program with the Mathematics and Statistics Department is being developed. Learn more about the College of Arts and Sciences at https://www.american.edu/cas/ and about the Department of Computer Science at https://www.american.edu/cas/cs/. We are looking for candidates who are excited at the prospect of joining a growing department where they will be able to make their mark and join a friendly, collegial and highly accomplished team. Preference will be given to candidates with a record of high-quality scholarship. For candidates applying at the associate or full professor level, a record of external funding is also expected. The committee will consider candidates engaged in research in *any area of Data Analysis with an emphasis on Natural Language Processing, Graph Analysis, and general research in Deep Learning.* This includes researchers working on Fairness and Bias in Machine Learning and Machine Learning for Social Good. Excellent candidates in other research areas, especially with domains of application compatible with those outlined in the strategic plan (e.g., Environmental Science and Health Sciences), will also be considered, as we welcome researchers who cross traditional disciplinary boundaries. In addition to scholarship and teaching, responsibilities will include participation in department, school, and university service activities. Attention to Diversity, Equity and Inclusion (DEI) in all activities within the academic environment is expected. Salary and benefits are competitive. An overview of the benefits offered by American University can be found at https://www.american.edu/hr/benefits/. Review of applications will begin on November 15. Please submit applications via Interfolio: http://apply.interfolio.com/97293.
Please include a letter of application, curriculum vitae, list of three references, recent teaching evaluations (when possible), a diversity statement, and copies of recent published papers or working papers. Please contact Department Chair Nathalie Japkowicz at japkowic at american.edu if you have any questions. American University is a private institution located in the nation's capital and within easy reach of the many centers of government, business, research, and the arts. For more information about American University, visit www.american.edu. American University is an equal opportunity, affirmative action institution that operates in compliance with applicable laws and regulations. The university does not discriminate on the basis of race, color, national origin, religion, sex (including pregnancy), age, sexual orientation, disability, marital status, personal appearance, gender identity and expression, family responsibilities, political affiliation, source of income, veteran status, an individual's genetic information or any other bases under federal or local laws (collectively "Protected Bases") in its programs and activities. American University is a tobacco- and smoke-free campus. -- Bei Xiao, PhD Associate Professor Computer Science & Center for Behavioral Neuroscience American University, Washington DC Homepage: https://sites.google.com/site/beixiao/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From danny.silver at acadiau.ca Thu Nov 18 13:06:01 2021 From: danny.silver at acadiau.ca (Danny Silver) Date: Thu, 18 Nov 2021 18:06:01 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. (Maria Kesa) In-Reply-To: References: Message-ID: I agree. There is no place for this on a scientific email list. Stick to critique of the science/work, or get your jollies on Facebook or whatever it is called today. Danny ========================== Daniel L.
Silver Professor, Jodrey School of Computer Science Director, Acadia Institute for Data Analytics Acadia University, Office 314, Carnegie Hall, Wolfville, Nova Scotia Canada B4P 2R6 t. (902) 585-1413 f. (902) 585-1067 acadiau.ca Facebook Twitter YouTube LinkedIn Flickr From: Connectionists on behalf of Juyang Weng Date: Thursday, November 18, 2021 at 4:02 AM To: Post Connectionists Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. (Maria Kesa) "My personal take and you can all kiss my ass message" This message should be blocked by the moderator of connectionists at cmu. Please remove it from the archive. I would like to thank the moderator of connectionists at cmu for his willingness to maintain transparency. This is a personal attack, and so are many sentences in the link provided, amounting to violations of Robert's Rules of Order, which ban personal attacks in democratic discussions. I would like to review personal attacks: Personal attack - Wiktionary: https://en.wiktionary.org/wiki/personal_attack Basically, it is a personal attack on an arguer that brings the individual's personal circumstances, trustworthiness, or character into question. Argue about the issue on the floor, instead of the personal circumstances of an arguer. Is "Maria Kesa" a real name? -John ---- Message: 4 Date: Mon, 15 Nov 2021 11:11:44 +0100 From: Maria Kesa > To: "Randall O'Reilly" > Cc: Connectionists Connectionists > Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 7490 bytes Desc: image001.png URL: From coralie.gregoire at insa-lyon.fr Fri Nov 19 02:57:51 2021 From: coralie.gregoire at insa-lyon.fr (Coralie Gregoire) Date: Fri, 19 Nov 2021 08:57:51 +0100 (CET) Subject: Connectionists: [CFP DEADLINE EXTENDED] - The ACM Web Conference 2022 Special Tracks - History of The Web Message-ID: <621150526.2882516.1637308671577.JavaMail.zimbra@insa-lyon.fr> [Apologies for the cross-posting, this call is sent to numerous lists you may have subscribed to] [CFP] The ACM Web Conference 2022 Special Tracks - DEADLINE EXTENDED - History of The Web Following questions we received, we want to specify that for this special track "History of the Web" submissions are NOT anonymous! We invite contributions to this Special Track of The Web Conference 2022 (formerly known as WWW). The conference will take place online, hosted from France, on April 25-29, 2022. *Important dates: NEW* - Abstract: December 2, 2021 - Full paper: December 9, 2021 - Acceptance notification: January 13, 2022 No rebuttal is foreseen. ------------------------------------------------------------ *Special track History of the Web* Track chairs: Dame Wendy Hall (University of Southampton, UK) and Luc Mariaux (École Centrale de Lyon, France (retired)) You can reach the track chairs at: www2022-history at easychair.org The World Wide Web was invented at CERN by Sir Tim Berners-Lee in 1989, and in 1993 CERN put the World Wide Web software in the public domain. In May 1994 Robert Cailliau organized the First International WWW Conference in Geneva, and following that event, in August 1994, he launched with Joseph Hardin the IW3C2, formally incorporated in May 1996 as a non-profit Association under Swiss law. In 2022 this conference will become The ACM Web Conference. The 2022 edition of this conference is therefore the 31st in the series and takes place on the 32nd anniversary of the Web.
During this period, the Web and its applications have become widely available around the world and many new technologies have emerged. The evolution of the Web has been made of great scientific advances, but also of anecdotal events that have contributed to building the Web as we know it today. After more than thirty years, it is time to keep track of all these events, so we invite all those who participated in this collective adventure to share the information they have. We also invite those whose field of technical, sociological, or philosophical research concerns the evolution or the impact of the Web to submit their work. Three kinds of contributions are expected: - Research papers focussing on the history of the Web, - Papers explaining how the evolution of the Web has impacted our professional or private lives, - Papers describing anecdotal events related to the evolution of the Web. All submissions will be peer-reviewed and evaluated on the basis of originality, relevance, quality, and technical, sociological, or historical contribution. *Submission guidelines* For the special tracks, submissions are limited to 8 content pages, including all figures and tables but excluding supplementary material and references. In addition, you can include 2 additional pages of supplementary material. The total number of pages with supplementary material and references must not exceed 12 pages. The papers must be formatted according to the instructions below. Submissions are NOT anonymous. Submissions will be handled via EasyChair, at https://easychair.org/conferences/?conf=thewebconf2022. *Formatting the submissions* Submissions must adhere to the ACM template and format published in the ACM guidelines at https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords. Please use the template in traditional double-column format to prepare your submissions.
For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template. Overleaf users may want to use https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sig-proceedings-template/bmvfhcdnxfty. Submissions for review must be in PDF format. They must be self-contained and written in English. *Publication policy* Accepted papers will require a further revision in order to meet the requirements and page limits of the camera-ready format required by ACM. Instructions for the preparation of the camera-ready versions of the papers will be provided after acceptance. All accepted papers will be published by ACM and will be available via the ACM Digital Library. To be included in the Proceedings, at least one author of each accepted paper must register for the conference and present the paper there. ============================================================ Contact us: contact at thewebconf.org - Facebook: https://www.facebook.com/TheWebConf - Twitter: https://twitter.com/TheWebConf - LinkedIn: https://www.linkedin.com/showcase/18819430/admin/ - Website: https://www2022.thewebconf.org/ ============================================== From mcruz at uni-osnabrueck.de Fri Nov 19 03:52:34 2021 From: mcruz at uni-osnabrueck.de (mcruz at uni-osnabrueck.de) Date: Fri, 19 Nov 2021 09:52:34 +0100 Subject: Connectionists: Fwd: Online Cognitive Science Master's Program Message-ID: <0674eb07-aa4f-4d2b-b83c-0fa685fba887@Spark> Dear all, We are now accepting applications for the 2022 summer term! The Institute of Cognitive Science at Osnabrück University is offering a digital track to the Cognitive Science M.Sc. program. The DAAD-funded program targets international students who can complete the coursework remotely. Possible focus areas include Artificial Intelligence, Neuroinformatics, and Neuroscience.
Deadline: January 15, 2022 You can find more information on our website: https://vt.uos.de/cosmos Please do not hesitate to contact us with further questions. Best, Misha Cruz Research Associate, OS-COSMOS mishael.g.cruz at uni-osnabrueck.de Institut für Kognitionswissenschaften und virtUOS Universität Osnabrück -------------- next part -------------- An HTML attachment was scrubbed... URL: From alberto.antonietti at polimi.it Fri Nov 19 04:15:41 2021 From: alberto.antonietti at polimi.it (Alberto Antonietti) Date: Fri, 19 Nov 2021 10:15:41 +0100 Subject: Connectionists: Call for Papers: Reproducibility in Neuroscience Message-ID: <0e305a7c-f283-d865-efae-2a4a43c4011d@polimi.it> Are you interested in making neuroscience more reproducible? Have you already tried to replicate a study? Are you bothered by "Data and code are available upon reasonable request"? If your answer to one of the previous questions is "yes", then you could be interested in this call for papers. With this research topic, we aim to stimulate neuroscientists from all fields to design and publish works that rigorously attempt to reproduce landmark or controversial studies: https://www.frontiersin.org/research-topics/26709/reproducibility-in-neuroscience We are inviting papers on both: - "Results reproducibility", i.e., obtaining the same results from an independent study with procedures as closely matched to the original study as possible; - "Inferential reproducibility", i.e., drawing the same conclusions from either an independent replication of a study, using different research methodologies, or a reanalysis of the original data. We will consider both confirmatory and negative results; the sole criteria will be the rigorousness, fairness, and soundness of the replication study. Authors are required to make all materials and methods used to conduct their research available to other researchers.
Data and code used to analyze results must comply with FAIR principles and should preferably be uploaded to an online repository providing a global persistent identifier (e.g., OSF, Harvard Dataverse, Zenodo). Authors are also strongly encouraged to subject their code to an independent audit at codecheck.org.uk. I'm very pleased to be launching a new article collection, Reproducibility in Neuroscience, together with an expert editorial team: - Alberto Antonietti - Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Geneva, Switzerland - Reeteka Sud - National Institute of Mental Health and Neurosciences (NIMHANS), Bangalore, India - Nele A Haelterman - Baylor College of Medicine, Houston, United States - Nafisa M Jadavji - Midwestern University, Glendale, AZ, United States - Denes Szucs - University of Cambridge, Cambridge, United Kingdom We're now in the process of putting together a group of top researchers whose work we'd like to feature in this collection, and we would like you to participate. The research topic is hosted by Frontiers in Integrative Neuroscience; you can find all information here: https://www.frontiersin.org/research-topics/26709/reproducibility-in-neuroscience Important notice on Publishing Fee (APC) support: If Frontiers publishing fees are too high for your funding situation, you are eligible for full or partial APC fee support. To apply for fee support, please complete the fee support application form online, and allow up to two weeks for Frontiers to review and reply to your request: https://frontiers.qualtrics.com/jfe/form/SV_51IljifwFBXUzY1 More information can be found at: https://www.frontiersin.org/about/fee-policy Please get in touch if you have any questions - looking forward to hearing from you. Alberto On behalf of the Topic Editors. -- This email has been checked for viruses by AVG.
From janine.bijsterbosch at wustl.edu Fri Nov 19 10:14:33 2021 From: janine.bijsterbosch at wustl.edu (Bijsterbosch, Janine) Date: Fri, 19 Nov 2021 15:14:33 +0000 Subject: Connectionists: Two Connectomics Job Opportunities Message-ID: Hi, We are currently looking for two new members to join our team at Washington University in St Louis: * Research Assistant in cross-species functional connectivity research * Opportunity to work with a number of different data modalities (functional MRI, electrophysiology, oxygen polarography) to study the cellular underpinnings of functional connectivity. Depending on the candidate's interest and expertise, the role may involve acquisition of primate fMRI, electrophysiology and oxygen recordings, and multimodal analysis of resting state network overlap. * Postdoctoral associate in structure-function connectomics research * Using UK Biobank neuroimaging data to study structure-function interactions in neuroimaging correlates of late life depression. The role will involve structural covariance analysis using non-negative matrix factorization, resting state fMRI analysis using Probabilistic Functional Modes, and structure-function mediation analyses. Full job descriptions are attached and on our website: https://sites.wustl.edu/personomics/jobs/. Please get in touch via email if you would like to find out more! Best wishes, Janine -- Janine Bijsterbosch Assistant Professor of Radiology Washington University in St Louis Radiology East Building (office #3348) 4525 Scott Avenue, Campus Box 8225 My Pronouns: she/her/hers janine.bijsterbosch at wustl.edu Website: https://sites.wustl.edu/personomics/ ________________________________ The materials in this message are private and may contain Protected Healthcare Information or other information of a sensitive nature.
If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 6090 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PostdoctoralAssociateInStructure-functionConnectomicsResearch.pdf Type: application/pdf Size: 50785 bytes Desc: PostdoctoralAssociateInStructure-functionConnectomicsResearch.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ResearchAssistantInCross-speciesFunctionalConnectivityResearch.pdf Type: application/pdf Size: 56001 bytes Desc: ResearchAssistantInCross-speciesFunctionalConnectivityResearch.pdf URL: From julian at togelius.com Fri Nov 19 18:00:09 2021 From: julian at togelius.com (Julian Togelius) Date: Fri, 19 Nov 2021 18:00:09 -0500 Subject: Connectionists: Industrial research positions in physics-informed neural networks at OriGen.ai Message-ID: OriGen.ai is a startup focusing on physics-informed neural networks, with applications in several industries. We are currently looking to hire researchers keen to solve hard problems in modeling physics with neural networks. https://www.linkedin.com/jobs/view/2800782223/?refId=odkHpBxjSUqX85V7Y6qWVw%3D%3D -- Julian Togelius Associate Professor, New York University Department of Computer Science and Engineering mail: julian at togelius.com, web: http://julian.togelius.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From juergen at idsia.ch Fri Nov 19 11:15:04 2021 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Fri, 19 Nov 2021 16:15:04 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc. In-Reply-To: <0659F820-64CD-4BF3-B7EC-727B7D146565@supsi.ch> References: <33DC3654-F4D6-473C-9F95-FB99C483E89D@usi.ch> <15BAA8B8-0B89-4131-82B0-CFE4441EE55E@usi.ch> <48070117-2ABB-4CCD-ACC9-AF8C5811ED75@usi.ch> <11c3a52ca6ed4495a395ae019d8a0907@idsia.ch> <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <532DC982-9F4B-41F8-9AB4-AD21314C6472@supsi.ch> <3268601b-397d-3c44-da5a-29b330bb5cf5@rubic.rutgers.edu> <0659F820-64CD-4BF3-B7EC-727B7D146565@supsi.ch> Message-ID: <94E5C26B-C37B-4F07-8162-14BFA5DAF5E1@supsi.ch> Regarding the thoughtful comments of Pierre Baldi on cronyism and collusion: Tom Dietterich offered a somewhat more optimistic view, but pointed out that the head of NeurIPS has never changed throughout all these decades. The lack of new blood in senior governance of such conferences may very well be the root of the problem. After all, change is hard. Even in recent years the NeurIPS head has continued to promulgate a revisionist "history" of deep learning [S20] mentioned in Sec. II and XIII of the report https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html - let me cut and paste some text from there: ACM seems to be influenced by a misleading and rather self-serving "history of deep learning" propagated by LBH & co-authors, e.g., Sejnowski [S20] (see Sec. II, XIII). It goes more or less like this: "In 1969, Minsky & Papert [M69] showed that shallow NNs without hidden layers are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s." 
[S20] However, the 1969 book [M69] addressed a "problem" of shallow learning (introduced around 1800 when Gauss & Legendre started to model data through linear regression and the method of least squares [DL1-2]) that had already been solved 4 years prior by Ivakhnenko & Lapa's popular deep learning method of 1965 [DEEP1-2][DL2]. This method was fully capable of learning internal representations in units that were not part of the input or output. Minsky was perhaps unaware of this and failed to correct it later [HIN](Sec. I). Deep learning research was alive and kicking in the 1970s, especially outside of the Anglosphere [DEEP2][BP6][CNN1][DL1-2]. ******** On 29 Oct 2021, at 21:21, Dietterich, Thomas wrote: Pierre, Regarding leadership turnover, with the exception of Terry Sejnowski, the makeup of the NIPS Foundation board turns over on a regular basis, as there are term limits for all Board members. The Board consists of previous program chairs. The IMLS Board has had term limits and regular turnover since its founding, and its members are elected. Both organizations are always seeking volunteers for the many tasks involved in running the conference, and future program chairs are drawn from the people who have served in these positions. I encourage the readers of this list who are interested to contact the members of the conference organizing committees and express your interest. I have long sought for mechanisms to broaden the set of candidates considered for leadership positions, editorial boards, etc. When people rely on their own professional networks, this naturally limits the set of names that get considered. And this combines with the "founder effect" that still exists because the field was nucleated in North America. Nonetheless, I think the machine learning community is very open compared to other fields. That said, we could do much better! --Tom Thomas G. 
Dietterich, Distinguished Professor Emeritus School of Electrical Engineering and Computer Science US Mail: 1148 Kelley Engineering Center Office: 2067 Kelley Engineering Center Oregon State Univ., Corvallis, OR 97331-5501 Voice: 541-737-5559; FAX: 541-737-1300 URL: http://web.engr.oregonstate.edu/~tgd/ ******** On 28 Oct 2021, at 20:05, Baldi,Pierre wrote: Besides plagiarism, this community would be well-served by taking a frank look at the remarkable levels of cronyism, collusion, and subtle--but very real--manipulation that have permeated it for several decades. In addition to self-, cross-, and suppressive citation issues, there are many other metrics to look at. To get started, one could ask the following simple questions and compute the corresponding statistics: 1) Over the past four decades, how often has the leadership of any relevant machine learning foundation changed? 2) Over the past four decades, what is the degree of over-representation by members of any particular organization in things like: a) organizing and program committees of major machine learning conferences? b) AI/ML academic departments and now also AI/ML corporate departments? c) editorial boards and other centers of power and dissemination? 3) What is the degree of over-representation of any particular organization in invited talks, workshops, tutorials, or other "special events", such as official birthday celebrations or on-stage Q&A sessions with rich and famous people, at major machine learning scientific conferences? Cronyism and collusion are nothing new in human affairs, including science, and most of the time they are even legal. But how well do these serve science or society? The tip of the iceberg. --Pierre Baldi From george at cs.ucy.ac.cy Sun Nov 21 04:47:36 2021 From: george at cs.ucy.ac.cy (George A.
Papadopoulos) Date: Sun, 21 Nov 2021 11:47:36 +0200 Subject: Connectionists: ACM International Conference on Information Technology for Social Good (GoodIT 2022): Second Call for Contributions Message-ID: <8JNGZ77-E4EI-BKFP-633N-CV7VBHESPOLC@cs.ucy.ac.cy> *** Second Call for Contributions *** ACM International Conference on Information Technology for Social Good (GoodIT 2022) 7-9 September 2022, 5* St. Raphael Resort & Marina, Limassol, Cyprus https://cyprusconferences.org/goodit2022/ Scope The ACM GoodIT conference seeks papers describing significant research contributions related to the application of information technologies (IT) to social good. Social good is typically defined as something that provides a benefit to the general public. In this context, clean air and water, Internet connection, education, and healthcare are all good examples of social goods. However, new media innovations and the explosion of online communities have added new meaning to the term. Social good is now about global citizens uniting to unlock the potential of individuals, technology, and collaboration to create a positive societal impact. GoodIT solicits papers that address important research challenges related to, but not limited to: - Citizen science - Civic intelligence - Decentralized approaches to IT - Digital solutions for Cultural Heritage - Environmental monitoring - Ethical computing - Frugal solutions for IT - Game, entertainment, and multimedia applications - Health and social care - IT for automotive - IT for development - IT for education - IT for smart living - Privacy, trust and ethical issues in ICT solutions - Smart governance and e-administration - Social informatics - Socially responsible IT solutions - Sustainable cities and transportation - Sustainable IT -
Technology addressing the digital divide Main Track Paper Submission The papers should not exceed six (6) pages (US letter size) double-column, including figures, tables, and references in standard ACM format (https://cyprusconferences.org/goodit2022/index.php/authors/). They must be original works and must not have been previously published. At least one of the authors of all accepted papers must register and present the work at the conference; otherwise, the paper will not be published in the proceedings. All accepted and presented papers will be included in the conference proceedings published in the ACM Digital Library. Selected papers will be invited to submit an extended version to a special issue in the journal MDPI Sensors, where the theme of the special issue will be "Application of Information Technology (IT) to Social Good". Specifically 5 papers will be invited free of charge and another 5 papers will get a 20% discount on the publication fees. Furthermore, MDPI Sensors will sponsor a Best Paper Award with the amount of 400 CHF. Separate call-for-papers will be announced for the special tracks (more information will be available on the conference web site). Work-in-Progress and PhD Track Inside ACM GoodIT, the Work-in-Progress and PhD Track provides an opportunity to showcase interesting new work that is still at an early stage. We encourage practitioners and researchers to submit to the Work-in-Progress venue as it provides a unique opportunity for sharing valuable ideas, eliciting feedback on early-stage work, and fostering discussions and collaborations among colleagues. Moreover, this track provides a platform for PhD students to present and receive feedback on their ongoing research. Students at different stages of their research will have the opportunity to present and discuss their research questions, goals, methods and results. 
This is an opportunity to obtain guidance on various aspects of their research from established researchers and other PhD students working in research areas related to technologies for social good. Important: For this specific track, papers must not exceed four (4) pages (US letter size) double column, including figures, tables, and references in standard ACM format (https://cyprusconferences.org/goodit2022/index.php/authors/). Important Dates - Submission deadline for all types of contributions: 23 May 2022 - Notification of acceptance: 20 June 2022 - Camera-ready submission and author registration: 11 July 2022 Program Chairs - Costas Mourlas, University of Athens, Greece - Diogo Pacheco, University of Exeter, UK - Catia Prandi, University of Bologna, Italy WiP & PhD Track Chairs - Marco Furini, University of Modena e Reggio Emilia, Italy - Dimitris Gouscos, University of Athens, Greece - Barbara Guidi, University of Pisa, Italy -------------- next part -------------- An HTML attachment was scrubbed... URL: From roshan.cools at gmail.com Fri Nov 19 16:06:09 2021 From: roshan.cools at gmail.com (Roshan Cools) Date: Fri, 19 Nov 2021 22:06:09 +0100 Subject: Connectionists: [RLDM] Call for abstracts - Reinforcement Learning and Decision Making 2022 - June 8-11 Message-ID: ====================================================== The 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM2022) www.rldm.org June 8-11 2022 at Brown University, Providence, USA ====================================================== Submissions to RLDM2022 are now being accepted at https://cmt3.research.microsoft.com/RLDM2022 Deadline: 22 February 2022, 11:59PM PST Notification of Acceptance: 15 March 2022 We invite extended abstracts for contributed poster presentations and oral presentations.
We welcome submissions of original research related to "learning and decision making over time to achieve a goal", coming from any discipline or disciplines, describing empirical results from human, animal or animat experiments, and/or theoretical work, simulations and modeling. Contributions should be aimed at an interdisciplinary audience, but not at the expense of technical excellence. This is an abstract-based meeting, with no published conference proceedings. As such, work that is intended for, or has been submitted to, other conferences or journals is also welcome, provided that the intent of communication to other disciplines is clear. Submissions should consist of a summary (max 2000 characters; text only), and an extended abstract of between one and four pages (including figures and references). LaTeX and RTF templates, and sample submissions, are available here: https://rldm.org/submit/ Note: Only the summary will be made available in the (electronic) abstract booklets. The extended abstract will be used for reviewing, and will be available online only with the authors' separate explicit permission. Online availability will have no bearing on the review process, and authors are encouraged to include new, unpublished findings which they do not want to make publicly available. To submit your abstract please go to https://cmt3.research.microsoft.com/RLDM2022 Submissions will be reviewed for relevance to the topic and for quality. Exceptional abstracts will be selected for oral presentations and for poster spotlight presentations.
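[Illustrative aside, not part of the call: the meeting's theme of "learning and decision making over time to achieve a goal" can be sketched in a few lines with a toy two-armed bandit and epsilon-greedy action selection. The reward probabilities, parameters, and seed below are all invented for the example:]

```python
import random

def run_bandit(probs, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy learning on a Bernoulli bandit: estimate each arm's
    value from observed rewards, and mostly pull the current best arm."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    counts = [0] * len(probs)          # pulls per arm
    values = [0.0] * len(probs)        # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:         # explore: random arm
            arm = rng.randrange(len(probs))
        else:                          # exploit: best estimate so far
            arm = max(range(len(probs)), key=values.__getitem__)
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

# Hypothetical arms with success probabilities 0.3 and 0.7: over time the
# estimates approach the true values and the better arm is pulled far more.
values, counts = run_bandit([0.3, 0.7])
```

The exploration/exploitation trade-off in this toy loop is exactly the kind of question, from bandits up to full reinforcement learning, that the meeting's interdisciplinary abstracts address.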
Best, RLDM2022 ORGANIZERS GENERAL CHAIRS Catherine Hartley Michael Littman PROGRAM CHAIRS Roshan Cools Peter Stone LOCAL CHAIRS Michael Frank George Konidaris EXECUTIVE COMMITTEE Yael Niv Peter Dayan Satinder Singh Rich Sutton Emma Brunskill Ross Otto *CONFIRMED SPEAKERS:* Josh Tenenbaum (MIT) Yunzhe Liu (UCL) Jill O'Reilly (Oxford) Nao Uchida (Harvard) Melissa Sharpe (UCLA) Alexandra Rosati (Michigan) Frederike Petzschner (Brown) Oriel Feldman-Hall (Brown) Scott Niekum (UT Austin) Satinder Singh Baveja (Michigan and DeepMind) Stephanie Tellex (Brown) Martha White (Alberta) Sonia Chernova (Georgia Tech) Jeannette Bohg (Stanford) Jakob Foerster (Facebook AI Research) -------------- next part -------------- An HTML attachment was scrubbed... URL: From marieke.van.erp at dh.huc.knaw.nl Sun Nov 21 11:42:29 2021 From: marieke.van.erp at dh.huc.knaw.nl (Marieke van Erp) Date: Sun, 21 Nov 2021 17:42:29 +0100 Subject: Connectionists: Call for participation: K-CAP 2021 2-3 December (online) - Early bird until 26 Nov Message-ID: * Apologies for cross posting * K-CAP 2021 The Eleventh International Conference on Knowledge Capture December 2 - 3, 2021 A virtual conference https://www.k-cap.org/2021/ * Programme * Join us for two days of presentations at the intersection of knowledge representation, knowledge acquisition, intelligent user interfaces, problem-solving and reasoning, planning, agents, text extraction, and machine learning, information enrichment, visualization, and cyber-infrastructures to foster the publication, retrieval, reuse, and integration of data. K-CAP 2021 has accepted 41 papers, check them out at: https://www.k-cap.org/2021/conference.html * Invited Speakers * Ian Horrocks Oxford University Title: Knowledge Graphs: Theory, Applications and Challenges Abstract: Knowledge Graphs have rapidly become a mainstream technology that combines features of databases and AI.
In this talk I will introduce Knowledge Graphs, explaining their features and the theory behind them. I will then consider some of the challenges inherent in both the theory and implementation of Knowledge Graphs and present some solutions that have made possible the development of popular language standards and robust and high-performance Knowledge Graph systems. Finally, I will illustrate the wide applicability of knowledge graph technology with example use cases including configuration management, fraud detection, semantic search & browse, and data wrangling. Leila Zia WIKIMEDIA Foundation Title: Research at the Service of Free Knowledge Abstract: With roughly 20 billion monthly pageviews, 15 million monthly edits, and almost 55 million articles across 300+ languages, Wikipedia has become a canonical part of the Free Knowledge ecosystem: enabling people to have access to knowledge and empowering them to participate in the discourse of gathering and sharing the sum of all human knowledge. By 2030, the Wikimedia projects, which include Wikipedia, aspire to break down the social, political, and technical barriers preventing people from accessing and contributing to free knowledge. In this presentation, I will talk about research in this direction. In particular, I will present our approach and research on identifying, measuring, and bridging Wikipedia's knowledge gaps. I will share some of our success stories, as well as a few of the biggest challenges we face today. I close by sharing some of the open research questions and directions. * Registration* You can now register at: https://www.k-cap.org/2021/registration.html Registration fees: Student Early $50 Late $70 Regular Early $100 Late $120 ACM Member Early $80 Late $100 Early rate applies until 26 November. If you are a student interested in attending K-CAP 2021, you may be eligible to apply for a support grant. This year, grants are funded by the ACM SIGAI. 
Please take a look at SIGAI's student support opportunities at: https://sigai.acm.org/activities/student_support.html Student Travel Support is also supported by the AIJ. Please apply at the following form: https://forms.gle/vZ5Bdvg7Vj12RaUq9. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gustau.camps at uv.es Fri Nov 19 13:22:04 2021 From: gustau.camps at uv.es (Gustau Camps-Valls) Date: Fri, 19 Nov 2021 19:22:04 +0100 Subject: Connectionists: Postdoc positions in ML for Earth sciences at University of Valencia, Spain Message-ID: <0ee91758-963b-b49a-b310-da4546ff26a0@uv.es> Dear colleagues, We have several postdoctoral research fellow positions to work on physics-aware machine learning, explainable AI, computer vision and causal inference. Positions are open in the context of the following European research projects funded by the European Space Agency (ESA), H2020 and the European Research Council (ERC) around #MachineLearning #Deeplearning #AI & #causality for #geosciences and #climate. *Apply by Dec 31st at http://isp.uv.es/openings* We look forward to receiving your application! Gustau ---------------------------------------------------------- Prof. Gustau Camps-Valls, IEEE Fellow, ELLIS Fellow Image Processing Laboratory (IPL) - Building E4 - Floor 4 Universitat de València C/ Cat. Agustín Escardino Benlloch, 9 46980 Paterna (València). Spain phone : +34 963 544 064 web : http://isp.uv.es e-mail : gustau... 
at uv.es twitter : http://twitter.com/isp_uv_es ---------------------------------------------------------- --Shameless self-promotion----------------- ---------------------------------------------------------- g-scholar : https://bit.ly/2Z1jhti ML book : https://bit.ly/2Ml19Qq DL book : https://bit.ly/3n1W1DF ellis : https://ellis.eu/programs elise : https://www.elise-ai.eu/ i-aida : https://www.i-aida.org/ imiracli : http://imiracli.eu/ deepcube : https://deepcube-h2020.eu/ xaida : https://bit.ly/3lJFgxz erc : https://www.usmile-erc.eu/ ---------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From shyam at amrita.edu Sat Nov 20 03:27:13 2021 From: shyam at amrita.edu (Shyam Diwakar) Date: Sat, 20 Nov 2021 13:57:13 +0530 Subject: Connectionists: CFP: 8th Annual Conference of Cognitive Science Jan 2022, Abstract deadline: Nov 25 Message-ID: Hello All, I would like to cordially invite you and colleagues interested in or working on cognitive sciences, neurosciences or related topics to participate in ACCS8. This January 2022 event will be fully online. The abstract deadline is November 25, 2021. ACCS8: Call for contributions 8th Annual Conference of Cognitive Science (ACCS8) Amrita University, India, 20-22 January 2022 https://www.amrita.edu/accs8 The 8th edition of ACCS, to be hosted by Amrita Vishwa Vidyapeetham (Amrita University), Kerala, India, will be held virtually. Registration will be free but mandatory to attend. List of our invited speakers, TPC and other info at https://www.amrita.edu/event/accs8 CONFERENCE TOPICS As in previous years, the ACCS8 conference is open to all topics within Cognitive Science as a discipline, across areas of study including Neuroscience, Artificial Intelligence, Linguistics, Skilling, Medicine, Anthropology, Psychology, Philosophy, and Education.
SUBMISSION GUIDELINES We welcome abstracts (min word limit: 500) from all areas of cognitive science and from anyone worldwide. For the 2022 online event, we are accepting only abstracts containing the salient details of your study. Please copy and paste the title and abstract into the submission form. We are not accepting any paper-length submissions this year, so do include all relevant details in the abstract itself. A small number of abstracts will be selected for oral presentations/talks. Submission link: https://easychair.org/conferences/?conf=accs8 For CFP: https://easychair.org/cfp/accs8 CONFIRMED Keynote speakers: 1. Nandini Chatterjee Singh, UNESCO MGIEP, India. 2. Claudia Wheeler-Kingshott, University College London, UK. 3. Kenji Doya, Okinawa Institute of Science and Technology, Japan. 4. Egidio D'Angelo, University of Pavia, Italy 5. Ned Block, New York University, USA. 6. Bhavani Rao, Amrita University, India. IMPORTANT DATES Abstract submission deadline November 25, 2021 BEST PAPER AWARDS We are planning awards for the best poster and oral presentations. Please stay tuned. CONTACT All questions about submissions should be emailed to accs8conference at gmail.com Previous ACCS events - https://www.amrita.edu/event/accs8/about Thank you and kind regards, Shyam Diwakar Local organizer - ACCS8 -- Prof. Shyam Diwakar, Ph.D. Director - Amrita Mind Brain Center Faculty Fellow - Amrita Center for International Programs Amrita Vishwa Vidyapeetham (Amrita University) Amritapuri, Clappana P.O. Kollam, India.
Pin: 690525 Ph:+91-476-2803116 Fax:+91-476-2899722 http://amrita.edu/mindbrain Disclaimer : The information transmitted in this email, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information by persons or entities other than the intended recipient is prohibited. Any views expressed in any message are those of the individual sender and may not necessarily reflect the views of Amrita Vishwa Vidyapeetham. If you received this in error, please contact the sender and destroy any copies of this information. -------------- next part -------------- An HTML attachment was scrubbed... URL: From suashdeb at gmail.com Mon Nov 22 01:56:38 2021 From: suashdeb at gmail.com (Suash Deb) Date: Mon, 22 Nov 2021 12:26:38 +0530 Subject: Connectionists: ISMSI22 online and extension of deadlines Message-ID: Dear esteemed colleagues, Warmest greetings. We trust you are all well and safe. Thank you very much for your continued support of ISMSI22. Following the completion of the first round of submissions, the deadline has been extended to 20 December 2021. We hope this helps you complete and submit your manuscripts. Also, considering the current surge in pandemic cases, it has been formally decided to hold the event online: http://www.ismsi.org Please help by disseminating this information among your peers and encouraging them to submit too. Thanks once again and with best regards, Suash -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Ezgi.Bulca at caesar.de Mon Nov 22 04:27:43 2021 From: Ezgi.Bulca at caesar.de (Ezgi Bulca) Date: Mon, 22 Nov 2021 09:27:43 +0000 Subject: Connectionists: 10 fully-funded PhD Positions in Neurobiology of Behavior | IMPRS for Brain & Behavior | Bonn, Germany Message-ID: Dear colleagues, The International Max Planck Research School (IMPRS) for Brain & Behavior PhD applications are open with a deadline on December 1, 2021. Could you please forward our call to interested Master's students? Thank you! 10 fully-funded PhD Positions in Neurobiology of Behavior | IMPRS for Brain & Behavior Bonn, Germany Apply to our fully funded, international PhD program in the Max Planck Society! IMPRS for Brain & Behavior is a PhD program in Bonn, Germany that offers a competitive world-class PhD training and research program in the field of neuroethology. IMPRS for Brain & Behavior is a collaboration between research center caesar (a neuroethology institute of the Max Planck Society), the University of Bonn, and the German Center for Neurodegenerative Disease (DZNE) in Bonn. The Projects 20 labs with an enormous variety of research projects are seeking outstanding PhD candidates to join their research. See our website (https://imprs-brain-behavior.mpg.de/faculty_members) for further information on our faculty and possible doctoral projects. Successful candidates will work in a young and dynamic, interdisciplinary, international environment, embedded in the local scientific communities in Bonn, Germany. Your Profile * Proven track record of academic and research excellence * Master's degree in life sciences, physics, mathematics, computer science, engineering, or other relevant subject (Degree should be concluded before admission to the PhD, latest by November 2022) * Fluency in written and spoken English * Research experience is of advantage Our Offer Immersion in a stimulating scientific culture of interaction and international cooperation. 
State-of-the-art facilities with novel scientific technologies and advanced infrastructure. Dedicated support and mentoring personnel. Competitive salary funded for the whole duration of studies and no tuition fees. We are committed to diversity and equal opportunity for all applicants. Application deadline: December 1, 2021 https://imprs-brain-behavior.mpg.de/ You will need: CV, letter of motivation, contact info for 2 referees, academic certificates and transcripts. Only online applications accepted. Short-listed candidates will be interviewed online in February 2022. Positions must be started by November 2022 at the latest. If you have any questions, feel free to contact the coordinator Ezgi Bulca at imprs.info at caesar.de . Kind regards from Bonn, Ezgi Ezgi Bulca IMPRS for Brain & Behavior Coordinator phone +49 228 9656 318 e-mail: ezgi.bulca at caesar.de www.imprs-brain-behavior.mpg.de https://twitter.com/IMPRSBrainBehav research center caesar an associate of the Max Planck Society Ludwig-Erhard-Allee 2 53175 Bonn, Germany www.caesar.de -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8050 bytes Desc: image001.png URL: From christos.dimitrakakis at gmail.com Mon Nov 22 05:14:26 2021 From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis) Date: Mon, 22 Nov 2021 11:14:26 +0100 Subject: Connectionists: PhD in Fairness, Differential Privacy or Reinforcement Learning Message-ID: <8dfe5357-b71a-2264-eabc-ac8f8dd6903b@gmail.com> We are looking for a PhD student to join our group on reinforcement learning and decision making under uncertainty more generally, at the University of Neuchatel, Switzerland ( https://www.unine.ch/ ). We are particularly interested in candidates with a strong mathematical background.
Prior research experience, as documented by your Master's thesis, is required. Within the area, we are looking for candidates with a strong research interest in the following fields: - Reinforcement learning and decision making under uncertainty: 1. Exploration in reinforcement learning. 2. Decision making under partial information. 3. Representations of uncertainty in decision making. 4. Theory of reinforcement learning (e.g. PAC/regret bounds). 5. Bayesian inference and approximate Bayesian methods. - Social aspects of machine learning: 1. Theory of differential privacy. 2. Algorithms for differentially private machine learning. 3. Algorithms for fairness in machine learning. 4. Interactions between machine learning and game theory. 5. Inference of human models of fairness or privacy. The main supervisor will be Christos Dimitrakakis < https://sites.google.com/site/christosdimitrakakis > Examples of our group's past and current research can be found on arXiv: https://arxiv.org/search/?searchtype=author&query=Dimitrakakis%2C+C. The student will have the opportunity to visit and work with other group members at the University of Oslo, Norway ( https://www.mn.uio.no/ifi/english/people/aca/chridim/index.html ) and Chalmers University of Technology, Sweden ( http://www.cse.chalmers.se/~chrdimi/ ). The PhD candidate must have a strong technical background, including: 1. Thorough knowledge of calculus and linear algebra. 2. A good theoretical background in probability and statistics/machine learning. 3. Practical experience with at least one programming language. The candidate's background will be mainly assessed through their MSc thesis and transcripts, and secondarily through an interview. >>>> Application Information <<<<< *Starting date* 1 February 2022 or soon afterwards. *Application deadline* 30 November 2021. To apply, send an email to christos.dimitrakakis at gmail.com with the subject 'PhD Neuchatel'. An application must include: 1.
A statement of research interests and motivation relevant to the position. 2. A CV with a list of references. 3. Your MSc thesis or another research work demonstrating your academic writing. 4. A degree transcript. Feel free to include any other additional information. From cognitivium at sciencebeam.com Mon Nov 22 08:54:09 2021 From: cognitivium at sciencebeam.com (Mary) Date: Mon, 22 Nov 2021 17:24:09 +0330 Subject: Connectionists: Online Mentoring Job Offer Message-ID: <202111221354.1AMDsCVu120348@scs-mx-02.andrew.cmu.edu> Dear all, We are pleased to inform you that ScienceBeam Institute, organizer of various Electrophysiology and Neuroscience events, webinars, and workshops, is expanding its scientific network. This network consists of professionals, professors, and researchers in various fields including clinical research and treatment, psychiatry and neuropsychology, neurophysiology, neuroscience, and cognitive science. Relying on our 20 years of experience, we believe in building lasting relationships with labs, clinics, research centers, organizations, and universities with whom we share mutual fields of interest. We believe that science should have no limits, and by strengthening these relationships we can all make a change in the world of science and technology. We would be delighted to invite interested professionals to join our team in conducting online and in-person webinars and workshops on the above-mentioned topics, in order to bring science and knowledge to interested people around the world. By joining our team you will lecture at various worldwide events and help us spread science. When applying for this position, please note the following points:
- Professionals in the fields of neuroscience, psychology, psychotherapy, neuropsychology, clinical research, clinical treatment, neuro-biofeedback, EEG/ERP, QEEG, brain stimulation, brain disorders, etc. are invited to apply for this position. - Ph.D. holders, postdocs, assistant professors, professors, and lecturers are invited to apply for this position and join our team. - Previous experience with teaching and lecturing is a plus. - Please send your CV as well as a short message introducing yourself and the topics that you'd be able to cover to this email address: workshop at sciencebeam.com Please share this flyer with interested people. We are waiting to hear from you. If you have any questions regarding this position, do not hesitate to contact us (workshop at sciencebeam.com ). We would love to hear from you. Information about some of our previously held workshops and webinars is available here: https://sciencebeam.com/workshops/ Best Regards Mary Reae Human Neuroscience Dept. Manager mary at sciencebeam.com From el-ghazali.talbi at univ-lille.fr Mon Nov 22 11:11:07 2021 From: el-ghazali.talbi at univ-lille.fr (El-ghazali Talbi) Date: Mon, 22 Nov 2021 17:11:07 +0100 Subject: Connectionists: Research position at INRIA Bonus France Message-ID: <969f644c-046d-e32c-0614-84635f7fcc22@univ-lille.fr> Open Research Position at Inria Lille - Nord Europe Research group: BONUS Last annual activity report: https://www.inria.fr/en/inria-ecosystem The PDF version is attached. Description ----------- The INRIA research institute (https://www.inria.fr/en/inria-ecosystem) invites applications for multiple permanent researcher positions to start in the 2022-2023 academic year. High performance computing (HPC) and/or Machine Learning-assisted big optimization is among its top priorities for the recruitment.
This hot topic is a major focus of the BONUS research group, which is part of the INRIA Lille - Nord Europe research center located in Lille (north of France). Qualified applicants in Computer Science are invited to apply with the objective of joining the BONUS research group as a permanent researcher. Qualifications -------------- The BONUS team addresses big optimization problems (high-dimensional in decision variables and/or objectives, and/or with computationally expensive black-box functions) using mainly Machine Learning-assisted optimization and high-performance (parallel) optimization. Candidates should hold a Ph.D. or equivalent degree in Computer Science or a related discipline. BONUS is interested in applicants having a strong background in at least one of its research lines: Machine Learning-assisted optimization or parallel optimization. Candidates with an HPC-only profile who are highly motivated to adapt their activities to the context of big optimization using HPC, in keeping with the BONUS research program, are also invited to apply. Some benefits in addition to the salary --------------------------------------- - Subsidized meals, - Partial reimbursement of public transport costs, - Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.), - Possibility of teleworking (after 6 months of employment) and flexible organization of working hours, - Professional equipment available (videoconferencing, loan of computer equipment, etc.), - Social, cultural and sports events and activities, - Access to vocational training, - Social security coverage. Application Instructions / Contact ----------------------------------- Applicants are asked to contact the leader of the BONUS team, Prof. Nouredine Melab, at Nouredine.Melab at inria.fr.
After this contact, candidates who are encouraged to apply will do so officially through the INRIA online application process, which will soon be indicated here: https://www.inria.fr/en/talents ------------------------------ -- ********************************************************************** OLA'2022 International Conference on Optimization and Learning (SCOPUS, Springer) 18-20 July 2022, Syracuse, Sicily, Italy http://ola2022.sciencesconf.org *********************************************************************** Prof. El-ghazali TALBI Polytech'Lille, University Lille - INRIA CRISTAL - CNRS From daniel.polani at gmail.com Mon Nov 22 19:54:27 2021 From: daniel.polani at gmail.com (Daniel Polani) Date: Tue, 23 Nov 2021 00:54:27 +0000 Subject: Connectionists: Readership/Principal Lectureship in Robotics and Adaptive Systems Message-ID: Readership/Principal Lectureship in Robotics and Adaptive Systems School of Physics, Engineering and Computer Science University of Hertfordshire, Hatfield, UK Closing: 6.
December 2021 Applications are invited for a Reader/Principal Lectureship on emerging topics in Robotics/AI, including, but not limited to: - Robotics: embodied and/or cognitive robotics, soft robotics, adaptive or evolutionary robotic design, robot safety and ethics, sensorics and robotics, emotional/social robots, smart homes and sensors, sensor fusion, assistive robotics, human-robot interaction, agricultural robotics - Machine learning: reinforcement learning, deep methods, statistical methods, large scale data modelling/intelligent processing and high-performance learning algorithms - Biological and biophysical computation paradigms, systems biology, neural computation - Complex Systems: collective intelligence, adaptive, autonomous and multi-agent/robot systems, collective and swarm intelligence, social and market modelling, adaptive, evolutionary and unconventional computation - Mathematical Modelling: statistical modelling, information-theoretic methods, compressive sensing, intelligent data visualization, multiscale models, optimization; causality - Emerging Topics in AI: computer algebra and AI, topological methods (e.g. persistent homology), algebraic and category-theoretical methods in AI; modern topics in games and AI; quantum algorithms for AI - AI and applications: financial modelling, AI and biology/physics/cognitive sciences - Foundations: fundamental questions of intelligence and computation, emergence of life/intelligence, Artificial Life Closing Date: 6 December 2021 To apply and for further information, please see: https://www.jobs.ac.uk/job/CKJ638/reader-principal-lecturer-in-robotics-and-adaptive-systems, Ref. No.: 032595 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eero at cns.nyu.edu Mon Nov 22 22:33:07 2021 From: eero at cns.nyu.edu (Eero Simoncelli) Date: Mon, 22 Nov 2021 22:33:07 -0500 (EST) Subject: Connectionists: Doctoral studies in Computational/Theoretical Neuroscience at NYU Message-ID: <202111230333.1AN3X7P00833@calaf.cns.nyu.edu> New York University is home to a thriving interdisciplinary community of researchers using computational and theoretical approaches in neuroscience. We are interested in exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in scientific study of the brain. A listing of faculty, sorted by their primary departmental affiliation, is given below. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Nevertheless, admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests. Center for Neural Science (CNS), Graduate School of Arts & Sciences (deadline: 1 December) [https://neuroscience.nyu.edu/program.html, and https://as.nyu.edu/cns/DoctoralProgram.html] * SueYeon Chung (starting Sep 2022) - NeuroAI and geometry. * Andre A. Fenton - Molecular, neural, behavioral, and computational aspects of memory. * Paul W. Glimcher - Decision-making in humans and animals. Neuroeconomics. * David Heeger (also in Psychology) - Computational neuroscience, vision, attention. * Roozbeh Kiani - Vision and decision-making. * Wei Ji Ma (also in Psychology) - Perception, working memory, and decision-making. * Tony Movshon - Vision and visual development. * Bijan Pesaran - Neuronal dynamics and decision-making. * Alex Reyes - Functional interactions of neurons in a network. * John Rinzel (also in Mathematics) - Biophysical mechanisms and theory of neural computation. 
* Cristina Savin (also in the Center for Data Science) - Computational models of learning and memory, machine learning. * Robert Shapley - Visual physiology and perception. * Eero Simoncelli - Computational vision and audition. * Xiao-Jing Wang - Computational neuroscience, decision-making and working memory, neural circuits. * Alex Williams (starting Jan 2022) - Statistical analysis of neural data. Neuroscience Institute, School of Medicine (deadline: 1 December) [https://neuroscience.nyu.edu/program.html, and https://med.nyu.edu/departments-institutes/neuroscience/] * Gyorgy Buzsaki - Rhythms in neural networks. * Dmitri Chklovskii (also in the Simons Foundation) - Neural computation and connectomics. * Biyu He - Large-scale brain dynamics underlying human cognition. * Dmitry Rinberg - Sensory information processing in the behaving animal. * Shy Shoham - Methods for controlling, imaging, and analyzing neural systems. * Mario Svirsky - Auditory neural prostheses; experimental/computational studies of speech production/perception. Psychology, Cognition & Perception program (deadline: 1 December) [http://as.nyu.edu/psychology/graduate/phd-cognition-perception.html] * Todd Gureckis - Memory, learning, and decision processes. * Brendan Lake (also in the Center for Data Science) - Computational modeling of cognition, deep learning. * Michael Landy - Computational approaches to vision. * Laurence Maloney - Mathematical approaches to psychology and neuroscience. * Denis Pelli - Visual object recognition. * Jonathan Winawer - Visual perception and memory. Mathematics (deadline: 4 January) [http://math.nyu.edu/degree/phd/] * David McLaughlin - Nonlinear wave equations, computational visual neuroscience. * Aaditya Rangan - Computational neurobiology, numerical analysis. * Charles Peskin - Mathematical biology. * Daniel Tranchina - Information processing in the retina. 
* Lai-Sang Young - Dynamical systems, statistical physics, computational modeling and theoretical neuroscience. Data Science (deadline: 12 December) [https://cds.nyu.edu/phd-program/] * Joan Bruna (also in Computer Science) - Machine learning, signal/image processing. * Kyunghyun Cho (also in Computer Science) - Machine learning, natural language processing. * Carlos Fernandez-Granda (also in Mathematics) - Optimization methods for medical imaging, neuroscience, computer vision. Physics (deadline: 18 December) [https://as.nyu.edu/physics/programs/graduate.html] * Marc Gershow - Perception, decision-making, and learning in neural circuits. Computer Science (deadline: 12 December) [http://www.cs.nyu.edu/home/phd/] * Davi Geiger - Computational vision and learning. * Yann LeCun - Machine learning, computer vision, robotics, computational neuroscience. Economics (deadline: 18 December) [https://as.nyu.edu/econ/graduate/phd.html] * Andrew Caplin - Economic theory, neurobiology of decision. * Andrew Schotter - Experimental economics, game theory, neurobiology of decision. 
From cgf at isep.ipp.pt Tue Nov 23 18:55:07 2021 From: cgf at isep.ipp.pt (Carlos) Date: Tue, 23 Nov 2021 23:55:07 +0000 Subject: Connectionists: CFP: International School and Conference on Network Science (NetSci-X 2022) Message-ID: <36fbf68e-134a-ce1c-efd8-922b9f9e2d36@isep.ipp.pt> ================================== International School and Conference on Network Science NetSci-X 2022 Porto, Portugal February 8-11, 2022 https://netscix.dcc.fc.up.pt/ ================================== Important Dates ----------------------------- Full Paper/Abstract Submission: November 26, 2021 (23:59:59 AoE) Author Notification: December 20, 2021 Keynote Speakers ----------------------------- Jure Leskovec, Stanford University, USA Jürgen Kurths, Humboldt University Berlin, Germany Manuela Veloso, JP Morgan AI Research & CMU, USA Stefano Boccaletti, Institute for Complex Systems, Florence, Italy Tijana Milenkovic, University of Notre Dame, USA Tiziana Di Matteo, King's College London, UK Tracks ----------------------------- We are now welcoming submissions to the Abstracts or Proceedings Track. All submissions will undergo a peer-review process. Abstracts Track: extended abstracts should not exceed 3 pages, including figures and references. Abstracts will be accepted for oral or poster presentation, and will appear in the book of abstracts only. Proceedings Track: full papers should have between 8 and 14 pages and follow the Springer Proceedings format. Accepted full papers will be presented at the conference and published by Springer. Only previously unpublished, original submissions will be accepted. Description ----------------------------- NetSci-X is the Network Science Society's signature winter conference. It extends the popular NetSci conference series (https://netscisociety.net/events/netsci) to provide an additional forum for a growing community of academics and practitioners working on formal, computational, and application aspects of complex networks. 
The conference will be highly interdisciplinary, spanning the boundaries of traditional disciplines. Specific topics of interest include (but are not limited to): Models of Complex Networks Structural Network Properties Algorithms for Network Analysis Graph Mining Large-Scale Graph Analytics Epidemics Resilience and Robustness Community Structure Motifs and Subgraph Patterns Link Prediction Multilayer/Multiplex Networks Temporal and Spatial Networks Dynamics on and of Complex Networks Network Controllability Synchronization in Networks Percolation, Resilience, Phase Transitions Network Geometry Network Neuroscience Network Medicine Bioinformatics and Earth Sciences Applications Mobility and Urban Networks Computational Social Sciences Rumor and Viral Marketing Economics and Financial Networks Instructions for Submissions ----------------------- All papers and abstracts should be submitted electronically in PDF format. The website includes detailed information about the submission process. General Chairs Fernando Silva, University of Porto, Portugal José Mendes, University of Aveiro, Portugal Rosário Laureano, Lisbon University Institute (ISCTE), Portugal Program Chair Pedro Ribeiro, Universidade do Porto Main Contact for NetSci-X 2022 netscix at dcc.fc.up.pt Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel. 
+351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt From coralie.gregoire at insa-lyon.fr Tue Nov 23 11:39:29 2021 From: coralie.gregoire at insa-lyon.fr (Coralie Gregoire) Date: Tue, 23 Nov 2021 17:39:29 +0100 (CET) Subject: Connectionists: [CFP] The ACM Web Conference 2022 - Posters and Demos submissions Message-ID: <1148250002.2980129.1637685569733.JavaMail.zimbra@insa-lyon.fr> [Apologies for the cross-posting, this call is sent to numerous lists you may have subscribed to] [CFP] The ACM Web Conference 2022 - Posters and Demos submissions We invite contributions to the Posters and Demos track of The Web Conference 2022 (formerly known as WWW). The conference will take place online, hosted by Lyon, France, on April 25-29, 2022. ------------------------------------------------------------ Instructions for authors of Posters and Demos submissions *Important Dates* -Papers submission: February 3rd, 2022 -Notification to authors: March 3rd, 2022 -Camera ready: March 10th, 2022 All submission deadlines are end-of-day in the Anywhere on Earth (AoE) time zone. *Posters and Demos chairs: (www2022-poster-demo at easychair.org)* -Anna Lisa Gentile (IBM Research) -Pasquale Lisena (EURECOM) The Web Conference is the premier conference focused on understanding the current state and the evolution of the Web through the lens of different disciplines, including computing, computational social science, economics and political sciences. The Posters and Demos Track is a forum to foster interactions among researchers and practitioners by allowing them to present and demonstrate their new and innovative work. In addition, the Posters and Demos track will give conference attendees an opportunity to learn novel on-going research projects through informal interactions. Demos submissions must be based on an implemented and tested system. 
Posters and Demos papers will be peer-reviewed by members of the Poster Committee based on originality, significance, quality, and clarity. Accepted papers will appear in the Companion conference proceedings. In addition, authors of accepted work will be asked to create a digital poster to present their work during the Posters and Demos track at the conference. Submitted posters are expected to be aligned with one or more of the relevant topics of TheWebConf community, including (but not limited to): - Web-related Economics, Monetization, and Online Markets - Web Search - Web Security, Privacy, and Trust - Semantics and Knowledge - Social Network Analysis and Graph Algorithms - Social Web - Systems and Infrastructure - User Modeling, Personalization and Accessibility - Web and Society - Web Mining and Content Analysis - Web of Things, Ubiquitous and Mobile Computing - Esports and Online Gaming - History of the Web - Web for good *Submission guidelines* Posters and Demos papers are limited to four pages, including references. Submissions are NOT anonymous. It is the authors' responsibility to ensure that their submissions adhere strictly to the required format. In particular, the format cannot be modified with the objective of squeezing in more material. Submissions that do not comply with the formatting guidelines will be rejected without review. Submissions will be handled via Easychair, at: https://easychair.org/conferences/?conf=thewebconf2022, selecting the Poster-Demo track. We also highly encourage authors to include external material related to the poster or demo (e.g., code repository on Github or equivalent) in the submission. *Formatting the submissions* Submissions must adhere to the ACM template and format published in the ACM guidelines at https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords. Please use the template in traditional double-column format to prepare your submissions. 
For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template. Overleaf users may want to use https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sig-proceedings-template/bmvfhcdnxfty Submissions for review must be in PDF format. They must be self-contained and written in English. Submissions that do not follow these guidelines, or do not view or print properly, will be rejected without review. *Ethical use of data and informed consent* As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. When appropriate, authors are encouraged to include a section on the ethical use of data and/or informed consent in their paper. Note that submitting your research for approval by the authors' institutional ethics review body (IRB) may not always be sufficient. Even if such research has been signed off by your IRB, the programme committee might raise additional concerns about the ethical implications of the work and include these concerns in its review. *Publication policy* Accepted papers will require a further revision in order to meet the requirements and page limits of the camera-ready format required by ACM. Instructions for the preparation of the camera-ready versions of the papers will be provided after acceptance. All accepted papers will be published by ACM and will be available via the ACM Digital Library. To be included in the Companion Proceedings, at least one author of each accepted paper must register for the conference and present the paper. 
You can reach the chairs at: www2022-poster-demo at easychair.org ============================================================ Contact us: contact at thewebconf.org - Facebook: https://www.facebook.com/TheWebConf - Twitter: https://twitter.com/TheWebConf - LinkedIn: https://www.linkedin.com/showcase/18819430/admin/ - Website: https://www2022.thewebconf.org/ ============================================== From max.garagnani at gmail.com Tue Nov 23 13:01:21 2021 From: max.garagnani at gmail.com (Max Garagnani) Date: Tue, 23 Nov 2021 18:01:21 +0000 Subject: Connectionists: Applications for 2022-23 entry OPEN:: MSc in Computational Cognitive Neuroscience, Goldsmiths (London, UK) Message-ID: ********************************************************************************** The MSc in COMPUTATIONAL COGNITIVE NEUROSCIENCE at Goldsmiths, University of London (UK) ********************************************************************************** is now ACCEPTING APPLICATIONS for 2022-23 ENTRY. Note that places on this programme are limited and will be allocated on a first-come first-served basis. If you are considering this MSc, we recommend applying now rather than later to avoid disappointment. The course builds on the multi-disciplinary and strong research profiles of our Computing and Psychology Departments staff. It equips students with a solid theoretical basis and experimental techniques in computational cognitive neuroscience, providing them also with an opportunity to apply their newly acquired knowledge in a practical research project, which may be carried out in collaboration with one of our industry partners (see below). Applications range from computational neuroscience and machine learning to brain-computer interfaces to experimental and clinical research. For more INFORMATION ABOUT THE COURSE please visit: https://www.gold.ac.uk/pg/msc-computational-cognitive-neuroscience/ HOW TO APPLY: ============= Submitting an online application is easy and free of cost. 
Simply visit https://bit.ly/2Fi86SB and follow the instructions. COURSE OUTLINE: =============== This is a one-year full-time or two-year part-time Master's programme, consisting of taught courses (120 credits) plus research project and dissertation (60 credits). (Note: students who need a Tier-4 VISA to study in the UK can only register for the full-time pathway). It is designed for students with a good degree in the biological / life sciences (psychology, neuroscience, biology, medicine, etc.) or physical sciences (computer science, mathematics, physics, engineering); however, individuals with different backgrounds but equivalent experience will also be considered. The core contents of this course include (i) fundamentals of cognitive neuroscience (cortical and subcortical mechanisms and structures underlying cognition and behaviour, plus experimental and neuroimaging techniques), and (ii) concepts and methods of computational modelling of biological neurons, simple neuronal circuits, and higher brain functions. Students are trained in a rich variety of computational and advanced methodological skills, taught in the four core modules of the course (Modelling Cognitive Functions, Cognitive Neuroscience, Cortical Modelling, and Advanced Quantitative Methods). Unlike other standard computational neuroscience programmes (which focus predominantly on modelling low-level aspects of brain function), one of the distinctive features of this course is that it includes the study of biologically constrained models of cognitive processes (including, e.g., language and decision making). The final research project can be carried out 'in house' or in collaboration with an external partner, either from academia or industry. For samples of previous students' 
MSc projects, visit: https://coconeuro.com/index.php/student-projects/ For information about funding opportunities and tuition fees, please visit: https://www.gold.ac.uk/pg/fees-funding/ LINKS WITH INDUSTRY: ==================== The programme benefits from an ongoing collaborative partnership with 5 different international companies with headquarters in the UK, USA, Germany, Italy, and Japan. Carrying out your final research project with one of our industry partners will enable you to acquire cutting-edge skills which are in demand, providing you with a competitive profile on the job market and paving the way towards post-Masters internships and job opportunities. Here are examples of career pathways followed by some of our alumni, together with what they have to say about this course: https://coconeuro.com/index.php/alumni/ For any other specific questions, please do not hesitate to get in touch. Kind regards, Max Garagnani -- Joint Programme Leader, MSc in Computational Cognitive Neuroscience Senior Lecturer in Computer Science Department of Computing Goldsmiths, University of London Lewisham Way, New Cross London SE14 6NW, UK https://www.gold.ac.uk/computing/people/garagnani-max/ ******************************************************************************* From zk240 at cam.ac.uk Wed Nov 24 10:56:25 2021 From: zk240 at cam.ac.uk (Zoe Kourtzi) Date: Wed, 24 Nov 2021 15:56:25 +0000 Subject: Connectionists: Post-doctoral position in Machine Learning for Brain and Mental Health Message-ID: Post-doctoral position in Machine Learning for Brain and Mental Health at the Cambridge Image Analysis Group (http://www.damtp.cam.ac.uk/research/cia/cambridge-image-analysis) and the Adaptive Brain lab (http://www.abg.psychol.cam.ac.uk), University of Cambridge, UK. 
Opportunity to work with our cross-disciplinary team on a project focusing on developing state-of-the-art machine learning and image analysis methods for the early diagnosis of dementia and mental health disorders. Our work programme aims to develop a fully deployable decision support system to a) aid clinicians in early diagnosis and patient management decisions, b) direct tailored interventions to individual needs, c) inform patient selection for clinical trials. This work has already gathered significant media interest (e.g. https://www.bbc.co.uk/news/health-57934589) and has the potential for transformative applications in clinical practice. The successful applicant will receive multi-disciplinary research training at the interface between machine learning, neuroscience, and clinical translation. They will work with teams at the Alan Turing Institute (https://www.turing.ac.uk), the UK's national institute for Artificial Intelligence and Data Science. They will also work in close collaboration with the EDoN Initiative (https://edon-initiative.org) spearheaded by Alzheimer's Research UK, and the Data Science and Neuroscience teams at AstraZeneca. For details and to apply online see: https://www.jobs.cam.ac.uk/job/32357/ For informal enquiries, please contact Prof Zoe Kourtzi (zk240 at cam.ac.uk) with a CV and brief statement of background skills and research interests. From zoran.tiganj at gmail.com Wed Nov 24 23:40:22 2021 From: zoran.tiganj at gmail.com (Zoran Tiganj) Date: Wed, 24 Nov 2021 23:40:22 -0500 Subject: Connectionists: Multiple Ph.D. 
Positions in AI and Machine Learning at Indiana University Bloomington Message-ID: The Center for Machine Learning at Indiana University brings together over 10 faculty working in areas including theory, reinforcement learning, statistical learning, speech and audio processing, robotics, planning and control, medical image processing, graphical models, computational neuroscience, computer vision, case-based reasoning, and deep learning. Please visit our Center website at https://cml.luddy.indiana.edu/people/. Together we have at least 10 PhD positions available for Fall 2022 admission. Topics include: - Algorithms for ML: scalable (e.g., parallel, distributed round/communication-efficient) algorithms for reinforcement learning, online learning, clustering. - Probabilistic ML: graphical models, efficient inference algorithms, applications, and computational/statistical learning theory. - AI: probabilistic planning, unsupervised learning, memory augmentation, reinforcement learning, and the connections between planning and probabilistic inference. - Robotics: planning, decision-making, and learning methods for autonomous robotic systems, and coordination approaches for distributed multi-robot or swarm systems. - Computer Vision: object and action recognition, 3D reconstruction, egocentric computer vision, deep learning, and graphical model inference. - ML and DL for speech/music/audio processing: speech enhancement, source separation, privacy and security for speech/audio applications, speech/audio coding, music information retrieval and music signal processing. For all areas, ideal candidates should have demonstrably strong math and theory skills and/or excellent programming and system building skills. Background in the relevant areas listed above is of course desirable. Applicants should have received a Bachelor's degree in computer science, engineering, statistics, or related fields. 
The Center brings together faculty from different departments across the university, including Computer Science, Intelligent Systems Engineering, and Statistics. Each faculty member has a home department (listed on the website above) but can advise students in other departments. Please mention the faculty with whom you would like to work in your Statement of Purpose. You may apply to multiple programs. For best consideration, please submit applications by these dates: - Computer Science: Priority deadline: December 1, 2021, Second round: January 1, 2022 - Intelligent Systems Engineering: Priority deadline: December 1, 2021, Second round: January 1, 2022 - Statistical Science: Priority deadline: January 15, 2022 For detailed information about the programs and application process, please visit: https://luddy.indiana.edu/admissions/apply/graduate.html Email inquiries: - For information about the admission process or requirements, please write to goluddy at indiana.edu - For information about the Center for Machine Learning, please contact iucml at indiana.edu - For information about specific positions or projects, please write to the faculty member directly. From elio.tuci at gmail.com Thu Nov 25 02:58:35 2021 From: elio.tuci at gmail.com (Elio Tuci) Date: Thu, 25 Nov 2021 08:58:35 +0100 Subject: Connectionists: 2-years PostDoc position in collective behaviour and hypergraphs theory Message-ID: Good Morning, The Faculty of Computer Science at the University of Namur invites applications for a Postdoctoral Research Associate to work on the project "On the analysis and design of collective behaviour for robotics swarms using hypergraphs theory", an interdisciplinary project in which the mathematics of network science and hypergraph theory is used to design and predict the collective dynamics of robotic swarms. 
This project is a collaboration between Prof Elio Tuci (https://directory.unamur.be/staff/etuci), from the Faculty of Computer Science at UNamur, Prof Timoteo Carletti (https://directory.unamur.be/staff/tcarlett), from the Department of Mathematics at UNamur, and Dr Andreagiovanni Reina (https://www.giovannireina.com/), an FNRS research fellow at the Artificial Intelligence Lab IRIDIA at ULB, Belgium. The appointment will be for one year, renewable for another year, and is based in Namur, Belgium. The candidate should have a PhD in sciences (or in a field judged equivalent by the selection committee), some experience in programming and mathematical modelling, and excellent oral/written communication skills in English. French is the official language of the University of Namur; however, knowledge of French is not required to carry out the activities of this project. The candidate should have some experience in, or interest in, modelling collective behaviour and/or network systems. It is also important that the candidate can translate ideas between disciplines, taking ideas from mathematics and implementing them in collective and swarm robotics systems. More information is available at https://www.naxys.be/2021/11/postdoctoral-research-associate/ To apply for this position, please send by email a CV, a list of publications, a motivation letter (max length one page), and the names of three referees to Prof Elio Tuci at elio.tuci at unamur.be Short-listed candidates will be contacted for an interview by videoconference. We will process applications as they are received, until a suitable candidate is found. The earliest starting date is 01-01-2022. UNamur's personnel management policy is geared towards diversity and equal opportunities. We recruit candidates on the basis of their skills, irrespective of age, gender, sexual orientation, origin, nationality, beliefs, disability, etc. 
For further information on the project, you can contact the project coordinators: Prof Elio Tuci - elio.tuci at unamur.be Prof Timoteo Carletti - timoteo.carletti at unamur.be Dr Andreagiovanni Reina - andreagiovanni.reina at gmail.com From kaiolae at ifi.uio.no Thu Nov 25 05:07:14 2021 From: kaiolae at ifi.uio.no (Kai Olav Ellefsen) Date: Thu, 25 Nov 2021 10:07:14 +0000 Subject: Connectionists: Open PhD positions In-Reply-To: <5955a2d008b84650bdcb13355f1d3470@ifi.uio.no> References: <5955a2d008b84650bdcb13355f1d3470@ifi.uio.no> Message-ID: PhD opportunities at the University of Oslo There are 17 PhD scholarships in areas of astronomy, physics, chemistry, geoscience, bioscience, and mathematics and statistics available in the CompSci Cofund MSCA program. The application deadline is February 1st, and the positions start in August/September 2022. The program provides all students with an initial intensive course in computing and data science before they start on their research projects in a disciplinary research group. You will be part of a cohort of international students, learn essential digital skills, build an interdisciplinary network, and study with leading researchers. You can learn more about the program, employment conditions, and living and working in Oslo at the program website: https://www.mn.uio.no/compsci/english/ . Perhaps of particular relevance to this mailing list are the projects in "Biosciences and AI", listed here: https://www.mn.uio.no/compsci/english/phd_programme/projects/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From moritz.grosse-wentrup at univie.ac.at Thu Nov 25 10:17:01 2021 From: moritz.grosse-wentrup at univie.ac.at (Moritz Grosse-Wentrup) Date: Thu, 25 Nov 2021 16:17:01 +0100 Subject: Connectionists: Doctoral position on causal inference and explainable AI at the University of Vienna Message-ID: We have an opening for a doctoral position in the Neuroinformatics Group at the University of Vienna (https://ni.cs.univie.ac.at/). The project's goal is to develop causal inference methods that generate human-interpretable concepts for explainable AI. These methods will be applied in close collaboration with other group members to study the link between neuronal activity and cognition in artificial and biological intelligent systems. We are looking for an intrinsically motivated student with a Master's (or equivalent) degree in Computer Science, Neural Engineering, Electrical Engineering, Mathematics, Physics, or a related field. We expect strong competencies in machine learning and programming, as well as a passion for interdisciplinary research in general and learning about the brain in particular. The ability to speak German would be beneficial but is not required. Employment is according to the collective bargaining agreement §48 VwGr. B1 Grundstufe (praedoc, 75%), see http://personalwesen.univie.ac.at/en/jobs-recruiting/job-center/salary-scheme/ The duration of employment is three years and can be extended to a maximum of four years. The initial contract will be limited to 1.5 years and automatically renewed for another 1.5 years at the end of the 12-month probation period. Job duties involve research as well as teaching (in English or German). The Neuroinformatics Group is part of the Faculty of Computer Science at the University of Vienna, the oldest and largest public research university in the German-speaking world (https://en.wikipedia.org/wiki/University_of_Vienna). 
Research in our group focuses on two topics: causal inference for bridging the gap between neuronal activity and cognition, and AI/ML algorithms for brain-computer interfacing. Our offices are located in downtown Vienna, a city that has been repeatedly ranked as having the world's highest quality of living. We strive to be a diverse research group and particularly welcome (and, given equal qualifications, give preference to) applications from groups currently underrepresented in academia! To apply, please set up an account (https://zid.univie.ac.at/en/setting-up-a-uaccount/) and submit the following documents via https://servicedesk.univie.ac.at/plugins/servlet/desk/portal/89/create/1277: * Letter of motivation (~one page) * CV * Abstract of Master's thesis * Degree certificates * Contact information of two referees * List of publications and evidence of teaching experience (if any) The application deadline is the 13th of December, 2021. Please choose reference number "#06 PC2: Neuroinformatics, Prof. Grosse-Wentrup" when submitting your application. Because there are two separate projects linked to this call, please indicate in your motivation letter whether you are applying to the BCI or the causal inference project. If you have any questions prior to applying, please feel free to contact me via moritz.grosse-wentrup at univie.ac.at. Best, Moritz Grosse-Wentrup -- Univ.-Prof. Dr.-Ing. 
Moritz Grosse-Wentrup Research Group Neuroinformatics Faculty of Computer Science University of Vienna Hörlgasse 6, A-1090 Wien, Austria moritz.grosse-wentrup at univie.ac.at +43-1-4277-79610 http://neural.engineering/ From moritz.grosse-wentrup at univie.ac.at Thu Nov 25 09:50:59 2021 From: moritz.grosse-wentrup at univie.ac.at (Moritz Grosse-Wentrup) Date: Thu, 25 Nov 2021 15:50:59 +0100 Subject: Connectionists: Doctoral position on AI / ML algorithms for invasive brain-computer interfacing at the University of Vienna Message-ID: We have an opening for a doctoral position to develop artificial intelligence / machine learning algorithms for invasive brain-computer interfacing (BCI) in the Neuroinformatics Group at the University of Vienna (https://ni.cs.univie.ac.at/). The project's goal is to develop a language BCI based on invasive, single-cell recordings in a human subject. The project will be carried out in close collaboration with Simon Jacob at the Technical University of Munich (http://simonjacob.de/). We are looking for an intrinsically motivated student with a Master's (or equivalent) degree in Computer Science, Neural Engineering, Electrical Engineering, Mathematics, Physics, or a related field. We expect strong competencies in machine learning and programming, as well as a passion for interdisciplinary research in general and learning about the brain in particular. The ability to speak German would be beneficial but is not required. Employment is according to the collective bargaining agreement §48 VwGr. B1 Grundstufe (praedoc, 75%), see http://personalwesen.univie.ac.at/en/jobs-recruiting/job-center/salary-scheme/ The duration of employment is three years and can be extended to a maximum of four years. The initial contract will be limited to 1.5 years and automatically renewed for another 1.5 years at the end of the 12-month probation period. Job duties involve research as well as teaching (in English or German). 
The Neuroinformatics Group is part of the Faculty of Computer Science at the University of Vienna, the oldest and largest public research university in the German-speaking world (https://en.wikipedia.org/wiki/University_of_Vienna). Research in our group focuses on two topics: causal inference for bridging the gap between neuronal activity and cognition, and AI/ML algorithms for brain-computer interfacing. Our offices are located in downtown Vienna, a city that has been repeatedly ranked as having the world's highest quality of living. We strive to be a diverse research group and particularly welcome (and, given equal qualifications, give preference to) applications from groups currently underrepresented in academia. To apply, please set up an account (https://zid.univie.ac.at/en/setting-up-a-uaccount/) and submit the following documents via https://servicedesk.univie.ac.at/plugins/servlet/desk/portal/89/create/1277: * Letter of motivation (~one page) * CV * Abstract of Master's thesis * Degree certificates * Contact information of two referees * List of publications and evidence of teaching experience (if any) The application deadline is the 13th of December, 2021. Please choose reference number "#06 PC2: Neuroinformatics, Prof. Grosse-Wentrup" when submitting your application. Because there are two separate projects linked to this call, please indicate in your motivation letter whether you are applying to the BCI or the causal inference project. If you have any questions prior to applying, please feel free to contact me via moritz.grosse-wentrup at univie.ac.at. Best, Moritz Grosse-Wentrup -- Univ.-Prof. Dr.-Ing. 
Moritz Grosse-Wentrup Research Group Neuroinformatics Faculty of Computer Science University of Vienna Hörlgasse 6, A-1090 Wien, Austria moritz.grosse-wentrup at univie.ac.at +43-1-4277-79610 http://neural.engineering/ From coralie.gregoire at insa-lyon.fr Fri Nov 26 03:00:25 2021 From: coralie.gregoire at insa-lyon.fr (Coralie Gregoire) Date: Fri, 26 Nov 2021 09:00:25 +0100 (CET) Subject: Connectionists: [CFP] The ACM Web Conference 2022 - Call for Web Developer and W3C Track Message-ID: <1856868521.1032227.1637913625511.JavaMail.zimbra@insa-lyon.fr> [Apologies for the cross-posting, this call is sent to numerous lists you may have subscribed to] The ACM Web Conference 2022 [CFP] The ACM Web Conference 2022 - Call for Web Developer and W3C Track We invite contributions to the Web Developer and W3C track of The Web Conference 2022 (formerly known as WWW). The conference will take place online, hosted by Lyon, France, on April 25-29, 2022. ------------------------------------------------------------ Call for WebConf 2022 Developer and W3C Track *Important dates* Paper submission: February 3rd, 2022 Notification to authors: March 3rd, 2022 Camera ready: March 10th, 2022 All submission deadlines are end-of-day in the Anywhere on Earth (AoE) time zone. The Web Conference 2022 Developer and W3C Track is part of The Web Conference 2022 in Lyon, France. Participation in the developers track will require registration of at least one author for the conference. The Web Conference 2022 Developer and W3C Track presents an opportunity to share the latest developments across the technical community, both in terms of technologies and in terms of tooling. We will be running as a regular track with live (preferred) or pre-recorded (as an alternative format) presentations during the conference, showcasing community expertise and progress. The event will take place according to the CET time zone (Paris time). 
We will do our best to accommodate speakers in time slots as convenient as possible for their local time zones. While we are open to any contributions that are relevant for the Web space, here are a few areas that we are particularly interested in: - Tools and methodologies to measure and reduce the environmental impact of the Web. - New usage patterns enabled by Progressive Web Apps and new browser APIs. - Work-arounds, data-based quantification and identification of Web compatibility issues. - Web tooling and developer experience, in particular towards reducing the complexity for newcomers: How can we get closer to the magic of hitting "view source" as a way to get people started? - Tools and frameworks that enable the convergence of real-time communication and streaming media. - Decentralized architectures for the Web, such as those emerging from projects and movements such as Solid, or "Web3". - Peer to peer architectures and protocols. - Identity management (DID, WebAuthN, Federated Credential Management). *Submission guidelines* Submissions can take several forms, and authors can choose one or more of the following submission types: - Papers: papers are limited to 6 pages, including references. Submissions are NOT anonymous. It is the authors' responsibility to ensure that their submissions adhere strictly to the required format. In particular, the format cannot be modified with the objective of squeezing in more material. Papers will be published in the ACM The Web Conference Companion Proceedings archived by the ACM Digital Library, as open access, if the authors so wish. - Links to code repositories on GitHub (with sufficient description and documentation). - Links to recorded demos (as a complement to the above, ideally following established best practices as proposed by the W3C). - Any other resource reachable on the Web. 
Submissions will be handled via Easychair, at https://easychair.org/conferences/?conf=thewebconf2022, selecting the Web Developer and W3C track. *Formatting the submissions* Submissions must adhere to the ACM template and format published in the ACM guidelines at https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords and use the template in traditional double-column format to prepare your submissions. For example, Word users may use the Word Interim template, and LaTeX users may use the sample-sigconf template. For Overleaf users, you may want to use https://www.overleaf.com/latex/templates/association-for-computing-machinery-acm-sig-proceedings-template/bmvfhcdnxfty. Submissions for review must be in PDF format. They must be self-contained and written in English. Submissions that do not follow these guidelines, or do not view or print properly, will be rejected without review. *Ethical use of data and informed consent* As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. When appropriate, authors are encouraged to include a section on the ethical use of data and/or informed consent in their paper. Note that submitting your research for approval by the author(s)' institutional ethics review body (IRB) may not always be sufficient. Even if such research has been signed off by your IRB, the programme committee might raise additional concerns about the ethical implications of the work and include these concerns in its review. *Publication policy* Accepted papers will require a further revision in order to meet the requirements and page limits of the camera-ready format required by ACM. Instructions for the preparation of the camera-ready versions of the papers will be provided after acceptance. 
Web Developer and W3C Track Chairs: - Dominique Hazael-Massieux (W3C) - Tom Steiner (Google Inc.) ============================================================ Contact us: contact at thewebconf.org - Facebook: https://www.facebook.com/TheWebConf - Twitter: https://twitter.com/TheWebConf - LinkedIn: https://www.linkedin.com/showcase/18819430/admin/ - Website: https://www2022.thewebconf.org/ ============================================== From boris.gutkin at gmail.com Fri Nov 26 03:50:55 2021 From: boris.gutkin at gmail.com (Boris Gutkin) Date: Fri, 26 Nov 2021 09:50:55 +0100 Subject: Connectionists: UCL-HSE Online Symposium CCCP'21 « Challenges and new approaches in Cognitive Computational (neuro)Psychiatry » Message-ID: Dec 8, 2021, 13:00 MSK. The 2021 virtual edition of the symposium focuses this year on cognitive and computational approaches in psychiatry. The symposium brings together a variety of researchers who work on understanding the mechanisms and variability of different mental health conditions across different populations: https://neuro.hse.ru/en/announcements/530420252.html To attend the symposium, please *register* through EventBrite (free tickets but mandatory registration): https://www.eventbrite.co.uk/e/challenges-and-new-approaches-in-cognitive-computational-neuropsychiatry-tickets-211071629927 Program (times given as Moscow / London / Paris):
13.00-13.10 / 10.00-10.10 / 11.00-11.10 - Tobias U. Hauser, Ph.D. (UCL, London): Introduction. Computational psychiatry: approaches and challenges
13.10-13.35 / 10.10-10.35 / 11.10-11.35 - Maria Herrojo Ruiz, Ph.D. (Goldsmiths University, London; Higher School of Economics, Moscow): Neural oscillatory correlates of biased belief updating in volatile environments in anxiety
13.35-14.00 / 10.35-11.00 / 11.35-12.00 - Tricia Seow, Ph.D. (UCL, London): Compulsivity and the mental model
14.00-14.25 / 11.00-11.25 / 12.00-12.25 - Ksenia Panidi, Ph.D. (Higher School of Economics, Moscow): Temporal discounting factor in the monetary domain is associated with excessive sugar consumption
14.25-14.50 / 11.25-11.50 / 12.25-12.50 - Zoe Koopmans (ENS, Paris): The elusive relation between task-elicited reinforcement learning parameters and psychiatric symptoms
Break
15.00-15.25 / 12.00-12.25 / 13.00-13.25 - Magda Dubois (UCL, London): Impulsivity and individual differences in exploration-exploitation strategies
15.25-15.50 / 12.25-12.50 / 13.25-13.50 - Anush Ghambaryan (HSE, Moscow): Additively combining utilities and beliefs in uncertain and changing environments for decision making
15.50-16.15 / 12.50-13.15 / 13.50-14.15 - Vasilisa Skvortsova, Ph.D. (UCL, London): Towards crowd-science and smartphone-based experimentation
16.15-16.25 / 13.15-13.25 / 14.15-14.25 - Panel Discussion & Closing remarks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hugo.o.sousa at inesctec.pt Fri Nov 26 07:06:05 2021 From: hugo.o.sousa at inesctec.pt (Hugo Oliveira Sousa) Date: Fri, 26 Nov 2021 12:06:05 +0000 Subject: Connectionists: Text2Story@ECIR'22 Narrative Extraction from Texts Workshop CFP Message-ID: *** Apologies for cross-posting *** ++ CALL FOR PAPERS ++ **************************************************************************** Fifth International Workshop on Narrative Extraction from Texts (Text2Story'22) Held in conjunction with the 44th European Conference on Information Retrieval (ECIR'22) April 10th, 2022 - Stavanger, Norway Website: https://text2story22.inesctec.pt **************************************************************************** ++ Important Dates ++ - Submission deadline: January 24th, 2022 - Acceptance Notification Date: March 1st, 2022 - Camera-ready copies: March 18th, 2022 - Workshop: April 10th, 2022 ++ Overview ++ Although information extraction and natural language processing have made significant progress towards an automatic interpretation of texts, the problem of constructing consistent narrative structures is yet to be solved. ++ List of Topics ++ In the fifth edition of the Text2Story workshop, we aim to foster the discussion of recent advances in the link between Information Retrieval (IR) and formal narrative understanding and representation of texts. Specifically, we aim to provide a common forum to consolidate the multi-disciplinary efforts and foster discussions to identify the wide-ranging issues related to the narrative extraction task. 
In this regard, we encourage high-quality and original submissions covering the following topics: * Narrative Representation Language * Story Evolution and Shift Detection * Temporal Relation Identification * Temporal Reasoning and ordering of events * Causal Relation Extraction and Arrangement * Narrative Summarization * Multi-modal Summarization * Automatic Timeline Generation * Storyline Visualization * Comprehension of Generated Narratives and Timelines * Big data applied to Narrative Extraction * Personalization and Recommendation of Narratives * User Profiling and User Behavior Modeling * Sentiment and Opinion Detection in Texts * Argumentation Analysis * Models for detection and removal of bias in generated stories * Ethical and fair narrative generation * Misinformation and Fact Checking * Bots Influence * Information Retrieval Models based on Story Evolution * Narrative-focused Search in Text Collections * Event and Entity importance Estimation in Narratives * Multilinguality: multilingual and cross-lingual narrative analysis * Evaluation Methodologies for Narrative Extraction * Resources and Dataset showcase * Dataset annotation and annotation schemas * Applications in social media (e.g. narrative generation during a natural disaster) ++ Dataset ++ We challenge interested researchers to consider submitting a paper that makes use of the tls-covid19 dataset (published at ECIR'21) under the scope and purposes of the Text2Story workshop. tls-covid19 consists of a number of curated topics related to the COVID-19 outbreak, with associated news articles from Portuguese and English news outlets and their respective reference timelines as gold standard. While it was designed to support timeline summarization research tasks, it can also be used for other tasks, including the study of news coverage of the COVID-19 pandemic. A script to reconstruct and expand the dataset is available at https://github.com/LIAAD/tls-covid19. 
The article itself is available at this link: https://link.springer.com/chapter/10.1007/978-3-030-72113-8_33 ++ Submission Guidelines ++ We invite four kinds of submissions: * Research papers (max 7 pages + references) * Demos and position papers (max 5 pages + references) * Work in progress and project description papers (max 4 pages + references) * Nectar papers summarizing the authors' own work published in other conferences or journals that is worth sharing with the Text2Story community, emphasizing how it can be applied to narrative extraction, processing, or storytelling, and adding further insights, discussions, novel aspects, results, or case studies (max 3 pages + references) Papers must be submitted electronically in PDF format through EasyChair (https://easychair.org/conferences/?conf=text2story2022). All submissions must be in English and formatted according to the one-column CEUR-ART style with no page numbers. Templates, either in Word or LaTeX, can be found in the following zip folder: http://ceur-ws.org/Vol-XXX/CEURART.zip. There is also an Overleaf page for LaTeX users, available at: https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org/hpvjjzhjxzjk. Submissions will be peer-reviewed by at least two members of the program committee. The accepted papers will appear in the proceedings published at CEUR workshop proceedings (usually indexed on DBLP). ++ Workshop Format ++ Authors of accepted papers will be given 15 minutes for an oral presentation. ++ Organizing committee ++ Ricardo Campos (INESC TEC; Ci2 - Smart Cities Research Center, Polytechnic Institute of Tomar, Tomar, Portugal) Alípio M. 
Jorge (INESC TEC; University of Porto, Portugal) Adam Jatowt (University of Innsbruck, Austria) Sumit Bhatia (Media and Data Science Research Lab, Adobe) Marina Litvak (Shamoon Academic College of Engineering, Israel) ++ Proceedings Chair ++ João Paulo Cordeiro (INESC TEC; University of Beira Interior) Conceição Rocha (INESC TEC) ++ Web and Dissemination Chair ++ Hugo Sousa (INESC TEC) Behrooz Mansouri (Rochester Institute of Technology) ++ Program Committee ++ Álvaro Figueira (INESC TEC & University of Porto) Andreas Spitz (University of Konstanz) António Horta Branco (University of Lisbon) Arian Pasquali (CitizenLab) Brenda Santana (Federal University of Rio Grande do Sul) Bruno Martins (IST and INESC-ID - Instituto Superior Técnico, University of Lisbon) Demian Gholipour (University College Dublin) Daniel Gomes (FCT/Arquivo.pt) Daniel Loureiro (University of Porto) Denilson Barbosa (University of Alberta) Deya Banisakher (Defense Threat Reduction Agency (DTRA), Ft. Belvoir, VA, USA) Dhruv Gupta (Norwegian University of Science and Technology (NTNU), Trondheim, Norway) Dwaipayan Roy (ISI Kolkata, India) Dyaa Albakour (Signal) Evelin Amorim (INESC TEC) Florian Boudin (Université de Nantes) Grigorios Tsoumakas (Aristotle University of Thessaloniki) Henrique Lopes Cardoso (University of Porto) Hugo Sousa (INESC TEC) Ismail Sengor Altingovde (Middle East Technical University) Jeffery Ansah (BHP) João Paulo Cordeiro (INESC TEC & University of Beira Interior) Kiran Kumar Bandeli (Walmart Inc.) Ludovic Moncla (INSA Lyon) Marc Spaniol (Université 
de Caen Normandie) Nina Tahmasebi (University of Gothenburg) Pablo Gamallo (University of Santiago de Compostela) Paulo Quaresma (Universidade de Évora) Pablo Gervás (Universidad Complutense de Madrid) Paul Rayson (Lancaster University) Preslav Nakov (Qatar Computing Research Institute (QCRI)) Satya Almasian (Heidelberg University) Sérgio Nunes (INESC TEC & University of Porto) Udo Kruschwitz (University of Regensburg) Yihong Zhang (Kyoto University) ++ Contacts ++ Website: https://text2story22.inesctec.pt For general inquiries regarding the workshop, reach the organizers at: text2story2022 at easychair.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgf at isep.ipp.pt Fri Nov 26 16:09:34 2021 From: cgf at isep.ipp.pt (Carlos) Date: Fri, 26 Nov 2021 21:09:34 +0000 Subject: Connectionists: CFP: NetSciX 2022 - Deadline Extension: December 3 Message-ID: <654064d0-01d3-bbe9-93f6-0b98d7c446a6@isep.ipp.pt> ** Following multiple requests, we have extended the submission deadline by one more week, until the 3rd of December, 2021 ** ================================== International School and Conference on Network Science NetSci-X 2022 Porto, Portugal February 8-11, 2022 https://netscix.dcc.fc.up.pt/ ================================== Important Dates ----------------------------- Full Paper/Abstract Submission: December 3, 2021 (23:59:59 AoE) Author Notification: December 23, 2021 Keynote Speakers ----------------------------- Jure Leskovec, Stanford University, USA Jurgen Kurths, Humboldt University Berlin, Germany Manuela Veloso, JP Morgan AI Research & CMU, USA Stefano Boccaletti, Institute for Complex Systems, Florence, Italy Tijana Milenkovic, University of Notre Dame, USA Tiziana Di Matteo, King's College London, UK Invited Speakers ----------------------------- Angélica Sousa de Mata, Universidade Federal de Lavras, Brazil Francisco Santos, University of Lisbon, Portugal Marcus Kaiser, University of Nottingham, UK Maria Angeles Serrano, 
University of Barcelona, Spain Marta C. González, University of California, Berkeley, USA Sune Lehmann, TU Denmark / University of Copenhagen, Denmark Tracks ----------------------------- We are welcoming submissions to the Abstracts or Proceedings Track. All submissions will undergo a peer-review process. Abstracts Track: extended abstracts should not exceed 3 pages, including figures and references. Abstracts will be accepted for oral or poster presentation, and will appear in the book of abstracts only. Proceedings Track: full papers should have between 8 and 14 pages and follow the Springer Proceedings format. Accepted full papers will be presented at the conference and published by Springer. Only previously unpublished, original submissions will be accepted. Description ----------------------------- NetSci-X is the Network Science Society's signature winter conference. It extends the popular NetSci conference series (https://netscisociety.net/events/netsci) to provide an additional forum for a growing community of academics and practitioners working on formal, computational, and application aspects of complex networks. The conference will be highly interdisciplinary, spanning the boundaries of traditional disciplines. 
Specific topics of interest include (but are not limited to):
- Models of Complex Networks
- Structural Network Properties
- Algorithms for Network Analysis
- Graph Mining
- Large-Scale Graph Analytics
- Epidemics
- Resilience and Robustness
- Community Structure
- Motifs and Subgraph Patterns
- Link Prediction
- Multilayer/Multiplex Networks
- Temporal and Spatial Networks
- Dynamics on and of Complex Networks
- Network Controllability
- Synchronization in Networks
- Percolation, Resilience, Phase Transitions
- Network Geometry
- Network Neuroscience
- Network Medicine
- Bioinformatics and Earth Sciences Applications
- Mobility and Urban Networks
- Computational Social Sciences
- Rumor and Viral Marketing
- Economics and Financial Networks
Instructions for Submissions ----------------------- All papers and abstracts should be submitted electronically in PDF format. The website includes detailed information about the submission process. General Chairs Fernando Silva, University of Porto, Portugal José Mendes, University of Aveiro, Portugal Rosário Laureano, Lisbon University Institute (ISCTE), Portugal Program Chair Pedro Ribeiro, Universidade do Porto, Portugal School Chairs Manuel Pita, CICANT, Universidade Lusofona, Portugal Andreia Sofia Teixeira, University of Lisbon, Portugal Main Contact for NetSci-X 2022 netscix at dcc.fc.up.pt Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel. 
+351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt From david at irdta.eu Sat Nov 27 10:16:03 2021 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 27 Nov 2021 16:16:03 +0100 (CET) Subject: Connectionists: DeepLearn 2022 Summer: early registration December 11 Message-ID: <1449536652.658016.1638026163163@webmail.strato.com> ****************************************************************** 6th INTERNATIONAL GRAN CANARIA SCHOOL ON DEEP LEARNING DeepLearn 2022 Summer Las Palmas de Gran Canaria, Spain July 25-29, 2022 https://irdta.eu/deeplearn/2022su/ ***************** Co-organized by: University of Las Palmas de Gran Canaria Institute for Research Development, Training and Advice - IRDTA Brussels/London ****************************************************************** Early registration: December 11, 2021 ****************************************************************** SCOPE: DeepLearn 2022 Summer will be a research training event with a global scope, aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Bournemouth, and Guimarães. Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. 
Most deep learning subareas will be covered, and the main challenges identified, through 24 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully in the event live online.

An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.

ADDRESSED TO:

Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people less or more advanced in their careers will be welcome as well. Since there will be a variety of levels, specific background knowledge may be assumed for some of the courses.

Overall, DeepLearn 2022 Summer is addressed to students, researchers and practitioners who want to keep themselves updated on recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE:

DeepLearn 2022 Summer will take place in Las Palmas de Gran Canaria, on the Atlantic Ocean, with a mild climate throughout the year, sandy beaches and a renowned carnival. The venue will be:

Institución Ferial de Canarias
Avenida de la Feria, 1
35012 Las Palmas de Gran Canaria
https://www.infecar.es/index.php?option=com_k2&view=item&layout=item&id=360&Itemid=896

STRUCTURE:

3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible.
However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:

Wahid Bhimji (Lawrence Berkeley National Laboratory), Deep Learning on Supercomputers for Fundamental Science

Rich Caruana (Microsoft Research), Friends Don't Let Friends Deploy Black-box Models: The Importance of Interpretable Neural Nets in Machine Learning

Kate Saenko (Boston University), Overcoming Dataset Bias in Deep Learning

PROFESSORS AND COURSES:

Tülay Adalı (University of Maryland Baltimore County), [intermediate] Data Fusion Using Matrix and Tensor Factorizations

Pierre Baldi (University of California Irvine), [intermediate/advanced] Deep Learning: From Theory to Applications in the Natural Sciences

Arindam Banerjee (University of Illinois Urbana-Champaign), [intermediate/advanced] Deep Generative and Dynamical Models

Mikhail Belkin (University of California San Diego), [intermediate/advanced] Modern Machine Learning and Deep Learning through the Prism of Interpolation

Dumitru Erhan (Google), [intermediate/advanced] Visual Self-supervised Learning and World Models

Arthur Gretton (University College London), [intermediate/advanced] Probability Divergences and Generative Models

Phillip Isola (Massachusetts Institute of Technology), [intermediate] Deep Generative Models

Mohit Iyyer (University of Massachusetts Amherst), [intermediate/advanced] Natural Language Generation

Irwin King (Chinese University of Hong Kong), [intermediate/advanced] Deep Learning on Graphs

Vincent Lepetit (Paris Institute of Technology), [intermediate] Deep Learning and 3D Reasoning for 3D Scene Understanding

Yan Liu (University of Southern California), [introductory/intermediate] Deep Learning for Time Series

Dimitris N.
Metaxas (Rutgers, The State University of New Jersey), [intermediate/advanced] Model-based, Explainable, Semisupervised and Unsupervised Machine Learning for Dynamic Analytics in Computer Vision and Medical Image Analysis

Sean Meyn (University of Florida), [introductory/intermediate] Reinforcement Learning: Fundamentals, and Roadmaps for Successful Design

Louis-Philippe Morency (Carnegie Mellon University), [intermediate/advanced] Multimodal Machine Learning

Wojciech Samek (Fraunhofer Heinrich Hertz Institute), [introductory/intermediate] Explainable AI: Concepts, Methods and Applications

Clara I. Sánchez (University of Amsterdam), [introductory/intermediate] Mechanisms for Trustworthy AI in Medical Image Analysis and Healthcare

Björn W. Schuller (Imperial College London), [introductory/intermediate] Deep Multimedia Processing

Jonathon Shlens (Google), [introductory/intermediate] Introduction to Deep Learning in Computer Vision

Johan Suykens (KU Leuven), [introductory/intermediate] Deep Learning, Neural Networks and Kernel Machines

Csaba Szepesvári (University of Alberta), [intermediate/advanced] Tools and Techniques of Reinforcement Learning to Overcome Bellman's Curse of Dimensionality

A. Murat Tekalp (Koç University), [intermediate/advanced] Deep Learning for Image/Video Restoration and Compression

Alexandre Tkatchenko (University of Luxembourg), [introductory/intermediate] Machine Learning for Physics and Chemistry

Li Xiong (Emory University), [introductory/intermediate] Differential Privacy and Certified Robustness for Deep Learning

Ming Yuan (Columbia University), [intermediate/advanced] Low Rank Tensor Methods in High Dimensional Data Analysis

OPEN SESSION:

An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by July 17, 2022.
INDUSTRIAL SESSION:

A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022.

EMPLOYER SESSION:

Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles sought, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022.

ORGANIZING COMMITTEE:

Marisol Izquierdo (Las Palmas de Gran Canaria, local chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
David Silva (London, organization chair)

REGISTRATION:

It has to be done at https://irdta.eu/deeplearn/2022su/registration/

The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimate of the respective demand for each course. During the event, participants will be free to attend the courses they wish.

Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will be closed and the on-line registration tool disabled once the capacity of the venue is exhausted. It is highly recommended to register prior to the event.

FEES:

Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on-site and for online participation are the same.
ACCOMMODATION:

Accommodation suggestions will be available in due time at https://irdta.eu/deeplearn/2022su/accommodation/

CERTIFICATE:

A certificate of successful participation in the event will be delivered indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION:

david at irdta.eu

ACKNOWLEDGMENTS:

Cabildo de Gran Canaria
Universidad de Las Palmas de Gran Canaria
Universitat Rovira i Virgili
Institute for Research Development, Training and Advice – IRDTA, Brussels/London

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at irdta.eu Sat Nov 27 10:17:47 2021
From: david at irdta.eu (David Silva - IRDTA)
Date: Sat, 27 Nov 2021 16:17:47 +0100 (CET)
Subject: Connectionists: DeepLearn 2022 Spring: early registration December 15
Message-ID: <842887471.658133.1638026267191@webmail.strato.com>

******************************************************************

5th INTERNATIONAL SCHOOL ON DEEP LEARNING

DeepLearn 2022 Spring

Guimarães, Portugal

April 18-22, 2022

https://irdta.eu/deeplearn/2022sp/

*****************

Co-organized by:

Algoritmi Center
University of Minho, Guimarães

Institute for Research Development, Training and Advice – IRDTA
Brussels/London

******************************************************************

Early registration: December 15, 2021

******************************************************************

SCOPE:

DeepLearn 2022 Spring will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, and Bournemouth.
Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, etc. Renowned academics and industry pioneers will lecture and share their views with the audience.

Most deep learning subareas will be covered, and the main challenges identified, through 24 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully in the event live online.

An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.

ADDRESSED TO:

Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people less or more advanced in their careers will be welcome as well. Since there will be a variety of levels, specific background knowledge may be assumed for some of the courses.

Overall, DeepLearn 2022 Spring is addressed to students, researchers and practitioners who want to keep themselves updated on recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.
VENUE:

DeepLearn 2022 Spring will take place in Guimarães, in the north of Portugal, listed as a UNESCO World Heritage Site and often referred to as the birthplace of the country. The venue will be:

Hotel de Guimarães
Eduardo Manuel de Almeida 202
4810-440 Guimarães
http://www.hotel-guimaraes.com/

STRUCTURE:

3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:

Kate Smith-Miles (University of Melbourne), Stress-testing Algorithms via Instance Space Analysis

Mihai Surdeanu (University of Arizona), Explainable Deep Learning for Natural Language Processing

Zhongming Zhao (University of Texas, Houston), Deep Learning Approaches for Predicting Virus-Host Interactions and Drug Response

PROFESSORS AND COURSES:

Eneko Agirre (University of the Basque Country), [introductory/intermediate] Natural Language Processing in the Pretrained Language Model Era

Mohammed Bennamoun (University of Western Australia), [intermediate/advanced] Deep Learning for 3D Vision

Altan Çakır (Istanbul Technical University), [introductory] Introduction to Deep Learning with Apache Spark

Rylan Conway (Amazon), [introductory/intermediate] Deep Learning for Digital Assistants

Jifeng Dai (SenseTime Research), [intermediate] AutoML for Generic Computer Vision Tasks

Jianfeng Gao (Microsoft Research), [introductory/intermediate] An Introduction to Conversational Information Retrieval

Daniel George (JPMorgan Chase), [introductory] An Introductory Course on Machine Learning and Deep Learning with Mathematica/Wolfram Language

Bohyung Han (Seoul National University), [introductory/intermediate] Robust Deep Learning

Lina J.
Karam (Lebanese American University), [introductory/intermediate] Deep Learning for Quality Robust Visual Recognition

Xiaoming Liu (Michigan State University), [intermediate] Deep Learning for Trustworthy Biometrics

Jennifer Ngadiuba (Fermi National Accelerator Laboratory), [intermediate] Ultra Low-latency and Low-area Machine Learning Inference at the Edge

Lucila Ohno-Machado (University of California, San Diego), [introductory] Use of Predictive Models in Medicine and Biomedical Research

Bhiksha Raj (Carnegie Mellon University), [introductory] Quantum Computing and Neural Networks

Bart ter Haar Romeny (Eindhoven University of Technology), [intermediate] Deep Learning and Perceptual Grouping

Kaushik Roy (Purdue University), [intermediate] Re-engineering Computing with Neuro-inspired Learning: Algorithms, Architecture, and Devices

Walid Saad (Virginia Polytechnic Institute and State University), [intermediate/advanced] Machine Learning for Wireless Communications: Challenges and Opportunities

Yvan Saeys (Ghent University), [introductory/intermediate] Interpreting Machine Learning Models

Martin Schultz (Jülich Research Centre), [intermediate] Deep Learning for Air Quality, Weather and Climate

Richa Singh (Indian Institute of Technology, Jodhpur), [introductory/intermediate] Trusted AI

Sofia Vallecorsa (European Organization for Nuclear Research), [introductory/intermediate] Deep Generative Models for Science: Example Applications in Experimental Physics

Michalis Vazirgiannis (École Polytechnique), [intermediate/advanced] Machine Learning with Graphs and Applications

Guowei Wei (Michigan State University), [introductory/advanced] Integrating AI and Advanced Mathematics with Experimental Data for Forecasting Emerging SARS-CoV-2 Variants

Xiaowei Xu (University of Arkansas, Little Rock), [intermediate/advanced] Deep Learning for NLP and Causal Inference

Guoying Zhao (University of Oulu), [introductory/intermediate] Vision-based Emotion AI

OPEN SESSION:

An open session
will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by April 10, 2022.

INDUSTRIAL SESSION:

A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by April 10, 2022.

EMPLOYER SESSION:

Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles sought, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by April 10, 2022.

ORGANIZING COMMITTEE:

Dalila Durães (Braga, co-chair)
José Machado (Braga, co-chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
Paulo Novais (Braga, co-chair)
David Silva (London, co-chair)

REGISTRATION:

It has to be done at https://irdta.eu/deeplearn/2022sp/registration/

The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimate of the respective demand for each course. During the event, participants will be free to attend the courses they wish.

Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will be closed and the on-line registration tool disabled once the capacity of the venue is exhausted.
It is highly recommended to register prior to the event.

FEES:

Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline.

ACCOMMODATION:

Accommodation suggestions are available at https://irdta.eu/deeplearn/2022sp/accommodation/

CERTIFICATE:

A certificate of successful participation in the event will be delivered indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION:

david at irdta.eu

ACKNOWLEDGMENTS:

Centro Algoritmi, University of Minho, Guimarães
School of Engineering, University of Minho
Intelligent Systems Associate Laboratory, University of Minho
Rovira i Virgili University
Municipality of Guimarães
Institute for Research Development, Training and Advice – IRDTA, Brussels/London

From junfeng989 at gmail.com Sun Nov 28 09:50:06 2021
From: junfeng989 at gmail.com (Jun Feng)
Date: Sun, 28 Nov 2021 09:50:06 -0500
Subject: Connectionists: Call for Nominations - IEEE TCSC Award for Excellence for Early Career Researchers - 2021
Message-ID:

Call for Nominations - IEEE TCSC Award for Excellence for Early Career Researchers - 2021

=============

The IEEE TCSC (Technical Committee on Scalable Computing) Award for Excellence in Scalable Computing (Early Career Researchers) recognizes up to 5 individuals who have made outstanding, influential, and potentially long-lasting contributions in the field of scalable computing. Typically the candidates are within 5 years of receiving their PhD degree as of January 01 of the year of the award.

=============

Nominations:

A candidate may be nominated by members of the community. An individual may nominate at most one candidate for this award. Nominations must be submitted via email to the selection committee chair. A nomination application (as a single PDF file) should contain the following details:

1.
Name/email of the person making the nomination (self-nominations are not eligible);
2. Name/email of the candidate for whom the award is recommended;
3. A statement by the nominator (maximum of 500 words) as to why the nominee is highly deserving of the award, both on excellence and in relation to IEEE TCSC;
4. CV of the nominee;
5. Up to three support letters from persons other than the nominator – these should be collected by the nominator and included in the nomination.

Members of the selection committee cannot be nominators or referees.

=============

Important Dates:

Nomination Deadline: December 07, 2021
Results Notification: December 15, 2021

=============

Award Selection Committee:

Bernady O. Apduhan, Kyushu Sangyo University, Japan, bob at is.kyusan-u.ac.jp
Jinjun Chen (Chair), Swinburne University of Technology, Australia, jchen at swin.edu.au
Beniamino Di Martino, Universita' della Campania Luigi Vanvitelli, Italy, beniamino.dimartino at unina.it
Didier El Baz, LAAS-CNRS, France, elbaz at laas.fr

=============

Award & Presentation Note:

Awardees will be presented with a plaque and will be recognized by IEEE TCSC on its website, newsletter and archives. The awards for 2021 will be presented at a selected IEEE TCSC sponsored conference, IEEE HPCC 2021.

From junfeng989 at gmail.com Sun Nov 28 09:57:55 2021
From: junfeng989 at gmail.com (Jun Feng)
Date: Sun, 28 Nov 2021 09:57:55 -0500
Subject: Connectionists: Call for Nominations - IEEE TCSC MCR Awards - 2021
Message-ID:

Dear Colleagues:

Call for Nominations - IEEE TCSC Middle Career Researcher (MCR) Awards - 2021

=============

The IEEE Technical Committee on Scalable Computing (TCSC) is a technical committee within the IEEE Computer Society, aimed at fostering research and education in scalable computing with applications. The committee solicits nominations for the Middle Career Researcher (MCR) Award.
The award includes an award plaque that will be presented at the annual IEEE HPCC conference, along with a public citation for the award on the IEEE TCSC website.

=============

IEEE TCSC Middle Career Researcher Award

The IEEE TCSC Award for Excellence in Scalable Computing (Middle Career Researcher) recognizes up to 3 individuals who have made distinguished, influential, and ongoing contributions, with long-lasting potential, in the field of scalable computing with applications. Typically the candidates are within 5 to 15 years of receiving their PhD degree as of January 01 of the year of the award.

=============

Nomination Materials for the MCR Award:

A candidate must be nominated by members of the community. Nominations must be submitted via email to the selection committee chair. A nomination application (as a single PDF file) must consist of the following materials:

(1) Name/email of the person making the nomination (self-nominations are not eligible)
(2) Name/email of the nominee
(3) A statement by the nominator (maximum of 500 words) as to why the nominee is highly deserving of the award, both on excellence and in relation to IEEE TCSC
(4) CV of the nominee
(5) Up to three support letters from persons other than the nominator - these should be collected by the nominator and included in the nomination

Members of the selection committee cannot be nominators or referees.

=============

Important Dates:

- Nomination Deadline: December 07, 2021
- Results Notification: December 15, 2021

=============

Award Selection Committee:

- Bernady O.
Apduhan, Kyushu Sangyo University, Japan, bob at is.kyusan-u.ac.jp
- Jinjun Chen (Chair), Swinburne University of Technology, Australia, jchen at swin.edu.au
- Beniamino Di Martino, Universita' della Campania Luigi Vanvitelli, Italy, beniamino.dimartino at unicampania.it
- Didier El Baz, LAAS-CNRS, France, elbaz at laas.fr

=============

Award & Presentation Note:

Awardees will be presented with a plaque and will be recognized by IEEE TCSC on its website, newsletter and archives. The awards for 2021 will be presented at IEEE HPCC-2021, Dec 2021 in Hainan, China.

From Mengyu_Wang at meei.harvard.edu Sun Nov 28 21:33:23 2021
From: Mengyu_Wang at meei.harvard.edu (Wang, Mengyu)
Date: Mon, 29 Nov 2021 02:33:23 +0000
Subject: Connectionists: Harvard Postdoctoral Fellowship in Machine Learning Modeling for Eye Diseases
Message-ID:

A postdoctoral position is available in the Harvard Ophthalmology AI Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in March 2022. The initial appointment will be for one year, with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow NIH guidelines, commensurate with years of postdoctoral research experience. In the course of this interdisciplinary project, you will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology.
You will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. You will have access to abundant resources for education, career development and research from both the Harvard hospital campus and the Harvard University campus.

The successful applicant will:

1. possess or be on track to complete a PhD or MD with a background in mathematics, computational science, computer science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology or a related field. Fluency in written and spoken English is essential.
2. have strong programming skills (C++, Python, R, MATLAB, etc.) and an in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.
3. have a strong and productive publication record.
4. have a strong work ethic and time management skills, along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

1. curriculum vitae
2. statement of past research accomplishments, career goals and how this position will help you achieve your goals
3. two representative publications
4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject "Postdoctoral Application in Harvard Ophthalmology AI Lab".

The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Mass General Brigham Compliance HelpLine at http://www.massgeneralbrigham.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail.
Please note that this e-mail is not secure (encrypted). If you do not wish to continue communication over unencrypted e-mail, please notify the sender of this message immediately. Continuing to send or respond to e-mail after receiving this message means you understand and accept this risk and wish to continue to communicate over unencrypted e-mail.

From donatello.conte at univ-tours.fr Sun Nov 28 16:25:24 2021
From: donatello.conte at univ-tours.fr (Donatello Conte)
Date: Sun, 28 Nov 2021 22:25:24 +0100
Subject: Connectionists: CfP for SS on Graphs for Pattern Recognition: Representations, Theory and Applications at the 3rd ICPRAI in Paris, France
Message-ID: <002401d7e49e$75231c80$5f695580$@univ-tours.fr>

---------------------------------------------------------
Apologies for multiple copies
---------------------------------------------------------

Call for Papers

Graphs for Pattern Recognition: Representations, Theory and Applications

Special Session at the 3rd International Conference on Pattern Recognition and Artificial Intelligence
June 1-3, 2022
https://icprai2022.sciencesconf.org/

Important Dates

Paper submission deadline: December 15th, 2021
Author notification: March 8th, 2022
Camera-ready deadline: March 22nd, 2022
Early bird registration deadline: April 1st, 2022
Time of the conference: June 1st to 3rd, 2022

Scientific Program Committee

Isabelle Bloch (FR)
Luc Brun (FR)
Vincenzo Carletti (IT)
Donatello Conte (FR)
H. Edelsbrunner (A)
Benoit Gaüzère (FR)
Rocio Gonzalez-Diaz (Spain)
Marco Gori (IT)
Yll Haxhimusa (A)
Walter G. Kropatsch (A)
Xiaoyi Jiang (G)
J.Y. Ramel (FR)
Luca Rossi (UK)
Francesc Serratosa (Spain)
Ali Shokoufandeh (US)
Mario Vento (IT)
Pasquale Foggia (IT)

Motivations and topics

Graphs have gained a lot of attention in the pattern recognition community thanks to their ability to encode topological, geometrical, and semantic information.
Despite their invaluable descriptive power and their invariance to diverse geometric deformations, their arbitrarily complex structured nature poses serious challenges when they are involved in Pattern Recognition and Artificial Intelligence. Some challenging problems are: a non-unique representation of data, heterogeneous attributes (symbolic, numeric, etc.), and highly complex algorithms such as (sub-)graph matching.

This Special Session intends to focus on all aspects of graph-based representations in Pattern Recognition and Artificial Intelligence, from theoretical to application concerns. It spans, but is not limited to, the following topics:

- Dynamic, spatial and temporal graphs
- Graph representations and methods in computer vision
- Geometry and Topology in Graphs
- Graph Neural Networks
- Benchmarks for Graphs in Pattern Recognition
- Graph Learning and Classification
- Graph Matching
- Social Networks Analysis
- Graph Representation Learning

Track Chairs

Walter G. Kropatsch (TU Wien)
Donatello Conte (University of Tours)
Vincenzo Carletti (University of Salerno)

From malchiodi at di.unimi.it Sun Nov 28 13:16:06 2021
From: malchiodi at di.unimi.it (malchiodi)
Date: Sun, 28 Nov 2021 19:16:06 +0100
Subject: Connectionists: [CfP] (Special Session @ IPMU 2022) MIIXAI: Managing Imprecise Information for XAI
Message-ID: <53d00e6c5ce2b1f8a90affbb82422e6b@di.unimi.it>

(apologies for cross postings)

People have an exceptional ability to manage imprecise information in forms that are well captured by several theories within the Granular Computing paradigm, such as Fuzzy Set Theory, Rough Set Theory, Interval Computing and hybrid theories, among others.
Endowing XAI systems with the ability to deal with the many forms of imprecision is therefore a key challenge that can push current XAI technologies forward, towards more trustworthy systems based on imprecise information (II) and full collaborative intelligence. The Special Session will gather recent advancements in topics like foundational, theoretical and methodological aspects of imprecision management in XAI, new technologies for representing and processing imprecision in XAI systems, as well as real-world applications that demonstrate explainability improvements through imprecision management.

Topics include but are not limited to:

- Design of explainable II-based systems
- Evaluation of explainability in models for II
- Hybrid systems dealing with different forms of imprecision
- Successful applications of explainable II-based systems
- Induction of explainable models from II
- Theoretical aspects of explainability in II-based systems

Important dates:

- Submission of Full Papers: Friday, 14 January 2022
- Notification of Acceptance: Tuesday, 1 March 2022
- Camera-ready Submission: Friday, 15 April 2022
- Conference: 11-15 July 2022, Milan, Italy

Submission instructions and more information at the IPMU website: https://ipmu2022.disco.unimib.it/

Organizers:

- Dario Malchiodi, Dept. of Computer Science, Università degli Studi di Milano, Italy, dario.malchiodi at unimi.it
- Corrado Mencar, Dept.
of Computer Science, University of Bari Aldo Moro, Italy, corrado.mencar at uniba.it

From janet.hsiao at gmail.com Mon Nov 29 07:35:37 2021
From: janet.hsiao at gmail.com (Janet Hsiao)
Date: Mon, 29 Nov 2021 20:35:37 +0800
Subject: Connectionists: Postdoctoral Position: Explainable AI/Computer Vision, University of Hong Kong
Message-ID:

*Postdoctoral Position: Explainable AI/Computer Vision*
*University of Hong Kong & Huawei Hong Kong Research Center*

Applicants are invited for appointment as a *Post-doctoral Fellow* to work with both the Attention Brain and Cognition Lab at the Department of Psychology, University of Hong Kong, and Huawei Hong Kong Research Center (HKRC), to commence as soon as possible for a period of 1 year, with the possibility of renewal. Applicants must have a Ph.D. degree in Computer Science, Cognitive Science, or related fields. Applicants should have excellent programming and algorithmic skills, be proficient in at least one mainstream programming language such as Matlab or Python, and be curious, self-motivated and willing to participate in R&D on innovative and interdisciplinary topics. Familiarity with topics in explainable AI, deep learning methods, or computer vision is a plus.

The appointee will work with Dr. Janet Hsiao (Department of Psychology, University of Hong Kong) on projects related to explainable AI in collaboration with Huawei HKRC. Information about the research in the lab can be obtained at http://abc.psy.hku.hk/. The partnering team from Huawei HKRC consists of researchers from various research units, whose research areas include deep learning frameworks, trustworthy AI software, fundamental AI theory and so on. For more information about the position, please contact Dr. Janet Hsiao at jhsiao at hku.hk.

A highly competitive salary commensurate with qualifications and experience will be offered, in addition to annual leave and medical benefits.
Applicants should send a completed application with a cover letter, an up-to-date C.V. (including academic qualifications, research experience, and publications), and three letters of reference to Dr. Janet Hsiao at jhsiao at hku.hk, with the subject line "Post-doctoral Position". *Review of applications will start immediately and continue until the position is filled*. We thank applicants for their interest, but advise that only candidates shortlisted for interviews will be notified of the application result.

From ioannakoroni at csd.auth.gr Mon Nov 29 07:38:12 2021 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Mon, 29 Nov 2021 14:38:12 +0200 Subject: Connectionists: Live e-Lecture by Prof. Marios Polycarpou: "Intelligent Monitoring and Control of Interconnected Cyber-Physical Systems", 7th December 2021 17:00-18:00 CET. Upcoming AIDA AI excellence lectures References: <045c01d7e4fc$b9ba3a00$2d2eae00$@csd.auth.gr> <003b01d7e502$0a8fcad0$1faf6070$@csd.auth.gr> Message-ID: <00df01d7e51d$f999af90$eccd0eb0$@csd.auth.gr>

Dear AI scientist/engineer/student/enthusiast, Prof. Marios Polycarpou (University of Cyprus, Cyprus), a prominent AI researcher internationally, will deliver the e-lecture "Intelligent Monitoring and Control of Interconnected Cyber-Physical Systems" on Tuesday 7th December 2021, 17:00-18:00 CET (8:00-9:00 am PST, 12:00-1:00 am CST); see details at: http://www.i-aida.org/event_cat/ai-lectures/ You can join for free using the zoom link: https://authgr.zoom.us/s/95610140172 & Passcode: 148148 The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, is very pleased to offer you top-quality scientific lectures on several current hot AI topics.
Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST, 12:00-1:00 am CST). Attendance is free. Other upcoming lectures: 1. Prof. Bernhard Rinner (Universität Klagenfurt, Austria), 11th January 2022, 17:00-18:00 CET. More lecture information at: https://www.i-aida.org/event_cat/ai-lectures/?type=future The lectures are disseminated through multiple channels and email lists (we apologize if you received this through various channels). If you want to stay informed about future lectures, you can register in the AIDA and CVML email lists. Best regards, Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe

From mpavone at dmi.unict.it Mon Nov 29 13:34:45 2021 From: mpavone at dmi.unict.it (Mario Pavone) Date: Mon, 29 Nov 2021 19:34:45 +0100 Subject: Connectionists: 14th Metaheuristics International Conference, 11-14 July 2022, Ortigia-Syracuse, Italy Message-ID: <20211129193445.Horde.944xAOph4B9hpR1FzxqSmWA@mbox.dmi.unict.it>

Apologies for cross-posting. We would appreciate it if you could distribute this CFP to your network.

*********************************************************
MIC 2022 - 14th Metaheuristics International Conference
11-14 July 2022, Ortigia-Syracuse, Italy
https://www.ANTs-lab.it/mic2022/
mic2022 at ANTs-lab.it
*********************************************************

** Submission deadline: 30th March 2022

** NEWS **
** Plenary Speaker: Kalyanmoy Deb, Michigan State University
** Proceedings will be published in an LNCS volume, Springer
** Special Issue in the ITOR journal

*Scope of the Conference
========================
The Metaheuristics International Conference (MIC) series was established in 1995, and this is its 14th edition!
MIC is nowadays the main event focusing on the progress of the area of Metaheuristics and their applications. As in all previous editions, it provides an opportunity for the international research community in Metaheuristics to discuss recent research results, to develop new ideas and collaborations, and to meet old friends and make new ones in a friendly and relaxed atmosphere. Given the current circumstances, the conference will be held in a hybrid mode, both in presence and online; if held in presence, the organizing committee will ensure compliance with all safety conditions. MIC 2022 focuses on presentations that cover different aspects of metaheuristic research, such as new algorithmic developments, high-impact and original applications, new research challenges, theoretical developments, implementation issues, and in-depth experimental studies. MIC 2022 strives for a high-quality program that will be complemented by a number of invited talks, tutorials, workshops and special sessions.

*Plenary Speakers
========================
+ Christian Blum, Artificial Intelligence Research Institute (IIIA), Spanish National Research Council (CSIC)
+ Kalyanmoy Deb, Michigan State University, USA
+ Holger H. Hoos, Leiden University, The Netherlands

Important Dates
================
Submission deadline: March 30th, 2022
Notification of acceptance: May 10th, 2022
Camera-ready copy: May 25th, 2022
Early registration: May 25th, 2022

Submission Details
===================
MIC 2022 accepts submissions in three different formats:
S1) Regular paper: novel and original research contributions of a maximum of 15 pages (LNCS format)
S2) Short paper: extended abstract of novel research work of 6 pages (LNCS format)
S3) Oral/Poster presentation: high-quality manuscripts that have recently (within the last year) been submitted or accepted for journal publication.
All papers must be prepared using the Lecture Notes in Computer Science (LNCS) template and must be submitted in PDF at: https://www.easychair.org/conferences/?conf=mic2022

Proceedings and special issue
============================
Accepted papers in categories S1 and S2 will be published as post-proceedings in the Lecture Notes in Computer Science series by Springer. Accepted contributions in category S3 will be considered for oral or poster presentation at the conference, depending on the number received and the slots available, and will not be included in the LNCS proceedings; instead, an electronic book will be prepared by the MIC 2022 organizing committee and made available on the website. In addition, a post-conference special issue of International Transactions in Operational Research (ITOR) will be considered for significantly extended and revised versions of selected accepted papers from categories S1 and S2.

Conference Location
====================
MIC 2022 will be held on the beautiful island of Ortigia, the historical centre of the city of Syracuse, Sicily, Italy. Syracuse is famous for its ancient ruins, in particular the Roman Amphitheatre, the Greek Theatre, and the Orecchio di Dionisio (Ear of Dionysius), a limestone cave shaped like a human ear. Syracuse is also the city where the great mathematician Archimedes was born. https://www.siracusaturismo.net/multimedia_lista.asp

MIC 2022 Conference Chairs
==============================
- Luca Di Gaspero, University of Udine, Italy
- Paola Festa, University of Naples, Italy
- Amir Nakib, Université
Paris Est Créteil, France
- Mario Pavone, University of Catania, Italy

From antona at alleninstitute.org Sun Nov 28 19:43:25 2021 From: antona at alleninstitute.org (Anton Arkhipov) Date: Mon, 29 Nov 2021 00:43:25 +0000 Subject: Connectionists: Positions in modeling at the Allen Institute Message-ID:

Dear colleagues, We are hiring for multiple positions in bio-realistic modeling at the Allen Institute. Please apply soon! For example, see this Scientist position: https://alleninstitute.hrmdirect.com/employment/job-opening.php?req=1770679 If you don't have a PhD (perhaps you are just finishing a BS or MS degree?), we would love to hear from you too. Please contact me at antona at alleninstitute.org. We are developing biologically realistic models of cortical circuits in the mouse brain, at the level of a cortical area and of the whole cortex. In collaboration with the group of Dr. Li-Huei Tsai at MIT, we investigate the cell-type and circuit mechanisms of neuronal entrainment to periodic sensory stimulation at different frequencies. The project leverages the Allen Institute's unique multi-modal data: https://portal.brain-map.org. Additional relevant resources and publications: * https://portal.brain-map.org/explore/models/mv1-all-layers * https://portal.brain-map.org/explore/connectivity * Iaccarino et al., Nature (2016); Adaikkan et al., Neuron (2019); Martorell et al., Cell (2019) The Allen Institute believes that team science significantly benefits from the participation of diverse voices, experiences and backgrounds. High-quality science can only be produced when it includes different perspectives. We are committed to increasing diversity across every team and encourage people from all backgrounds to apply for these roles. Best wishes, Anton. Anton Arkhipov Associate Investigator T: 206.548.8414 E: antona at alleninstitute.org alleninstitute.org brain-map.org
From shyam at amrita.edu Tue Nov 30 00:38:36 2021 From: shyam at amrita.edu (Shyam Diwakar) Date: Tue, 30 Nov 2021 11:08:36 +0530 Subject: Connectionists: ACCS8 conference abstract deadline extended - FREE registration Message-ID:

Dear All, We would like to cordially invite you and colleagues interested in or working on cognitive science, neuroscience or related topics to participate in ACCS8, to be held online, hosted by Amrita University, India. After many requests, we have extended the abstract submission deadline to December 22, 2021.

ACCS8: Call for contributions
8th Annual Conference of Cognitive Science (ACCS8)
Amrita University, India, 20-22 January 2022
ABSTRACT submission deadline extended to December 22, 2021.
https://www.amrita.edu/accs8

The 8th edition of ACCS, hosted by Amrita Vishwa Vidyapeetham (Amrita University), Kollam, India, will be held virtually. Registration is mandatory to attend, but the event will be free of cost. The list of our invited speakers, the TPC and other information will soon be made available at https://www.amrita.edu/event/accs8

CONFERENCE TOPICS
As in previous years, the ACCS8 conference is open to all topics within cognitive science as a discipline and in various areas of study, including Neuroscience, Artificial Intelligence, Linguistics, Skilling, Medicine, Anthropology, Psychology, Philosophy, and Education.

SUBMISSION GUIDELINES
We welcome abstracts (min word limit: 500) from all areas of cognitive science and from anyone worldwide. For the 2022 online event, we are accepting only abstracts containing the salient details of your study. Please copy and paste the title and abstract into the submission form. We are not accepting any paper-length submissions this year, so do include all relevant details in the abstract itself. A small number of abstracts will be selected for oral presentations/talks.
Submission link: https://easychair.org/conferences/?conf=accs8
For CFP: https://easychair.org/cfp/accs8

CONFIRMED Keynote speakers:
1. Nandini Chatterjee Singh, UNESCO MGIEP, India
2. Claudia Wheeler-Kingshott, University College London, UK
3. Kenji Doya, Okinawa Institute of Science and Technology, Japan
4. Egidio D'Angelo, University of Pavia, Italy
5. Ned Block, New York University, USA
6. Bhavani Rao, Amrita University, India

FREE REGISTRATION
Attendance will be free, but registration will be mandatory. REGISTER: https://www.amrita.edu/event/accs8/

IMPORTANT DATES
Abstract registration deadline: December 22, 2021 (hard deadline)
Submission deadline: December 22, 2021

BEST PAPER AWARDS
We are planning awards for the best poster and oral presentations. Please stay tuned.

CONTACT
All questions about submissions should be emailed to accs8conference at gmail.com
Previous ACCS events: https://www.amrita.edu/event/accs8/about

Thank you and kind regards, Shyam Diwakar, Local organizer - ACCS8 -- Prof. Shyam Diwakar, Ph.D. Director - Amrita Mind Brain Center Faculty Fellow - Amrita Center for International Programs Amrita Vishwa Vidyapeetham (Amrita University) Amritapuri, Clappana P.O. Kollam, India. Pin: 690525 Ph: +91-476-2803116 Fax: +91-476-2899722 http://amrita.edu/mindbrain

Disclaimer: The information transmitted in this email, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited.
Any views expressed in any message are those of the individual sender and may not necessarily reflect the views of Amrita Vishwa Vidyapeetham. If you received this in error, please contact the sender and destroy any copies of this information.

From llong at simonsfoundation.org Mon Nov 29 11:19:13 2021 From: llong at simonsfoundation.org (Laura Long) Date: Mon, 29 Nov 2021 11:19:13 -0500 Subject: Connectionists: SCGB Virtual Postdoc/Student Meeting: Wednesday, December 1 by Olivia Gozel Message-ID:

The Simons Collaboration on the Global Brain (SCGB) hosts postdoc/student meetings to bring together trainees interested in neural coding and dynamics to discuss ideas and data. In addition to regional meetings in New York, Boston, and the Bay Area, SCGB holds a Global virtual series to connect systems and computational neuroscientists across the world. We would love to see you at our next Global meeting! Please see event details and the Zoom link below.

SCGB Global Postdoc/Student Meeting: https://www.eventbrite.com/e/scgb-global-postdocstudent-meeting-tickets-209429538387
Wednesday, December 1st, 12pm Eastern Time
https://simonsfoundation.zoom.us/j/94767192583?pwd=TXBKNXJQYzdGTmRUZTFlUUYzNnkvUT09
Passcode: 429535

*Olivia Gozel* Postdoctoral Researcher, Doiron Laboratory, The University of Chicago

*Between-area communication through the lens of within-area neuronal dynamics*

Neuronal dynamics range from asynchronous spiking to richly patterned spatio-temporal activity and are modulated by external and internal sources. Numerous experimental datasets show that the shared variability can be well described by a small number of latent variables, indicating coordinated trial-to-trial fluctuations within the population.
In addition, cortical areas are connected through long-range excitatory projections, and it has been shown that there exists a communication subspace between connected but distinct brain areas that predicts spiking activity in a downstream area from upstream activity. However, little is known about the effect of neuronal dynamics on interactions between brain areas. Using a layered spiking network with within- and between-layer spatially structured connectivity, we show that pattern formation decreases within-area dimensionality similarly whether spatio-temporal patterns emerge within a population or are inherited from a connected population. Yet the fidelity of communication from an upstream to a downstream area, as estimated by a linear reduced-rank regression measure, is affected by the origin of pattern formation. Specifically, downstream activity is poorly predicted by upstream activity when spatio-temporal patterns emerge downstream, while it is particularly well predicted when shared fluctuations are mostly inherited from the upstream area. Interestingly, examination of spiking activity reveals that, even in the scenario with apparently disrupted communication, the downstream area is effectively driven by upstream activity, as expected from the strong feedforward connection strengths between layers. A mismatch in within-area dimensionality between upstream and downstream areas appears to underlie the seemingly weak communication. These results expose the limitations of linear measures when analyzing the flow of information in brain circuits with diverse neuronal dynamics.

*Please note that this meeting is open to all neuroscience postdocs and PhD students, regardless of location or SCGB affiliation (sorry, no PIs).* After Q&A with the speaker, we will open breakout rooms for anyone interested in staying to chat, network, or further discuss the talk.
In addition to these breakouts, SCGB Scientific Staff will be available for "office hours" to chat and answer questions about SCGB programs and support. Registration on EventBrite is encouraged but not required: https://www.eventbrite.com/e/scgb-global-postdocstudent-meeting-tickets-209429538387 Please contact Laura Long at llong at simonsfoundation.org with any questions.

From abasov at hse.ru Tue Nov 30 05:08:17 2021 From: abasov at hse.ru (Anton Basov) Date: Tue, 30 Nov 2021 10:08:17 +0000 Subject: Connectionists: Assistant Professor in Computer Science position Message-ID: <945f6aae71f04d46ae7edeac1aaf33cf@hse.ru>

Assistant Professor in Computer Science
Faculty of Computer Science, HSE University, Moscow, Russia

HSE University's Faculty of Computer Science welcomes applications for full-time, tenure-track positions of assistant professor in all areas of computer science, including but not limited to machine learning and neural networks.

Requirements
- Candidates must hold a recent PhD (awarded within the last five years) in computer science, mathematics or related fields from an internationally recognized university, and must have demonstrated potential to pursue research;
- Teaching experience at leading foreign universities is strongly desirable;
- Knowledge of Russian is not required; fluent English is obligatory.

Conditions
Tenure-track positions are only available on a full-time, residential basis in Moscow, Russia. Appointments are usually made for an initial three-year period (starting September 2022) and, upon successful completion of an interim review, contracts are normally extended for a further three years until the tenure review. HSE University provides excellent opportunities for international faculty, including a competitive salary, health insurance, travel and research support, and other benefits.
Application
Please provide a CV, a statement of research interests and a recent research paper, submitted via the online application form. At least two letters of recommendation should be sent directly to the Review Committee (International Faculty Recruitment) at iri at hse.ru before the application deadline.

From info at incf.org Tue Nov 30 09:50:52 2021 From: info at incf.org (INCF) Date: Tue, 30 Nov 2021 15:50:52 +0100 Subject: Connectionists: Support open neuroscience this Giving Tuesday Message-ID:

Support INCF's mission to push global neuroscience further and faster

Scientific progress is critical for providing our global community with cures and treatments for illnesses that cause pain and disability in so many lives. Neurological diseases are the leading cause of disability and the second leading cause of death worldwide, and mental health issues are increasing every year. While neuroscience has made significant progress in the last couple of decades, there is still much to do. A major barrier is that neuroscience is time-consuming and expensive. One way to remedy this is through the reuse and pooling of data and the sharing of tools, which speeds discovery and increases sensitivity to the subtleties of brain function and disease. INCF facilitates this process by endorsing practical approaches and solutions for the application of open, FAIR, and citable neuroscience, and ensures impact through advocacy and through efforts to train the next generation of neuroscientists in FAIR research management approaches. You can support our efforts by making a direct donation, or by sponsoring a membership for a student or colleague. Your gift will enable us to support open science for the global neuroscience community!
*DONATE*

/The INCF Team
----------------------------
International Neuroinformatics Coordinating Facility Secretariat
Karolinska Institutet, Nobels väg 15A, SE-171 77 Stockholm, Sweden
Email: communications at incf.org
incf.org neuroinformatics.incf.org

From philipp.raggam at univie.ac.at Tue Nov 30 11:22:13 2021 From: philipp.raggam at univie.ac.at (Philipp Raggam) Date: Tue, 30 Nov 2021 17:22:13 +0100 Subject: Connectionists: 3rd BCI-UC: Submission closes on Friday, December 3rd Message-ID:

Dear Colleagues, The *abstract submission* for the 3rd Brain-Computer Interface Un-Conference (BCI-UC) *closes on Friday, December 3rd, at 11:59 pm (CET)*, and the *voting period on submitted abstracts begins on Monday, December 6th* [1]. Abstract submissions are text-only. The length and format of the submissions are up to the authors - use whatever format you consider best for attracting up-votes!

Important dates:
- *Submission deadline:* December 3rd, 2021 at 11:59 pm (CET)
- *Voting period:* December 6th, 2021 - December 12th, 2021 at 11:59 pm (CET)
- *BCI-UC event:* January 27th, 2022, from 03:00 pm to 09:00 pm (CET)

We will invite the *authors of the top-voted abstracts to present* at the un-conference. The 3rd BCI-UC will feature *keynotes* by *Cynthia Chestek* (University of Michigan) [2] and *Thorsten Zander* (TU Brandenburg) [3]. BCI-UC is an online un-conference that provides rapid dissemination of novel research results in the BCI community. Participants can *submit abstracts* to apply for *20-minute presentation* slots.
Abstracts are not reviewed by a program committee. Rather, all registered participants can vote on which of the submitted abstracts they would like to see presented at the un-conference. Because the BCI-UC does not publish conference proceedings, submitted abstracts can report novel as well as already published work. *Registration and attendance are free of charge*. Presentations of the 1st and 2nd BCI-UC can be re-watched at [4] and [5]. *All presentations will be streamed via Crowdcast* [6] and can be accessed free of charge. Join us in supporting this novel community-driven dissemination of research results!

The BCI-UC committee: Moritz Grosse-Wentrup, Anja Meunier, Philipp Raggam, Jiachen Xu

Links:
[1] https://bciunconference.univie.ac.at/3rd-bci-uc/
[2] https://scholar.google.com/citations?user=36sxAZEAAAAJ&hl=de
[3] https://scholar.google.com/citations?hl=en&user=0E49HxYAAAAJ
[4] https://bciunconference.univie.ac.at/past-events/1st-bci-uc/
[5] https://bciunconference.univie.ac.at/past-events/2nd-bci-uc/
[6] https://www.crowdcast.io/

-- DI Philipp Raggam, BSc, Research Group Neuroinformatics, Faculty of Computer Science, University of Vienna, Hörlgasse 6, A-1090 Wien, Austria

From Donald.Adjeroh at mail.wvu.edu Tue Nov 30 21:32:55 2021 From: Donald.Adjeroh at mail.wvu.edu (Donald Adjeroh) Date: Wed, 1 Dec 2021 02:32:55 +0000 Subject: Connectionists: Call for Participation -- IEEE BIBM-LncRNA'21; Limited Workshop Fellowships still available Message-ID:

Apologies if you receive multiple copies ...
---------------------------------------------------------------------------------------------------
The program for the 2021 IEEE BIBM LncRNA Workshop is now available.
Please check the workshop webpage at: http://community.wvu.edu/~daadjeroh/workshops/LNCRNA2021/
----------------------------------------------------------------------------------------------------

Highlights: We have an exciting array of speakers for the workshop -- both in-person presenters in Dubai, UAE, and online/remote presenters!

4 Keynotes:
* George A. Calin, MD, PhD, MD Anderson Cancer Center, Houston, TX, USA (online)
* Jeffrey Loeb, PhD, University of Illinois at Chicago, Chicago, IL, USA (in-person)
* Thomas Derrien, MD, PhD, University of Rennes 1, Rennes, France (online)
* Nadya Dimitrova, PhD, Yale University, New Haven, CT, USA (online)

2 Plenary Talks:
* Christopher E. Mason, PhD, Weill Cornell Medicine, New York, USA (online)
* Isidore Rigoutsos, PhD, Thomas Jefferson University, Philadelphia, USA (online)

4 Invited Speakers:
* Tatiana Shkurat, ScD, Southern Federal University, Rostov-on-Don, Russian Federation (in-person)
* Ivan Martinez, PhD, West Virginia University, Morgantown, WV, USA (in-person)
* Ekaterina Derevyanchuk, PhD, Southern Federal University, Rostov-on-Don, Russian Federation (in-person)
* Ranjan Perer, PhD, Johns Hopkins University, Baltimore, MD, USA (online)

One "round-table" panel of experts in the field
Presentation of accepted papers
Limited Travel Fellowships still available!

We are also organizing a journal special issue based on the LncRNA Workshop for publication in MDPI Non-Coding RNA. See our website: BIBM-LncRNA'2021: https://community.wvu.edu/~daadjeroh/workshops/LNCRNA2021/

More details below ....

The IEEE BIBM 2021 Workshop on Long Non-Coding RNAs: Mechanism, Function, and Computational Analysis (BIBM-LncRNA) will be held in conjunction with the 2021 IEEE International Conference on Bioinformatics and Biomedicine (IEEE BIBM 2021), Dec. 9-12, 2021. Though the BIBM conference will be virtual/online, the LncRNA workshop will be held in a mixed mode -- both virtual/remote and face-to-face in Dubai, UAE.
BIBM-LncRNA'2021 will be held on Dec. 11 and Dec. 12, 2021. Please see the website for the program schedule and speakers:
BIBM-LncRNA'2021: https://community.wvu.edu/~daadjeroh/workshops/LNCRNA2021/
IEEE BIBM 2021: https://ieeebibm.org/BIBM2021/

Fellowships: Funds are available for a limited number of fellowships to support participation in the workshop by students and by researchers from underrepresented minority groups. We aim to support at least one author for each accepted paper, depending on the number of papers and on the availability of funds. Please apply on or before Dec. 5, 2021.

Journal Special Issue: Authors of selected submissions will be invited to extend their papers for submission for review and possible publication in a special issue of the journal Non-Coding RNA. https://www.mdpi.com/journal/ncrna

Important Dates:
Dec. 5, 2021: Deadline for Workshop Fellowship application
Dec. 9-12, 2021: IEEE BIBM conference and workshops
Dec. 11-12, 2021: BIBM-LncRNA Workshop

BIBM-LncRNA'21 Workshop home page: https://community.wvu.edu/~daadjeroh/workshops/LNCRNA2021/